Technology
Interdisciplinary human expertise is the backbone of Musimap's technology

Our algorithm applies fifty-five weighted variables (e.g. complex moods, voice families, contextual keywords, special facts) to each music unit (e.g. tracks, genres, labels) to model the world’s discography as a multi-layered system of crossed influences derived from our extensive granular database. Our four interrelated APIs (data, media, simil/recommendation, and mapping) scan this proprietary database in a fraction of a second and match the data against any client’s catalogue, enabling unprecedented relevance in music recommendation and the creation of advanced music applications.
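The idea of scoring similarity between music units over weighted descriptive variables can be sketched as follows. This is a minimal illustration, not Musimap's actual algorithm: the variable names, weights, and the choice of weighted cosine similarity are all assumptions made for the example.

```python
from math import sqrt

def weighted_similarity(a: dict, b: dict, weights: dict) -> float:
    """Weighted cosine similarity over the variables shared by both profiles."""
    keys = set(a) & set(b) & set(weights)
    dot = sum(weights[k] * a[k] * b[k] for k in keys)
    norm_a = sqrt(sum(weights[k] * a[k] ** 2 for k in keys))
    norm_b = sqrt(sum(weights[k] * b[k] ** 2 for k in keys))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Hypothetical track profiles: each variable (a mood, a voice family,
# a contextual keyword, ...) maps to a score in [0, 1].
track_a = {"mood:melancholic": 0.9, "voice:tenor": 0.7, "keyword:rain": 0.4}
track_b = {"mood:melancholic": 0.8, "voice:tenor": 0.6, "keyword:rain": 0.1}
weights = {"mood:melancholic": 2.0, "voice:tenor": 1.0, "keyword:rain": 0.5}

score = weighted_similarity(track_a, track_b, weights)
```

In a real system the weighting would cover all fifty-five variables and be tuned per application; here it simply shows how per-variable weights let some descriptors (e.g. moods) dominate the match.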

developers.musimap.net

Interdisciplinary human expertise is the backbone of Musimap’s technology. With its roots in a PhD on musical morphing and the consumption of cultural products dating back to 1986, our technology has been developed by thirty renowned musicologists, sociologists, psychologists, and philosophers from all over Europe. Among these researchers are Joseph Kerman, Peter Szendy, Jerrold Levinson, Violaine Prince, and Daniel Levitin. Thirty key music experts and artists have also contributed to refining the technology, among them Gilles Peterson (Worldwide), Karl Bartos (Kraftwerk), Olivier Cachin, Jean-Marc Lederman (The Weathermen), and Pierre Bartholomée. Over the past 26 years the algorithm has been continuously improved to meet current and future technological standards and application requirements. €2.2 million has already been invested in R&D, and the company has completed beta testing.

Musimap has evolved into a self-learning, context-aware neuronal music network. With a rapidly growing and uniquely advanced music database exceeding 2 billion data points, 700 million relations, 50 million tracks from 4 million artists, 11 thousand keywords, 400 genres, 390 complex moods, and 100 contexts, the company is quickly establishing itself as the leading emotion-sensitive cognitive music intelligence platform. Notably, it closes the gap between real-time contextual inputs (psychological, physical, social) and accurate, personalized music recommendation applications.
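How real-time contextual inputs might feed into personalized recommendation can be sketched as a simple re-ranking step. Everything here is illustrative: the context tags, the base scores, and the flat boost per matching context are assumptions, not Musimap's implementation.

```python
# Hypothetical re-ranking: boost tracks whose tagged contexts overlap
# with the listener's current context signals (e.g. weather, activity).
def rerank(tracks: list, context: list) -> list:
    def boosted_score(track: dict) -> float:
        overlap = len(set(track["contexts"]) & set(context))
        return track["base_score"] + 0.1 * overlap  # flat boost per match
    return sorted(tracks, key=boosted_score, reverse=True)

tracks = [
    {"title": "A", "base_score": 0.8, "contexts": ["workout"]},
    {"title": "B", "base_score": 0.7, "contexts": ["rain", "evening"]},
]
ranked = rerank(tracks, context=["rain", "evening"])
```

With the listener in a "rain, evening" context, track B overtakes track A despite its lower base score, which is the gap-closing behavior the paragraph describes.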

Technological Layers

1. Musicological System
2. Semantic Intelligence Based on Advanced Lexicology
3. Collaborative Filtering
4. Synaptic Web Mining
5. Metadata Enrichment
6. Collaborative Optimization System
7. Socio-Psychological User Profiling
8. Social Data Mining
9. Behavior & Listening Habits
10. Sensory Inputs
11. Geo-Localization, Contextualization & Movement Detection
12. Electro-acoustic Signal Analysis