Tuesday, November 28, 2017

Neuroinformatics meets music: Do you actually like that song or do you merely think you like it?

As a neuroinformatics researcher, my vision is to bridge consumers wearing mobile EEG devices with online music recommendation services. My research group is already working in that direction, decoding listeners’ brainwaves during music listening to predict “Like” ratings and then submitting that feedback to music streaming services.
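To give a concrete sense of what such a pipeline might look like, here is a minimal, purely illustrative sketch in Python with scikit-learn. It assumes band-power features have already been extracted from EEG epochs and uses placeholder data; it is not our group’s actual decoding model.

```python
# Hypothetical sketch: EEG band-power features per song-listening epoch
# -> a classifier that predicts a binary "Like" rating.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Placeholder data: 120 listening epochs, each summarized by five
# band-power features (e.g., delta/theta/alpha/beta/gamma averaged over channels).
X = rng.normal(size=(120, 5))      # EEG-derived features per epoch
y = rng.integers(0, 2, size=120)   # 1 = listener "liked" the song, 0 = did not

clf = LogisticRegression()
print("Cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())

# In the envisioned ecosystem, predictions from clf.predict(new_features)
# would be sent as implicit "Like" feedback to a streaming service.
```

In a real system the interesting work lies in the feature extraction and in per-listener calibration; the sketch above only shows where a decoder would sit between the wearable EEG device and the streaming service.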

As an example, this past September we were asked to conduct a music-EEG experiment for the TV promo campaign of the ‘Music Freedom’ service of TELIA, Norway’s mobile network operator. Our task was to decode the brainwaves of three famous Norwegian artists and reveal how much they actually liked the songs, drawn from various music genres, that they would listen to.

 

TELIA had launched its ‘Music Freedom’ service the previous summer, offering free mobile data to customers who stream music to their phones through services such as Spotify, Tidal and Beat. These music streaming services are now the “killer apps” of a digital ecosystem in which customers are connected to the internet at any time, from any place, through their mobile devices.

The production took place in Norway, and it all had to happen fast. Director Christian Holm-Glad contacted us only two weeks before we entered the shooting studio. “Dimitrios, we want the real thing; we want to film an actual experiment, similar to what you describe in your paper,” he said as he explained the campaign concept.

TELIA’s advertising agency, Nord DDB Oslo, had come up with an innovative idea: to harness brain science and demonstrate the contrast between actually liking a specific music genre and merely thinking you like it!

The collaboration was amazing. Although many of us came from different sectors and backgrounds, we shared a common view of the future of digital music. The music pros were perfectly comfortable with the idea of decoding brainwaves to retrieve accurate ratings of listeners’ preferences during music listening. They could envision, along with us, the prospect of feeding these ratings to music streaming services and of dynamically created playlists matching each user’s “brain-taste”.

Several exhausting days of hard work with Christian and his team followed, and then I was finally on my way back from Oslo to Thessaloniki, Greece. As I reached the airport, my mind was spinning future scenarios: wearable devices smoothly interfacing with online services, and artificial intelligence agents proactively anticipating and catering to people’s desires and tastes.

“Hey Siri, what’s the weather like in Thessaloniki?” I asked my phone’s A.I. assistant as I approached the plane.

And then it struck me!

“Hey Dimitrios, I have a new song for you today. Would you like to listen to it?”

 

Dr. Dimitrios A. Adamos is a senior teaching and research fellow at the School of Music Studies, Aristotle University of Thessaloniki (AUTh) and a member of the Neuroinformatics.GRoup.

To learn more:




via SharpBrains https://sharpbrains.com
