Q&A: Marcus Bergström, CEO of Vionlabs

Marcus Bergström, CEO of Vionlabs, talks about how the user experience is key to service providers and the ways they can differentiate themselves from the pack by creating personalised recommendations that move beyond simple metadata.

As content and app line-ups converge, what do service providers need to do to differentiate their service through the user experience?

High-quality programming alone is not enough to keep consumers subscribed to a video streaming service. The reality is that today’s audiences are overwhelmed by an abundance of content, and it is often incredibly difficult for them to find what they want to watch. Forward-thinking service providers understand that in order to stand out from the crowd, they need to evolve the user experience, not just launch another variation on the Netflix UI. Users today are tired of scrolling through an endless list of content lanes. Differentiation doesn’t come from launching the perfect content lane; it comes from rethinking the UX so that your service stands out and becomes the “go-to” destination. Examples include personalised channels tailored to a user’s consumption behaviour at that time of day, or mood channels curated from an emotional understanding of the content.

How important is personalisation of the experience and what needs to be put in place to deliver truly compelling personalisation?

Personalisation is an incredibly important part of the user experience. The video streaming market is reaching a point of saturation and the services that offer greater personalisation are the ones that will ensure their longevity. If you look at music streaming services, like Spotify, the reason they are so popular is because they curate individual recommendations and playlists for each user. Video streaming services are yet to offer that level of nuance, which is why viewers are spending almost an hour a day searching for content.

The reason audiences spend so much time searching for something to watch is that many streaming services use content discovery systems that provide simplistic and inaccurate recommendations. These systems rely on metadata, which broadly labels content using data points such as genre, the starring actors or, in the best case, a few manually created keywords.

In order to take personalisation to the next level, streaming providers need to harness AI and machine learning to analyse the audio and video files themselves and gain a deep understanding of the content in a scalable way. AI-based content analysis can use different neural networks to identify patterns in colour, audio, pace, stress levels, positive and negative emotions, camera movements and many other characteristics. By doing this you can extract unique mood and emotion data points for every asset in your library – no more coverage problems. This data then unlocks a range of unique use cases that make your streaming service stand out.
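To make the idea concrete, here is a minimal, hedged sketch of what extracting low-level audiovisual signals could look like. This is not Vionlabs’ actual pipeline; the function name and the three toy features (brightness, colour warmth, frame-to-frame change as a pace proxy) are illustrative assumptions, standing in for the far richer signals a production neural-network analysis would produce:

```python
import numpy as np

def colour_mood_features(frames):
    """Derive toy mood features from a sequence of decoded RGB frames.

    frames: iterable of (H, W, 3) uint8 arrays.
    Returns crude stand-ins for real signals: mean brightness,
    colour warmth (red minus blue), and a pace proxy based on
    average frame-to-frame pixel change.
    """
    brightness, warmth, diffs = [], [], []
    prev = None
    for frame in frames:
        f = frame.astype(np.float32) / 255.0
        brightness.append(f.mean())
        # Warmth: red channel minus blue channel, a rough colour-tone proxy.
        warmth.append(f[..., 0].mean() - f[..., 2].mean())
        if prev is not None:
            # Large inter-frame differences suggest fast cuts or motion.
            diffs.append(np.abs(f - prev).mean())
        prev = f
    return {
        "brightness": float(np.mean(brightness)),
        "warmth": float(np.mean(warmth)),
        "pace": float(np.mean(diffs)) if diffs else 0.0,
    }
```

In practice, features like these would be computed per scene and fed into trained models alongside audio analysis, rather than used directly.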

Can personalisation be extended to tailoring a service based on an individual’s mood and other variables and what kind of metadata is required to deliver this kind of personalisation?

The type of content we watch often reflects how we feel in that particular moment. If you’ve had a long, stressful day at work you’re more likely to want to watch a light-hearted sitcom than a tense, fast-paced thriller. Therefore, it makes sense for streaming providers to group content by mood. It’s the emotional data extracted from the audio/video file itself that identifies the mood of a film or TV show and determines which channels it belongs to.
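The grouping step itself is straightforward once mood labels exist per asset. A minimal sketch, assuming each title already carries a list of mood labels from upstream analysis (the titles and label names below are made up for illustration):

```python
def build_mood_channels(assets):
    """Group titles into mood-based channels.

    assets: dict mapping title -> list of mood labels derived from
    audio/video analysis. Returns channel name -> list of titles.
    A title with several moods appears in several channels.
    """
    channels = {}
    for title, moods in assets.items():
        for mood in moods:
            channels.setdefault(mood, []).append(title)
    return channels
```

A real service would additionally rank titles within each channel against the individual viewer’s history and the time of day.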

How can this metadata be generated efficiently and what role can AI play?

Imagine thousands of people watching every asset in your library at the same time, each giving you an extremely detailed and accurate analysis of that asset, and then comparing that analysis against the same analysis for every other asset – that’s the power of AI. It allows us to automatically generate mood labels, mood timeseries and mood values for every asset in a library.
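To illustrate the relationship between these three outputs, here is a hedged sketch of how a per-scene mood timeseries might be reduced to asset-level labels and values. The data shape and function name are assumptions for illustration, not Vionlabs’ API:

```python
from collections import Counter

def summarise_mood_timeseries(timeseries, top_n=2):
    """Reduce a per-scene mood timeseries to asset-level outputs.

    timeseries: list of (mood_label, intensity) tuples, one per scene,
    e.g. [("tense", 0.8), ("tense", 0.6), ("uplifting", 0.9)].
    Returns the dominant mood labels (by scene count) and an average
    intensity value per mood.
    """
    totals, counts = Counter(), Counter()
    for label, intensity in timeseries:
        totals[label] += intensity
        counts[label] += 1
    values = {label: totals[label] / counts[label] for label in totals}
    labels = [label for label, _ in counts.most_common(top_n)]
    return {"labels": labels, "values": values}
```

The timeseries itself stays useful downstream: it can drive scene-accurate trailers, skip markers or mood-matched channel placement.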

This is sponsored content.