Breaking Down the Spotify Algorithm

With more than 350 million users worldwide, Spotify is one of the most influential streaming services in today’s music landscape, if not the most influential. It is especially popular for features such as collaborative playlist curation and the yearly “Spotify Wrapped”, but also for its highly advanced personalization system. That system benefits users by creating an enjoyable listening experience, helps artists grow their audience, and directly serves Spotify’s own goals of retaining users, keeping them on the platform longer, and ultimately generating revenue.

Given Spotify’s influence on today’s music industry, it is useful for artists and anyone working with them to understand how the platform’s algorithms work together to create the experience we know. Many details of this process remain a well-kept secret within Spotify’s inner circle, but there are some main components that we can (assume to) understand.

In order to make the best suggestions possible, Spotify’s AI recommender system needs to understand both the content and the users involved in a recommendation. To do that, it uses different algorithms to generate item and user representations, which then shape the recommendations made.

The item representations are the information collected about tracks and artists. They can be split into two categories: content-based features and collaborative filtering.

The content-based part collects information on characteristics such as a track’s sound, its topics, and how it is received by the outside world. One source is the data that artists provide when pitching a track, for example the title, genre, language, mood, and style of their song, so if you’re an artist, be sure to fill out the form as accurately and in as much detail as possible. Another source is the audio analysis that is run once the track has been uploaded to the platform. Exactly how this tool works is probably one of Spotify’s biggest secrets, but what is known about it is already impressive: it can detect whether a song has vocals and, if so, how prominent they are, along with its energy level, danceability, and overall vibe. Besides these audio features, it also analyzes the temporal structure of a song by dividing it into segments such as sections (e.g. verse, chorus, bridge), bars, and even tatums (a subdivision of the main beat). You can try this analysis out yourself by looking up a track with Spotify’s public Audio Features tool! Finally, NLP (Natural Language Processing) models conduct content analyses by scanning a track’s lyrics, the titles of playlists that contain it, and web-crawled data such as blog posts and journalists’ descriptions and opinions of a track.

Audio analysis of “All Too Well (10 Minute Version) (Taylor’s Version) (From The Vault)” by Spotify Audio Features
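If you want a feel for what these features look like in practice, a subset of them is exposed through Spotify’s public Web API. Below is a minimal sketch using the spotipy Python library; the credentials and track ID are placeholders, and it only shows the publicly documented fields, not Spotify’s internal analysis.

```python
# Minimal sketch: fetch public audio features/analysis for one track.
# Client ID, secret and track ID are placeholders, not real values.
import spotipy
from spotipy.oauth2 import SpotifyClientCredentials

sp = spotipy.Spotify(auth_manager=SpotifyClientCredentials(
    client_id="YOUR_CLIENT_ID", client_secret="YOUR_CLIENT_SECRET"))

track_id = "YOUR_TRACK_ID"  # placeholder: any Spotify track ID

# High-level audio features: danceability, energy, valence, tempo, ...
features = sp.audio_features([track_id])[0]
print(features["danceability"], features["energy"], features["valence"])

# Temporal structure: sections, bars, beats and tatums of the track.
analysis = sp.audio_analysis(track_id)
print(len(analysis["sections"]), "sections,", len(analysis["tatums"]), "tatums")
```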

To complete the item representations, collaborative filtering looks at a track in relation to other tracks by analyzing user-generated signals. A simple but error-prone way to do this would be consumption-based filtering, the “users with a similar taste also like” type of recommendation model that matches listeners with new content based on their listening histories. Because this is a rather unsustainable way of creating personalized experiences, we can assume that Spotify uses it only in very limited ways, if at all, and instead prefers organizational similarity: songs are mostly considered similar if they appear on the same playlists. This lets the algorithms respect the fact that a listener may enjoy very different genres while still keeping those genres apart, since users are unlikely to put a jazz ballad into the same playlist as a metal song. For this, Spotify analyzes around 700 million user-generated playlists that appear to have been created with care and passion, which also provide the context in which songs are similar.
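To make the organizational-similarity idea concrete, here is a hedged sketch (not Spotify’s actual pipeline) of how playlist co-occurrence could be turned into a similarity score: two tracks count as similar in proportion to how often they share a playlist, normalized so that very common tracks don’t dominate.

```python
from collections import defaultdict
from itertools import combinations
import math

# Toy playlists standing in for the ~700 million user-generated ones.
playlists = [
    ["jazz_ballad_a", "jazz_ballad_b", "bossa_nova_c"],
    ["metal_song_x", "metal_song_y"],
    ["jazz_ballad_a", "bossa_nova_c", "soul_track_d"],
]

occurrences = defaultdict(int)      # playlists containing each track
co_occurrences = defaultdict(int)   # playlists containing both tracks

for playlist in playlists:
    tracks = set(playlist)
    for track in tracks:
        occurrences[track] += 1
    for a, b in combinations(sorted(tracks), 2):
        co_occurrences[(a, b)] += 1

def similarity(a, b):
    """Cosine-style similarity over playlist membership."""
    pair = tuple(sorted((a, b)))
    return co_occurrences[pair] / math.sqrt(occurrences[a] * occurrences[b])

print(similarity("jazz_ballad_a", "bossa_nova_c"))  # high: they share playlists
print(similarity("jazz_ballad_a", "metal_song_x"))  # zero: they never co-occur
```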

Now that the database holds information about the track/artist side, it also needs user representations in order to match listeners and songs. For that, the algorithms keep track of active and passive user feedback and use context to interpret activity. Active or explicit feedback weighs in more heavily and includes saves, playlist adds, shares, follows, and visits to artist and album pages. Passive or implicit feedback covers the length of listening sessions, track playthroughs, and repeat listens, but carries less weight, since a long listening session doesn’t necessarily mean greater enjoyment (think of study sessions where music is background noise rather than something actively listened to). These signals aren’t taken at face value either; the algorithms use context to interpret them. If a listener skips a song in the “What’s New” section, they might simply want to hear the full piece later, since they’re confronted with a large number of new songs at once; but if a song is skipped on a studying playlist that typically plays in the background, the user probably really doesn’t want to hear it. All of this feedback is then accumulated and combined with consumption contexts and further data such as demographics to generate a user profile that can look like this:

Source: Spotify R&D
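As a rough illustration of how such signals might be combined before feeding a profile like this, here is a hedged sketch with made-up weights and context rules; Spotify’s real models are learned from data, not hand-tuned like this, and every number below is purely illustrative.

```python
# Hypothetical signal weights: explicit feedback counts more than implicit,
# and context can soften or sharpen a negative signal.
EXPLICIT_WEIGHTS = {"save": 3.0, "playlist_add": 3.0, "share": 2.5,
                    "follow_artist": 2.0, "artist_page_visit": 1.0}
IMPLICIT_WEIGHTS = {"full_playthrough": 0.5, "repeat_listen": 0.8,
                    "long_session": 0.3}

def affinity_score(events):
    """Accumulate a naive user-track affinity from (signal, context) events."""
    score = 0.0
    for signal, context in events:
        if signal in EXPLICIT_WEIGHTS:
            score += EXPLICIT_WEIGHTS[signal]
        elif signal in IMPLICIT_WEIGHTS:
            score += IMPLICIT_WEIGHTS[signal]
        elif signal == "skip":
            # A skip while browsing new releases is ambiguous; a skip on a
            # background study playlist is a much stronger negative signal.
            score -= 0.2 if context == "whats_new" else 1.5
    return score

print(affinity_score([("save", None), ("repeat_listen", "commute")]))
print(affinity_score([("skip", "study_playlist"), ("skip", "whats_new")]))
```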

Now that all the information for both sides has been collected, it is time for Spotify’s recommender system to play matchmaker between song and listener, combining all representations to recommend the right music to the right person at the right time. Spotify offers a broad range of recommendation features, and while the matching is mediated by the recommender engine, each feature, like Daily Mix or Your Time Capsule, is said to have its own inner algorithms and logic: all of them build on the same tracking and user representations, but they function differently.
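Under the hood, that final matching step presumably comes down to comparing user and item representations in a shared vector space. The sketch below shows the generic cosine-similarity version of that idea with toy embeddings, not Spotify’s proprietary ranking.

```python
import numpy as np

# Toy embeddings: in practice these vectors would come from the content-based
# and collaborative-filtering models described above.
track_embeddings = {
    "indie_folk_song": np.array([0.9, 0.1, 0.0]),
    "techno_track":    np.array([0.0, 0.2, 0.9]),
    "jazz_ballad":     np.array([0.7, 0.6, 0.1]),
}
user_vector = np.array([0.8, 0.3, 0.1])  # built from the user's feedback history

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rank candidate tracks by how closely they match the user representation.
ranked = sorted(track_embeddings.items(),
                key=lambda item: cosine(user_vector, item[1]),
                reverse=True)
for name, vec in ranked:
    print(name, round(cosine(user_vector, vec), 3))
```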

Spotify is working with an extremely intricate system, probably made up of hundreds of algorithms across all its features, to introduce users to new music anytime, anywhere, create perfect matches, and thus also reach its own business goals of user retention and revenue generation. While artists may only have limited influence on how much and to whom their songs are recommended, they definitely can (and should) keep a well-maintained profile and provide as much of their own data as possible to get a bit closer to reaching the right audience. And if you’re ever not satisfied with the recommendations you get, be sure to have a look at Fangirls World Tour’s Spotify playlists!
