Debunking The Music Algorithm
from January 2023
An anthropologist reveals how Spotify and other platforms decide what songs to recommend
By Kendall Polidori
Nick Seaver was working on his Ph.D. in anthropology when he began looking for the algorithm underlying music recommender systems. After more than a decade of study and interviews, he says there’s no such thing.
Instead, there are many algorithms. “This was a big moment for me in realizing that there are a lot of different ways to approach these systems,” he says. “There are different kinds of algorithms going on in these companies and lots of different recommendation products in any given music streaming service.”
While the systems operate as quickly and efficiently as they do because of artificial intelligence (AI), it’s the engineers—the humans behind these algorithms—who are consistently looking for ways to understand how a system should work on behalf of its users, says Seaver, now an assistant professor at Tufts University.
Their efforts matter because recommendation algorithms are everywhere: TV and movie streaming services, Instagram reel feeds, internet search results—and especially music platforms. They are all but impossible to escape.
But it’s tough to say exactly what forms the basis for the systems.
“When I asked people working in music recommendation why people like music, they barely ever answered me,” Seaver says. “They would sort of laugh and say, ‘Who knows?’ It’s very hard to say, and they were building [multiple systems], instead of one system, that embodied the theories that they thought were best fit.”
They built open-ended systems to capture whatever tastes they could, he continues, looking for patterns in whatever kinds of data they could find.
They update recommender systems in a never-ending process the industry calls "continuous deployment," Seaver says. Developers push out new code multiple times a day, so users never interact with exactly the same algorithm twice.
Over the years, Seaver has broadened his understanding of the systems by attending conferences and interviewing developers. He even interned at one company. He got a sense of how people in this field were thinking and what they were going to do next.
Recommender systems defined
Music recommender systems are based on people’s listening habits. The systems grab data anytime listeners take action on a music app. That goes beyond a user’s profile to include liked songs and saved albums. The systems record how long users listened to a song, what songs they’ve skipped and which genres they favor.
The systems use collaborative filtering to whittle down the data from user activity and reviews to make personalized recommendations, typically based on what other users with similar habits and tastes are listening to.
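To make collaborative filtering concrete, here is a minimal sketch in Python. The play counts, user names and song names are invented for illustration; real platforms work with vastly larger, sparser data and more sophisticated models, but the core idea—score songs a user hasn't heard by what similar users play—is the same.

```python
from math import sqrt

# Toy play counts: user -> {song: number of plays}. All data is invented.
plays = {
    "ana":  {"song_a": 10, "song_b": 4, "song_c": 0},
    "ben":  {"song_a": 8,  "song_b": 5, "song_c": 1},
    "cara": {"song_a": 0,  "song_b": 1, "song_c": 9},
}

def cosine(u, v):
    """Cosine similarity between two users' play-count vectors."""
    songs = set(u) | set(v)
    dot = sum(u.get(s, 0) * v.get(s, 0) for s in songs)
    nu = sqrt(sum(x * x for x in u.values()))
    nv = sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend(user, k=1):
    """Rank unheard songs by similarity-weighted plays of other users."""
    scores = {}
    for other, counts in plays.items():
        if other == user:
            continue
        sim = cosine(plays[user], counts)
        for song, n in counts.items():
            if plays[user].get(song, 0) == 0:  # only songs the user hasn't played
                scores[song] = scores.get(song, 0) + sim * n
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("ana"))  # -> ['song_c']: ana's habits resemble ben's, who plays song_c
```

Because ana's play counts look much more like ben's than cara's, ben's listening dominates the score for the one song ana hasn't heard.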
In October 2021, Luckbox reported on the music platform Spotify and its use of user data to recommend songs and artists. The article stated that “by tracking each song a user plays, what playlists they create or what podcast topics they choose, the platform can understand the mood or mindset of a user at any point in time.”
The article went on to say that “music consumption is personal and intimate, and the ability to understand users’ behavioral data is key for creating Spotify’s personal profiles.” (Read more in Spotify Knows More About You Than Your Taste in Music on luckboxmagazine.com).
Platforms like Spotify use multiple recommender systems for different recommender categories, including Daily Mixes, Discover Weekly, Artist Radios, Jump Back In, Made For You, Your Top Mixes, Fresh Finds and Uniquely Yours, Seaver says.
“There’s tons of data you can put in there,” he notes. “Did you turn up the volume when a certain song came on? Did you skip a song? Maybe that’s a sign you didn’t like it. Do [platforms] keep showing you this song at the top of a playlist and you never click on it? These things can be implicit signals of your preferences that get used in these systems as guesses about what you like and don’t like.”
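The implicit signals Seaver describes can be pictured as a simple weighted tally. The event names and weights below are hypothetical—production systems learn such weights from data rather than hand-picking them—but the sketch shows how scattered interactions become a guess about preference.

```python
# Hypothetical weights for implicit signals; real systems learn these from data.
WEIGHTS = {
    "play_full": 2.0,          # listened to the whole song
    "volume_up": 1.0,          # turned the volume up when it came on
    "skip": -1.5,              # skipped partway through
    "shown_not_clicked": -0.5, # kept at the top of a playlist, never clicked
}

def preference_score(events):
    """Sum weighted implicit signals into a rough like/dislike guess."""
    return sum(WEIGHTS.get(e, 0.0) for e in events)

print(preference_score(["play_full", "volume_up", "play_full"]))  # 5.0 -> likely liked
print(preference_score(["shown_not_clicked", "skip"]))            # -2.0 -> likely not
```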
Although the systems are not precisely the same on different platforms, they share the goal of making accurate predictions.
AI timeline
The earliest noted work in AI began in the mid-20th century. AI research flourished from 1957 to 1974 as machine learning algorithms improved and researchers developed a better sense of which algorithms to apply to which problems. Many of the field's goals were achieved during the 1990s and early 2000s.
Today’s recommender systems began appearing in the mid-1990s, Seaver says. Ringo, one of the first, was developed at the Massachusetts Institute of Technology Media Lab. It used collaborative filtering but relied on ratings instead of listening history.
Early systems used explicit ratings to estimate how much a user liked a certain artist, song or genre. That began to change around 2010, about the time Seaver began his research.
“It wasn’t obvious that people cared about getting recommendations predicted so precisely,” Seaver says. “There was also a growing availability of interaction data. It suddenly became possible to have a record of literally every song that someone listened to on Spotify, Pandora or whatever platform people use.”
Spotify doesn’t ask users to pick their favorite songs. The system already knows based on data profiles and listening behavior. The systems can also recommend music based on what a listener is doing, such as working out, studying, riding a bike or making dinner.
Pandora was one of the first music services to offer recommendations, primarily through artist radios. Today, the platform tells users why a song or artist is recommended—factors like rhythm, groove or female vocals.
Beyond recommendation
These systems aren’t just for recommending songs. They now compose original music to fit an individual’s tastes and needs.
A company called AIVA (Artificial Intelligence Virtual Artist) is doing exactly that. It has compiled a massive database of music stored in many formats with the goal of creating personalized compositions and playlists for artists and consumers.
The AIVA system knows every chord and note, including pace and rhythm. It searches for patterns to understand the basic style of the music. Then the system predicts which notes and chords should come next.
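AIVA's actual models are proprietary, but the pattern-then-predict idea can be illustrated with a deliberately tiny stand-in: a first-order Markov chain that counts which note follows which in a toy melody, then predicts the most common successor. The melody and note names are invented for the example.

```python
from collections import Counter, defaultdict

# Toy melody corpus; a first-order Markov chain is only a stand-in for
# whatever (proprietary) model AIVA actually uses.
melody = ["C", "E", "G", "E", "C", "E", "G", "C"]

# Count how often each note follows each other note.
transitions = defaultdict(Counter)
for cur, nxt in zip(melody, melody[1:]):
    transitions[cur][nxt] += 1

def predict_next(note):
    """Return the most frequently observed follower of `note`, or None."""
    followers = transitions[note]
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("E"))  # 'G' — G follows E twice in the corpus, C only once
```

A real composition system works over chords, rhythm and long-range structure rather than single next notes, but the principle—learn patterns from a corpus, then sample what plausibly comes next—is the same.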
AIVA started with classical music compositions for orchestras but later created original rock and pop music. It determines notes and chords, but the music is still performed by real people.
The system’s plagiarism tracker detects notes that may already be under copyright, so there’s little room for duplication. Ashkhen Zakharyan, head of customer development at AIVA, says the AI systems are based on stochastic algorithms, “meaning that the likelihood of AIVA generating exactly the same composition twice is practically impossible when trained properly.”
AIVA’s musical ear recognizes whether a composition is good or bad. It creates compositions for a full orchestra and can read scripts for games or movies to discern emotion and map it into music.
“AIVA can supercharge the creativity of composers,” Zakharyan says. “Those who face writer’s block and feel stuck can get a quick start with AIVA, generate several fresh ideas and take them wherever they want and shape them according to their taste, needs and skills.”
AIVA can fill gaps where artists’ bandwidth is limited, writing hours of original music far faster than any human could. But how does that affect human composers and the people who listen to the music?
“It’s true that a lot of people’s music listening has been sort of enclosed within these systems,” Seaver says. “Our tastes in music are absolutely tied up in the technologies that we have, and our basic idea of what taste is, is wrapped up in the idea of selecting among audio recordings.”
The future of recommendation
Some listeners are trying to minimize the data footprint on music platforms, Seaver notes. The desire for data privacy could eventually restrict information collection on the platforms, he says.
Seaver’s book Computing Taste: Algorithms and the Makers of Music Recommendation was released in December and describes his findings on recommender systems and the developers behind the algorithms. He concludes there’s no way to base predictions solely on how music sounds—there are multiple theories of taste and they are ever-changing.
In his book, Seaver puts it this way: “As the makers of music recommendation work, they make choices and bring in ideas that structure the open plan. These choices affect what people hear; they alter the circulation of music; they change what it means to have taste in a world filled with computers.”
LIKE THAT? TRY THIS.
Algorithms suggest songs based on users’ listening habits, rather than on genre or sonic similarity alone.
Because The Rockhound listens to:    Spotify recommends:
Kurt Vile                            My Morning Jacket
Fleet Foxes                          Bon Iver
Dr. Dog                              Shakey Graves
Taylor Swift                         Harry Styles
Fleetwood Mac                        Creedence Clearwater Revival