As allegations of artificial intelligence use tarnish the release of Spotify’s annual “Wrapped” series, a bigger concern arises: why don’t we know for sure?
By: Virginia "Ginger" Gilbert
Good news, Apple Music fans: Spotify is in the hot seat.
For context: the annual Spotify Wrapped release, compiled and presented by the music streaming service each November, has achieved icon status among modern marketing campaigns for its analytical wardrobe of fun facts about users’ listening habits throughout the year, presented alongside bold, colorful graphics. The Wrapped tradition has gone viral every year since it began in 2016, as users excitedly share and react to their top five tracks, most-streamed artists, and the cutesy visual translation of their “listening personality” into star signs or superheroes. It has long set the platform apart from other streaming services through its innovative combination of data interpretation and artistry.
That is…until this year.
Following the drop of this year’s rewind, Spotify users took to social media not to show off their niche top songs or listener status, but to express disappointment with their end-of-year recap. Many noted inconsistencies between their Spotify Wrapped results and the other platforms they use to track their streaming, inaccurate portrayals of their music libraries, and the absence of elements historically included in the release, such as genre-based analysis and “listening characters.” The in-your-face graphics, roundup of data, and bubbly one-liners were there, but something about them was…off.
After a quick analysis of Wrapped’s underwhelming performance, social media sleuths reached one conclusion: artificial intelligence.
Spotify wasn’t exactly shy about embracing AI technology in this year’s “Wrapped” package. Throughout the year, the platform had been piloting a series of “AI DJs” and playlist generators in preparation for the debut of an AI-generated “Wrapped Podcast,” which discussed each song in a user’s year-end top five. What users began to suspect after watching their “Wrapped” slideshows, however, is that AI played a larger role in Wrapped than Spotify was advertising. Beyond the creative dissonance and emotional distance that fueled the general dissatisfaction, the wording and conceptualization of Wrapped were simply off-beat, pointing to AI as the reason for the poor outcome. One popularly cited example was a series of phrases in the Wrapped graphics, describing the types of music a user listened to each month, that simply felt AI-generated: clunky, irrelevant, and generic. (One of mine was “Boujee Football Rap.” Seriously, what does that even mean?)
Sure, people could be wrong. It’s very possible that the public just lacks self-awareness in terms of their listening patterns. Maybe I did listen to “Block Party” by DJ Laz 100 times in 2024 and simply…didn’t realize it? That’s the point of Spotify Wrapped, right? To remind us of playlists forgotten?
But that’s kind of my point: consumers shouldn’t have to act as detectives, piecing together hunches, online finger-pointing, and AI-detection tools, to determine how large a role AI is playing in a company’s output. That sort of disclosure should be inseparable from the use of the technology. Whether or not Spotify used AI to create its 2024 Wrapped is ultimately irrelevant. Maybe the interns forgot to record some data, so they pressed a few buttons, and Spotify Wrapped 2024 was born. Maybe a team of professionals worked all year to publish a minimalistic, to-the-point Wrapped, and they missed the mark. Maybe Wrapped was just on Spotify’s back burner this year. Who cares, right? Even if an entirely human team developed the year-end analytics, the accusations Spotify is facing are proof enough of the downside of unregulated AI use. What matters about AI’s role in this situation is not the part it actually played, but the public’s uncertainty about it.
The reality is that the growing use of AI in media paired with a lack of transparency in its use is driving a wedge between companies and consumers.
The air of frustration observed in recent months, over everything from Coca-Cola ads to midterm assignments on Canvas, makes it clear that the growth of AI use without policies requiring its disclosure fosters an increasing paranoia around consuming media. An ethical cloud gathers when consumers are “tricked” into consuming AI-generated content presented as human creation, and when viewers can’t depend on companies to be straightforward about their use of AI, they are left to rely on their own intuition to tell human brains from microchips.
This is where situations like Spotify’s come in: Wrapped might have been clean of AI, but the undisclosed presence of AI in other media has diminished trust between brands and consumers and introduced an eerie inability to know for certain when something is or is not artificial.
Between the dystopian unease of looking at a human face generated by a computer program and the very real threat of mass layoffs following AI integration, consumers are entitled to know where artificial intelligence sits in the media they consume. Though companies may innocently be acting in their best financial interest by integrating AI, it can be perceived as intentional manipulation when the extent of that use is not divulged to a public that already sees AI as an active threat to its role in society and, further, to its perception of reality.
While artificial intelligence’s conquest of the monotonous (performing routine calculations, sorting data patterns, and so on) might be a digestible form of AI integration, it is inevitable that people grow uncomfortable when this technology poaches jobs in fields viewed as intrinsically, uniquely human, like the arts or media production.
So, as companies replace nine-to-five creative teams with 24/7 computers, they need to remember the objective behind the media they produce: humanizing the brand enough to build client relationships and earn trust. Sure, AI is a cutting-edge way to cut costs, but is it worth it if your consumers come to see your brand as heartless?
Really, how can you humanize an image generated by a computer? Do we want to find out?
Spotify’s failure comes not from the use of AI or the lack thereof; it arises from a series of missteps rooted in a continued mishandling of AI that leaves the public feeling misled. But what is the solution in cases like these? How do we prevent this technology from dividing our workforce and our approach to culture?
At the heart of it, we need a way to address the unprecedented implications of an unpaid, round-the-clock, human-brained “employee” with an unlimited skillset joining the workforce or picking up hobbies. Without policy that promotes disclosure of AI use and transparency in AI industries, the benefits AI can provide are suppressed by the public’s lack of trust in it.
AI doesn’t have to be a case of corporations flying too close to the sun; regulated well enough to earn the public’s trust, it can truly empower a more efficient and healthier workplace. Until then…keep soaring, Spotify.
Virginia Gilbert is a first-year Finance major at the University of Florida and is from Jupiter, Florida. She is an online writer for Rowdy Magazine and her top song on Spotify this year was “XO” by Beyonce.