Artificial Intelligence is destroying everything, including music

Graphic by Julia Norkus

By Stephanie Weber

Artificial Intelligence (AI) is all anyone can talk about lately. It has infiltrated discussions in technology, philosophy, and academia, at the institutional level but also in the classroom. AI is everywhere: in film, in broadcasting, even in the music we listen to. Opinions on the technology tend to sit at opposite ends of the spectrum, but pushback against new technology is normal. Misconceptions abound, and ready or not, AI is heading your way.

A lot of AI discourse comes from misinformation, especially because the technology is constantly changing. There are a plethora of free AI platforms, with ChatGPT and Midjourney being two of the most well known. ChatGPT is a large language model: it learns statistical patterns from enormous amounts of human-written text (its training data). When a user types, “Write me an email to a colleague saying I can’t come into work because I’m sick,” it generates a response by predicting, word after word, the text most likely to follow, based on the patterns in everything it has already seen; conversations like yours may then be collected to help train future versions. The machine learns via human-designed algorithms, producing output aligned with what it already knows while its makers keep expanding its database. But if a model learns from inaccurate or biased data, it will generate incorrect or even offensive responses.
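To make that idea concrete, here is a toy sketch of the underlying principle: a program that learns which words follow which from a tiny sample of text, then generates new text from those patterns. The corpus and wording below are invented for illustration; real systems like ChatGPT use neural networks and vastly more data, but the core move of predicting a likely next word is the same.

```python
import random
from collections import defaultdict

# A tiny, made-up training corpus (illustrative only).
corpus = (
    "i cannot come into work today because i am sick . "
    "i cannot come to the meeting because i am away . "
    "i am sorry i cannot come today ."
)

# Count which words were observed following each word (a bigram table).
followers = defaultdict(list)
words = corpus.split()
for current_word, next_word in zip(words, words[1:]):
    followers[current_word].append(next_word)

def generate(start: str, length: int = 12) -> str:
    """Generate text by repeatedly sampling a likely next word."""
    word, output = start, [start]
    for _ in range(length):
        if word not in followers:  # dead end: no observed continuation
            break
        word = random.choice(followers[word])  # sample from what was seen
        output.append(word)
    return " ".join(output)

print(generate("i"))
# Possible output: "i cannot come into work today because i am sick ."
```

Notice that the program can only recombine what it has already seen, which is exactly why flawed or biased training data produces flawed output.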

AI is complicated, and institutions and industries are scrambling to keep up. Even Emerson College issued a statement about using this technology in the classroom, concluding, like most institutions, that AI “has challenged and will challenge virtually all industries, organizations, societies, and governments to react and adapt.” AI is everywhere, has been for a while, and is infiltrating, and possibly threatening, every part of life, including music production.

In the music industry, AI can produce many different outcomes. For a good laugh, look up Frank Sinatra singing Radiohead’s “Creep,” Plankton singing Adele’s “Rolling in the Deep,” or Toad singing Sia’s “Chandelier.” These funny videos are made possible by AI voice cloning, giving YouTube and TikTok users something to send their friends. The same technology can recreate the voices of iconic singers and musicians, an interesting tool for reviving the sound of legendary artists, especially those who have died.

Even The Beatles released what they are calling their final song on Nov. 2, using AI in the process. Aptly titled “Now and Then,” it completes a John Lennon song that survived only on a 1977 tape recording. The remaining Beatles, Paul McCartney and Ringo Starr, tried to finish it years ago but ran into a problem: Lennon’s voice couldn’t be isolated from the piano on the demo. Machine-learning source separation, which pulls individual lines of sound out of a mixed recording, finally provided the solution. In a short documentary about the making of the song, Sean Ono Lennon (the son of Lennon and Yoko Ono) says, “This is the last song that my dad [John Lennon], and Paul [McCartney], and George [Harrison], and Ringo [Starr] will get to make together.” That isn’t quite true; McCartney and Starr made the song together, not Lennon and Harrison.

“Now and Then” trades on nostalgia, a longing for simpler times. The Beatles peaked in the 1960s but remain one of the most popular bands in the world. Still, there is a yearning for when music production was simpler: artists wrote songs and went into a recording studio, a space reserved for those with access and status. Abbey Road is a destination for Beatles fans, but with AI, music production can happen anywhere in the world, from teenage bedrooms to the workplace, as the sketch below suggests. The industry is changing rapidly, and artists and their estates are adapting to preserve themselves.
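How accessible has source separation become? As a rough sketch, here is how a hobbyist might isolate the vocals from a recording using Deezer’s open-source Spleeter library. The file names are placeholders, and this is an illustration of the general technique, not the custom tooling reportedly built for “Now and Then.”

```python
# A minimal sketch of ML source separation, assuming Deezer's
# open-source Spleeter library is installed (pip install spleeter).
# "demo_tape.mp3" is a placeholder file name for illustration.
from spleeter.separator import Separator

# "2stems" splits a mix into vocals and accompaniment; the 4- and
# 5-stem models also pull out drums, bass, and even piano.
separator = Separator("spleeter:2stems")

# Writes vocals.wav and accompaniment.wav into output/demo_tape/
separator.separate_to_file("demo_tape.mp3", "output/")
```

The 5-stem model even separates piano from voice, which is precisely the problem that stalled “Now and Then” for decades.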

Before this song, though, glitchcore and hyperpop artists had been using AI for years, folding the tools they already had into the creative process. There are loads of websites where musicians can build tracks from pre-generated AI beats, and some give the user control over other audio parameters like BPM and frequencies. This lets people who don’t have access to mixing boards or studios make their own music, and gives them a chance to earn money and build a career on their talent. These often-free websites open up a historically exclusive industry to those who couldn’t otherwise afford a way in. AI can break down financial barriers. It can also produce music incredibly quickly, streamlining production down to just the musician and the machine, which puts money in the pockets of small-time musicians much faster. That’s especially true for young people: one survey found that “Two thirds (63%) of young creatives say they are embracing Artificial Intelligence (AI) to assist in the creative processes in music making,” which only increases the amount of music being made and streamed.

Clearly, AI has its benefits. It increases accessibility, and people are actually using it, whether as producers or listeners. But a quick laugh or a faster paycheck isn’t worth the possible long-term consequences of AI-generated music.

Most of these consequences trace back to the line between appropriation and plagiarism. Appropriation is a postmodern practice in which artists (musicians, but also photographers, filmmakers, painters, and others) take someone else’s work and rework it into their own. Think of musicians sampling songs, ideally giving credit where credit is due. Vanilla Ice’s 1990 hit “Ice Ice Baby” is a well-known example. He sampled Queen’s “Under Pressure” but changed the bassline by a couple of notes, claiming that this made the song entirely different. Queen’s lawyers threatened him with a copyright infringement suit; the dispute never went to trial, and the song climbed the charts around the world. Vanilla Ice also appropriated Black culture, mixing songs to fit his newfound “vibe.” Appropriation still faces criticism; artists are having the same conversations they had decades ago about the limits of borrowing and ownership in any creative industry. The same thing is happening with AI-generated music: sometimes an AI will spit out lyrics that already exist, telling the user they’re good to go and can use the content without fear of being sued.

The bigger conversation around AI concerns what counts as “original content.” If an artist uses an AI generator to write a whole song, or even a single lyric, does the musician own that content (implying it’s an original work), or does the ownership lie with the AI program? Chorus, a songwriting platform that aims to help creatives write more fluently, and ChatGPT both raise the question. One young artist explained, “You can type in a lyric or a phrase and it’ll essentially give you anywhere between seven and 10 alternative lyric ideas based off the ones that you've put in.” Neil Tennant of Pet Shop Boys endorses the idea, saying AI can be useful for finishing a song written 10 years ago or for combating writer’s block. As harmless or fun as this may sound, using these programs has major consequences.

“Music publishers Universal Music, ABKCO and Concord Publishing sued the artificial intelligence company Anthropic.”

“The Universal Music Group has been fighting to get AI-generated songs taken down from streaming sites.” 

“A proposed class action was filed against Microsoft, GitHub and OpenAI claiming the billions of lines of computer code that their AI technology analyzes to generate its own code constitutes piracy.”

The quotes above are just some of the headline lawsuits built on the collision between AI and creative industries, advocating for artists’ rights against plagiarism. AI tools sometimes reproduce existing lyrics by recognizable artists and present them to new users as original content. Because the legal framework around AI is so thin, some musicians wrote an open letter to the U.S. Congress, signed by 191 artists. In the letter, Björk says, “We write this letter today as professional artists using generative AI tools to help us put soul in our work.” An artist like Björk will always be known as a creative mind, no doubt, but AI hinders the creative process, letting users shortcut, and so discredit, the hard work that musicians have always put into their craft.

Despite its benefits, using AI is shameful. It erodes artist autonomy and strips away legitimacy. The idea that anyone can be an artist has now broadened into the idea that any machine can be one. Pushback against this technology is justified, because rights to privacy and protection are essential to humanity. Technology companies must be held accountable for advancements made in the name of profit, and users need a seat at the table just as much as musicians do. As Emerson students, many of us want to create, whether in journalism, filmmaking, or performance art. Without consideration for our own ownership in the creative process, our education and our futures lose their meaning.
