Taking a step back from manual production, many artists now employ artificial intelligence in the music-making process. From composition apps and mastering platforms to song-identifying tools and highly personalized playlists, AI is changing the way music is created and heard.
AI Music: Artificial Intelligence in Music Generation and Composition
More than 25 years after Kurt Cobain’s death, a new “Nirvana” song was posted online. Why the scare quotes? The track wasn’t actually some long-lost recording from the seminal grunge trio’s active era, unearthed from the vaults. Dubbed “Drowned in the Sun,” it was the product of two AI frameworks: Google’s Magenta, which was used to produce the music, based on input data of dozens of original Nirvana recordings, and a neural network that generated lyrics, which were then delivered by the singer of a Nirvana tribute band.
The track really does sound a lot like Nirvana — the loud-quiet-loud dynamics, the fill-happy drumming. True, the audio is a bit muddy, but the generative complexity on display is striking, said Oliver Bown, author of Beyond the Creative Species: Making Machines That Make Art and Music.
“It’s able to coherently generate a multi-instrumental piece of music with metrical structure, musical phrases, progressions that make sense, all while doing it at a granular audio rate,” he said.
At the same time, AI mimicry has a ceiling of interest. Even diehard Beatles fans probably don’t listen much to “Daddy’s Car,” an AI-generated track from 2016 in the style of Sgt. Pepper’s. In these examples, the sonics can be replicated, but not the sense of innovation that defined the artists.
“Nirvana was famous for doing things in a different way than had come before, but machine learning is really good at doing things the way that humans have already done them before,” said Jason Palamara, assistant professor of music and arts technology at Indiana University-Purdue University Indianapolis.
While these kinds of AI tracks garner headlines, musicians are also using AI for composition in ways that resemble a collaborative, forward-thinking tool more than an imitation-game parlor trick with limited returns.
Writer Robin Sloan and musician Jesse Solomon Clark recently created an album using OpenAI’s Jukebox, which, like Magenta, can predictively continue a musical snippet. Holly Herndon’s 2019 album, Proto, described by Vulture as the “world’s first mainstream album made with AI,” incorporated a neural network that generated audio variations based on hours of vocal samples.
“[Herndon is] working with AI as this sort of extended choir,” said Bown.
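Jukebox and Magenta operate at vastly larger scale, but the core idea of predictively continuing a musical snippet can be sketched with a toy first-order Markov model over MIDI pitches. This is purely illustrative — not either system's actual method — and the training "corpus" is invented:

```python
from collections import defaultdict
import random

def train_markov(notes):
    """Count transitions between consecutive notes (a first-order Markov model)."""
    transitions = defaultdict(list)
    for a, b in zip(notes, notes[1:]):
        transitions[a].append(b)
    return transitions

def continue_melody(transitions, seed, length, rng=None):
    """Extend a seed note by repeatedly sampling a next note from observed transitions."""
    rng = rng or random.Random(0)
    melody = [seed]
    for _ in range(length):
        options = transitions.get(melody[-1])
        if not options:  # dead end: this note was never followed by anything in training
            break
        melody.append(rng.choice(options))
    return melody

# Toy training data: MIDI pitch numbers (60 = middle C).
corpus = [60, 62, 64, 65, 64, 62, 60, 62, 64, 62, 60]
model = train_markov(corpus)
print(continue_melody(model, seed=60, length=8))
```

Real systems replace the transition table with deep neural networks and operate on raw audio or rich symbolic data, but the generate-by-continuation loop is the same shape.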
Inspired by these applications, artists and technologists hope for an even greater step forward. For experts like Palamara and Roger Dannenberg, a professor of computer science, art and music at Carnegie Mellon University, the holy grail of AI in music would be models that fruitfully collaborate with artists in real-time performances. Rather than edit down the interesting bits offered by a model that ingested reams of training data, humans could bounce musical ideas off the AI, just like a bass player and drummer in a rhythm section.
Rudimentary versions exist, but with a host of shortcomings. A lack of sophisticated real-time musical interfaces means simple details for humans (like synchronizing and beat tracking) are a major challenge for models, Dannenberg said. The data limitations are pronounced, too. The Nirvana track stems from hours of rich MIDI data, but a live performance generates scant audio in comparison. So for live music generation, “you have to kind of dumb it down,” said Palamara.
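To make the beat-tracking challenge concrete, here is a minimal sketch of one ingredient — estimating tempo from already-detected note onsets, using the median inter-onset interval to smooth over human timing jitter. The onset times are invented, and real-time systems must additionally detect onsets from raw audio and resynchronize continuously:

```python
import statistics

def estimate_tempo(onsets):
    """Estimate tempo in BPM from onset times (seconds), using the
    median inter-onset interval to resist outliers from timing jitter."""
    intervals = [b - a for a, b in zip(onsets, onsets[1:])]
    if not intervals:
        raise ValueError("need at least two onsets")
    return 60.0 / statistics.median(intervals)

# Onsets roughly 0.5 s apart, with slight human jitter.
onsets = [0.00, 0.50, 0.99, 1.50, 2.00, 2.50]
print(round(estimate_tempo(onsets)))  # prints 120
```

Even this simplified version hints at the difficulty: a human drummer tracks tempo changes, dropped beats and syncopation effortlessly, while a model must infer all of it from noisy, incomplete signals in real time.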
Still, hope persists for a truly transformative AI accompanist in the not-too-distant future. “It’s a long shot, but it could have a huge impact,” Dannenberg said.
AI Music Examples
While advanced compositional AI remains the most interesting AI-in-music end game for many, artificial intelligence has already been impacting the music industry for years. AI-generated mindfulness ambient music, rights-free music generation for content creators and automation-assisted mixing and mastering have all matured into major industries in the last five or so years. And, of course, the recommendation systems pioneered in the music-streaming ecosystem have had major implications for all product recommendation engines.
Here’s a closer look at some notable players.
AI Music Companies to Know
- iZotope
- Aiva Technologies
- Amper Music
- Brain.fm
- LANDR
- Muzeek
- Pandora
- Splash
- Shazam
- Output
iZotope
Location: Cambridge, Massachusetts
Audio technology company iZotope emerged as a pioneer in AI-assisted music production back in 2016 with the release of Track Assistant. The mixing feature uses AI to generate custom effects settings based on the sonic palette of a given track. Today, the company offers a full suite of assistants that tailor starting-point suggestions for vocal mixes, reverb application and mastering. Through its AI-centric software and products like Spire Studio, iZotope tools have been used to mix and master records by artists such as Beyoncé, Kendrick Lamar and Foo Fighters.
Aiva Technologies
Location: Fully Remote
Aiva Technologies is the creator of a soundtrack-producing artificial intelligence music engine. The platform enables composers and creators to make originals or upload their work to create new variations. Depending on the plan chosen, creators can also forgo the worry of licensing because the platform offers full usage rights. Rather than replacing musicians, Aiva wants to enhance the collaboration between artificial and organic creativity.
Amper Music
Location: New York, New York
Amper Music provides an AI music tool that performs, composes and produces custom music for media content. The web application lets creators choose a composition's style, mood and length, tailoring a track to fit their content with no additional musical knowledge or skills. Amper was acquired by Shutterstock in 2020, but its Score program remains available as a browser-based tool and an API, allowing users to create a custom, AI-generated track in fewer than 10 clicks.
Brain.fm
Location: Chicago, Illinois
Brain.fm is a web and mobile application that provides atmospheric music to encourage rest, relaxation and focus. Created by a team of engineers, entrepreneurs, musicians and scientists, the company’s music engine uses AI to arrange musical compositions and add acoustic features that enable listeners to enter certain mental states. In a pilot study led by a Brain.fm academic collaborator, the application showed higher rates of sustained attention and less mind-wandering, which led to a boost in productivity.
LANDR
Location: New York, New York
LANDR is a creative platform that enables musicians to create, master and sell their music. The company’s mastering software uses AI and machine learning to analyze track styles and enhance parameters based on its reference library of genres and styles. Beyond AI-enhanced mastering, LANDR enables musicians to create quality music and distribute it on major streaming platforms while avoiding the costs associated with a professional studio.
Muzeek
Location: San Francisco, California
Muzeek is a music-generating AI algorithm that creates customized, licensed music for video content. The platform analyzes videos, matching length and rhythm to create relevant soundtracks for creators, developers or agencies that need original, professional-quality music. As social and sharing platforms crack down on copyright infringements, Muzeek provides a new way to legally pair video and music.
Pandora
Location: Oakland, California
Pandora is a personalized streaming radio service that provides customized music based on a user’s preferences and listening habits. When users select “thumbs up” or “thumbs down” in response to songs, Pandora’s intelligent technology learns more about their likes and dislikes. With access to data derived from 80 billion thumb selections, Pandora uses machine learning to sift through thousands of new releases each week while assisting human curators in finding new artists and helping to detect fake ones.
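Pandora's actual recommendation models are far more sophisticated, but the thumbs-based feedback loop can be sketched as per-feature preference weights that get nudged up or down with each thumb. The feature names and learning rate below are invented for illustration:

```python
def update_preferences(weights, song_features, thumb, lr=0.1):
    """Nudge each feature's weight up on a thumbs-up (thumb=+1)
    and down on a thumbs-down (thumb=-1)."""
    for feature in song_features:
        weights[feature] = weights.get(feature, 0.0) + lr * thumb
    return weights

def score(weights, song_features):
    """Rank a candidate song by summing the learned weights of its features."""
    return sum(weights.get(f, 0.0) for f in song_features)

prefs = {}
update_preferences(prefs, ["grunge", "loud_quiet_loud"], thumb=+1)
update_preferences(prefs, ["smooth_jazz"], thumb=-1)
print(score(prefs, ["grunge"]) > score(prefs, ["smooth_jazz"]))  # prints True
```

The same shape — explicit feedback adjusting a learned preference model that then ranks candidates — underlies recommendation engines well beyond music.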
Splash
Location: Fully Remote
Since its 2017 launch under the name “Popgun,” Splash has remained focused on bridging AI and music production in an amusing, intuitive fashion. Popgun’s intelligent digital instruments, which interacted with users and each other, gave way to Splash’s game-based, AI-assisted music tools that found a big audience in Roblox. Now, the company aims to focus on the intersection of smart, user-friendly music composition and the metaverse.
Shazam
Location: Fully Remote
Available to users even before Apple’s App Store existed, Shazam was one of the first consumer AI services. Now part of the Apple family, Shazam uses intelligent technology to listen to and identify songs in just a few seconds. It works by taking a digital fingerprint of a song, matching it against a massive library of previously fingerprinted music and presenting the matched song to the user.
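A minimal sketch of the fingerprint-and-match idea: hash pairs of spectral peaks by their frequencies and time offset (so the hashes survive a clip starting mid-song), then match a snippet against a library by hash overlap. The peak data is synthetic, and Shazam's real pipeline is far more involved:

```python
import hashlib

def fingerprint(peaks, fan_out=3):
    """Hash pairs of (time_frame, frequency_bin) spectral peaks into
    fingerprint keys; using only frequencies and the time offset makes
    the hashes invariant to where a clip starts."""
    hashes = set()
    for i, (t1, f1) in enumerate(peaks):
        for t2, f2 in peaks[i + 1 : i + 1 + fan_out]:
            key = f"{f1}|{f2}|{t2 - t1}"
            hashes.add(hashlib.sha1(key.encode()).hexdigest()[:10])
    return hashes

def best_match(query, library):
    """Return the library track sharing the most fingerprint hashes with the query."""
    return max(library, key=lambda name: len(query & library[name]))

# Toy peak lists: (time_frame, frequency_bin) pairs.
song_a = [(0, 40), (1, 55), (2, 40), (3, 62)]
song_b = [(0, 10), (1, 90), (2, 33), (3, 70)]
library = {"song_a": fingerprint(song_a), "song_b": fingerprint(song_b)}

snippet = fingerprint(song_a[1:])  # a clip from the middle of song A
print(best_match(snippet, library))  # prints song_a
```

In production, the peaks come from a spectrogram of noisy microphone audio, and the library holds billions of hashes indexed for sub-second lookup.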
Output
Location: Los Angeles, California
Output’s signature software, Arcade, lets users build and manipulate loops into full-length tracks. Users can access audio-preset plug-ins, then adjust sonic details like delay, chorus, echo and fidelity before minting a track. The latest version also features an AI-powered tool called Kit Generator, which lets users generate a full kit, or collection of sounds, from discrete audio samples. Output’s technology has supported music by artists like Drake and Rihanna and the scores of Black Panther and Game of Thrones.
Stephen Gossett contributed reporting to this story.