Music And Artificial Intelligence: A Bond That’s Growing By Leaps And Bounds

A robot automation (Image by Gerd Altmann / CC0 / Pixabay)

In the last decade, artificial intelligence (AI) has become more and more pervasive in everyday life, from online ads that seem to know exactly what you’re looking for, to music composition and other creative uses.

Just the idea of making music with AI raises questions about the nature of creativity and the future of human composers. From useful tools to groundbreaking prototypes, here’s a look at some of the latest innovations using AI in the music writing process.

ScoreCloud Songwriter

DoReMIR Music Research AB recently announced the launch of ScoreCloud Songwriter, a tool that converts original music into lead sheets. The software works from a recording made with a single microphone, which can include both vocals and instruments. AI models first separate the vocals, then transcribe the music, melody and chords alike, along with the lyrics in English. The result is a lead sheet with melody, lyrics, and chord symbols.
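ScoreCloud’s full pipeline is proprietary, but the melody-transcription step it describes can be illustrated with off-the-shelf tools. Here is a minimal sketch, assuming a monophonic vocal recording in a hypothetical file melody_take.wav, using the open-source librosa library’s pYIN pitch tracker; ScoreCloud’s actual models (source separation, chord detection, lyric transcription) are far more sophisticated.

```python
# Minimal sketch: monophonic melody transcription with librosa's pYIN
# pitch tracker. This is NOT ScoreCloud's method, just an illustration
# of the transcription step described above.
import librosa
import numpy as np

# Load a (hypothetical) single-microphone recording of a sung melody.
y, sr = librosa.load("melody_take.wav", sr=None, mono=True)

# Estimate the fundamental frequency (f0) frame by frame.
f0, voiced_flag, voiced_probs = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
)

# Keep only frames where a pitch was confidently detected, and
# quantize each frequency to the nearest note name.
notes = [
    librosa.hz_to_note(freq)
    for freq, voiced in zip(f0, voiced_flag)
    if voiced and not np.isnan(freq)
]

# Collapse consecutive duplicate frames into single note events.
melody = [n for i, n in enumerate(notes) if i == 0 or n != notes[i - 1]]
print(melody)  # e.g. ['C4', 'E4', 'G4', ...]
```

A real system would also need onset detection for rhythm, chord recognition for harmony, and speech-to-text for the lyrics, which is where the heavy AI lifting happens.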

“Many established and emerging songwriters are brilliant musicians but struggle to notate their music for others to play,” said Sven Ahlbäck, CEO of DoReMIR, in a media release. “Our vision is that ScoreCloud Songwriter will help songwriters, composers and other music professionals such as educators and performers. It may even inspire playful use by music lovers who never thought they could write a song. We hope it will become an indispensable tool for creating, sharing and preserving musical works.”

Harmonai’s Dance Diffusion

Harmonai is a company that creates open-source models for the music industry, and Dance Diffusion is its latest innovation in AI audio generation. It uses a combination of publicly available models to generate bits of audio, so far around 1-3 seconds long, out of thin air, so to speak; these snippets can then be interpolated into longer recordings. Since it’s AI, it will keep evolving as more users contribute audio files for it to learn from. If you’re interested in how Dance Diffusion came about, here is a video interview with the creators.
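Harmonai distributes Dance Diffusion as notebooks and model checkpoints rather than a stable API, but the core idea, iteratively denoising random noise into a short audio clip, can be sketched generically. The toy below is a conceptual, self-contained illustration, not Harmonai’s actual code; an untrained stub network stands in for their real denoising models.

```python
# Conceptual sketch of diffusion-based audio generation (NOT Harmonai's
# actual code): start from pure noise and repeatedly denoise it into a
# short audio clip. The "denoiser" here is an untrained stub network.
import torch
import torch.nn as nn

SAMPLE_RATE = 44100
CLIP_SECONDS = 2   # Dance Diffusion works on roughly 1-3 second chunks
STEPS = 50         # number of denoising steps

# Stub denoiser: a real model would be a large U-Net trained on audio.
denoiser = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=9, padding=4),
    nn.ReLU(),
    nn.Conv1d(16, 1, kernel_size=9, padding=4),
)

@torch.no_grad()
def generate_clip() -> torch.Tensor:
    # Begin with pure Gaussian noise the length of the target clip.
    x = torch.randn(1, 1, SAMPLE_RATE * CLIP_SECONDS)
    for _ in range(STEPS):
        # Predict the noise component and remove a fraction of it; real
        # samplers (DDPM, DDIM, etc.) use carefully derived schedules.
        predicted_noise = denoiser(x)
        x = x - (1.0 / STEPS) * predicted_noise
    return x.squeeze()  # 1-D waveform tensor

clip = generate_clip()
print(clip.shape)  # torch.Size([88200]) -> two seconds of audio
```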

Here’s one of their projects, an AI-generated infinite bass solo that has been streaming since January 27, 2021. It is based on the work of musician Adam Neely.

The technology is still in the testing phase, but its potential impact is profound.

Google’s AudioLM

Google’s new AudioLM bases its approach to audio generation on the way speech is processed. Given a short snippet of piano music as input, it can generate a realistic continuation. Just as language combines sounds into words and sentences, music combines individual tones into melody and harmony, and Google’s engineers took their cues from advanced language modelling concepts. The AI captures both the melody and the overall structure and details of the audio waveform, reconstructing sound in layers designed to capture its nuances.
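Google has not released AudioLM publicly, so the following is a purely hypothetical sketch of the layered idea described above: a coarse “semantic” stage models melody and long-range structure, and a finer “acoustic” stage adds waveform detail. Every component, names included, is a stand-in stub.

```python
# Purely hypothetical illustration of AudioLM-style layered generation.
# Google has not released the model; every component here is a stub.
import torch

VOCAB_SEMANTIC = 512    # coarse tokens: melody and long-range structure
VOCAB_ACOUSTIC = 1024   # fine tokens: waveform detail and timbre

def sample_tokens(next_token_logits, length):
    """Naive autoregressive sampling loop over a stub model."""
    tokens = []
    for _ in range(length):
        probs = torch.softmax(next_token_logits(tokens), dim=-1)
        tokens.append(int(torch.multinomial(probs, num_samples=1)))
    return tokens

# Stage 1: generate "semantic" tokens capturing the broad musical shape.
# (Stub: random logits stand in for a trained Transformer.)
semantic = sample_tokens(lambda ctx: torch.randn(VOCAB_SEMANTIC), length=50)

# Stage 2: generate "acoustic" tokens that add waveform-level nuance,
# conditioned on the semantic layer (the conditioning here is notional;
# a real model would attend over the semantic tokens).
acoustic = sample_tokens(
    lambda ctx, cond=semantic: torch.randn(VOCAB_ACOUSTIC), length=200
)

# A neural audio codec decoder would then turn the acoustic tokens into
# a waveform; that final step is omitted here.
print(len(semantic), len(acoustic))
```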

Meta’s AudioGen

Meta’s new AudioGen uses a text-to-audio AI model to create both sounds and music. The user enters a text prompt like “wind is blowing”, or even a combination like “wind is blowing and leaves are rustling”, and the AI responds with an appropriate sound. Developed by Meta and the Hebrew University of Jerusalem, the system is capable of creating sound from scratch. The AI can also separate different sounds in a complex scene, for example when several people are speaking at the same time. Researchers trained the AI on a mix of audio samples, and it can generate new audio beyond its training data set. Along with sound effects, it can produce music, but that part of its functionality is still in its infancy.
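Meta has since released AudioGen code and weights through its open-source AudioCraft library. Assuming audiocraft is installed (and noting that the exact API may vary by version), generating the “wind is blowing” examples from text prompts looks roughly like this:

```python
# Minimal sketch of text-to-audio generation with Meta's AudioGen via
# the open-source audiocraft library (assumes `pip install audiocraft`;
# model name and API details may vary by version).
from audiocraft.models import AudioGen
from audiocraft.data.audio import audio_write

model = AudioGen.get_pretrained("facebook/audiogen-medium")
model.set_generation_params(duration=5)  # seconds of audio per prompt

# Text prompts like the examples above; the model returns one waveform
# per prompt as a batch of tensors.
prompts = ["wind is blowing", "wind is blowing and leaves are rustling"]
wavs = model.generate(prompts)

for i, wav in enumerate(wavs):
    # Write each clip to disk with loudness normalization.
    audio_write(f"sample_{i}", wav.cpu(), model.sample_rate, strategy="loudness")
```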

What’s next?

With AI music generation still in its infancy, it’s easy to dismiss its future impact on the industry. But it cannot be ignored.

An electronic band called YACHT recorded an entire album with AI in 2019, using technology that has already been surpassed. Essentially, they taught the AI how to sound like YACHT, and it wrote the music, which the band then turned into their next album.

“I’m not interested in being a reactionary,” said YACHT member and tech writer Claire L. Evans, voicing this ambivalence in a documentary about the release of the band’s 2019 AI-assisted album Chain Tripping (as quoted in TechCrunch). “I don’t want to go back to my roots and play acoustic guitar because I’m so freaked out about the upcoming robot apocalypse, but I also don’t want to jump into the trenches and welcome our new robot overlords.”

The onslaught of new technologies is relentless. The only option is to get on the train.
