*Whatever happens to musicians will happen to everybody.*
https://medium.com/@elluba/creative-ai-newsletter-9-art-design-and-music-updates-over-the-past-few-months-d0ccf838b72e
Creative AI newsletter #9 — Art, Design and Music updates over the past few months
by Luba Elliott
Oct 28
This is an occasional newsletter with updates on creative applications of artificial intelligence in art, music, design and beyond. Below is issue #9; you can subscribe to future editions here.
(...)
Music
YACHT released their new album Chain Tripping, for which they collaborated with many leading creative AI practitioners including Tom White, Magenta and Ross Goodwin. Jenna Sutela’s album nimiia vibié is the audio accompaniment to her earlier video installation work nimiia cétiï. Holly Herndon’s Proto is an album created with Spawn, an AI trained to reproduce different voices. Dadabots have been generating free jazz using AI from aboard NASA space probe Voyager 3. AIVA completed an unfinished piano piece by Antonín Dvořák. Yuri Suzuki reimagined Raymond Scott’s Electronium using Magenta. Google Creative Lab and NOAA trained an AI on humpback whale songs, and Kyle McDonald wrote about his experience on the project. Jai Paul created Bronze AI to generate unique and infinite playback of his piece Jasmine. Julianna Barwick made a sound art installation influenced by its environment for the new Sister City hotel in New York. Earlier this year, Warner Music signed a record deal for 20 albums with an algorithm.
Rebecca Fiebrink’s Sound Control is out: accessible software for making custom musical instruments with sensors. Leandro Garber’s AudioStellar is an open-source, data-driven musical instrument for latent sound structure discovery and music experimentation. There’s also a real-time voice cloning implementation by Corentin Jemine.
OpenAI released MuseNet, which can generate musical compositions with 10 different instruments and combine styles; Ars Electronica used it to complete an unfinished symphony by Gustav Mahler. Following the Bach Doodle in March, which harmonizes user melodies in Bach’s style using the Coconet model, Magenta have now released a dataset of the 21.6 million harmonizations. The team have also developed GrooVAE for generating and controlling expressive drum performances and MidiMe for personalising machine learning models, and introduced a new Colab notebook for generating piano music with Transformer. Meanwhile, Sony came up with DrumNet, with the aim of creating musically plausible drum patterns. Tero Parviainen’s Counterpoint studio released GANHarp, an experimental musical instrument based on AI-generated sounds, made with Magenta.js and the Magenta GANSynth model trained on acoustic instruments to generate continuously morphing waveform interpolations. Chris Donahue and Vibert Thio made the procedural music sequencer Neural Loops, and Andrew Shaw came up with MusicAutoBot, which uses a Transformer to generate pop music. Yi Yu and Simon Canales generated melodies from lyrics using a conditional LSTM-GAN. MIT researchers translated proteins into music and back; you can hear the sounds via their Amino Acid Synthesizer app. Christian S. Perone experimented with turning gradient norms into sound during training.
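To give a flavour of the gradient-sonification idea mentioned above, here is a minimal toy sketch (not Perone’s implementation — the model, the log-frequency mapping and all names here are assumptions for illustration): record the gradient norm at each SGD step of a tiny linear regression, then map each norm to the pitch of a short sine tone.

```python
import numpy as np

def gradient_norms(n_steps=32, lr=0.1, seed=0):
    """Train a toy linear model y = w*x with SGD and record the
    gradient magnitude at each step (stand-in for a real training loop)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=100)
    y = 3.0 * x + rng.normal(scale=0.1, size=100)
    w = 0.0
    norms = []
    for _ in range(n_steps):
        grad = np.mean(2 * (w * x - y) * x)  # d/dw of the MSE loss
        norms.append(abs(grad))
        w -= lr * grad
    return np.array(norms)

def sonify(norms, sr=8000, tone_dur=0.05, f_lo=200.0, f_hi=2000.0):
    """Map each norm to a sine-tone frequency (log scale) and
    concatenate the tones into one waveform in [-1, 1]."""
    scaled = (norms - norms.min()) / (norms.max() - norms.min() + 1e-12)
    freqs = f_lo * (f_hi / f_lo) ** scaled  # 0..1 -> log-frequency range
    t = np.arange(int(sr * tone_dur)) / sr
    return np.concatenate([np.sin(2 * np.pi * f * t) for f in freqs])

wave = sonify(gradient_norms())  # write to a WAV file or stream to hear it
```

As training converges, the gradient norms shrink, so the pitch falls over time — which is exactly the kind of audible training signal the experiment plays with.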
Here’s an overview of using neural networks for music generation that covers major projects from 2016 onwards. (...)