*It's sort of like you have a wah-wah pedal, only it's a big clump of neurons.*
Google has been working on neural networks for sound synthesis
Introduction
NSynth is, in my opinion, one of the most exciting developments in audio synthesis since granular and concatenative synthesis. It is one of the few neural networks capable of learning from and directly generating raw audio samples. Since DeepMind released WaveNet in 2016, Google Brain’s Magenta team and DeepMind have gone on to explore what’s possible with this model in the musical domain. They’ve built an enormous dataset of musical notes and also released a model trained on all of that data. That means you can encode your own audio with their model, and then use the encoding to produce fun and bizarre new explorations of sound.
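To make that concrete, here's a minimal sketch of the encode/synthesize round trip using the `fastgen` helpers that ship with Magenta's NSynth code. The file name, checkpoint path, and clip length are placeholders, not specific values from this post:

```python
from magenta.models.nsynth import utils
from magenta.models.nsynth.wavenet import fastgen

# NSynth's WaveNet works on 16 kHz mono audio.
sr = 16000
sample_length = 40000  # 2.5 seconds at 16 kHz

# Load a clip, trimmed/padded to a fixed length.
audio = utils.load_audio('my_sound.wav', sample_length=sample_length, sr=sr)

# Encode: the temporal encoder compresses raw samples into a much
# shorter sequence of 16-dimensional embedding frames.
encoding = fastgen.encode(audio, 'model.ckpt-200000',
                          sample_length=sample_length)

# Decode: the WaveNet decoder regenerates audio one sample at a
# time, conditioned on the embedding. This step is slow.
fastgen.synthesize(encoding, save_paths=['my_sound_resynth.wav'],
                   checkpoint_path='model.ckpt-200000',
                   samples_per_save=sample_length)
```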
(...)
Now we can explore interpolations of our own sounds! In the Jupyter Notebook, I show an example of mixing the breakbeat and the cello from above by simply averaging their embeddings. This is unlike adding the two signals together in Ableton, where you would just hear both sounds at once. Instead, we’re averaging the model’s representation of their timbre, tonality, and change over time, then decoding that back into audio. The result is far more powerful than a simple average of the waveforms.
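A sketch of that interpolation, again with placeholder file and checkpoint names: encode both clips, average the embedding frames elementwise, and hand the mixture to the decoder.

```python
from magenta.models.nsynth import utils
from magenta.models.nsynth.wavenet import fastgen

sr = 16000
sample_length = 40000
ckpt = 'model.ckpt-200000'

# Encode both sounds into embedding space: shape (1, frames, 16).
breakbeat = utils.load_audio('breakbeat.wav', sample_length=sample_length, sr=sr)
cello = utils.load_audio('cello.wav', sample_length=sample_length, sr=sr)
enc_breakbeat = fastgen.encode(breakbeat, ckpt, sample_length=sample_length)
enc_cello = fastgen.encode(cello, ckpt, sample_length=sample_length)

# Interpolate in embedding space rather than in the waveform: the
# decoder now has to invent audio for a timbre "between" the two
# sources, instead of just overlaying them.
enc_mix = (enc_breakbeat + enc_cello) / 2.0

fastgen.synthesize(enc_mix, save_paths=['breakbeat_x_cello.wav'],
                   checkpoint_path=ckpt,
                   samples_per_save=sample_length)
```

Nothing special about an even split, either: weighting the sum as `alpha * enc_breakbeat + (1 - alpha) * enc_cello` lets you crossfade between the two timbres in embedding space.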
My goodness, that sure is a Google toy.
