Google’s work with artificial intelligence is pretty incredible, but its most impressive progress comes from products that integrate with it. Now, a Google side project has brought us a touchscreen synthesizer that uses AI to enhance creativity.
The “NSynth Super” is an open-source, experimental project from Magenta, a research project at Google. Built on NSynth, a neural network that generates sounds, the synthesizer can generate entirely new sounds based on an instrument’s character. Rather than simply playing back recorded notes, the algorithm uses the core acoustic qualities of an instrument to create sounds.
To play around with these capabilities, the NSynth Super uses a central X/Y pad with an instrument assigned to each corner. Using your fingers on the touchscreen, you can mix the sounds of these instruments, but as The Verge points out, not in the way you might think.
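The exact mapping NSynth Super uses from touch position to instrument influence isn’t detailed here, but a plausible scheme for a four-corner X/Y pad is bilinear weighting, where each corner’s instrument contributes according to how close your finger is to it. The function name and layout below are illustrative assumptions, not taken from the project’s code:

```python
def corner_weights(x, y):
    """Map a touch position (x, y in [0, 1]) on an X/Y pad to bilinear
    weights for four corner instruments, returned as
    (top-left, top-right, bottom-left, bottom-right).
    The weights always sum to 1, so total influence is conserved."""
    return (
        (1 - x) * y,        # top-left: strongest at x=0, y=1
        x * y,              # top-right: strongest at x=1, y=1
        (1 - x) * (1 - y),  # bottom-left: strongest at x=0, y=0
        x * (1 - y),        # bottom-right: strongest at x=1, y=0
    )

# Touching the exact centre of the pad gives every instrument equal weight:
print(corner_weights(0.5, 0.5))  # (0.25, 0.25, 0.25, 0.25)
```

Sliding a finger across the pad then smoothly shifts these weights from one instrument toward another.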
What’s particularly unique is that NSynth Super isn’t just layering sounds on top of each other. Rather, it’s synthesizing an entirely new sound based on the acoustic qualities of the individual instruments. This produces some unexpected results. In the demo video above, blending a flute and a snare makes a sound that’s glassy and quasi-sharp, without any overtly “drum-like” qualities.
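Conceptually, NSynth does this by encoding each instrument’s sound into a numeric embedding, interpolating between embeddings, and decoding the result into audio, which is why a flute/snare blend is a new timbre rather than two overlaid recordings. The sketch below illustrates only the interpolation step; the random vectors stand in for real NSynth embeddings, and `blend` is a hypothetical helper, not the project’s API:

```python
import numpy as np

# Stand-ins for NSynth's learned embeddings: in the real model, an encoder
# turns each instrument note into an embedding and a decoder renders audio
# from it. Random vectors are used here purely for illustration.
rng = np.random.default_rng(0)
flute_embedding = rng.normal(size=16)  # assumed embedding of a flute note
snare_embedding = rng.normal(size=16)  # assumed embedding of a snare hit

def blend(emb_a, emb_b, t):
    """Linearly interpolate between two embeddings (0 <= t <= 1).
    t=0 returns emb_a unchanged; t=1 returns emb_b."""
    return (1.0 - t) * emb_a + t * emb_b

# Halfway between the two instruments. Decoding this blended embedding
# would yield a genuinely new sound, not an average of two waveforms.
hybrid = blend(flute_embedding, snare_embedding, 0.5)
```

The key design point is that the mixing happens in the model’s learned representation space rather than on the raw audio, so the decoder produces a coherent new timbre instead of two sounds playing at once.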
Unfortunately for those interested, Google isn’t actually selling this product to the public. Rather, it has made available all of the materials and schematics needed to build one yourself with the help of a Raspberry Pi. All of that information is available on GitHub.