Google Brain, the search giant’s machine learning arm, is setting up a new group to see if it can teach AI to make its own, original works of art. The project, named Magenta, will be announced more officially at the beginning of June, but was mentioned in a talk given by Douglas Eck, a Google Brain researcher, at Moogfest.
This new group has been founded specifically to find out whether computers can actually create their own works of art, whether that’s traditional pictures, video, or even music.
The aim may sound similar to other Google Brain projects, but its direction is completely different. In recent times, Google’s neural networks have been used to ‘create’ a form of poetry by connecting two sentences taken from existing books. The method was to feed the networks thousands of novels and have them join the dots. The aim there was to get models built on TensorFlow, Google’s machine learning framework, to understand and generate more naturally spoken commands and questions.
In short, the aim wasn’t to create art for art’s sake. There was a genuine end goal of improving the company’s existing search products, work that inevitably led to the recently announced Google Assistant. What’s more, the machine was essentially being taught to mimic other people’s work rather than create its own.
Likewise with DeepDream, the machine was fed existing works of art and then tasked with distorting them and creating new patterns from them. In both of those examples, the TensorFlow ‘brain’ was adapting something already made.
With Magenta, the goal is to see if its neural networks can make something truly original. The first step, as reported by Quartz’s Mike Murphy, is to feed the system music and beef up TensorFlow’s musical knowledge. Google will also open the project up to other developers to crowd-source development:
Much in the same way that Google opened up TensorFlow, Eck said Magenta will make available its tools to the public. The first thing it will be launching is a simple program that will help researchers import music data from MIDI music files into TensorFlow, which will allow their systems to get trained on musical knowledge.
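Magenta’s actual import tool hadn’t been released at the time of writing, so the details are unknown. As a rough sketch of what pulling note data out of a MIDI file and into a TensorFlow pipeline could look like, here’s an illustrative example using the third-party pretty_midi library (an assumption for illustration, not Magenta’s confirmed approach; the file name is hypothetical):

```python
# Hypothetical sketch: reading note data from a MIDI file into a
# TensorFlow dataset. This is NOT Magenta's released tool; pretty_midi
# is a third-party library chosen here purely for illustration.
import pretty_midi
import tensorflow as tf

def midi_to_note_sequence(path):
    """Extract (pitch, start, end, velocity) tuples from a MIDI file."""
    midi = pretty_midi.PrettyMIDI(path)
    notes = []
    for instrument in midi.instruments:
        if instrument.is_drum:
            continue  # skip percussion tracks
        for note in instrument.notes:
            notes.append((note.pitch, note.start, note.end, note.velocity))
    # Sort by onset time so the sequence reads chronologically
    return sorted(notes, key=lambda n: n[1])

# Wrap the note sequence in a tf.data pipeline for model training
notes = midi_to_note_sequence("example.mid")  # hypothetical file name
dataset = tf.data.Dataset.from_tensor_slices(notes).batch(32)
```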
Adam Roberts, a member of Eck’s team, told Quartz that starting June 1 the Magenta group will post more information about the resources it’s producing, add new software to its GitHub page, and publish regular updates to a blog.
Initially, development has meant playing the system a few individual notes and having the AI listen and then create its own melody, as shown in the video below:
[youtube=https://www.youtube.com/watch?v=0iNhCGbgYUc]
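The video doesn’t detail how the model works, but to illustrate the basic “listen, then continue” idea, here’s a minimal toy sketch that learns note-to-note transitions from a short seed melody. Note this is a simple Markov chain standing in for illustration, not Magenta’s actual neural network, and the seed melody is invented:

```python
# Toy illustration only: a first-order Markov chain that continues a
# melody from observed transition counts. Magenta's demo used a neural
# network; this just shows the shape of the "listen, then continue" loop.
import random
from collections import defaultdict

def train_transitions(melody):
    """Count which MIDI pitch tends to follow which in the seed."""
    transitions = defaultdict(list)
    for prev, nxt in zip(melody, melody[1:]):
        transitions[prev].append(nxt)
    return transitions

def continue_melody(melody, transitions, length=8):
    """Extend the melody by sampling from the observed transitions."""
    out = list(melody)
    for _ in range(length):
        choices = transitions.get(out[-1])
        if not choices:
            choices = melody  # unseen pitch: fall back to any seed note
        out.append(random.choice(choices))
    return out

seed = [60, 62, 64, 62, 60, 64, 65, 67]  # invented seed, MIDI pitch numbers
model = train_transitions(seed)
print(continue_melody(seed, model))
```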
While we’re very much in the early stages, the long-term vision is a machine that’s capable not only of making music, but of making music that listeners respond to emotionally. That’s a long way from reality right now: the machine still needs a lot of human input before ‘creating’ works of art. Hopefully, in the future, it will be able to make music completely independently.