It’s not the name of a novel. It’s not a song either. The title of this article refers to artificial intelligence. A University of Washington team asked a question: could artificial intelligence restore the experience of a musical performance from visual cues alone? The answer is music that emerges from silence. This is how Audeo was born, a system that creates audio from silent piano performances.
The team tested the music that Audeo had created using music-recognition applications such as SoundHound, which correctly identified the piece that Audeo reproduced approximately 86% of the time. “To create music that sounds like it could be played in a musical performance was previously believed to be impossible,” lead author Eli Shlizerman said in a statement.
“An algorithm has to determine the signals in the video images that relate to the creation of music, and to imagine the sound that occurs between the video frames. It was a surprise when we got music that sounded pretty good.”
Audeo uses a series of steps to decode what is happening in the video and then translate it into music. First it must determine which keys are pressed in each video frame in order to build a diagram of the performance over time. Then it must translate that diagram into something a music synthesizer will recognize as piano sound. In this second step, the data is cleaned up and additional information is added, such as how hard and how long each key is pressed.
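To make the second step concrete, here is a minimal sketch of how a per-frame diagram of pressed keys could be turned into timed note events that a synthesizer can render. The function name, the frame rate, and the fixed velocity are illustrative assumptions, not details from the actual system.

```python
FPS = 25  # assumed video frame rate, not from the study

def roll_to_events(roll, fps=FPS, velocity=64):
    """Convert a piano roll (a list of frames, each a set of pressed
    MIDI key numbers) into (key, onset_sec, duration_sec, velocity)
    events. A real system would estimate velocity from the video;
    here it is a fixed placeholder."""
    events = []
    active = {}  # key -> frame index where the press started
    for i, frame in enumerate(roll + [set()]):  # empty sentinel flushes held notes
        # keys no longer pressed on this frame become finished events
        for key in list(active):
            if key not in frame:
                start = active.pop(key)
                events.append((key, start / fps, (i - start) / fps, velocity))
        # keys newly pressed on this frame start a note
        for key in frame:
            if key not in active:
                active[key] = i
    return events

# Example: middle C (MIDI 60) held for 3 frames, then E (MIDI 64) for 2.
roll = [{60}, {60}, {60}, {64}, {64}]
print(roll_to_events(roll))
```

The key design point this illustrates is that consecutive frames showing the same pressed key must be merged into one sustained note rather than repeated note-ons, which is what makes the synthesized output sound like playing instead of stuttering.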
The researchers trained and tested the system using YouTube videos by pianist Paul Barton. The training set consisted of around 172,000 video frames of Barton playing music by well-known classical composers such as Bach and Mozart. They then tested Audeo on nearly 19,000 frames of Barton playing other music by these composers and by others such as Scott Joplin.
The music that comes out of silence sounds different on every synthesizer, much as changing the instrument setting on an electric keyboard changes the sound. For this study, the researchers used two different synthesizers.