A new Google Magenta project (created by an intern!) lets you mix lo-fi hip-hop music tracks to build a custom music room in your browser, no musical ability required. Magenta is designed to use Google’s machine learning systems for the creation of art and music, and the Lo-Fi Player is a fun example of what it can do.
When you open Lo-Fi Player, you’re taken to a pixelated virtual “room” where you click different objects — a clock, a cat, or a piano, for instance — to change the different tracks, like the bass line and the melody. “The view outside the window relates to the background sound in the track, and you can change both the visual and the music by clicking on the window,” Lo-Fi Player creator Vibert Thio wrote in a blog post.
Lo-Fi Player also has an interactive YouTube stream, a “shared space” where people can be in the same music room together. But instead of clicking on elements in the room, players type commands into the live chat window to rearrange the tracks.
Magenta is powered by Google’s open source TensorFlow system, part of an ongoing research project “exploring the role of machine learning as a tool in the creative process.” Other Magenta projects have included the Piano Genie, an AI program that lets anyone “play” the piano (think Guitar Hero), and NSynth, a machine learning algorithm that uses a neural network to learn and create new sounds.
The Lo-Fi Player is customizable; its source code can be found on GitHub, and Thio says the team also built a tutorial called “Play, Magenta!” where users can edit the sounds and canvas live in their own browsers. Thio also emphasizes that Lo-Fi Player isn’t designed to replace human producers or existing lo-fi hip-hop streams. “Think of it more as a prototype for an interactive music piece or an interactive introduction to the genre to help people appreciate the art even more,” he says.