From the genesis of sheet music to the dawn of streaming giants like Spotify, the one constant in music has been its chaotic reinvention at the hands of changing technology: culture is shaken to its core, and the production, distribution, and consumption of music alter in its wake. Now that the dust has settled from the streaming wars, it’s enticing to envision how tech will shape the next collective musical landscape, and early signifiers point to a potential renaissance on the horizon.
A quick glance suggests a Black Mirror-like future where the front row of the Grammys is populated by startup founders whose AI created the year’s best albums. But as dark and as interesting as that may seem (or eventually be), I think this new era of AI-based music production programs could lead to some of the most adventurous and meaningful (read: human-created) recordings of the century. AI in music seems to imply a death knell for soul and authenticity, but a closer look at the programming, tools, and ideologies of the founders behind these programs suggests they could instead amplify our most human qualities as composers and creators.

Take, for instance, Google’s Magenta, which has been quietly creating several brilliant AI-based music tools, my favorite being the NSynth Super, a neural synthesizer. Using a machine-learning algorithm, the synth’s neural network studies the characteristics of pre-existing sounds and, in turn, creates entirely new sounds inspired by those learned characteristics. Only 16 source sounds (at 15 different pitches) were fed into the neural synth, yet it produces over 100,000 new sounds based on those inputs, letting songwriters and producers aggressively create new sounds through how they manipulate the synth. What’s most awe-inspiring is that the synthesizer is built with TensorFlow and openFrameworks, so everyone from coders to artists can try it out and create unprecedented sounds.
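To make the idea concrete, here is a minimal sketch of the core trick NSynth exposes: blending the learned representations of two sounds to get something in between. This is illustrative only — the real NSynth is a WaveNet-style autoencoder in Magenta/TensorFlow, and the "embeddings" below are made-up placeholder vectors, not actual model output.

```python
import numpy as np

# Placeholder latent vectors standing in for what a trained encoder
# would produce for two source sounds (purely illustrative values).
rng = np.random.default_rng(0)
flute_embedding = rng.standard_normal(16)  # hypothetical encoding of a flute
snare_embedding = rng.standard_normal(16)  # hypothetical encoding of a snare

def interpolate(a: np.ndarray, b: np.ndarray, t: float) -> np.ndarray:
    """Linearly blend two latent vectors; t=0 returns a, t=1 returns b."""
    return (1.0 - t) * a + t * b

# Sweep the blend amount, like dragging a finger across the
# NSynth Super touchpad between two corner sounds.
blends = [interpolate(flute_embedding, snare_embedding, t)
          for t in (0.0, 0.25, 0.5, 0.75, 1.0)]

# In the real system, a neural decoder would turn each blended
# vector back into audio, yielding a brand-new hybrid timbre.
```

The point is that the synth never crossfades audio directly; it mixes the learned characteristics of sounds, which is why the results are new timbres rather than two sounds played at once.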
My hope for the future is that these machine-learning composition programs will collaborate even more closely with the video game industry, so players can create, share, swap, and download original tracks within their games, either as an Easter egg or as a primary feature. If a game like Fortnite can host a concert by Marshmello for 10 million players (and let Marshmello sell merch within the game), it would be interesting for a large-scale game to start converting players into composers, with an open-world marketplace where virtual collaborations can occur. It could even go as granular as a miniature streaming service within the game, where player-created music could chart and the most popular tracks could be permanently added to the game’s soundtrack. Needless to say, the future is wide open for tech to influence the shape of music in a way we’ve never quite experienced.
Written by: Jake Wargin