Visualizing Music Production
Turn any audio track into a shareable video with some of the new features I developed for Sounds.studio
In September of last year I wrote that I had started working with an artificial intelligence (AI) + music company called Sounds Studio. It has been a challenging collaboration because they are taking a different approach to how users interface with AI. Unlike chatbots, where you interface exclusively through a text box, Sounds Studio presents its users with a robust Digital Audio Workstation (DAW), like Ableton. Each of their AI features is a unique interaction with this DAW. I worked with the team to define and build out many of the interactions, which you can find in the sidebar of the application (below). I encourage you to try it out and let me know if you find it intuitive. But I really want to spend some time sharing a feature that recently arrived in production: video rendering.
Making videos, making music videos, and making visual accompaniment to music all have a long and storied history. Since the beginning of moving images, audio has played an important role in captivating an audience. Conversely, since recorded music began to be distributed, videos have played a vital role in conveying the presence and impact a particular piece of music can establish. Today, video accompanies audio in almost every form of music distribution. Whether you listen to music on Spotify, YouTube, or any other major streaming service, you will inevitably also have the option to watch the music. For major recording artists this is often their music video. But for smaller artists, a Lyric Video commonly accompanies the distribution of a song. Not surprisingly, this type of video displays the lyrics of a song as they are being sung. As channels of distribution have shifted to streaming platforms, the Lyric Video’s responsibilities have grown too. Streaming platforms prefer multi-sensory content. From their perspective, audio plus video is better than audio alone. So the Lyric Video has become the de facto way for artists to easily add video to their music. But is it the only way?
This is where my conversation about video rendering started with Sounds Studio. We wanted to give artists videos they could use in this changing landscape of music distribution. We wanted them to be based on their music. And we did not want artists to feel like they had to be visual artists to make one of these videos. So we added a video render feature that visually represents your session on Sounds Studio. With a button click, artists can generate a video of their music.
The resulting video can serve as visually relevant material an artist can layer in with other footage: shots of themselves performing, behind-the-scenes (BTS) footage, or whatever video material speaks to the artist’s voice. Videos are rendered in 1080p at 60 frames per second, making them ideal for streaming platforms like YouTube as well as social media platforms like Instagram. I quickly cut together the different types of videos that Sounds Studio can generate, set to Tomorrow’s King, a track by my brother’s band, Texas 3000 (above). You can also see the Sounds Studio session I created for it and try generating videos yourself from the following button.
This is all based on some emerging and powerful features available in web browsers today. For many years I have wanted to be able to generate videos on websites¹. For instance, I have received dozens of emails asking if I will add a record function to Patatap. It is amazing to see that this is possible today. I definitely see the ability to generate videos programmatically as an important tool to empower creative expression.
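To give a sense of what these browser features look like, here is a minimal sketch of capturing a canvas animation to a video file using the MediaRecorder API and canvas.captureStream(). This is not Sounds Studio’s actual implementation; the function name, parameters, and WebM container choice are illustrative assumptions on my part.

```javascript
// Record a <canvas> element's animation to a video Blob in the browser.
// Assumes the caller is already drawing to the canvas each frame
// (e.g. via requestAnimationFrame). Illustrative sketch only.
function recordCanvas(canvas, durationMs, fps = 60) {
  return new Promise((resolve, reject) => {
    // Capture the canvas as a real-time media stream at the given frame rate.
    const stream = canvas.captureStream(fps);
    const recorder = new MediaRecorder(stream, { mimeType: "video/webm" });
    const chunks = [];

    // Collect encoded chunks as they arrive.
    recorder.ondataavailable = (event) => {
      if (event.data.size > 0) chunks.push(event.data);
    };
    // When recording stops, assemble the chunks into a single video Blob.
    recorder.onstop = () => resolve(new Blob(chunks, { type: "video/webm" }));
    recorder.onerror = (event) => reject(event.error);

    recorder.start();
    setTimeout(() => recorder.stop(), durationMs);
  });
}
```

One caveat worth noting: MediaRecorder captures in real time, so a one-minute video takes a minute to record; frame-accurate offline rendering would instead lean on the newer WebCodecs API.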
What about you? What empowers your creative expression?
—Jono
¹ My first serious attempt to work with videos on the web was This Exquisite Forest, 2012. I did not handle any of the video rendering, but I got a first-hand account of the hoops involved in making it possible.