In the past few months, an increasing number of ads for automated mastering services have been popping up in my Facebook newsfeed. From online solutions such as LANDR and Cloudbounce to A.I. integration in dedicated software like Ozone 8, quite a few companies appear to be jumping on the trend of music mastered with the help of algorithms.
Curious about the matter, I decided to try these services out and see for myself how effective they really are.
But first of all: What exactly is mastering?
The role of a mastering engineer is to add the final polish to a great stereo mix and prepare it for distribution. This involves:
- Repairing any technical issues overlooked during the mixing process (background noise, clicks, pops, harshness, etc.)
- Where necessary, applying processing to the audio, such as EQ, dynamics and stereo image processing. The goal here is to highlight the strengths of the mix while ensuring the best possible translation across a range of speakers.
- Optimising the overall volume of each song so it is competitive with other releases in similar genres
- Sequencing the album: placing the tracks in the right order, inserting appropriate pauses and creating fade-ins/outs
- Making sure that the songs flow well into each other, tweaking where necessary (level, tone, transitions, etc.)
- Embedding all relevant information into the files (CD-Text, ISRC/UPC/EAN codes, etc.)
- Exporting the files in the desired format (DDP for CD production, uncompressed/compressed files for digital release, a cut lacquer for vinyl pressing, etc.)
It’s a demanding process that draws on both technical and artistic judgement. Can a piece of software really compete with a real mastering engineer?
My main interest was to find out how they compared sonically.
How does it sound?
As an experiment, I decided to submit two mixes (one pop/rock, one heavy-metal) to LANDR and Cloudbounce, then master the tracks myself and finally compare the results.
The results weren’t as terrible as I was expecting, and sounded reasonable for the price. However, it was in the fine details that they fell short.
On both tracks, the tonal balance was pretty close to what I had, although, in my opinion, the results sounded a bit thin in the low-mid area. As a result, the “human” masters sounded more “gelled” together and a little more impactful.
Some musical details, such as occasional muddiness in the side signal and intermittent harshness in the high-mids, slipped past the artificial intelligence and therefore went unaddressed.
On the pop song, I thought the dynamics were handled reasonably well. The master from LANDR sounded a bit denser while the one from Cloudbounce seemed a little more open but both were passable.
On the heavy track, however, both results sounded too tame and small in my opinion. The sound didn’t jump out of the speakers like it should have done. Everything felt limited and rather unexciting.
As a whole, the songs mastered automatically were a bit quieter than other releases in similar genres. By extension, they were also quieter than the versions that I mastered myself. An explanation for this might be that the target audience of these online services is exclusively self-releasing artists. The developers probably assumed that users will rely on streaming platforms as their main distribution channels and therefore anticipated loudness normalisation at some point in the process. Since these platforms adjust every track towards a common playback loudness, pushing a master louder than that target buys little on playback.
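To make the normalisation point concrete, here is a minimal sketch of the arithmetic involved. The −14 LUFS playback target and the example loudness measurements are illustrative assumptions of mine, not values published by LANDR, Cloudbounce or any particular streaming platform.

```python
# Sketch of loudness normalisation arithmetic. The -14 LUFS target and the
# example measurements below are assumptions for illustration only.

def normalisation_gain_db(measured_lufs: float, target_lufs: float = -14.0) -> float:
    """Gain (in dB) a platform would apply to bring a track to its target."""
    return target_lufs - measured_lufs

def db_to_linear(gain_db: float) -> float:
    """Convert a gain in dB to a linear amplitude factor."""
    return 10 ** (gain_db / 20)

# A "loud" master measured at -9 LUFS is turned DOWN by 5 dB on playback,
# i.e. its amplitude is scaled by roughly 0.56.
loud_gain = normalisation_gain_db(-9.0)

# A quieter master at -16 LUFS would instead be turned UP by 2 dB.
quiet_gain = normalisation_gain_db(-16.0)

print(f"loud master:  {loud_gain:+.1f} dB -> x{db_to_linear(loud_gain):.2f}")
print(f"quiet master: {quiet_gain:+.1f} dB -> x{db_to_linear(quiet_gain):.2f}")
```

Note that in practice the details vary: some platforms only attenuate louder tracks rather than boosting quieter ones, but the basic consequence is the same, so a slightly conservative level costs nothing.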
Are the results really worth comparing?
Whilst automated mastering didn’t sound awful, it couldn’t compare to the real thing. I can, however, appreciate its value as an intermediate step in the creative process.
For instance, these services could be a good way for bands to polish their demos before going into the studio. This could make referencing easier and outline potential improvements to the arrangement.
I can also imagine EDM producers using these online solutions. It would allow DJs to test their work in progress in a live situation and adjust mixes accordingly.
I can think of a few other situations in which automated mastering could be useful. However, based on these results, I don’t believe it can replace the skill, judgement and experience of an actual engineer.
The human element.
Personally, when approached with a mastering project my first step is to open a discussion. This ensures that everyone is on the same page right from the start. Even if the entire project is conducted remotely, I like to spend time emailing back and forth to paint a detailed picture of what the end goal is.
During this period, I would listen to the song(s) multiple times and pinpoint potential issues that might need fixing either in the mix or in some cases on the stereo file.
Only then would I proceed to “master” the audio, constantly comparing the results to a selection of reference tracks that the artist and I agreed upon, in order to hit the target we set while discussing the project.
Once I am happy with what I’ve got, I would submit it to the client. I would then listen to any feedback they might have and tweak the results until everybody is happy.
All of this simply isn’t possible with an automated mastering service where every decision is based on algorithms, or in other words, sets of rules. I am a strong believer that some of the best records out there were made by breaking said rules. Even though mastering can seem pretty technical and dry, there is still room for creativity.
My role as an audio engineer is to figure out what drives a song and make sure that the listener catches onto it. It all comes down to making decisions based on emotions, which even the best computer cannot achieve. Music, and art in general, is extremely subjective. Surely, in the near future, artificial intelligence will be able to offer results that get pretty close to what a human can do, but I doubt it will ever be as good as the real thing.