The role of the mastering engineer has changed a great deal over the course of the history of recorded music. In this article, we trace those changes from the dawn of mastering as a discipline right up to the present day.
In the earliest days of recording, the separate disciplines of engineering, mixing, and mastering did not yet exist. Sound was captured through an acoustic horn that drove a diaphragm connected to a stylus. This contraption cut the recording directly to a wax disc, which was then used to create stampers, and those stampers were used to press shellac records.
In the late 1940s, a series of innovations in recording technology necessitated new roles in record production. In 1948, Columbia introduced the vinyl LP, and Ampex began producing its new Model 200 tape recorder. A dedicated 'transfer engineer' was now needed to transfer recordings from tape to a vinyl master.
The magnetic tape used for recording had a wider dynamic range and a different frequency response to vinyl. In the early days, each record label or engineer applied their own equalisation curve to optimise the transfer from tape to vinyl. Then, in 1954, the Recording Industry Association of America introduced the RIAA equalization curve, which was soon adopted as the global industry standard for vinyl records. Standardising the frequency response of playback devices in this way allowed records to be cut with narrower grooves, and therefore longer playback times.
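The RIAA playback curve is defined by three time constants – 3180 µs, 318 µs, and 75 µs – which boost the bass and cut the treble on playback, undoing the complementary cut and boost applied when the disc was cut. As an illustrative sketch (the function name is ours, but the time constants are from the RIAA specification), the playback gain at any frequency can be computed like this:

```python
import math

# Standard RIAA time constants, in seconds (from the specification).
T1, T2, T3 = 3180e-6, 318e-6, 75e-6  # bass pole, mid zero, treble pole

def riaa_playback_gain_db(f):
    """Playback (de-emphasis) gain in dB at frequency f, normalised to 0 dB at 1 kHz."""
    def raw(freq):
        w = 2 * math.pi * freq
        num = 1 + (w * T2) ** 2
        den = (1 + (w * T1) ** 2) * (1 + (w * T3) ** 2)
        return 10 * math.log10(num / den)
    return raw(f) - raw(1000.0)

# Playback boosts bass and cuts treble:
print(round(riaa_playback_gain_db(20), 1))     # ≈ +19.3 dB at 20 Hz
print(round(riaa_playback_gain_db(20000), 1))  # ≈ -19.6 dB at 20 kHz
```

The roughly ±20 dB swing across the audible band shows why a standard curve mattered: without one, a record cut to one label's curve would sound noticeably wrong on another label's playback equipment.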
However, bass-heavy recordings could cause problems with these records – the needle could jump out of the groove and damage the record itself. Engineers therefore needed to apply corrective EQ to create more playable and better-sounding records. This was the beginning of mastering as an art – a role that had until this point been purely technical now also became creative. Engineers started using tools such as compression and EQ to enhance the masters that they created.
By the late 1960s, stereo technology had become the norm, offering even more creative opportunities to transfer engineers: they could now adjust stereo width alongside dynamics and frequency balance. Gradually, the term 'transfer engineer' gave way to 'mastering engineer'.
In 1982, the CD was introduced – a pivotal moment in recorded music history. Digital formats offered a vastly improved signal-to-noise ratio and a much wider dynamic range, which meant that mastering engineers could push the loudness of the tracks they were working on much further.
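The improvement is easy to quantify with the well-known rule of thumb for an ideal N-bit digital system: a theoretical dynamic range of about 6.02N + 1.76 dB. A quick sketch (the function name is ours):

```python
def quantization_dynamic_range_db(bits):
    # Theoretical SNR of an ideal N-bit converter for a full-scale sine wave.
    return 6.02 * bits + 1.76

print(round(quantization_dynamic_range_db(16), 1))  # → 98.1 dB for 16-bit CD audio
```

By comparison, vinyl playback typically manages somewhere in the region of 60–70 dB, which is why the CD's roughly 98 dB theoretical figure was such a leap.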
Gradually, more and more of the recording industry transitioned to digital – digital tape recorders arrived first, paving the way for DAWs and the fully digital production processes that are so common today.
Since the 1950s, engineers have competed to make their records louder than the competition's. The thinking was that radio listeners would perceive a louder song as sounding 'better' – so if your record sounded louder than your competitors', your music would become more popular.
This competition reached fever pitch in the 1990s as digital technology allowed mastering engineers to push tracks louder and louder. By the late '90s, records were on average 18dB louder than they had been in 1980, and this gain came at a cost: compressing recordings this hard destroys dynamic range, and music can end up sounding lifeless as a result. The process can also introduce distortion artefacts into the master.
Although the average loudness of records has actually started to decrease again in recent years, the loudness wars are far from over. Radio is less important than it once was, but music continues to compete for listeners' ears, and many artists and labels still want their music to be louder than the competition's.
The exciting thing about streaming services is that they are helping to bring dynamic range back into music – good news for listeners who were tired of the square waveforms generated by hyper-compressed masters. Since 2009, the average loudness of recorded music has been falling.
Streaming services such as Spotify use loudness normalisation – they adjust playback levels so that the different songs on the platform play back at a consistent volume, even across different artists and disparate genres. To explain how this works from a listener's perspective, Jon Schorah and Sam Inglis outlined an experiment in Sound On Sound that anyone can try on Spotify.
On the platform, you can listen to ZZ Top's 1983 song 'Sharp Dressed Man' in both its original version and a 2008 remaster. With loudness normalisation switched off, the 2008 version measures about 3LU (loudness units) louder. With normalisation switched on (as it is by default in Spotify), both versions play at the same subjective level. The interesting thing is that listeners consistently prefer the 1983 version: because it has not been so heavily compressed, 'it retains more micro-dynamic variation and is widely perceived to have more 'presence' and 'detail', especially in the mid-range.'
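The mechanics behind the experiment are simple: with normalisation on, the player measures each track's integrated loudness and applies whatever gain brings it to a fixed target (Spotify's documented default is around -14 LUFS). The sketch below uses made-up loudness readings, 3 LU apart as in the example above:

```python
def normalization_gain_db(track_lufs, target_lufs=-14.0):
    """Gain (in dB) a player applies so the track plays back at the target loudness."""
    return target_lufs - track_lufs

# Hypothetical integrated-loudness readings for two masters of the same song:
original_1983 = -12.0   # LUFS (illustrative value)
remaster_2008 = -9.0    # LUFS (illustrative, 3 LU louder)

print(normalization_gain_db(original_1983))  # -2.0 → turned down 2 dB
print(normalization_gain_db(remaster_2008))  # -5.0 → turned down 5 dB
# Both end up at -14 LUFS, so the louder master gains no playback advantage.
```

This is why hyper-compressed masters lose their edge on streaming platforms: the extra loudness is simply turned back down, leaving only the squashed dynamics behind.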
In recent years, more and more developers have started using AI to create automated 'intelligent' mastering services. You can see this in online platforms such as LANDR, and in the 'smart' tools provided by plug-ins such as iZotope's Ozone. For now, no technology exists that can compete with a skilled professional mastering engineer – and the evidence suggests that high-end recordings will continue to be mastered by human beings, even if many now use smart tools to help them. What is clear is that although mastering is a decades-old profession, it remains an area of music production in constant development.
Dec 20, 2022