Resampling / Converting Audio Files

Here is a little tip. I have just finished mixing a concert recording. The mixing was done in Reaper at 24-bit/48 kHz, and now I needed to convert / downsample it to 16-bit/44.1 kHz. If you leave that up to black-box encoders, you can actually hear a change in tonality or loudness even with untrained ears. At least I sometimes noticed that inside the DAW it sounds different from what I had rendered out to 16-bit/44.1 kHz.

Reaper, like any other DAW, has this feature built in; however, it presented me with many different options on how to actually do it. I also watched an interview with Bob Katz (Loudness War fighter and author of this awesome book on mastering) who talked about how he used to do it.

So apparently there are multiple options, and as a beginner in mixing / audio processing it was totally unclear to me which one would be best. I started googling and stumbled upon a comparison website for SRC engines, and it turned out that none of the options in Reaper was on par with the pricey industry standard, iZotope. In fact, many of the popular DAWs don't perform as well as that particular piece of software. (Hint: click the help button for explanations of the graphs.)

After researching some more I found out about SoX, an open-source audio converter that is superb at sample rate conversion. It was available in Homebrew, and within a few seconds I was able to convert my 24-bit/48 kHz files with the following command line:

sox 24bit_48khz_infile.wav -b 16 \
16bit_44.1khz_outfile.wav rate -v -s 44100

This should correspond to the SoX VHQ Linear setting on the comparison website.
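For the curious: 48 kHz to 44.1 kHz is an exact rational ratio, 147/160, which is what a polyphase resampler like SoX's rate effect works with internally. A minimal sketch of that kind of conversion in Python, using scipy's resample_poly (which is a different engine from SoX, just an illustration of the principle):

```python
from fractions import Fraction

import numpy as np
from scipy.signal import resample_poly

# 48 kHz -> 44.1 kHz reduced to lowest terms: upsample by 147,
# filter, then downsample by 160.
ratio = Fraction(44100, 48000)
print(ratio)  # 147/160

# One second of a 300 Hz sine at 48 kHz ...
t48 = np.arange(48000) / 48000.0
x48 = np.sin(2 * np.pi * 300 * t48)

# ... resampled to 44.1 kHz with a polyphase FIR filter.
x441 = resample_poly(x48, ratio.numerator, ratio.denominator)
print(len(x441))  # one second at 44.1 kHz -> 44100 samples
```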

Of course you can ask whether you'd actually be able to hear any difference … well, I'd have to do a double-blind test and would probably fail, but I don't see any reason not to use the superior, free tool when I have the choice.

Also, I've just donated a few bucks to the SoX project, and I hope you'll do the same for open source software that makes your job easier!

One thought on "Resampling / Converting Audio Files"

  1. Robert says:

    Interesting blog post!

    I repeated your experiment with Audacity (which seems to perform very well in the SRC comparison) and Ableton live, measured some numbers and did a listening test:

    I created one 300 Hz sine wave natively at 44.1 kHz and one at 48 kHz that I downsampled to 44.1 kHz. Then I generated the difference signal (invert one track and then join the tracks). The resulting signal is not audible at the normal volume level that I have my speakers and headphones set to.
    Audacity reports a level of -71.6 dB, which is significantly above the -96 dB quantisation noise floor of a 16-bit signal.

    When normalized to 0 dB, the signal sounds quite “musical” and consists of dozens of harmonics (visualised with Audacity's spectrum analysis).

    Using Ableton to perform the downsampling results in a higher level of distortion (-67.1 dB), and the (normalized) signal sounds less pleasing and musical, with some harsh-sounding components.
    (The generation and analysis of the error signal were done in Audacity.)

    So, could any of these error signals be perceived by a listener?

    There is a brilliant website that has some tests to determine your personal hearing abilities in terms of dynamic range and maximum frequency. According to those, the maximum dynamic range (difference between the loudest level still comfortable to me and the lowest volume still perceptible) with my DT770 Pro headphones is < 60 dB. So this signal is way below anything I could hear.

    The distortion is relevant in numbers but probably inaudible in any signal. When, however, the signal is amplified or repeatedly resampled, the noise might reach a perceptible level, and the choice of resampling routine can influence the noise characteristic a lot.

    All samples were generated and analysed at 32-bit depth to rule out any effect from quantisation noise. The software used was Ableton Live 8 Lite and Audacity 2.0.0.
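For anyone who wants to reproduce the commenter's measurement without a DAW, here is a minimal Python sketch. Scipy's resample_poly stands in for Audacity's and Ableton's resamplers (an assumption — it is a different engine, so the resulting number will differ from those reported above):

```python
import numpy as np
from scipy.signal import resample_poly

SR_HI, SR_LO, FREQ = 48000, 44100, 300

# A 300 Hz sine generated natively at 44.1 kHz ...
ref = np.sin(2 * np.pi * FREQ * np.arange(SR_LO) / SR_LO)

# ... and the same sine generated at 48 kHz, then downsampled.
hi = np.sin(2 * np.pi * FREQ * np.arange(SR_HI) / SR_HI)
down = resample_poly(hi, 147, 160)  # 48 kHz -> 44.1 kHz

# Difference signal ("invert one track and join the tracks"),
# skipping the resampling filter's edge transients at both ends.
trim = slice(2000, -2000)
err = down[trim] - ref[trim]

def rms_db(x):
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

error_level_db = rms_db(err) - rms_db(ref[trim])
print(f"difference signal level: {error_level_db:.1f} dB")
```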
