VJR output file format question

dvs

Green Mountain Blues
For recordings in the VJR, I understand it's good to use a bitrate of 256 or even 320 kbps for the mixdown (exported MP3 file) because in the VJR we're going to be re-recording the original track over and over again, and at normal bitrates the MP3 might be degraded by the time the last of us is done.

Does the sample rate also matter? If we are not all using the same sample rate, does switching back and forth cause any issues? I know humans can't hear the difference between 44.1 and 48 kHz sample rates, but given the choice which should we use (and why)?
 

PapaRaptor

Father Vyvian O'Blivion
Staff member
For recordings in the VJR, I understand it's good to use a bitrate of 256 or even 320 kbps for the mixdown (exported MP3 file) because in the VJR we're going to be re-recording the original track over and over again, and at normal bitrates the MP3 might be degraded by the time the last of us is done.

Does the sample rate also matter? If we are not all using the same sample rate, does switching back and forth cause any issues? I know humans can't hear the difference between 44.1 and 48 kHz sample rates, but given the choice which should we use (and why)?
Good question. I've done both.
The only reason I tend to favor 48kHz sampling is that it's the standard for MP4 video. But I just noticed that in Studio One, it appears I have set my rendering so that WAV files are rendered at 48kHz and my MP3 rendering is done at 44.1kHz.

I certainly can't hear any difference.
 

CaptainMoto

Blues Voyager
I'm not sure about the concept of file deterioration with multiple remixes.
What I do know is: if you record at a higher bit rate and sample size, it won't matter if it's redone at a lower level.
Conversely, if the original track is at a lower resolution, subsequent iterations at a higher resolution will not improve the quality.
I think if a track is recorded at a lower resolution and subsequently upgraded, you might have some anomalies.

My default has been 48 / 24 WAV, but I've been reading more about that and I'm drawn in two directions:
- Back to good old 44.1 / 16, or up to 192 / 24. I'm waffling.

I usually do my VJR mixes as MP3, 44.1 / 320 kbps, but because my default is set at 48 / 24, sometimes I forget to change it for VJR.
 

PapaRaptor

Father Vyvian O'Blivion
Staff member
I'm not sure about the concept of file deterioration with multiple remixes.
What I do know is: if you record at a higher bit rate and sample size, it won't matter if it's redone at a lower level.
Conversely, if the original track is at a lower resolution, subsequent iterations at a higher resolution will not improve the quality.
I think if a track is recorded at a lower resolution and subsequently upgraded, you might have some anomalies.
Several years ago, I did a test comparing various MP3 bitrates over about a dozen download/upload cycles. I've since tossed the source files, but as I recall, anything under 256kbps started showing artifacts after three down/up cycles. At the time, the longer tracks were attracting 8 to 10 players, and by the time it all got finished, the last recording sounded pretty awful.

I can't imagine that going back and forth between 44.1kHz and 48kHz would make much difference on the material we're swapping in the VJR. There isn't much content in most of these tracks above about 12kHz. It would seem any artifacts would show up at or above 15kHz.

That didn't take into account the changes made when someone applied compression or reverb against the entire track instead of just their own track.

I can remember when I thought a 64kbps MP3 sounded pretty good. I was probably still using dial-up at the time.
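The down/up experiment described above can be sketched as a toy model. To be clear, this is a rough illustration with made-up numbers, not a real codec: plain quantization stands in for MP3 encoding, and a small per-pass gain tweak stands in for each player remixing before re-rendering. Real MP3 loss behaves differently in detail, but the trend (the coarser the "bitrate", the faster the quality decays over cycles) is the same.

```python
# Toy model of generational loss over repeated download/mix/upload cycles.
# Quantization stands in for lossy MP3 encoding; a per-pass gain tweak
# stands in for each player's remix. NOT a real codec, just the trend.
import numpy as np

def quantize(x, levels):
    """One lossy encode/decode pass: snap samples to a coarse grid."""
    step = 2.0 / levels                 # signal assumed to lie in [-1, 1]
    return np.round(x / step) * step

def degrade(signal, levels, cycles):
    """Each cycle: a player tweaks the mix slightly, then re-renders."""
    x = signal.copy()
    for i in range(cycles):
        g = 1 + 0.005 * (i + 1)         # a slightly different gain each pass
        x = quantize(x * g, levels) / g
    return x

t = np.linspace(0, 8 * np.pi, 1000)
original = 0.9 * np.sin(t)

rms = lambda a, b: float(np.sqrt(np.mean((a - b) ** 2)))
err_low = rms(degrade(original, levels=64, cycles=10), original)     # coarse grid
err_high = rms(degrade(original, levels=4096, cycles=10), original)  # fine grid
print(f"low-rate error after 10 cycles:  {err_low:.5f}")
print(f"high-rate error after 10 cycles: {err_high:.5f}")
```

After ten simulated cycles, the coarse "low bitrate" signal has drifted far more from the original than the fine "high bitrate" one, which matches what the listening test found.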
 

JPsuff

Blackstar Artist
I think if you're looking for a difference you're more likely to find one.

This reminds me of setting latency.
S1 seems to take care of latency all by itself but back when I had Audacity I had to set it myself. I slowly dialed in numbers until I hit what I thought sounded good which was originally around 240ms but after listening to a few VJR posts, I changed it to 245, then 250 and finally settled on 256ms.
The overall difference was just eleven thousandths of a second but after intently listening for any latency, it soon stood out like a sore thumb.
Like I said, if you're listening for it, you'll hear it. It's sort of like having a bum tooth that doesn't bother you until you start thinking about it.
 

JPsuff

Blackstar Artist
It does in the actual recorded product, but it's still present any time you monitor something you're recording.

All I know is that I've never noticed it, it doesn't get in the way and aside from my playing, my recordings are right on the money and that's just fine with me!
 

CaptainMoto

Blues Voyager
I think if you're looking for a difference you're more likely to find one.

This reminds me of setting latency.
S1 seems to take care of latency all by itself but back when I had Audacity I had to set it myself. I slowly dialed in numbers until I hit what I thought sounded good which was originally around 240ms but after listening to a few VJR posts, I changed it to 245, then 250 and finally settled on 256ms.
The overall difference was just eleven thousandths of a second but after intently listening for any latency, it soon stood out like a sore thumb.
Like I said, if you're listening for it, you'll hear it. It's sort of like having a bum tooth that doesn't bother you until you start thinking about it.
Hey JP,
You got my head spinning on that one.
If you had latency that high you'd go nuts trying to record anything while monitoring.
Are you referring to buffer size?
 

JPsuff

Blackstar Artist
Hey JP,
You got my head spinning on that one.
If you had latency that high you'd go nuts trying to record anything while monitoring.
Are you referring to buffer size?

No, it was latency.

In fact Audacity suggested that 180ms is pretty much average and I believe that Audacity comes with 180ms already dialed in. That, of course, sounded horrible and so began my incremental adjustments.
 

MarkDyson

Blues Hound Wannabe
Y'all making my head spin. The last (and only) two times I recorded some stuff as stems (or whatever the heck you call it) for collaborative work with folks here I just used the default settings on my Logic Pro installation and that seemed okay. I don't remember there being any significant latency when I was recording.
 

Paleo

Student Of The Blues
When I recorded my very first "contribution" in Audacity nine years ago (?), I set the "recording latency" by first creating a click track, recording a single note with each click, noting the time difference between a click and a recorded note on the timeline, and then entering that in the latency window.

I don't even remember how to get to that window anymore.

The only thing I've ever done since is adjust my volume to match the imported track and once in a while pan different tracks left or right.

Oh yeah. I did reset to 256 when Papa suggested I do so. I had been using 128 for years when ripping my CDs.

Addendum: My OCD got the better of me and I had to check out my recording preferences.

In the latency section:

Audio to buffer is set at 100
Latency Correction is set at -40.
 

dvs

Green Mountain Blues
Latency is a separate question from bit rates and sample rates.

The reason the bit rate matters is that we're re-rendering a copy of the original MP3 file. I said "re-recording" in the first post, but that's not exactly the problem. Typically when you import an MP3 backing track into a DAW, it is converted with no loss of data into a WAV file. You record your part, also as WAV, and when you mix it down the two parts are combined and rendered to an MP3 file. The conversion from WAV to MP3 compresses the data and some information is lost. I don't know exactly what "bitrate" means, but the higher the bitrate, the smaller the loss. If you lose a lot of data each time you render and you do it several times, you can hear the difference.
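For what it's worth, "bitrate" is literally the number of bits of compressed audio per second, which is why higher bitrates lose less: the encoder has more room to describe each second of signal. A quick back-of-the-envelope sketch of what that means for file size:

```python
# A constant-bitrate MP3's size is roughly bitrate x duration,
# since "bitrate" is bits of compressed audio per second.
def mp3_size_mb(bitrate_kbps, minutes):
    bits = bitrate_kbps * 1000 * minutes * 60   # total bits of audio
    return bits / 8 / 1_000_000                 # bits -> bytes -> MB

for kbps in (128, 256, 320):
    print(f"{kbps} kbps x 4 min = {mp3_size_mb(kbps, 4):.1f} MB")
```

So a four-minute track at 320 kbps is still under 10 MB, which is why there's little reason to skimp on bitrate for VJR exchanges.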

Sample rates affect the range of frequencies that can be reproduced in the MP3: a higher sample rate can capture higher frequencies. 44.1 kHz and 48 kHz are standards, and both can easily handle frequencies way higher than we can hear, or than our sound systems can reproduce, for that matter. I wasn't sure if anything bad happened if you changed between them; I guess the answer is probably not (and the older I get, the less I'll care, since my hearing is not improving as I age...)
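The underlying rule here is the Nyquist limit: a sample rate of f samples per second can capture audio frequencies up to f/2. A quick check of the two standard rates:

```python
# Nyquist limit: a sample rate of f can capture frequencies up to f/2.
for rate_hz in (44_100, 48_000):
    ceiling_khz = rate_hz / 2 / 1000
    print(f"{rate_hz} Hz sampling captures content up to {ceiling_khz} kHz")
# Both ceilings (22.05 and 24.0 kHz) sit above the ~20 kHz limit of human hearing.
```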

In the posts above, there are two different aspects of latency being discussed. There is latency in monitoring if you are sending your guitar into your DAW, adding effects, and listening to the processed sound while you play. This is unavoidable and can be minimized by a fast computer and a low buffer size in the audio interface, so the effects processing happens as quickly as possible. You can also bypass the issue entirely by monitoring the dry signal directly from your interface and turning off monitoring in the DAW. These latencies are typically on the order of 10 ms or less, which is tolerable, i.e., not too noticeable. Most people would not be able to hear 3-5 ms of latency.

The issue of latency in recording, where the recorded guitar sound ends up out of sync with the backing track played back from the DAW, is what is taken care of automatically by Studio One and most other DAWs - with Audacity being a notable exception. In Audacity's case, the lag due to computational processing time of the playback of the track and USB conversion plus the conversion and processing of the input guitar part have to be measured and entered into Audacity in the settings. You can measure it precisely by creating a click track in Audacity and recording it back to another track with a microphone (or loopback input, if your interface supports that). You only have to do this once - it doesn't change until you change your computer and/or audio interface. This, as JP says, is on the order of hundreds of ms. The 11 ms difference he's talking about hearing is the difference between adjusting for the total latency by 245 ms vs 256 ms.
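The click-track measurement can be sketched in a few lines. This is a simulation with a made-up 186 ms delay rather than a recipe for any particular DAW, but the cross-correlation trick is a standard way to find the offset automatically instead of eyeballing the timeline:

```python
# Simulated click-track latency measurement: play a click, "record" it
# back with a delay, and find that delay via cross-correlation.
# The 186 ms figure is invented for the demo.
import numpy as np

SR = 8_000                              # toy sample rate, keeps the demo fast

playback = np.zeros(SR // 2)            # half a second of silence...
playback[0] = 1.0                       # ...with a click at t = 0

true_delay = int(round(0.186 * SR))     # pretend the rig adds 186 ms
recorded = np.roll(playback, true_delay)

# The peak of the cross-correlation gives the delay in samples.
corr = np.correlate(recorded, playback, mode="full")
delay_samples = int(np.argmax(corr)) - (len(playback) - 1)
latency_ms = 1000 * delay_samples / SR
print(f"measured latency: {latency_ms:.0f} ms")  # negate this for the correction
```

As described above, this only needs doing once per computer/interface combination, since the value it measures is a property of the hardware and drivers, not of any particular project.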
 

CaptainMoto

Blues Voyager
It's been years since I used Audacity so I had a hard time getting my head around what @JPsuff was talking about.

I was interested in trying to understand how to adjust latency in Audacity.
I found this video; perhaps it will help clarify this issue.....or maybe just confuse everyone even more o_O
It covers the "click track" that @Paleo & @dvs mentioned.
Zero latency is always the goal.
In this case the latency was 186 ms, so it had to be compensated by negative 186 to achieve near zero.

 

TexBill

Blues in Texas
@CaptainMoto thanks for sharing the video on how to correct latency in Audacity. That is a very interesting video and shows clearly how latency plays an important role in recording. Most especially when creating new tracks against a backing track. VJR frequent flyers take note if you want your recordings to sound better...

And on the subject of BIT RATE, sample size matters. So if sample size is 512 and bit rate is 44.1, the result is 11.6 ms latency. Reduce sample size and latency decreases. Sample size 256 and bit rate 44.1 results in latency of 5.8 ms. So reducing sample size by half while maintaining bit rate cuts latency in half.
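The arithmetic works out if "sample size" is read as the buffer size in samples and 44.1 as the sample rate in kHz, since buffer latency is just samples divided by samples-per-second:

```python
# Buffer latency = buffer size (samples) / sample rate (samples per second).
def buffer_latency_ms(buffer_samples, sample_rate_hz):
    return 1000 * buffer_samples / sample_rate_hz

print(f"512 @ 44.1 kHz -> {buffer_latency_ms(512, 44_100):.1f} ms")
print(f"256 @ 44.1 kHz -> {buffer_latency_ms(256, 44_100):.1f} ms")
```

Halving the buffer halves the latency exactly, because the sample rate in the denominator stays put.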
 

CaptainMoto

Blues Voyager
Back to bit rate for VJR:

I think there is something to Doug's explanation about losing quality when converting from higher to lower rates.

In an earlier post my choice of words was not appropriate. I said:
"What I do know is, If you record at a higher bit rate and sample size, it won't matter if it's redone at a lower level."
What I meant to say was: if the original recording is done at a high rate & sample size, the quality would not be preserved if you re-record at a lower rate, so the original quality won't matter.

As I see it, there is a definite audible difference between high and low bit rate recordings.
Typical MP3s are as low as 128kbps, while WAV files are much higher.
My preference for VJR stuff has been 320kbps MP3.

Once the rate has been lowered by any VJR participant, the new rate is locked in for all subsequent contributions.
 

dvs

Green Mountain Blues
@CaptainMoto thanks for sharing the video on how to correct latency in Audacity. That is a very interesting video and shows clearly how latency plays an important role in recording. Most especially when creating new tracks against a backing track. VJR frequent flyers take note if you want your recordings to sound better...

And on the subject of BIT RATE, sample size matters. So if sample size is 512 and bit rate is 44.1, the result is 11.6 ms latency. Reduce sample size and latency decreases. Sample size 256 and bit rate 44.1 results in latency of 5.8 ms. So reducing sample size by half while maintaining bit rate cuts latency in half.
A good point. What you're calling sample size is what I called buffer size. I think where you're using the term BIT rate, what you're describing is the SAMPLE rate.
 

TexBill

Blues in Texas
What you're calling sample size is what I called buffer size. I think you're using the term BIT rate but what you're describing is SAMPLE rate.
OK, I was comparing a sample size of 512 at 44.1 to a sample size of 256 at 44.1. With the reduction in sample size while maintaining 44.1 quality, the latency is reduced proportionally. As 256 is one half of 512, latency is reduced by one half.

I may have confused myself attempting to explain what happens when sample size is changed.
 