When working in a home studio or a professional studio, latency in audio interfaces is an important subject that directly impacts both recording and mixing.
But what is it really about?
And above all, how should you adjust the latency of your sound card so that it works as well as possible (this is sometimes called adjusting the buffer size; if that term is unfamiliar to you, no worries, I will explain everything)?
Well, that’s convenient: we will cover all this in detail in this article.

Here is the summary of the topics we will address together:
- Definition: What is the latency of an audio interface?
- How to adjust the latency / buffer of your audio interface?
- Working with latency (when you have no choice)
- How to adjust latency in Ableton Live
- How to adjust latency in Cubase
- How to adjust latency in FL Studio
- How to adjust latency in Logic Pro
- How to adjust latency in Pro Tools
- How to adjust latency in Studio One
- How to adjust latency in Reaper
Definition: What is the latency of an audio interface?
Let’s start by defining what audio latency is: you will see, it’s not complicated, but there are a few small details to keep in mind.
A simple definition
Audio latency in the context of audio interfaces is the delay, usually very short, between the moment a sound is produced (like the strum of a guitar or singing) and the moment you can hear that sound through your playback equipment (like headphones or speakers).
In other words, it’s the time it takes for the sound to “travel” through your audio interface and your computer before reaching your ears.
Very low latency is essential for recording and monitoring without noticeable delay.
For example, imagine you have plugged your electric guitar into your interface: if when you play, the sound comes back to your ears almost instantly, everything is fine.
But if it arrives with a half-second delay, there will be a problem: you will have noticeable audio latency.
Indeed, our brain can compensate for small delays: generally, if the latency is less than 10 milliseconds, we should be able to play without too many problems.
If it’s less than 5 milliseconds, that’s even better.
But beyond 10 milliseconds, it will impact performance because the delay becomes clearly audible. The consequence: it’s hard to keep the rhythm, and hard to play with feeling, because you are never sure whether you are on time or not.
Note that depending on the person and the instruments recorded, latency will be more or less perceptible: singers are particularly sensitive to this phenomenon, while guitarists can often work with slightly higher latency.
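These rough thresholds can be summed up in a tiny helper. The millisecond cut-offs come from the paragraphs above; the category names are only illustrative:

```python
def latency_comfort(latency_ms: float) -> str:
    """Rough comfort rating for monitoring latency.

    Thresholds follow the article: under 5 ms is ideal,
    under 10 ms is workable, beyond that it disturbs playing.
    """
    if latency_ms < 5:
        return "very comfortable"
    if latency_ms < 10:
        return "playable"
    return "problematic"
```

For example, `latency_comfort(7.0)` returns `"playable"`, while `latency_comfort(15.0)` returns `"problematic"` — and remember that the exact limit varies from one musician (and instrument) to another.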
Input latency…
To understand well, let’s go into detail by looking at what happens step by step in a classic voice recording situation.

First, you will place a microphone in front of the singer.
When the artist sings, the microphone will capture the signal, convert it into an electrical signal, and send all that (1) to your sound card.
In the sound card, the analog electrical signal is converted into a digital signal by the A/D (analog-to-digital) converter (2), so that the computer can understand it. This step already takes a little time: generally just over half a millisecond.
Then, the data will be sent via the USB bus to your computer (3).
The problem is that USB does not carry audio as a continuous real-time stream: in order to be processed, the digital signal of your voice recording is divided into small chunks, which are stored temporarily in what is called a buffer.
Basically, it’s a temporary storage area used to hold audio data before it is processed or played. The buffer gives the system time to prepare and process audio data smoothly, at the cost of a small delay.
Depending on your settings (I will explain later how to set everything correctly), the size of the buffer will vary: for example, you can have a buffer of 32 samples, 64 samples, 128 samples, etc.
The larger the buffer, the more delay it adds to the signal.
How much delay, exactly? Since the buffer size is expressed in samples, that depends directly on the sampling rate at which you are working.
If you are working, as is often the case, with a sampling rate of 44100 Hz, it means you have 44100 audio samples per second.
In this case:
- a buffer of 32 samples corresponds roughly to a duration of 0.7 milliseconds;
- but a buffer of 256 samples corresponds to a duration of 5.8 milliseconds.
And this duration will translate into an additional delay added to the audio signal.
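You can check these figures yourself: the duration of a buffer is simply its size in samples divided by the sampling rate. Here is a minimal sketch (the function name is mine):

```python
def buffer_latency_ms(buffer_size: int, sample_rate: int = 44100) -> float:
    """Duration of one audio buffer, in milliseconds."""
    return 1000.0 * buffer_size / sample_rate

# Common buffer sizes at 44100 Hz:
for size in (32, 64, 128, 256):
    print(f"{size:4d} samples -> {buffer_latency_ms(size):.1f} ms")
```

This prints roughly 0.7 ms for 32 samples and 5.8 ms for 256 samples, matching the figures above.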
But that’s not all: at the level of your computer (4), the signal is managed by an audio driver, just like the driver for your printer or any other peripheral.
And this driver, depending on how it is programmed (and this is a significant point of difference between audio interface manufacturers), will also add more or less latency to the audio signal before it reaches your DAW (digital audio workstation).
A processing latency…
Now that your signal has arrived in your DAW, you may have added effect plugins on different tracks, including the voice track you are recording.
These plugins can generally work in two ways:
- either they are capable of processing the signal in real-time;
- or they are not.
In the second case, you can see where I’m going: the plugins will add latency again.
On most DAWs, you can actually check the latency added by the plugins, as shown in this window in the Studio One interface:

Note: whether the plugin adds latency or not is not a quality criterion. It is indeed completely dependent on the type of algorithm and what you want to do with it.
And an output latency…
Of course, if you want to listen in real-time through your headphones to the sound of the voice you are recording, you will need to send the audio signal from your DAW back to the headphones.

And here, we have exactly the same phenomena occurring as before:
- There is latency added by the driver (1);
- there is latency related to the data transport buffer (2);
- and there is a small additional latency related to the Digital-to-Analog conversion in your interface (3).
And it is only after all this that the signal can be broadcast on your listening device (headphones or speakers).
Which gives us a total latency…
As a consequence of everything we just explained, when you record a signal via your audio interface and then send it back to headphones or speakers, you have a total latency that corresponds to the sum of all the delays we discussed earlier.
Sometimes we talk about “input/output latency,” a figure that is displayed in the settings panels of all DAWs.
And it is this latency that can cause us problems, or not, depending on the settings made on the audio interface.
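To make the sum concrete, here is a small sketch of the round trip. Only the buffer duration is computed from real quantities; the figures for the other stages are illustrative assumptions, not measurements (driver overhead in particular varies enormously between interfaces):

```python
SAMPLE_RATE = 44100
BUFFER_SIZE = 128

# One buffer of delay in each direction (input and output).
buffer_ms = 1000.0 * BUFFER_SIZE / SAMPLE_RATE  # about 2.9 ms each way

# Illustrative (assumed) figures for the other stages, in milliseconds:
ad_conversion = 0.6    # analog-to-digital conversion in the interface
da_conversion = 0.6    # digital-to-analog conversion on the way out
driver_ms = 1.0        # driver overhead per direction (very variable)
plugin_ms = 0.0        # real-time plugins add none; others add more

total_ms = (ad_conversion + buffer_ms + driver_ms
            + plugin_ms
            + driver_ms + buffer_ms + da_conversion)
print(f"Estimated round-trip latency: {total_ms:.1f} ms")
```

With these assumed figures the total lands around 9 ms — already at the edge of what is comfortable for monitoring, which is why the buffer setting matters so much.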
How to adjust the latency / buffer of your audio interface?
Because yes, in order to work under good conditions with your audio interface, it is essential to adjust the latency: this is part of the settings that must be done to configure a sound card / audio interface.
The buffer problem
As we saw just before, the buffer is a temporary memory area: the larger its size, the more latency you add.
So instinctively, one might think, “no worries, I will set my buffer to the minimum to minimize latency.”
But that would be too simple…
Indeed, the more you reduce your buffer, the smaller the data packets become, and the more often they are sent to your computer.
And thus, your processor (CPU) has less and less time to handle each packet.
And at some point, there is a limit.
This limit manifests itself through the appearance of crackling in the recorded and/or played audio signal.
And of course, we don’t want those crackles.
Adjusting the buffer size
What you need to do is try to find the right balance.
So reduce the buffer size as much as possible, and as soon as you notice crackles, increase it again.
You can do this step by step: start with a fairly large buffer size, like 1024 samples.
Then move to 512.
Then 256.
Then 128.
Et cetera…
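This step-by-step procedure amounts to walking down the powers of two. A small sketch of the candidate values to try (the helper name is mine, and of course the crackle check is done with your ears, not in code):

```python
def candidate_buffers(start: int = 1024, stop: int = 32):
    """Buffer sizes to try, from largest (safest) to smallest (lowest latency)."""
    size = start
    while size >= stop:
        yield size
        size //= 2

for size in candidate_buffers():
    ms = 1000.0 * size / 44100
    print(f"Try {size:4d} samples (~{ms:.1f} ms); stop going down once you hear crackles")
```

Starting high and halving each time converges quickly: you only have half a dozen values to test between 1024 and 32 samples.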
► And if you still find yourself with crackles, I recommend reading my complete troubleshooting guide for audio interfaces.
Beware of plugins
A word of caution: be mindful of the impact of your plugins on the processor.
Indeed, if you try to record on a session heavily loaded with plugins (effects or instruments), your computer’s processor may struggle, as it has to:
- calculate the processing related to the plugins;
- and handle the signal passing through the USB bus.
So when you are recording, don’t hesitate to disable some unnecessary plugins or to bounce some of your tracks to audio if you have too many crackles.
Also, disable plugins that add latency.
Vary the latency according to your needs
Note also that depending on what you are doing, you are not obliged to keep the same buffer setting.
Typically, for recording we of course use the lowest possible setting, but as soon as we start mixing, we switch to a much higher buffer setting.
This is generally a good practice, as when recording it is important to have imperceptible audio latency, but when mixing, it usually doesn’t pose a problem if the signal has a delay of about fifty milliseconds.
So especially if you have a computer that is not very powerful, don’t hesitate to use this little technique.
Working with latency (when you have no choice)
In fact, if you are in a studio or home studio, you will regularly find yourself in situations where you will have too high latency to work properly.
Notably, when you are recording.
Use direct monitoring
The first thing we are going to talk about, for me, is essential for recording.
That is to say, even if my sound card is high-end, even if the PC is powerful — I will always use some form of direct monitoring to send the sound back to the artist’s headphones.
UNLESS I absolutely need plugins on the DAW, like for example for guitar amp simulations.
But for example when I record vocals, I will always use direct monitoring.
So what is it?
Concretely, most interfaces allow you to route the audio signal directly to an audio output before it passes through the computer.
This gives you virtually zero latency, since the signal never passes through the computer at all.
This can be done thanks to a button present on the interface, which allows you to say “in the headphone output, I want to hear both my accompaniment AND the signal I am currently recording,” but this is becoming less and less common.

Indeed, today, this type of audio routing is often managed via the software provided with your audio interface (and not your DAW).
You can thus create “headphone mixes” by choosing exactly which signal you will send to each headphone output.
And really, this is the approach I recommend you follow when recording (again, unless you are playing guitar and need amp simulations).
But what if you need effects?
In some cases, however, you will need to add effects.
Typically, a reverb on the voice: when recording vocals, we often add reverb to the artist’s headphone mix so that they feel more comfortable and hear something that resembles a finished product.
A first option, then, is to use an audio interface with built-in DSP: integrated processing units that offload computing power from the computer’s processor and can also calculate effects with much less latency.
This type of technology can be found in Universal Audio cards, for example.
But today, with processors in our computers becoming increasingly powerful, it doesn’t seem necessary to switch to this type of technology.
However, it is still very convenient when your sound card has a small integrated DSP that allows generating a reverb: this is the case with several RME audio interfaces, for example, which allow equalizing the signal, compressing it, or adding reverb in a simplified manner.
If you do not have this kind of functionality on your audio interface, it is not a big deal since you can still set up your equipment as follows:
- for the microphone sound, you only use direct monitoring to route the signal to the headphone output;
- and for the reverb, you use a plugin in your DAW (for reverb, a few milliseconds of delay will not be a problem at all).
It’s a bit more complicated to do, but it works very well and allows you to use the reverb of your choice.

To conclude this article, I suggest we look at the main DAWs on the market and the manipulations to adjust latency.
Note that depending on whether you are using macOS or Windows, there may be slight differences (typically on Windows, the buffer setting, although accessible from the DAW, is often done via a control panel specific to the interface).
How to adjust latency in Ableton Live
Open the software control panel by going to the Options > Preferences menu and then to the Audio tab.
Depending on your hardware, you can either click on Hardware Configuration to access the buffer adjustment panel or adjust the Buffer Size in the Latency section.

How to adjust latency in Cubase
In Cubase, to adjust the buffer size, you need to go to the Studio > Studio Setup menu.
Then select your driver from the dropdown list on the left, and click on Dashboard to open the buffer settings options.

How to adjust latency in FL Studio
In FL Studio (formerly Fruity Loops), you can click on the Options > Audio Settings menu to open the audio settings panel.
Then, on macOS you can directly adjust the buffer size, while on Windows you will need to click on the large rectangular Show ASIO panel button.

How to adjust latency in Logic Pro
To adjust the buffer in Logic Pro, click on the Logic Pro > Settings (or Preferences) > Audio menu, then click on Devices.
You can then change the buffer size, and thus modify your latency, by changing the I/O Buffer Size option.

How to adjust latency in Pro Tools
In Pro Tools, to adjust your buffer size and thus your latency, click on the Setup > Playback Engine menu.
You can then adjust the buffer size directly from the window that opens by modifying the H/W Buffer Size.

How to adjust latency in Studio One
In Studio One, click on the Studio One > Options menu and then on the Audio Setup tab.
Then you can adjust the buffer size in the Audio Device sub-tab, either directly or by clicking on the Setup button.

How to adjust latency in Reaper
Open the Reaper preferences panel by clicking on the Options > Preferences menu and then find the Audio > Device submenu in the left scrolling box of the window that opens.
Then simply click on the ASIO Configuration… button to adjust the buffer size.
You can also use the Request block size option to force a block size on the interface if the first method does not work.

In conclusion
There you go, you now know exactly what latency means in relation to audio interfaces, and you also know how to adjust it properly to work under optimal conditions.
► If despite this you notice crackling, don’t forget to read this article.
► And if you no longer have any issues, I suggest you take a look at my selection of the best audio interfaces for home studio.