Understanding what “channels”, “buffer_size”, “period_size”, “bindings” and “ipc_key” stand for in .asoundrc
To set up my USB sound card on my Linux PC, I started learning about ALSA and how to write its configuration files. After much effort, I wrote one and got it working. The following is my .asoundrc, stored in my home folder:
pcm.!default {
    type plug
    slave {
        pcm "hw:1,0"
    }
}
ctl.!default {
    type hw
    card 1
}
pcm_slave.maudiomtrackeight1 {
    pcm "hw:1,0"
    channels 8
    rate 44100
    buffer_size 4096
    period_size 1024
}
pcm.outch1 {
    type dshare
    ipc_key 1111
    slave maudiomtrackeight1
    bindings [ 0 ]
    hint.description "M-Audio M-Track Eight output/playback channel 1"
}
pcm.inch1 {
    type dsnoop
    ipc_key 1111
    slave maudiomtrackeight1
    bindings [ 0 ]
    hint.description "M-Audio M-Track Eight input/capture channel 1"
}
pcm.outch2 {
    type dshare
    ipc_key 1111
    slave maudiomtrackeight1
    bindings [ 1 ]
    hint.description "M-Audio M-Track Eight output/playback channel 2"
}
pcm.inch2 {
    type dsnoop
    ipc_key 1111
    slave maudiomtrackeight1
    bindings [ 1 ]
    hint.description "M-Audio M-Track Eight input/capture channel 2"
}
pcm.outch3 {
    type dshare
    ipc_key 1111
    slave maudiomtrackeight1
    bindings [ 2 ]
    hint.description "M-Audio M-Track Eight output/playback channel 3"
}
pcm.inch3 {
    type dsnoop
    ipc_key 1111
    slave maudiomtrackeight1
    bindings [ 2 ]
    hint.description "M-Audio M-Track Eight input/capture channel 3"
}
pcm.outch4 {
    type dshare
    ipc_key 1111
    slave maudiomtrackeight1
    bindings [ 3 ]
    hint.description "M-Audio M-Track Eight output/playback channel 4"
}
pcm.inch4 {
    type dsnoop
    ipc_key 1111
    slave maudiomtrackeight1
    bindings [ 3 ]
    hint.description "M-Audio M-Track Eight input/capture channel 4"
}
pcm.outch5 {
    type dshare
    ipc_key 1111
    slave maudiomtrackeight1
    bindings [ 4 ]
    hint.description "M-Audio M-Track Eight output/playback channel 5"
}
pcm.inch5 {
    type dsnoop
    ipc_key 1111
    slave maudiomtrackeight1
    bindings [ 4 ]
    hint.description "M-Audio M-Track Eight input/capture channel 5"
}
pcm.outch6 {
    type dshare
    ipc_key 1111
    slave maudiomtrackeight1
    bindings [ 5 ]
    hint.description "M-Audio M-Track Eight output/playback channel 6"
}
pcm.inch6 {
    type dsnoop
    ipc_key 1111
    slave maudiomtrackeight1
    bindings [ 5 ]
    hint.description "M-Audio M-Track Eight input/capture channel 6"
}
pcm.outch7 {
    type dshare
    ipc_key 1111
    slave maudiomtrackeight1
    bindings [ 6 ]
    hint.description "M-Audio M-Track Eight output/playback channel 7"
}
pcm.inch7 {
    type dsnoop
    ipc_key 1111
    slave maudiomtrackeight1
    bindings [ 6 ]
    hint.description "M-Audio M-Track Eight input/capture channel 7"
}
pcm.outch8 {
    type dshare
    ipc_key 1111
    slave maudiomtrackeight1
    bindings [ 7 ]
    hint.description "M-Audio M-Track Eight output/playback channel 8"
}
pcm.inch8 {
    type dsnoop
    ipc_key 1111
    slave maudiomtrackeight1
    bindings [ 7 ]
    hint.description "M-Audio M-Track Eight input/capture channel 8"
}
Though I was able to grasp most of the concepts, I couldn't understand the following:

channels: does it stand for the number of audio channels, i.e. whether I'm going to use mono or stereo? Say my sound card has 8 input ports and 8 output ports. For a mono configuration, should I set a value of 16 (8 inputs + 8 outputs) or 8 (8 input-output pairs)? And for a stereo configuration, should I set 8 (4 inputs + 4 outputs) or 4 (4 input-output pairs)?

buffer_size: I don't know anything except that making these sizes smaller is supposed to give lower latency. What exactly does this mean?

period_size: again, is this related to latency?

bindings: are these what map channels to the ports? For a mono configuration I used [ <index_number> ]. Can I use [ <index_number1> <index_number2> ] for a stereo configuration, and so on?

ipc_key: I understand it is a unique number, the same for each PCM device defined from the same slave. Supposing I add a new sound card detected as hw:2,0 and define PCM devices in the same manner as above, will I have to assign a different value (say 2222) for each PCM device defined from the new slave?

I could experiment a bit to understand the rest, but some things still wouldn't be clear, and the scarcity of tutorials and of good official ALSA documentation doesn't help either. Can someone shed some light on this?
audio configuration alsa
edited Nov 17 '18 at 0:05 by Rui F Ribeiro
asked Mar 22 '17 at 16:56 by skrowten_hermit
2 Answers
Partial answer:
First, let me say that very likely you don't need to write a configuration for your M-Track at all. In fact, the way you have set it up is what you don't want under most circumstances: you have made each channel a separate device.
That means when you try to record, say, a band playing at the same time, you can get random offsets between the channels (band members), because each channel is processed separately. So normally you'd just record all 8 channels into separate tracks; then they are nicely synchronized and you can edit them.
The same holds if you just want to connect up your home Hi-Fi system for playing music: you want synchronous channels for left/right/center/subwoofer/rear etc., not separate devices.
The only circumstance I can think of where it makes sense to create separate devices is if, for some reason, each channel is connected to a loudspeaker in a different room and you want to play different music through each of them.
Also, modern ALSA automatically provides dshare and dsnoop plugins on top of the hardware device by default, so you don't need to specify them explicitly.
That said, here are the explanations:

channels: the number of channels that are simultaneously recorded/played. 1 for mono, 2 for stereo, 8 for your card. Input and output are counted separately, so for 8 input and 8 output channels you just say "8 channels". The way you set up your inch and outch devices requires a channels 1 entry for each.

bindings: maps the channels of the slave device to the channels of this device. Say you want to swap the left and right channels of the original device by putting a plugin on top; then you'd say bindings { 0 1 1 0 }.

ipc_key: the dmix, dshare and dsnoop plugins allow multiple clients to communicate with a single source/sink. This communication happens via this key (IPC = Inter-Process Communication). So the key needs to be different for every plugin, no matter whether you have several plugins for one sound card or one plugin each for several sound cards, or you'll run into trouble.

buffer_size: audio data is stored and transferred in so-called buffers, i.e. pieces of RAM holding a number of samples. If you make this very large, lots of data is stored before it is processed, so you increase latency. If you make it very small, the processing overhead will prevent all data from being handled before the next data comes in or must go out, so you'll get audio dropouts.

period_size: no idea.
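For example, applying the ipc_key rule to the first input/output pair of the question's configuration would look roughly like this (a sketch only; the values 1001 and 2001 are arbitrary examples, the keys just have to be unique per plugin):

```
pcm.inch1 {
    type dsnoop
    ipc_key 1001              # unique key for this plugin
    slave maudiomtrackeight1
    bindings [ 0 ]
    hint.description "M-Audio M-Track Eight input/capture channel 1"
}
pcm.outch1 {
    type dshare
    ipc_key 2001              # must differ from every other plugin's key
    slave maudiomtrackeight1
    bindings [ 0 ]
    hint.description "M-Audio M-Track Eight output/playback channel 1"
}
```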
All ALSA PCM plugins are also described here in detail.
Don't mess with buffer_size or period_size unless you really know what you are doing. If latency is important to you (e.g., if you want to use the computer for a live performance), the first thing to do is make sure PulseAudio is uninstalled, and use jackd for all things audio. Only if you still experience noticeable latency problems should you try different values for buffer_size.
This is as good as it can get. Probably "partial" only because period_size was left blank. So, is bindings { 0 1 1 0 } a fixed string for swapping the left and right channels of a device, or is it constructed based on some logic? Also, my PCM devices are all defined with the same ipc_key of 1111. Going by your explanation, if I use 1111 for dsnoop, I must use something else for dshare, right? And is there a way to know the optimal buffer_size and period_size values, or is it trial and error only?
– skrowten_hermit
Mar 23 '17 at 11:20
No, you must use different keys for every single plugin. So inch1 gets 1001, inch2 gets 1002, outch1 gets 2001, etc. See the link above for the format of bindings. For dshare, dmix etc. it's pairs of slave channel/client channel. As for buffer size, define "optimal". As I said: too low and you'll lose audio; too high and you'll get higher latency. Just leave it alone unless you really need to modify it, because otherwise you either get dropouts or noticeable latency.
– dirkt
Mar 23 '17 at 11:43
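To illustrate the pair format of bindings for dshare/dmix, a stereo playback device bound to slave channels 0 and 1 could be sketched like this (hypothetical device name and key; each pair associates one of the device's channels with a slave channel):

```
pcm.outch12 {
    type dshare
    ipc_key 3001              # again: unique among all plugins
    slave maudiomtrackeight1
    bindings {
        0 0                   # device channel 0 <-> slave channel 0 (left)
        1 1                   # device channel 1 <-> slave channel 1 (right)
    }
    hint.description "M-Audio M-Track Eight playback channels 1+2 (stereo)"
}
```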
So, I need to modify my .asoundrc, assign a different key to each of the plugins above, and probably remove buffer_size and period_size completely from the definitions to make the configuration more stable and efficient, right?
– skrowten_hermit
Mar 27 '17 at 3:36
If you insist on using this particular configuration, because for example you have connected each channel to a single speaker in a different room, yes. If you want a stable and efficient configuration for most other purposes, you'd just completely delete it and use the default configuration (play/record 8 channels at once, with default dsnoop/dshare on top). If you would tell me in what way you intend to use it, I could comment on the best configuration.
– dirkt
Mar 27 '17 at 5:46
I'm trying to play an audio file on 8 separate devices and record them (not necessarily always at the same time), making sure I use dedicated channels for each.
– skrowten_hermit
Mar 29 '17 at 4:01
This article has a short explanation of the relationship between buffers and periods:
A sound card has a hardware buffer that stores recorded samples. When the buffer is sufficiently full, it [the sound card?] generates an interrupt. The kernel sound driver then uses direct memory access (DMA) to transfer samples to an application buffer in memory.
[...]
The buffer can be quite large, and transferring it in one operation could result in unacceptable delays, called latency. To solve this, ALSA splits the buffer up into a series of periods (called fragments in OSS/Free) and transfers the data in units of a period.
It sounds like:
- audio samples are stored in a buffer
- the kernel copies audio samples from the buffer to application memory
- the buffer could be too large to transfer in one copy (causing latency)
- the buffer is instead copied in pieces, called periods
The article provides a diagram.
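Plugging in the numbers from the question's slave definition makes the buffer/period trade-off concrete (rough arithmetic, ignoring driver and scheduling overhead):

```
pcm_slave.maudiomtrackeight1 {
    pcm "hw:1,0"
    rate 44100         # frames per second
    buffer_size 4096   # whole buffer: 4096 / 44100 ~ 93 ms of audio (worst-case latency)
    period_size 1024   # transfer unit: 1024 / 44100 ~ 23 ms, i.e. 4 periods per buffer
}
```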
Your Answer
StackExchange.ready(function() {
var channelOptions = {
tags: "".split(" "),
id: "106"
};
initTagRenderer("".split(" "), "".split(" "), channelOptions);
StackExchange.using("externalEditor", function() {
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled) {
StackExchange.using("snippets", function() {
createEditor();
});
}
else {
createEditor();
}
});
function createEditor() {
StackExchange.prepareEditor({
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: false,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: null,
bindNavPrevention: true,
postfix: "",
imageUploader: {
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
},
onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
});
}
});
Sign up or log in
StackExchange.ready(function () {
StackExchange.helpers.onClickDraftSave('#login-link');
});
Sign up using Google
Sign up using Facebook
Sign up using Email and Password
Post as a guest
Required, but never shown
StackExchange.ready(
function () {
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2funix.stackexchange.com%2fquestions%2f353125%2funderstanding-what-channels-buffer-size-period-size-bindings-and-ipc%23new-answer', 'question_page');
}
);
Post as a guest
Required, but never shown
2 Answers
2
active
oldest
votes
2 Answers
2
active
oldest
votes
active
oldest
votes
active
oldest
votes
Partial answer:
First, let me say that very likely you don't need to write a configuration for your M-Track at all. In fact, the way you have set it up is what you don't want under most circumstances: You have made each channel a separate device.
That means when you try to record, say, a band playing at the same time, it's possible to get random offsets between the channels (band members) because each channel is processes separately. So normally, you'd just record all 8 channel into separate tracks, and then you have them nicely synchronized and you can edit them.
The same holds if you just want to connect up your home Hi-Fi system for playing music: You want synchronous channels for left/right/center/subwoofer/rear etc., not separate devices.
The only circumstances I can think if where it makes sense to make separate devices is if for some reason each channel is connected to a loudspeaker in a different room, and you want to play different music through each of them.
Also, modern ALSA automatically provides dshare
and dsnoop
plugins on top of the hardware decice by default, so you don't need to specify them explicitely.
That said, here are the explanations:
channels
: The number of channels that are simultaniously recorded/played. 1 for mono, 2 for stereo, 8 for your card. Input and output is counted separately, so for 8 input and 8 input channels you just say "8 channels". The way you setup yourinch
andoutch
devices requires achannels 1
entry for each.bindings
: map channels from the device the slave device is bound to to the channels on this device. Say you want to swap the left and right channel of the original device by putting a plugin on top, then you'd saybindings { 0 1 1 0 }
.ipc_key
:dmix
,dshare
anddsnoop
plugins allow multiple clients to communicate with a single source/sink. This communication is done via this key (IPC = Inter-Process Communication). So the key needs to be different for every plugin, no matter if you have several plugins for one soundcard or one plugin each for several soundcard, or you'll run into trouble.buffer_size
: Audio data is stored and transferred in so-called buffers, i.e. pieces of RAM for a number of samples. If you make this way high, lots of data will be stored before it is processed, so you increase latency. If you make it way low, the overhead of processing will prevent all data to be processed before the next data comes in or must go out, so you'll have audio drop out.period_size
: No idea.
All ALSA PCM plugins are also described here in detail.
Don't mess with buffer_size
or period_size
unless you really know what you are doing. If latency is important for you (e.g., if you want to use the computer for a live performance), the first thing to do is to make sure Pulseaudio is uninstalled, and use jackd
for all things audio. Only if you still experience noticable latency problems, you can try different values for buffer_size
.
This is as good as it can get. Probably, partial only because ofperiod_size
being left blank. So,bindings { 0 1 1 0 }
is a fixed string for swapping left and right channels of a device or is constructed based on some logic? Also, my PCM devices all are defined with sameipc_key
as1111
. Going by your explanation, if I use1111
for dsnoop, I must use something else fordshare
right? Is there a way to know the optimalbuffer_size
andperiod_size
values? Or its done through trial and error only?
– skrowten_hermit
Mar 23 '17 at 11:20
1
No, you must use different keys for every single plugin. Soinch1
gets 1001,inch2
gets 1002,outch1
get 2001 etc. See the link above for the the format ofbinding
. Forshare
,dmix
etc. it's pairs of slave channel/client channel. As for buffer size, define "optimal". As I said: Too low and you'll loose audio. Too high and you'll get higher latency. Just leave it alone unless you really need to modify it, because either you get drop outs, or you have noticable latency.
– dirkt
Mar 23 '17 at 11:43
So, I need to modify my.asoundrc
and assign different key for each of the plugins above and probably removebuffer_size
andperiod_size
completely from the definition to make the configuration more stable and efficient, right?
– skrowten_hermit
Mar 27 '17 at 3:36
1
If you insist on using this particular configuration, because for example you have connected each channel to a single speaker in a different room, yes. If you want a stable and efficient configuration for most other purposes, you'd just completely delete it and use the default configuration (play/record 8 channels at once, with default dsnoop/dshare on top). If you would tell me in what way you intend to use it, I could comment on the best configuration.
– dirkt
Mar 27 '17 at 5:46
I'm trying to play an audio file on 8 separate devices and record them (not necessarily always at the same time), making sure I use a dedicated channel for each.
– skrowten_hermit
Mar 29 '17 at 4:01
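For that use case, the playback side would mirror the capture side: one dshare plugin per output channel, each with its own key. A hedged sketch (device name, key and channel count are placeholders):

```
# Hypothetical dshare plugin playing only into slave channel 0.
pcm.outch1 {
    type dshare
    ipc_key 2001            # outch2 would get 2002, and so on
    slave {
        pcm "hw:0"          # placeholder for the real 8-channel card
        channels 8
    }
    bindings.0 0            # client channel 0 -> slave channel 0
}
```

With eight such definitions (keys 2001–2008, bindings.0 set to 0–7) each physical output can be driven independently, e.g. with aplay -D outch1.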
answered Mar 23 '17 at 9:00
dirkt
This article has a short explanation of the relationship between buffers and periods:
A sound card has a hardware buffer that stores recorded samples. When the buffer is sufficiently full, it [the sound card?] generates an interrupt. The kernel sound driver then uses direct memory access (DMA) to transfer samples to an application buffer in memory.
[...]
The buffer can be quite large, and transferring it in one operation could result in unacceptable delays, called latency. To solve this, ALSA splits the buffer up into a series of periods (called fragments in OSS/Free) and transfers the data in units of a period.
It sounds like:
- audio samples are stored in a buffer
- the kernel copies audio samples from the buffer to application memory
- the buffer could be too large to transfer in one copy (causing latency)
- the buffer is instead copied in pieces, called periods
The article provides a diagram.
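Put concretely (the numbers below are invented to illustrate the arithmetic, not defaults), the buffer and period sizes are counted in frames, so their latency contribution follows from the sample rate:

```
# Hypothetical values showing the buffer/period relationship:
# at rate 48000 Hz,
#   buffer_size 1024 frames -> 1024 / 48000 ~= 21.3 ms of audio
#   period_size  256 frames ->  256 / 48000 ~=  5.3 ms per transfer
# so the buffer holds 4 periods, and a transfer happens once per period.
pcm.example {
    type dmix
    ipc_key 4001            # placeholder key
    slave {
        pcm "hw:0"          # placeholder device
        rate 48000
        buffer_size 1024
        period_size 256
    }
}
```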
answered 57 mins ago
Kevin W Matthews