News

Posted over 1 year ago
Some friends gave me their old hardware to help me with my next project: creating an open source embedded controller firmware for Lenovo Thinkpads. My target platforms are all Lenovo Thinkpads using a Renesas/Hitachi H8S as EC. These chips are built into Thinkpads ranging from very old ones to newer models like the X230. So what could be better than using a very old laptop like a T40 for soldering and hardware mods? The first step is flashing the chip independently of the operating system running on the hardware. Why? Because the EC controls certain power regulators and takes care of turning the mainboard on; a broken EC would leave you unable to turn your laptop on. The H8S supports several operating modes: normal mode, advanced mode, and boot mode. Boot mode is a special mode for development and flashing: the chip receives its firmware over a normal UART and executes it. The H8S selects its mode via two pins, MD0 and MD1, also named mode0 and mode1. After looking at the schematics (which are available on the internet), we need to solder wires to the UART pins RX and TX, plus MD1 and GND. MD0 isn't required, because it's already tied to 0. I also soldered the test pins TP1 and TP10 (I2C). /RES was soldered too, because I miscounted the pins; it lies beside MD1, but it can be useful later. I colored the pins in the bottom picture:
Red: MD1
Orange: GND
Blue: /RES
Green: RX
Yellow: TX
The soldered image is missing MD1.
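Once the pins are wired up, talking to the chip in boot mode is just a matter of configuring the serial port and streaming bytes. Here is a minimal sketch of the idea; the device path /dev/ttyUSB0, the 9600 baud rate, and the flasher.bin stub name are assumptions, and the exact bit-rate-matching handshake is chip-specific, so check the H8S hardware manual before relying on it.

```sh
# Assumption: the UART adapter shows up as /dev/ttyUSB0, and flasher.bin
# is a boot-mode stub built for the H8S. Handshake details are per the
# H8S hardware manual; this only sketches the overall flow.
PORT=/dev/ttyUSB0

# 8N1, raw mode, no flow control; 9600 baud is a conservative guess.
stty -F "$PORT" 9600 cs8 -cstopb -parenb -crtscts raw

# Boot mode starts with a bit-rate-matching phase: the host sends 0x00
# bytes until the chip answers.
printf '\x00\x00\x00' > "$PORT"

# Then the firmware stub is transferred over the same UART and executed.
cat flasher.bin > "$PORT"
```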
Posted over 1 year ago
I got the task of setting up LACP with a not-so-new 3com switch on one side and Debian Jessie + OpenVSwitch on the other. Usually not such a big problem. Just keep in mind that 3com was bought by HP, which is still using 3com software in its newer products. It's even possible to update a 3com switch with HP firmware, if you're lucky enough to know which HP product matches your 3com switch. Back to the task: set up LACP. I did everything mentioned in the manual:
- all ports are the same media type
- all ports are configured in the same way
- all ports have the same LACP priority
Everything seems to be OK. The documentation also says that all ports will lose their individual port configuration when added. On the switch side I can see the switch showing my Linux box as port partner, but the link group still isn't going into 'Active' state; it keeps showing 'The port is not configured properly.'. A firmware update is not an option remotely; too many services depend on this switch. Let's take a closer look at the VLAN configuration. The LACP group isn't configured for any VLAN yet. But the ports still have an old configuration with a tagged VLAN and are in hybrid mode. Why? They have a PVID configured, but no untagged VLAN assigned. Looks strange. Go to VLAN -> Modify Port: select both LACP ports as well as the LACP group and set them to Trunk mode without any VLAN. Now the LACP group changed to active. Maybe this has changed in newer HP firmware versions.
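For reference, the Linux/Open vSwitch side of such a setup is only a few commands. This is a sketch under assumed names (br0, bond0, eth0/eth1); the port-channel on the switch side still has to match it.

```sh
# Create a bridge and an LACP bond over two physical ports.
ovs-vsctl add-br br0
ovs-vsctl add-bond br0 bond0 eth0 eth1 lacp=active
ovs-vsctl set port bond0 bond_mode=balance-tcp other_config:lacp-time=fast

# Verify LACP negotiation from the Linux side.
ovs-appctl bond/show bond0
ovs-appctl lacp/show bond0
```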
Posted almost 2 years ago
This is a technical post about PulseAudio internals and the protocol improvements in the upcoming PulseAudio 6.0 release.

PulseAudio memory copies and buffering
PulseAudio is said to have a “zero-copy” architecture. So let’s look at what copies and buffers are involved in a typical playback scenario.

Client side
When the PulseAudio server and client run as the same user, PulseAudio enables shared memory (SHM) for audio data. (In other cases, SHM is disabled for security reasons.) Applications can use pa_stream_begin_write to get a pointer directly into the SHM buffer. When using pa_stream_write or going through the ALSA plugin, there will be one memory copy into the SHM.

Server resampling and remapping
On the server side, the server might need to convert the stream into a format that fits the hardware (and potentially other streams that might be running simultaneously). This step is skipped if deemed unnecessary. First, the samples are converted to either signed 16 bit or float 32 bit (mainly depending on resampler requirements). In case resampling is necessary, we make use of external resampler libraries for this, the default being speex. Second, if remapping is necessary, e.g. if the input is mono and the output is stereo, that is performed as well. Finally, the samples are converted to a format that the hardware supports. So, in the worst case, there might be up to four different buffers involved here (first: after converting to the “work format”, second: after resampling, third: after remapping, fourth: after converting to the hardware-supported format), and in the best case, this step is entirely skipped.

Mixing and hardware output
PulseAudio’s built-in mixer multiplies each channel of each stream with a volume factor and writes the result to the hardware. In case the hardware supports mmap (memory mapping), we write the mix result directly into the DMA buffers.

Summary
The best we can do is one copy in total, from the SHM buffer directly into the DMA hardware buffer. I hope this clears up any confusion about what PulseAudio’s advertised “zero copy” capabilities mean in practice. However, memory copies are not the only thing you want to avoid to get good performance, which brings us to the next point:

Protocol improvements in 6.0
PulseAudio does pretty well CPU-wise for high-latency loads (e.g. music playback), but a bit worse for low-latency loads (e.g. VoIP, gaming). Or to put it another way, PulseAudio has a low per-sample cost, but there is still some optimisation that can be done per packet. For every playback packet, there are three messages sent: from server to client saying “I need more data”, from client to server saying “here’s some data, I put it in SHM, at this address”, and then a third from server to client saying “thanks, I have no more use for this SHM data, please reclaim the memory”. The third message is not sent until the audio has actually been played back. Every message means syscalls to write, read, and poll a unix socket. This overhead turned out to be significant enough to try to improve. So instead of putting just the audio data into SHM, as of 6.0 we also put the messages into two SHM ringbuffers, one in each direction. For signalling we use eventfds. (There is also an optimisation layer on top of the eventfd that tries to avoid writing to the eventfd in case no one is currently waiting.) This is not so much about saving memory copies as about saving syscalls.
From my own unscientific benchmarks (i.e., running “top”), this saves us ~10% – 25% of CPU power in low-latency use cases, half of that being on the client side.
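If you want to check which of these paths your own setup uses, the daemon configuration can be dumped from a shell. A small sketch, assuming the new 6.0 ringbuffer channel is exposed as an enable-srbchannel option in daemon.conf (the option name may differ in your build):

```sh
# Dump the daemon configuration and look for SHM/ringbuffer options.
pulseaudio --dump-conf | grep -E 'shm|srbchannel'

# To experiment, override it in ~/.config/pulse/daemon.conf, e.g. falling
# back to the old socket-only signalling (assumed option name):
#   enable-srbchannel = no
```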
Posted almost 2 years ago
The third week of October was quite action-packed, with a whole bunch of conferences happening in Düsseldorf. The Linux audio developer community as well as the PulseAudio developers each had a whole day of discussions related to a wide range of topics. I’ll be summarising the events of the PulseAudio mini summit day here. The discussion was split into two parts: the first half of the day with just the current core developers, and the latter half with members of the community participating as well. I’d like to thank the Linux Foundation for sparing us a room to carry out these discussions — it’s fantastic that we are able to colocate such meetings with a bunch of other conferences, making it much easier than it would otherwise be for all of us to converge on a single place, hash out ideas, and generally have a good time in real life as well!

Happy faces — incontrovertible proof that everyone loves PulseAudio!

With a whole day of discussions, this is clearly going to be a long post, so you might want to grab a coffee now. :)

Release plan
We have a few blockers for 6.0, and some pending patches to merge (mainly HSP support). Once this is done, we can proceed to our standard freeze → release candidate → stable process.

Build simplification for BlueZ HFP/HSP backends
For simplifying packaging, it would be nice to be able to build all the available BlueZ module backends in one shot. There wasn’t much opposition to this idea, and David (Henningsson) said he might look at this. (As I update this before posting, he already has.)

srbchannel plans
We briefly discussed plans around the recently introduced shared ringbuffer channel code for communication between PulseAudio clients and the server. We talked about the performance benefits, and future plans such as direct communication between the client and server-side I/O threads.

Routing framework patches
Tanu (Kaskinen) has a long-standing set of patches to add a generic routing framework to PulseAudio, developed notably by Jaska Uimonen, Janos Kovacs, and other members of the Tizen IVI team. This work adds a set of new concepts that we’ve not been entirely comfortable merging into the core. To unblock these patches, it was agreed that doing this work in a module and using a protocol extension API would be more beneficial. (Tanu later did a demo of the CLI extensions that have been made for the new routing concepts.)

module-device-manager
As a consequence of the discussion around the routing framework, David mentioned that he’d like to take forward Colin’s priority list work in the meantime. Based on our discussions, it looked like it would be possible to extend module-device-manager to make it port aware and get the kind of functionality we want (the ability to have a priority-ordered list of devices). David was to look into this.

Module writing infrastructure
Relatedly, we discussed the need to export the PA internal headers to allow externally built modules. We agreed that this would be okay to have if it was made abundantly clear that this API would have absolutely no stability guarantees, and is mostly meant to simplify packaging for specialised distributions. Which led us to the other bit of infrastructure required to write modules more easily — making our protocol extension mechanism more generic. Currently, we have a static list of protocol extensions in our core. Changing this requires exposing our pa_tagstruct structure as public API, which we haven’t done. If we don’t want to do that, then we would expose a generic “throw this blob across the protocol” mechanism and leave it to the module/library to take care of marshalling/unmarshalling.

Resampler quality evaluation
Alexander shared a number of his findings about resampler quality in PulseAudio vs. that found on Windows and Mac OS. Some questions were asked about other parameters, such as relative CPU consumption, etc. There was also some discussion on how to carry this work to a conclusion, but no clear answer emerged. It was also agreed on the basis of this work that support for libsamplerate and ffmpeg could be phased out after deprecation.

Addition of a “hi-fi” mode
The discussion came around to the possibility of having a mode where (if the hardware supports it) PulseAudio just plays out samples without resampling, conversion, etc. This has been brought up in the past for “audiophile” use cases where the card supports 88.2/96 kHz and higher sample rates. No objections were raised to having such a mode — I’d like to take this up at some point.

LFE channel module
Alexander has some code for filtering low frequencies for the LFE channel, currently as a virtual sink, that could eventually be integrated into the core.

rtkit
David raised a question about the current status of rtkit and whether it needs to exist, and if so, where. Lennart brought up the fact that rtkit currently does not work on systemd+cgroups based setups (I don’t seem to have why in my notes, and I don’t recall off the top of my head). The conclusion of the discussion was that some alternate policy method for deciding RT privileges, possibly within systemd, would be needed, but for now rtkit should be used (and fixed!).

kdbus/memfd
Discussions came up about the possibility of using kdbus and/or memfd for the PulseAudio transport. While this is interesting to me, there doesn’t seem to be an immediately clear benefit over our SHM mechanism in terms of performance, and some work needs to be done to evaluate how this could be used and what the benefit would be.

ALSA controls spanning multiple outputs
David has now submitted patches for controls that affect multiple outputs (such as “Headphone+LO”). These are currently being discussed.

Audio groups
Tanu would like to add code to support collecting audio streams into “audio groups” to apply collective policy to them. I am supposed to help review this, and Colin mentioned that module-stream-restore already uses similar concepts.

Stream and device objects
Tanu proposed the addition of new objects to represent streams and devices. There didn’t seem to be consensus on adding these, but there was agreement on a clear need to consolidate common code from the sink-input/source-output and sink/source implementations. The idea was that having a common parent object for each pair might be one way to do this. I volunteered to help with this if someone takes it up.

Filter sinks
Alexander brought up the need for a filter API in PulseAudio, and this is something I really would like to have. I am supposed to sketch out an API (though implementing this is non-trivial and will likely take time).

Dynamic PCM for HDMI
David plans to see if we can use profile availability to help determine when an HDMI device is actually available.

Browser volumes
The usability of flat volumes for browser use cases (where the volume of streams can be controlled programmatically) was discussed, and my patch to allow a stream to optionally opt out of participating in flat volumes came up. Tanu and I are to continue the discussion already on the mailing list to come up with a solution for this.

Handling bad rewinding code
Alexander raised concerns about the quality of the rewinding code in some of our filter modules. The agreement was that we need better documentation on handling rewinds, including how to explicitly not allow rewinds in a sink. The example virtual sink/source code also needs to be adjusted accordingly.

BlueZ native backend
Wim Taymans’ work on adding back HSP support to PulseAudio came up. Since the meeting, I’ve reviewed and merged this code with the changes we want. Speaking to Luiz Augusto von Dentz from the BlueZ side, something we should also be able to add back is for PulseAudio to act as an HSP headset (using the same approach as for HSP gateway support).

Containers and PA
Takashi Iwai raised a question about what a good way to run PA in a container is. The suggestion was that a tunnel sink would likely be the best approach.

Common ALSA configuration
Based on discussion from the previous day at the Linux Audio mini-summit, I’m supposed to look at the possibility of consolidating the various mixer configuration formats we currently have to deal with (primarily UCM and its implementations, and Android’s XML format).

(Thanks to Tanu, David and Peter for reviewing this.)
Posted about 2 years ago
There are a lot of howtos written on this topic, but most of them didn't work for me. Irky's howto is very good; I'm mirroring his files here and summarising his howto. If you have any questions, look into his article. The Dockstar is a special device, because it does not support serial boot like other Marvell Kirkwood devices. E.g. when you brick a Seagate GoFlex, you can recover it without JTAG over serial boot (see: What is serial boot?, Recover without jtag). This howto is for the Seagate Dockstar, tested on Arch Linux with OpenOCD 0.8! You only need a Bus Pirate with a JTAG firmware; no serial adapter is needed here.

Steps (see the scripted variant after the list):
1. Download buspirate.cfg, dockstar.cfg and uboot.j.kwb.
2. Change ttyUSB1 to your Bus Pirate interface in buspirate.cfg.
3. Connect the Bus Pirate to the Dockstar (see the Dockstar pinout and Bus Pirate pinout).
4. Power the Dockstar.
5. Run "openocd -f dockstar.cfg".
6. Run "telnet localhost 4444".
7. Type "halt" into the telnet session, but don't hit enter yet.
8. Press the reset button and very shortly afterwards hit enter in the telnet session.
9. When OpenOCD shows "target halted in ARM state due to debug-request, current mode: Supervisor", everything is good. When it shows "target halted in Thumb state due to debug-request, current mode: Supervisor", repeat the halt + reset procedure.
10. In telnet: "sheevaplug_init"
11. In telnet: "nand probe 0"
12. In telnet: "nand erase 0 0x0 0xa0000"
13. In telnet: "nand write 0 uboot.j.kwb 0 oob_softecc_kw"
14. Wait ~15 minutes until OpenOCD shows a success message.
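The same sequence can in principle be scripted so the telnet session isn't needed; the halt still has to catch the CPU right after reset, so this is only a sketch of the idea, not a tested recipe:

```sh
# Hypothetical one-shot invocation: press the reset button right after
# starting this, so the 'halt' catches the CPU early in boot. If the
# target halts in Thumb state, abort and retry as in the manual steps.
openocd -f buspirate.cfg -f dockstar.cfg \
        -c "init" \
        -c "halt" \
        -c "sheevaplug_init" \
        -c "nand probe 0" \
        -c "nand erase 0 0x0 0xa0000" \
        -c "nand write 0 uboot.j.kwb 0 oob_softecc_kw" \
        -c "shutdown"
```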
Posted over 2 years ago
Headsets come in many sorts and shapes. And laptops come with different sorts of headset jacks – there is the classic variant of one 3.5 mm headphone jack and one 3.5 mm mic jack, and the newer (common on smartphones) 3.5 mm headset jack which can do both. USB and Bluetooth headsets are also quite common, but those are outside the scope of this article, which is about the different types of 3.5 mm (1/8 inch) jacks and how we support them in Ubuntu 14.04.

You’d think this would be simple to support, and for the classic (and still common) version of having one headphone jack and one mic jack that’s mostly true, but newer hardware comes in several variants. If we talk about the typical TRRS headset – for the headset itself there are two competing standards, CTIA and OMTP. CTIA is the more common variant, at least in the US and Europe, but it means that we have laptop jacks supporting only one of the variants, or both by autodetecting which sort has been plugged in. Speaking of autodetection, hardware differs there as well. Some computers can autodetect whether a headphone or a headset has been plugged in, whereas others cannot. Some computers also have a “mic in” mode, so they have only one jack, but you can manually retask it to be a microphone input. Finally, a few netbooks have one 3.5 mm TRS jack where you can plug in either a headphone or a mic, but not a headset.

So, how would you know which sort of headset jack(s) you have on your device? Well, I found the most reliable source is to actually look at the small icon present next to the jack. Does it look like a headphone (without mic), a headset (with mic), or a microphone? If there are two icons separated by a slash “/”, it means “either or”.

For the jacks where the hardware cannot autodetect what has been plugged in, the user needs to do this manually. In Ubuntu 14.04, we now have a dialog for that. In previous versions of Ubuntu, you would have to go to the sound settings dialog and make sure the correct input and output were selected – so still solvable, just a few more clicks. (The dialog might also be present in some Ubuntu preinstalls running Ubuntu 12.04.)

So in userspace, we should be all set. Now let’s talk about kernels and individual devices. Quite common with Dell machines manufactured in the last year or so is the variant where the hardware can’t distinguish between headphones and headsets. These machines need to be quirked in the kernel, which means that for every new model, somebody has to insert a row in a table inside the kernel. Without that quirk, the jack will work, but with headphones only. So if your Dell machine is one of these and does not currently support headset microphones in Ubuntu 14.04, here’s what you can do (the procedure is condensed into shell commands at the end of this post):

1. Check which codec you have: we can currently enable this for ALC255, ALC283, ALC292 and ALC668. “grep -r Realtek /proc/asound/card*” would be the quickest way to figure this out.
2. Try it for yourself: edit /etc/modprobe.d/alsa-base.conf and add the line “options snd-hda-intel model=dell-headset-multi”. (A few machines instead need “options snd-hda-intel model=dell-headset-dock”, but that’s not as common.)
3. Reboot your computer and test.
4. Regardless of whether you manage to resolve this or not, feel free to file a bug using the “ubuntu-bug audio” command. Please remove the workaround from the previous step (and reboot) before filing the bug. This might help others with the same hardware, as well as helping us upstream your fix to future kernels in case the workaround was successful. Please keep separate machines in separate bugs, as it helps us track when specific hardware is fixed.

Notes for people not running Ubuntu
Kernel support for most newer devices appeared in 3.10. Additional quirks have been added to even newer kernels, but most of them are CC'd to stable, so they will hopefully appear in 3.10 as well. PulseAudio support is present in 4.0 and newer. The “what did you plug in” dialog is part of unity-settings-daemon. The code is free software and available here.
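Condensed into commands, the check-and-quirk procedure from the list above looks roughly like this (assuming your machine needs the dell-headset-multi value the post names; a few models need the dock variant instead):

```sh
# 1. Identify the codec (look for ALC255/ALC283/ALC292/ALC668 in the output).
grep -r Realtek /proc/asound/card*

# 2. Add the quirk (assumption: dell-headset-multi fits your machine;
#    a few models need model=dell-headset-dock instead).
echo "options snd-hda-intel model=dell-headset-multi" | \
    sudo tee -a /etc/modprobe.d/alsa-base.conf

# 3. Reboot and test; remove the line again (and reboot) before
#    filing "ubuntu-bug audio".
sudo reboot
```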