
News

Posted over 12 years ago
This snippet is useful for OpenWrt setups and other rsyslog-sending systems. Add this near the head of /etc/rsyslog.conf ($UDPServerAddress is optional):

$ModLoad imudp
$UDPServerAddress 10.0.23.43
$UDPServerRun 514

And this to /etc/rsyslog.d/aps.conf:

:fromhost-ip, isequal, "10.0.23.42" /var/log/ap-mainhall-ng
& ~

The "& ~" means: stop processing a message once it has matched 10.0.23.42. Without it, those events would also be logged into e.g. /var/log/syslog. You can also use this in a more generic way:

$template DynFile,"/var/log/system-%HOSTNAME%.log"
:fromhost-ip, !isequal, "127.0.0.1" ?DynFile
& ~

This is also useful if you want to provide a pipe file for further processing:

:programname, isequal, "freeradius" |/var/run/rsyslog-freeradius.pipe
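A note on the pipe variant above (my own sketch, not from the original post): rsyslog writes into an existing named pipe but does not create it, so something has to create the FIFO and read from it, roughly like this:

mkfifo /var/run/rsyslog-freeradius.pipe
while read line; do
    echo "processing: $line"    # replace with your actual processing
done < /var/run/rsyslog-freeradius.pipe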
Posted over 12 years ago
As part of the Ubuntu Hardware Summit, I gave a presentation on “audio debugging techniques”, focused on HDA Intel cards. I also wrote down some notes for some of those slides. I share the slides and the notes in the hope that you will find the information useful if you run into trouble with your audio hardware.

Audio stack overview

The audio stack can seem a bit complex, but first look at the line going all the way from the applications down to the hardware. This is the optimal audio path. If the audio path is different, complexity increases and you might run into undesired behaviour, such as one application blocking another from playing audio. There are valid exceptions, though – we have a separate sound server for professional, low-latency audio, but that is outside the scope of this presentation.

Let's start from the top. At the top we have different kinds of audio applications, which talk to PulseAudio. GStreamer is a library that helps with media playback; it can, for example, decode Ogg and MP3 files. PulseAudio mixes these audio streams and sends them down to the kernel. The ALSA library and the ALSA kernel core do not do much here but pass the audio pointers through. The HDA controller driver is responsible for talking directly to the hardware, and so it sets up all necessary DMA streams between the HDA controller and memory. The HDA controller driver also talks to the HDA codec driver, which is different for every codec vendor. As some of you probably know, between the HDA controller – which is part of the southbridge in most computers – and the HDA codec, a special HDA bus is used. This means that the only way we can talk to the codec is through the controller.

Controlling audio volume follows the same path. When you use your volume control application, it controls PulseAudio's volume. PulseAudio in turn modifies the volume controls exposed by the kernel, and the kernel in turn talks to the hardware to set volume control registers on the codec. There are two levels of abstraction here: first, the kernel might choose not to expose all of the hardware's volume controls, and second, PulseAudio exposes only one big volume control, which is the sum of some of the volume controls the kernel exposes. So there is filtering on two levels.

Audio stack overview – codec

Let us have a look at the HDA codec chip and how its internals are represented to the driver. The codec is constructed as a graph, and on this slide one of the simpler HDA codec graphs is shown (just because it would fit the screen). A while ago upstream made a small program to extract this graph from the codec and render a picture of it. Thanks to Keng-Yü, who works for Canonical in Taipei, this tool is available as a package in Ubuntu 11.10: just install the “codecgraph” package.

In this graph we have nodes corresponding to DACs, ADCs, mixers, and pins. In this example we can see which pins are connected to which DACs by following the solid lines. The dotted line shows a connection that is possible but not currently active. As the Linux codec driver code grows more intelligent, we depend more and more on this information being accurate. This way we do not hard-code as much in the driver, so we can adapt to future codecs without having to rewrite much code. The information coming from the codec is usually correct.
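As an aside, not from the talk itself: generating such a picture with codecgraph might look roughly like the following. The exact invocation is an assumption on my part, so check the tool's own documentation.

sudo apt-get install codecgraph          # packaged starting with Ubuntu 11.10
codecgraph /proc/asound/card0/codec#0    # assumed usage: render the graph from the codec proc file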
One problem we have from time to time, though, is that chip vendors sometimes add features which they choose not to document in this graph (and not in any other way either). There is a mechanism called “processing coefficients” in the specification, where the vendor can add its own functionality without telling anyone. When that happens, and these undocumented “processing coefficients” are required to enable all inputs and outputs, we usually run into difficult problems that require vendor support to resolve. Also, in some cases the graph cannot describe the functionality needed, e.g. if some hardware depends on special pins on the codec. We need to know about this when it happens, so we can support it in the driver.

So if you are a hardware designer, my message is: try to use the standard way of doing things as much as possible. Do this and it will work out of the box on Linux, and likely on other operating systems as well. If you do anything special, you are causing headaches for driver writers, and possibly a slower time to market. An example of this is how you control external amplifiers: you can use the EAPD pins, which is the standard way, or you can use GPIO pins, ACPI, or anything else, which will be more problematic and require special driver support.

Pin configuration default

We also depend on information from the writers of the BIOS/UEFI, i.e. the computer's firmware. As a hardware designer, you have the freedom to choose which pins of the codec go to which physical jack. You might decide that you want a digital out, or you might decide that this machine should not have that functionality, in which case you leave that pin unconnected. The firmware engineer then needs to know this, and program it into the codec when the computer boots. This is done by setting the “Pin Configuration Default” register. This register tells us not only the device type (headphone, mic, etc.), but also the location (internal, external, docking station), the colour, and the channel mapping (used for surround functionality). Several years ago we did not read this register much, but these days we depend on it for setting up the codec correctly on all new computers.

So what do we do if this register is wrong? Well, if we work with the hardware pre-release, there is a chance we can feed this information back to the firmware writers so they can correct the problem. If the hardware is already released, we have to create a “quirk”. This means that the driver overrides the firmware's pin value(s) and instead uses its own values. Because this value is so important, I have written an application where you can try out different combinations of this register.
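To make the register layout concrete, here is a minimal bash sketch (mine, not from the talk) that decodes the main fields of a 32-bit pin configuration default value, following the field layout in the HDA specification; the helper name and the sample value are illustrative:

# Field layout per the HDA spec: 31:30 port connectivity, 29:24 location,
# 23:20 default device, 15:12 color, 7:4 default association, 3:0 sequence.
decode_pincfg() {
    local v=$(( $1 ))
    printf 'connectivity=%d location=0x%02x device=0x%x color=0x%x assoc=%d seq=%d\n' \
        $(( (v >> 30) & 0x3 )) $(( (v >> 24) & 0x3f )) $(( (v >> 20) & 0xf )) \
        $(( (v >> 12) & 0xf )) $(( (v >> 4) & 0xf )) $(( v & 0xf ))
}
decode_pincfg 0x0221401f    # a typical value for a green front headphone jack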
Mixer problems

One of the most common problems in getting audio up and running on Linux is making sure the mixer is correct. Typical symptoms are that some outputs work while others do not, or that there is something wrong with the volume control. Initial checks are done at the two levels of mixer abstraction. First, have a look at the PulseAudio volume control; you can do that in GNOME's volume control application. PulseAudio in turn controls the volume of mixers at the ALSA level. You can see how this works by starting the alsamixer program. In alsamixer you can also see additional sliders, which you can use to verify that they are set correctly to enable sound. You start alsamixer from a terminal (in Ubuntu the quickest way to launch a terminal is the Ctrl-Alt-T shortcut).

Mixer control names

So let's look at these two abstraction levels in more detail, and at how you can inspect what is actually going on. First, the codec level. If you are familiar with the codec's nodes and how they are connected, e.g. by running “codecgraph”, you can also find out which ALSA-level controls are connected to which nodes on the codec. This is done by inspecting the “codec proc” file. Every codec in the system has such a file, and its name is made up of the sound card name and the codec's address on the HDA bus – typically something like /proc/asound/card0/codec#0. In this file you can also see a lot of other information about the codec.

Next, we look at PulseAudio's abstraction of these controls, in the files under /usr/share/pulseaudio/alsa-mixer. For example, /usr/share/pulseaudio/alsa-mixer/paths/analog-output-headphones.conf contains, among others, the sections [Element Master] and [Element Headphones]. That means that the ALSA-level controls “Master” and “Headphones” are merged into PulseAudio's volume control when the “Headphones” port is selected. These two places are the keys to understanding what is going on when you have mixer problems.

PCM/Streaming problems

Next up are streaming problems, which usually show up as audio breaking up, crackling or glitching. Unfortunately these problems are typically quite hard to resolve. Sometimes the cause is a bug in PulseAudio or in the driver, but more often the problem is on either the application side or the hardware side. If an application does not submit data to PulseAudio in time, PulseAudio has no audio to play back, so playback breaks up. Once some more data has reached PulseAudio, playback starts again, and so playback is started and stopped repeatedly. The other problem can be bad position reports from the hardware. PulseAudio depends on being able to ask the hardware for its current position at all times, and this should be sample-accurate. You can test for this by running PulseAudio with timer scheduling disabled; in that case PulseAudio relies more on DMA interrupts and less on position reports. However, this also makes the machine draw more power than necessary, so please avoid it if you can. When I try to debug these problems I usually start by making a PulseAudio verbose log. It often takes some knowledge and experience to analyze such a log, though.
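For reference, a sketch of both steps on an Ubuntu system of this era (assuming a per-user PulseAudio daemon):

pulseaudio -k                              # stop the running daemon
pulseaudio -vvvv 2>&1 | tee ~/pulse.log    # restart it in the foreground with verbose logging

# Timer scheduling is disabled by loading the ALSA devices with tsched=0,
# e.g. by editing the module-udev-detect line in /etc/pulse/default.pa:
#   load-module module-udev-detect tsched=0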
Jack sensing

Over the last six months or so, one of the things I have been working on is better jack detection handling throughout the audio stack. “Jack sensing” in this context means deciding what to do when something has been plugged in or unplugged. When this happens, an interrupt (IRQ) is triggered and control is passed to the HDA codec driver. The driver takes the first action itself. Unfortunately, this is an area where things differ a lot between drivers – mostly between vendors, but also between different chips from the same vendor, or even between configurations of the same chip. As a general rule, though, for the most common vendors – that means Realtek, IDT and Conexant – the following rules are followed:

For headphones: when you plug them in, the internal speakers are muted. Remember, this is still all at the kernel level.

For line outs: this is not completely standardised everywhere yet, but upstream seems to be leaning towards having headphones mute line outs and having line outs mute internal speakers by default. Some drivers also have a special control where the automute behaviour can be changed.

For microphones: the only rule here is that if we have exactly one internal microphone and one external microphone, the external microphone takes over when you plug it in, and the internal microphone regains control when you unplug it. Should there be any other inputs, e.g. two external mic jacks or a line-in jack, no autoswitching is done at the kernel level.

After this has been done, a signal is sent to userspace – hopefully; this too varies between vendors. What's new in Ubuntu 11.10 is that this signal is picked up by PulseAudio. This is important, because it enables PulseAudio to switch the port used for volume control. So when you press your media keys (or use the sound menu) to control the volume, you control your headphones' volume when headphones are plugged in, and your speakers' volume when they are unplugged.

This not working properly is one of the more common problems, so I have written a small tool that helps you determine whether the issue is in hardware or software. The tool is called “hda-jack-sense-test”. It sends the “get pin sense” command to each codec and outputs the results. I actually had use for it earlier this week, and confirmed a hardware issue: although the headphones were unplugged, the “get pin sense” command reported that the headphones were being plugged in and unplugged all the time. If you can confirm that things work at this level, also look in “Sound settings” to see whether the port (known there as a “connector”) is automatically switched whenever headphones – or a microphone – are plugged in. If it is not, the most common cause is that the kernel driver does not notify userspace correctly about the change.
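If you do not have hda-jack-sense-test at hand, the same “get pin sense” verb can be sent manually with the hda-verb tool from alsa-tools. A sketch; the widget NID 0x1b is only an example, so look up your headphone pin's NID in the codec proc file first:

# 0xf09 is the Get Pin Sense verb; bit 31 of the response indicates presence.
sudo hda-verb /dev/snd/hwC0D0 0x1b 0xf09 0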
HDMI/DisplayPort Audio

One of the most common problems with HDMI these days is with newer chips that support more than one output. These outputs can be HDMI, DisplayPort or DVI (with audio supported through a DVI-to-HDMI adapter). NVidia has supported four outputs for quite some time and Intel has supported three, but usually not all of these are actually connected on the board. Now, the problem is: how do we know which pin to output to? And the answer is that there is no good way to figure that out until something is actually plugged in. If you remember me talking about the pin config default earlier, you might say that the graphics chip could mark the pins that are not connected to anything. That would be a great start (and where pins are marked as not connected, we do make use of it to hide those outputs), but unfortunately, more often than not, these pins are set up as all connected and present. So if you write firmware for internal or external graphics cards, please do set up these pins.

So if we don't know, what do we do? Well, here too there is work in progress at the userspace level. First, PulseAudio has to probe how many ports there are. Then we can use the new jack detection feature to determine what has actually been plugged in. I am currently working on redesigning the sound settings dialog so that ports that are not plugged in are actually hidden from the dialog, and I hope this will land in Ubuntu 12.04, which will be released in April next year.

And a final note, just so you don't forget it: NVidia and ATI both require their proprietary video drivers to enable HDMI and DisplayPort audio. ATI used to support some cards in its open source driver, but this feature was recently removed because they had some problems with it. Intel has no proprietary drivers at all, so there it works with the standard open source driver.
Posted over 12 years ago
pulseaudio: Do any of our followers have any internal contact with Adobe? Perhaps you can push them to fix http://t.co/lbvD0VJz (by Colin)
Posted over 12 years ago
For those of you who were interested but couldn’t make it to the GStreamer Conference this year, the cool folks at Ubicast have got the talk videos up (they can be streamed or downloaded). Among these is my talk about recent developments in the PulseAudio world.
Posted over 12 years ago
Longish day, but I did want to post something fun before going to sleep — I just pushed out patches to hook up the WebRTC folks’ analog gain control to PulseAudio. So your mic will automatically adjust the input level based on how loud you’re speaking. It’s quite quick to adapt if you’re too loud, but a bit slow when the input signal is too soft. This isn’t bad, since the former is a much bigger problem than the latter. Also, we’ve switched to the WebRTC canceller as the default canceller (you can still choose the Speex canceller manually, though). Overall, the quality is pretty good. I’d do a demo, but it’s effectively had zero learning time in my tests, so we’re not too far from a stage where this is a feature that, if we’re doing it right, you won’t even notice exists. There are lots of things, big and small, that need to be added and tweaked, but this does go some way towards bringing a hassle-free VoIP experience on Linux closer to reality. Once again, kudos to the folks at Google for the great work and for opening up this code. Also a shout-out to fellow Collaboran Sjoerd Simons for bouncing ideas around and giving me those much-needed respites from talking to myself. :)
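For the curious, a sketch of how the canceller is chosen when loading PulseAudio's module-echo-cancel (the aec_method argument is the relevant knob):

pactl load-module module-echo-cancel    # now uses the WebRTC canceller by default
# or, to keep using Speex:
# pactl load-module module-echo-cancel aec_method=speex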
Posted over 12 years ago
As I’d blogged about last week, we had a couple of Audio BoF sessions last week. Here’s a summary of what was discussed. I’ve collected items in relevance order rather than chronological order to make for easier reading. I think I have everything covered; I’ll update this post if one of the attendees points out something I missed or got wrong.

Surround: A number of topics came up with regard to multichannel/surround support:

- There seems to be some lack of uniformity in channel maps, particularly on non-HDA hardware. While it was agreed that this should be fixed, we need some directed testing and bug reports to actually be able to fix it.
- Multichannel mixers, while theoretically supported, are not actually exposed by any current drivers. It was suggested that these could be exposed without breaking existing applications by giving new multichannel mixers device names corresponding to the surround PCM device (like "surround51").
- We need a way to query channel maps on a given PCM. This will be implemented as a new ALSA API which can be called after the PCM is opened. (AI: Takashi)
- It would be good to have a way to configure the channel map as well (if supported by the hardware?). The suggestion was to do this as was done in PulseAudio, where an arbitrary channel map can be specified. (NB: is there hardware that supports multiple/arbitrary channel maps? If yes, how do we handle this?)

Routing: Unsurprisingly, we came back to the problem of building a simplified mixer graph for PulseAudio. The current status is that ALSA builds a simplified mixer for use by userspace, and PulseAudio further simplifies this by means of some name-based guessing.

- PulseAudio would ideally like a simplified version of the original mixer graph, but something more complete than what we get now.
- However, since PulseAudio has fairly unique requirements on what information it wants, it probably makes sense to have ALSA provide the entire graph and leave the simplification task to PulseAudio (discussion on this approach continues).
- There was no consensus on who would do this or how it should be done (creating a new serialisation format, exposing what HDA provides, adding node metadata to ALSA mixer controls, or something else altogether).
- As an interim step, it was agreed that it would be possible to provide ordering in the simplified ALSA mixer (that is, add metadata to a control to signal what control comes "before" it and what comes "after" it). This should go some way towards making the PA mixer simplification logic simpler and more robust. (AI: Takashi)

HDMI: A couple of things came up in discussion about the status of HDMI:

- There was a question about the reliability of ELD information, as this will be used more in future versions of PulseAudio. There did not appear to be conclusive evidence in either direction, so we will probably assume that it is reliable and deal with reliability problems as they arise.
- It was mentioned that it might be desirable to only expose the ALSA device if a receiver is plugged in. This had been mooted earlier as something to do in PulseAudio as an alternative. One thing to consider with this approach is dealing with a device switch on the receiver side; doing this without a notification to userspace was generally agreed to be a bad idea.

Jack detection: The long-standing debate on exposing jacks as input devices or as ALSA controls came to a conclusion, with the resolution that jacks will be exposed as ALSA controls. This requires a change in the kernel (and potentially alsa-lib?) which should not be too complex. Actual buttons (such as volume/mute control) will continue to be input devices. Once this is done, the pending jack detection patches will be adapted to use the new interface. (AI: Takashi (patches are in a branch already!), David)

UCM: Another long-standing issue was the merging of the ALSA UCM patches into PulseAudio. Most of the problems thus far have been due to an incomplete understanding of how ALSA and PA concepts map to each other. After a lengthy discussion, some consensus was arrived at in this regard:

- As is the case now, every ALSA PCM maps to a PA sink.
- Each UCM verb maps to a PA card profile.
- Each combination of UCM devices that can be used concurrently maps to a PA port.
- Each UCM modifier is mapped to an intended role on the corresponding sink.
- The code should (as in the patches currently submitted) be part of the PA ALSA module, and there will be changes required to use the UCM-specified mixer list instead of PA's guessing mechanism. (AI: ???)

(NB: It was mentioned that PulseAudio needs to support multiple intended roles for a sink/source. This is actually already supported — the intended-roles property is a whitespace-separated list of roles.)

(NB2: There was further discussion with the Linaro folks this week about the UCM bits, and there is likely going to be an IRC/phone gathering to clarify things further in the near future.)

GStreamer latency settings: We currently do not actually use PulseAudio's power saving features from GStreamer, for several reasons. Suggestions to overcome this were mooted. While no definite agreement was reached, one suggestion was to add a "powersave" profile to pulsesink that chooses higher latency/buffer-time values. Players would need to set this if they are not using features that break when these values are raised.

Corking: The statelessness of the current corking mechanism was discussed in one session, and between the PulseAudio developers later. The problem is that we need to be able to track cork/uncork reasons more closely. This would give us more of the metadata needed to make policy decisions without breaking streams. For example, if PA corks a music stream due to an incoming call, the user then plays and pauses the music during the call, and the call then ends, we must not uncork the music stream. We intend to deal with this with two changes:

- We need to add a per-cause cork/uncork request count.
- We need to associate a "generation" with cork/uncork requests, so that certain conditions (such as user intervention) can bump the generation counter, and uncork requests corresponding to old cork requests will be ignored.

This will make it possible to track the various bits of state we need to do the right thing for cases like the one mentioned above. So that's that — lots of things discussed, lots of things to do! Thanks to everyone who came for participating.
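As a small aside on the intended-roles note above (my example, not from the post): the property is exposed on sinks and sources as device.intended_roles, and can be inspected on a running system:

pactl list sinks | grep intended_roles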
Posted over 12 years ago
At the Kernel Summit in Prague last week, Kay Sievers and I led a session on developing shared userspace libraries, aimed at kernel hackers. More and more userspace interfaces of the kernel (for example many that deal with storage, audio, resource management, security, file systems or a number of other subsystems) nowadays rely on a dedicated userspace component. As people who work primarily in the plumbing layer of the Linux OS, we noticed over and over again that the authors of these libraries – usually people more at home on the kernel side of things – make the same mistakes repeatedly, making life unnecessarily difficult for the users of the libraries. In our session we tried to point out a number of these things, in particular places where the usual kernel hacking style translates badly into userspace shared library hacking. Our hope is that a few kernel developers will have a look at our list of recommendations and consider the points we are raising. To make things easy we have put together an example skeleton library we dubbed libabc, whose README file includes all our points in terse form. It's available on kernel.org: the git repository and the README. The list of recommendations draws inspiration from David Zeuthen's and Ulrich Drepper's well-known papers on the topic of writing shared libraries. In the README linked above we try to distill this wealth of information into a terse list of recommendations, with a couple of additions and a strict focus on a kernel-hacker background. Please have a look; even if you are not a kernel hacker there might be something useful to know in it, especially if you work on the lower layers of our stack. If you have any questions or additions, just ping us, or comment below!
Posted over 12 years ago
For those who are in Prague for GstConf, LinuxCon, ELCE, etc. — don’t forget we’ve a couple of interesting audio-related things happening:

- Today (Tuesday), at 4 pm, I’ll be talking about recent developments in PulseAudio.
- Tomorrow (Wednesday), at 11 am, we’re continuing the Audio BoF that I had mentioned earlier (since we ran out of time on Sunday).

If you’re around and interested, do drop in!
Posted over 12 years ago
If you make it to Prague the coming week for the LinuxCon/ELCE/GStreamer/Kernel Summit/... superconference, make sure not to miss:

- The Linux Audio BoF, with numerous Linux audio hackers. 5 pm on Sunday (the 23rd, i.e. today).
- Latest developments in PulseAudio, by Arun Raghavan. 4 pm on Tuesday, GStreamer Summit.
- The Linux Kernel Developer Panel, a shared session of LinuxCon and ELCE. Panelists are Linus Torvalds, Alan Cox, Thomas Gleixner and Paul McKenney; moderated by yours truly. 9:30 am on Wednesday.
- systemd Administration in the Enterprise, by Kay Sievers and yours truly. 4:15 pm on Wednesday, LinuxCon.
- Integrating systemd: Booting Userspace in Less Than 1 Second, by Koen Kooi. 11:15 am on Friday, ELCE.

All of that at the Clarion Hotel. See you in Prague!
Posted over 12 years ago
Well, I'm off to Prague tomorrow morning. I'm very much looking forward to this trip, as there are a whole bunch of interesting talks going on over the three conferences I'll be visiting, plus I get to go to Prague, which has been on my "cities to visit" list for quite some time. Tick and tick. Arun will be giving a PulseAudio talk and Lennart will be rambling on about init systems, as is customary these days. Very much looking forward to both. We've also had an IRC meeting about Bluetooth support and policy stuff for in-car usage with some big car manufacturers, which we'll follow up next week in person, and there are also a lot of other audio folk in town, so we'll hopefully kickstart the UCM discussions again with a view to merging into PA 2.0. Looking forward to catching up with Mark and Liam again on that front. So with pretty much all the people involved in the Linux audio field in one place, this is a really good opportunity to make some good progress! Here's to a successful trip! Many thanks to Collabora, who have helped me organise funding, and also to the Yocto Project (via Texas Instruments), who have very kindly sponsored my attendance of the LinuxCon/ELC-E part of the event. I look forward to finding out more about their project when I help out at their booth!