
News

Posted 1 day ago
This post mostly affects developers of desktop environments and Wayland compositors. A systemd pull request was merged to add two new properties to some keyboards: XKB_FIXED_LAYOUT and XKB_FIXED_VARIANT. If these are set, the device must not be switched to the user-configured layout but must instead use the layout named in the properties.

This is required to make fake keyboard devices work correctly out of the box. For example, Yubikeys emulate a keyboard and send the configured passwords as key codes matching a US keyboard layout. If a different layout is applied, the password may get mangled by the client.

Since udev and libinput sit below the keyboard layout, there isn't much we can do in this layer. This is a job for the parts that handle keyboard layouts and layout configuration, i.e. GNOME, KDE, etc. I've filed a bug for GNOME here; please do so for your desktop environment. If you have a device that falls into this category, please submit a systemd patch or file a bug and CC me on it (@whot).
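For reference, a minimal sketch of what such a hwdb entry could look like. The property names come from the pull request; the file path, the USB vendor/product match and the layout value below are illustrative assumptions, so check the systemd hwdb documentation for the exact format:

# /etc/udev/hwdb.d/61-keyboard-local.hwdb (illustrative path)
# Match a keyboard-like USB device by vendor/product ID (example IDs)
evdev:input:b0003v1050p0407*
 XKB_FIXED_LAYOUT=us
 XKB_FIXED_VARIANT=

After editing, rebuild the hwdb and re-trigger the device, e.g. with sudo systemd-hwdb update followed by sudo udevadm trigger /dev/input/eventX (adjust the event node to match your device).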
Posted 1 day ago
This post applies to most tools that interface with the X server and change settings in the server, including xinput, xmodmap, setxkbmap, xkbcomp, xrandr, xsetwacom and other tools that start with x. The one word to sum up the future for these tools under Wayland is: "non-functional".

An X window manager is little more than an innocent bystander when it comes to anything input-related. Short of handling global shortcuts and intercepting some mouse button presses (to bring the clicked window to the front), there is very little a window manager can do. It is a separate process from the X server, it does not receive most input events, and it cannot affect what events are being generated. When it comes to input device configuration, any X client can tell the server to change it - that's why general debugging tools like xinput work.

A Wayland compositor is much more: it is a window manager and the display server merged into one process. This gives the compositor a lot more power and responsibility. It handles all input events as they come out of libinput and it also manages device configuration. Oh, and instead of the X protocol it speaks the Wayland protocol.

The difference becomes more obvious when you consider what happens when you toggle a setting in the GNOME control center. In both Wayland and X, the control center toggles a gsettings key and waits for some other process to pick it up. In both cases, mutter gets notified about the change, but what happens then is quite different. In GNOME (X), mutter tells the X server to change a device property, the server passes that on to the xf86-input-libinput driver, and from there the setting is toggled in libinput. In GNOME (Wayland), mutter toggles the setting directly in libinput.

Since there is no X server in the stack, the various tools can't talk to it. To get the tools to work, they would have to talk to the compositor instead. But they only know how to speak the X protocol, and no Wayland protocol extension exists for input device configuration. Such an extension would most likely have to be a private one, since the various compositors expose device configuration in different ways. Whether this extension will be written and added to compositors is uncertain; I'm not aware of any plans or even intentions to do so (it's a very messy problem). Either way, until it exists the tools will merely shout into the void, without even an echo to keep them entertained. Non-functional is thus a good summary.
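As a concrete illustration of the first step in that chain, this is roughly how a touchpad setting would be toggled from the command line rather than the control center. The schema and key are the ones GNOME uses for touchpads; verify the exact names for your GNOME version:

# Toggle tap-to-click; on GNOME (Wayland), mutter picks up the change
# and applies it directly through libinput - no X server involved.
gsettings set org.gnome.desktop.peripherals.touchpad tap-to-click true

# Read the current value back
gsettings get org.gnome.desktop.peripherals.touchpad tap-to-click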
Posted 1 day ago
The Raspberry Pi Foundation recently started contracting with Free Electrons to give me some support on the display side of the stack. Last week I got to review and release their first big piece of work: Boris Brezillon's code for SDTV support. I had suggested that we use this as the first project because it should have been small and self-contained. It ended up that we had some clock bugs Boris had to fix, and a bug in my core VC4 CRTC code, but he got a working patch series together shockingly quickly. He did one respin for a couple more fixes once I had tested it, and it's now out on the list waiting for devicetree maintainer review. If nothing goes wrong, we should have composite out support in 4.11 (we're probably a week late for 4.10).

On my side, I spent some more time on HDMI audio and the DSI panel. On the audio side, I'm now emitting the GCP packet for audio mute appropriately (I think), and with some more clocking fixes it's now accepting the audio data at the expected rate. On the DSI front, I fixed a bit of sequencing and added debugfs for the registers like we have in our other encoders. There's still no actual audio coming out of HDMI, and only white coming out of the panel. The DSI situation is making me wish for someone else's panel that I could attach to the connector, so I could see if my bugs are in the Atmel bridge programming or in the DSI driver.

I did some more analysis of 3DMMES's shaders, and improved our code generation, for wins of 0.4%, 1.9%, 1.2%, 2.6%, and 1.2%. I also experimented with changing the UBO (indirect addressed uniform array) upload path, which showed no effect. 3DMMES's uniform arrays are tiny, though, so it may be a win in some other app later.

I also got a couple of new patches from Jonas Pfeil. I went through his thread switch delay slots patch, which is pretty close to ready. He has a similar patch for branching delay slots, though apparently that one isn't showing wins yet in things he's tested. Perhaps most exciting, though, is that he went and implemented an idea I had dropped on github: replacing our shadow copies of raster textures with a bunch of math in the shader and using general memory loads. This could potentially fix X performance without a compositor, which we otherwise really don't have a workaround for other than "use a compositor." It could also improve GL-in-a-window performance: right now all of our DRI surfaces are raster format, because we want to be able to get full screen pageflipping, but that means we do the shadow copy if they aren't fullscreen. Hopefully this week I'll get a chance to test and review it.
Posted 2 days ago
I pushed the patch to require resolution today; expect this to hit the general public with libinput 1.6. If your graphics tablet does not provide axis resolution, we will need to add a hwdb entry. Please file a bug in systemd and CC me on it (@whot).

How do you know if your device has resolution? Run sudo evemu-describe against the device node and look for the ABS_X/ABS_Y entries:

# Event code 0 (ABS_X)
#     Value 2550
#     Min 0
#     Max 3968
#     Fuzz 0
#     Flat 0
#     Resolution 13
# Event code 1 (ABS_Y)
#     Value 1323
#     Min 0
#     Max 2240
#     Fuzz 0
#     Flat 0
#     Resolution 13

If the Resolution value is 0, you'll need a hwdb entry or your tablet will stop working in libinput 1.6. You can file the bug now and we can get it fixed; that way it'll be in place once 1.6 comes out.
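For reference, a hwdb entry that adds the missing resolution would look roughly like the sketch below. The EVDEV_ABS_* property format (min:max:resolution, with fields allowed to be empty) is systemd's; the match line and the value of 100 units/mm are made up for illustration and the real numbers depend on your tablet:

# /etc/udev/hwdb.d/61-evdev-local.hwdb (illustrative path)
# Keep the kernel-provided min/max, only add a resolution
evdev:input:b0003v056Ap0000*
 EVDEV_ABS_00=::100
 EVDEV_ABS_01=::100

Rebuild with sudo systemd-hwdb update and re-plug (or udevadm trigger) the device to pick it up.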
Posted 2 days ago
Pastebins are useful for dumping large data sets whenever the medium of conversation doesn't make this easy or practical. IRC is one example, or audio/video conferencing. But pastebins only work when the other side looks at the pastebin before it expires, and the default expiry date for a pastebin may only be a few days. This makes them effectively useless for bugs, where it may take a while for the bug to be triaged and the assignee to respond. It may take even longer to figure out the source of the bug, and if there's a regression it can take months. Once the content disappears, we have to re-request the data from the reporter.

And there is a vicious dependency too: usually, logs are more important for difficult bugs. Difficult bugs take longer to fix. Thus, with pastebins, the more difficult the bug, the more likely the logs become unavailable.

All useful bug tracking systems have an attachment facility. Use that instead: it's archived with the bug, and if a year later we notice a regression, we still have access to the data. If you got here because I pasted the link to this blog post, please do the following: download the pastebin content as raw text, then add it as an attachment to the bug (don't paste it as a comment). Once that's done, we can have a look at your bug again.
Posted 9 days ago
I missed last week's update, but with the holiday it ended up being a short week anyway.

The multithreaded fragment shaders are now in drm-next and Mesa master. I think this was the last big win for raw GL performance and we're now down to the level of making 1-2% improvements in our commits. That is, unless we're supposed to be using double-buffer non-MS mode and the closed driver was just missing that feature. With the glmark2 comparisons I've done, I'm comfortable with this state, though.

I'm now working on performance comparisons for 3DMMES Taiji, which the HW team often uses as a benchmark. I spent a day or so trying to get it ported to the closed driver and failed, but I've got it working on the open stack and have written a couple of little performance fixes with it. The first was just a regression fix from the multithreading patch series, but it was impressive that multithreading hid a 2.9% instruction count penalty and still showed gains.

One of the new fixes I've been working on is folding ALU operations into texture coordinate writes. This came out of frustration from the instruction selection research I had done the last couple of weeks, where all algorithms seemed like they would still need significant peephole optimization after selection. I finally said "well, how hard would it be to just finish the big selection problems I know are easily doable with peepholes?" and it wasn't all that bad. The win came out to about 1% of instructions, with a similar benefit to overall 3DMMES performance (it's shocking how ALU-bound 3DMMES is).

I also started on a branch to jump to the end of the program when all 16 pixels in a thread have been discarded. This had given me a 7.9% win on GLB2.7 on Intel, so I hoped for similar wins here. 3DMMES seemed like a good candidate for testing, too, with a lot of discards that are followed by reams of code that could be skipped, including texturing. Initial results didn't seem to show a win, but I haven't actually done any stats on it yet. I also haven't done the usual "draw red where we were applying the optimization" hack to verify that my code is really working, either.

While I've been working on this, Jonas Pfeil (who originally wrote the multithreading support) has been working on a couple of other projects. He's been trying to get instructions scheduled into the delay slots of thread switches and branching, which should help reduce any regressions those two features might have caused. More exciting, he's just posted a branch for doing nearest-filtered raster textures (the primary operation in X11 compositing) using direct memory lookups instead of our shadow-copy fallback. Hopefully I get a chance to review, test, and merge it in the next week or two.

On the kernel side, my branches for 4.10 have been pulled. We've got ETC1 and multithreaded FS for 4.10, and a performance win in power management. I've also been helping out and reviewing Boris Brezillon's work for SDTV output in vc4. Those patches should be hitting the list this week.
Posted 17 days ago
The Fedora Change to retire the synaptics driver was approved by FESCO. This will apply to Fedora 26 and is part of a cleanup to, ironically, make the synaptics driver easier to install.

Since Fedora 22, xorg-x11-drv-libinput is the preferred input driver. For historical reasons, almost all users have the xorg-x11-drv-synaptics package installed. But to actually use the synaptics driver over xorg-x11-drv-libinput requires a manually dropped xorg.conf.d snippet. And that's just not ideal. Unfortunately, in DNF/RPM we cannot just say "replace the xorg-x11-drv-synaptics package with xorg-x11-drv-libinput on update but still allow users to install xorg-x11-drv-synaptics after that". So the path taken is a package rename. Starting with Fedora 26, xorg-x11-drv-libinput's RPM will Provide/Obsolete [1] xorg-x11-drv-synaptics and thus remove the old package on update. Users that need the synaptics driver then need to install xorg-x11-drv-synaptics-legacy. This driver will install itself correctly without extra user intervention and will take precedence over the libinput driver. Removing xorg-x11-drv-synaptics-legacy removes the driver assignment and thus falls back to libinput for touchpads. So aside from the name change, everything else works more smoothly now. Both packages are now updated in Rawhide and should be available from your local mirror soon.

What does this mean for you as a user? If you are a synaptics user, after an update/install you now need to manually install xorg-x11-drv-synaptics-legacy. You can remove any xorg.conf.d snippets assigning the synaptics driver unless they also include other custom configuration. See the Fedora Change page for details. Note that this is a Fedora-specific change only; the upstream change for this is already in place.

[1] "Provide" in RPM-speak means the package provides functionality otherwise provided by some other package, even though it may not necessarily provide the code from that package. "Obsolete" means that installing this package replaces the obsoleted package.
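For those curious what a Provide/Obsolete rename looks like in practice, the stanza in the xorg-x11-drv-libinput spec file is roughly of this shape (the version numbers here are placeholders, not the ones Fedora actually uses):

# Illustrative fragment of an RPM spec implementing a package rename
Provides:  xorg-x11-drv-synaptics = %{version}-%{release}
Obsoletes: xorg-x11-drv-synaptics < 1.9.0-1

On update, DNF sees that xorg-x11-drv-libinput obsoletes the old package and swaps it out; installing xorg-x11-drv-synaptics-legacy afterwards brings the driver back under its new name.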
Posted 21 days ago
At this point, I haven't pushed a new release tag for xf86-video-freedreno to update to the latest xserver ABI. I'm inclined not to. If you are using a modern xserver you probably want to be using xf86-video-modesetting + glamor. It has more features (dri3, xv, etc) and better performance. And GL support on a3xx/a4xx is pretty solid. So distros with a modern xserver might as well drop the xf86-video-freedreno package.

The one case where xf86-video-freedreno is still useful is bringing up a new generation of adreno, since it can do dri2 with pure-sw fallbacks for all the EXA ops. But if that is what you are doing, I guess you know how to git clone and build.

The possible alternative is to push a patch that makes xf86-video-freedreno still build, but only probe (with the latest xserver ABI) if some "ForceLoad" type option is given in xorg.conf, otherwise fall back to modesetting/glamor. I can't think of a good reason to do this at the moment. But as always, questions/comments/suggestions welcome.
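For context, forcing the freedreno DDX today just means dropping a standard xorg.conf.d Device section like the illustrative one below; the "ForceLoad"-style option mentioned above is only an idea and does not exist:

# /etc/X11/xorg.conf.d/20-freedreno.conf (illustrative)
Section "Device"
    Identifier "adreno"
    Driver     "freedreno"
EndSection

Removing the snippet lets the server fall back to autoconfiguration, which on a modern xserver means modesetting + glamor.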
Posted 22 days ago
As usual, the new motion blur filter can be turned on and off at build time, and there is some configuration available as well to control how the effect works. Here are some screenshots (captions only): Motion Blur Off; Motion Blur On, intensity 12.5%; Motion Blur On, intensity 25%; Motion Blur On, intensity 50%.

Motion blur is a technique used to make movement feel smoother than it actually is. It is targeted at hiding the fact that things don't move in a continuous fashion but rather at fixed intervals dictated by the frame rate. For example, a fast-moving object in a scene can "leap" many pixels between consecutive frames even if we intend for it to have a smooth animation at a fixed speed. Quick camera movement produces the same effect on pretty much every object in the scene. Our eyes can notice these gaps in the animation, which is not great. Motion blur applies a slight blur to objects across the direction in which they move, which aims at filling the animation gaps produced by our discrete frames, tricking the brain into perceiving a smoother animation as a result.

In my demo there are no moving objects other than the sky box or the shadows, which are relatively slow objects anyway; however, camera movement can make objects change screen-space positions quite fast (especially when we rotate the view point), and the motion blur effect helps us perceive a smoother animation in this case.

I will try to cover the actual implementation in some other post, but for now I'll keep it short and leave it to the images above to showcase what the filter actually does at different configuration settings. Notice that the smoothing effect is something that can only be perceived during motion, so still images are not the best way to showcase the result of the filter from the perspective of the viewer. However, still images are a good way to freeze the animation and see exactly how the filter modifies the original image to achieve the smoothing effect.
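Since the actual implementation is deferred to a later post, here is only a generic sketch of the idea (this is not the demo's code): a post-process motion blur shader averages a handful of color-buffer samples along a per-pixel velocity vector. The uColor/uVelocity inputs and the intensity push constant are hypothetical names for illustration:

#version 450

// Generic screen-space motion blur sketch - NOT the demo's actual code.
// uColor is the rendered frame; uVelocity holds a per-pixel screen-space
// motion vector (e.g. derived from current and previous frame transforms).
layout(set = 0, binding = 0) uniform sampler2D uColor;
layout(set = 0, binding = 1) uniform sampler2D uVelocity;
layout(push_constant) uniform Push { float uIntensity; } pc; // e.g. 0.125, 0.25, 0.5

layout(location = 0) in vec2 vUV;
layout(location = 0) out vec4 outColor;

void main() {
    vec2 velocity = texture(uVelocity, vUV).xy * pc.uIntensity;
    const int SAMPLES = 8;
    vec4 color = vec4(0.0);
    for (int i = 0; i < SAMPLES; i++) {
        // Take samples along the motion direction, centered on the pixel
        float t = float(i) / float(SAMPLES - 1) - 0.5;
        color += texture(uColor, vUV + velocity * t);
    }
    outColor = color / float(SAMPLES);
}

Higher intensity scales the velocity and therefore stretches the blur further along the motion direction, which matches the 12.5%/25%/50% settings shown in the screenshots.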