
News

Posted 7 days ago by Peter Hutterer
TLDR: If you see devices like "xwayland-pointer" show up in your xinput list output, then you are running under a Wayland compositor and debugging/configuration with xinput will not work.

For many years, xinput has been a useful tool to debug configuration issues (it's not a configuration UI, btw). It works by listing the various devices detected by the X server. A typical output from xinput list under X could look like this:

```
whot@jelly:~> xinput list
⎡ Virtual core pointer                    id=2    [master pointer  (3)]
⎜   ↳ Virtual core XTEST pointer          id=4    [slave  pointer  (2)]
⎜   ↳ SynPS/2 Synaptics TouchPad          id=22   [slave  pointer  (2)]
⎜   ↳ TPPS/2 IBM TrackPoint               id=23   [slave  pointer  (2)]
⎜   ↳ ELAN Touchscreen                    id=20   [slave  pointer  (2)]
⎣ Virtual core keyboard                   id=3    [master keyboard (2)]
    ↳ Virtual core XTEST keyboard         id=5    [slave  keyboard (3)]
    ↳ Power Button                        id=6    [slave  keyboard (3)]
    ↳ Video Bus                           id=7    [slave  keyboard (3)]
    ↳ Lid Switch                          id=8    [slave  keyboard (3)]
    ↳ Sleep Button                        id=9    [slave  keyboard (3)]
    ↳ ThinkPad Extra Buttons              id=24   [slave  keyboard (3)]
```

Alas, xinput is scheduled to go the way of the dodo. More and more systems are running a Wayland session instead of an X session, and xinput just doesn't work there. Here's an example output from xinput list under a Wayland session:

```
$ xinput list
⎡ Virtual core pointer                    id=2    [master pointer  (3)]
⎜   ↳ Virtual core XTEST pointer          id=4    [slave  pointer  (2)]
⎜   ↳ xwayland-pointer:13                 id=6    [slave  pointer  (2)]
⎜   ↳ xwayland-relative-pointer:13        id=7    [slave  pointer  (2)]
⎣ Virtual core keyboard                   id=3    [master keyboard (2)]
    ↳ Virtual core XTEST keyboard         id=5    [slave  keyboard (3)]
    ↳ xwayland-keyboard:13                id=8    [slave  keyboard (3)]
```

As you can see, none of the physical devices are available; the only ones visible are the virtual devices created by XWayland. In a Wayland session, the X server doesn't have access to the physical devices. Instead, it talks via the Wayland protocol to the compositor.
This image from the Wayland documentation shows the architecture: in that graphic, devices are known to the Wayland compositor (1), but not to the X server. The Wayland protocol doesn't expose physical devices; it merely provides a 'pointer' device, a 'keyboard' device and, where available, touch and tablet tool/pad devices (2). XWayland wraps these into virtual devices and provides them via the X protocol (3), but they don't represent the physical devices.

This usually doesn't matter, but when it comes to debugging or configuring devices with xinput we run into a few issues. First, configuration via xinput usually means changing driver-specific properties, but in the XWayland case there is no driver involved: it's all handled by libinput inside the compositor. Second, debugging via xinput only shows what the Wayland protocol sends to XWayland and what XWayland then passes on to the client. For low-level issues with devices, this is all but useless.

The takeaway here is that if you see devices like "xwayland-pointer" show up in your xinput list output, then you are running under a Wayland compositor and debugging with xinput will not work. If you're trying to configure a device, use the compositor's configuration system (e.g. gsettings). If you are debugging a device, use libinput-debug-events. Or compare the behaviour between the Wayland session and the X session to narrow down where the failure point is.
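As a practical rule of thumb (my own sketch, not part of xinput or libinput), the session type that logind reports in XDG_SESSION_TYPE tells you which tool will actually see your devices:

```python
import os

# My own rule of thumb, not from the post: map the session type (reported by
# logind in XDG_SESSION_TYPE) to the tool that can actually see your input
# devices. libinput-debug-events needs root and works outside the X server.
def input_debug_tool(session_type=None):
    if session_type is None:
        session_type = os.environ.get("XDG_SESSION_TYPE", "")
    if session_type == "wayland":
        return "libinput-debug-events"  # talks to the evdev devices directly
    if session_type == "x11":
        return "xinput list"            # the classic X server view
    return None                         # tty, unknown, etc.

print(input_debug_tool("wayland"))  # libinput-debug-events
```

Under a Wayland session this points you straight at libinput, matching the takeaway above.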
Posted 7 days ago
Just over a year ago Bastille Security announced the discovery of a suite of vulnerabilities commonly referred to as MouseJack. The vulnerabilities targeted the low-level wireless protocol used by Unifying devices, typically mice and keyboards. The issues included the ability to:

- Pair new devices with the receiver without user prompting
- Inject keystrokes, covering various scenarios
- Inject raw HID commands

This gave an attacker with $15 of hardware the ability to basically take over remote PCs within wireless range, which could be up to 50m away. This makes sitting in a café quite a dangerous thing to do when any affected hardware is inserted, which for the Unifying dongle is quite likely, as it's explicitly designed to remain in an empty USB socket. The main manufacturer of these devices is Logitech, but the hardware is also supplied to other OEMs such as Amazon, Microsoft, Lenovo and Dell, where it is re-badged or renamed. I don't think anybody knows the real total, but by my estimate there must be tens of millions of affected-and-unpatched devices being used every day.

Shortly after this announcement, Logitech prepared an update which mitigated some of these problems, and then a few weeks later prepared another update that worked around and fixed the various issues exploited by the malicious firmware. Officially, Linux isn't a supported OS by Logitech, so to apply the update you had to start Windows, then download and manually deploy a firmware update. For people running Linux exclusively, like a lot of Red Hat's customers, the only choice was to stop using the Unifying products or try to find a Windows computer that could be borrowed for doing the update. Some devices are plugged in behind racks of computers, forgotten, or even hot-glued into place and unremovable. The MouseJack team provided a firmware blob that could be deployed onto the dongle itself, and didn't need extra hardware for programming.
Given the cat was now "out of the bag" on how to flash arbitrary firmware to this proprietary hardware, I asked Logitech if they would provide some official documentation so I could flash the new secure firmware onto the hardware using fwupd. After a few weeks of back-and-forth communication, Logitech released to me a pile of documentation on how to control the bootloader on the various different types of Unifying receiver, and on the other peripherals that were affected by the security issues. They even sent me some of the affected hardware, and gave me access to the engineering team that was dealing with this issue.

It took a couple of weeks, but I rewrote the previously reverse-engineered plugin in fwupd with the new documentation, so that it now updates the hardware exactly according to the official documentation: the packet log matches the Windows update tool byte for byte. Magic numbers out, #defines in. FIXMEs out, detailed comments in. Using the documentation also means we can report sensible and useful error messages. There were other nuances that were missed in the reverse-engineered plugin (for example, making sure the specified firmware is valid for the hardware revision), and with the blessing of Logitech I merged the branch to master.

I then persuaded Logitech to upload the firmware somewhere public, rather than having to extract it from the .exe files of the Windows update. I opened a pull request to add the .metainfo.xml files which allow us to build a .cab package for the Linux Vendor Firmware Service, and created a secure account for Logitech that allowed them to upload the firmware into a special testing branch.

This is where you come in. If you would like to test this, you first need a version of fwupd that is able to talk to the hardware: fwupd-0.9.2-2.fc26 or newer, which you can get from Koji for Fedora. Then you need to change the DownloadURI in /etc/fwupd.conf to the testing channel.
The URI is in the comment in the config file, so there is no need to list it here. Then reboot, or restart fwupd. You can then either launch GNOME Software and click Install, or run on the command line:

```
fwupdmgr refresh && fwupdmgr update
```

Soon we'll be able to update more kinds of Logitech hardware. If this worked, or you had any problems, please leave a comment on this blog or send me an email. Thanks should go to Red Hat for letting me work on this for so long, and even more thanks to Logitech for making it possible.
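If you want to check the fwupd version requirement programmatically, something like the following would do. This is my own illustration, not anything shipped with fwupd, and it only compares the upstream "x.y.z" part, ignoring distro release suffixes such as "-2.fc26":

```python
# Hypothetical helper, not part of fwupd: check whether an installed fwupd
# version string meets the 0.9.2 minimum mentioned above. Only the upstream
# "x.y.z" part is compared; release suffixes like "-2.fc26" are dropped.
def fwupd_new_enough(installed, minimum="0.9.2"):
    def parse(version):
        upstream = version.split("-", 1)[0]            # drop "-2.fc26"
        return tuple(int(part) for part in upstream.split("."))
    return parse(installed) >= parse(minimum)

print(fwupd_new_enough("0.9.2-2.fc26"))  # the version called out in the post
```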
Posted 8 days ago
A long time ago I started looking at rewriting Tracker’s build system using Meson. Today those build instructions landed in the master branch in Git! Meson is becoming pretty popular now, so I probably don’t need to explain why it’s such a big improvement over Autotools. Here are some key benefits:

- It takes 2m37s for me to build from a clean Git tree with Autotools, but only 1m08s with Meson.
- There are 2573 lines of meson.build files, vs. 5013 lines of Makefile.am, a 2898-line configure.ac file, and various other bits of debris needed for Autotools.
- Only compile warnings are written to stdout by default, so they’re easy to spot.
- Out-of-tree builds actually work.

Tracker is quite a challenging project to build, and I hit a number of issues in Meson along the way, plus a few traps for the unwary. We have a huge number of external dependencies; Meson handles this pretty neatly, although autodetection of backends requires a bit of boilerplate. There’s a complex mix of Vala and C code in Tracker, including some libraries that are written in both. The Meson developers have put a lot of work into supporting Vala, which is much appreciated considering it’s a fairly niche language, and in fact the only major problem we have left is something that’s just as broken with Autotools: failing to generate a single introspection repo for a combined C + Vala library.

Tracker also has a bunch of interdependent libraries. This caused continual problems, because Meson does very little deduplication in the command lines it generates, and so I’d hit combinatorial explosions producing fairly ridiculous errors like "command line too long" (the limit is 262KB) or "too many open files" inside the ld process. This is a known issue. For now I work around it by manually specifying some dependencies for individual targets instead of relying on them getting pulled in as transitive dependencies of a declare_dependency target.
A related issue was that if the same .vapi file ends up on the valac command line more than once it triggers an error. This required some trickery to avoid, although new versions of Meson work around the issue anyway.

One pretty annoying issue is that generated files in the source tree cause Meson builds to fail. Out-of-tree builds seem not to work with our Autotools build system — something to do with the Vala integration — with the result that you need to run make clean before running a Meson build, even if the Meson build is in a separate build dir. If you see errors about conflicting types or duplicate definitions, that’s probably the issue. While developing the Meson build instructions I had a related problem of forgetting about certain files that needed to be generated, because the Autotools build system had already generated them. Be careful!

Meson users need to be aware that the rpath is not set automatically for you. If you previously used Libtool you probably didn’t need to care what an rpath was, but with Meson you have to manually set install_rpath for every program that depends on a library that you have installed into a non-standard location (such as a subdirectory of /usr/lib). I think rpaths are a bit of a hack anyway (if you want relocatable binary packages you need to avoid them), so I like that Meson is bringing this implementation detail to the surface.

There are a few other small issues. For example, we have a GTK-Doc build that depends on the output of a program, which Meson’s gtk-doc module currently doesn’t handle, so as a workaround we have to rebuild that documentation on every build. There are also some workarounds in the current Tracker Meson build instructions that are no longer needed; for example, installing generated Vala headers used to require a custom install script, but now it’s supported more cleanly.
Tracker’s Meson build rules aren’t quite ready for prime time: some tests that pass under Autotools fail when run under Meson, and we have to work out how best to create release tarballs. But it’s pretty close!

All in all this took a lot longer to achieve than I originally hoped (about 9 months of part-time effort), but in the process I’ve found some bugs in both Tracker and Meson, fixed a few of them, and hopefully made a small improvement to the long process of turning GNU/Linux users into GNU/Linux developers. Meson has come a long way in that time and I’m optimistic for its future. It’s a difficult job to design and implement a new general-purpose build system (plus project configuration tool, test runner, test infrastructure, documentation, etc.), and the Meson project has done so in 5 years without any large corporate backing that I know of. Maintaining open source projects is often hard and thankless. Ten thumbs up to the Meson team!
Posted 8 days ago
Recently I’ve been working on a project that makes use of Thales HSM devices to encrypt/decrypt data. There are a number of ways to talk to the HSM, but the most straightforward from Linux is via PKCS#11. There have been a number of attempts to wrap the PKCS#11 spec for Python, based on SWIG, cffi, etc., but they were all (a) low level, (b) not very Pythonic, (c) burdened with terrible error handling, (d) broken, (e) inefficient for large files and (f) very difficult to fix.

Given that nearly all documentation on how to actually use PKCS#11 has to be discerned from C examples — so I’d developed a pretty good working knowledge of the C API — and that I’d wanted to learn Cython for a while, I decided to write a new binding based on a high-level wrapper I’d put into my app. It’s designed to be accessible, pick sane defaults for you, use generators where appropriate to reduce work, stream large files, be introspectable in your programming environment, and be easy to read and extend: https://github.com/danni/python-pkcs11

It’s currently a work in progress, but it’s now available on pip. You can get a session on a device, create a symmetric key, find objects, and encrypt and decrypt data. The Cryptoki spec is quite large, so I’m focusing on the support that I need first, but it should be pretty straightforward for anyone who wants to add something else they need. I like to think I write reasonably clear, self-documenting code.

At the moment it’s only tested on SoftHSMv2 and the Thales nCipher Edge, which is what I have access to. If someone at Amazon wanted this to work flawlessly on CloudHSM, send me an account and I’ll do it. Then I can look at releasing my Django integrations for fields, storage, signing, etc.
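To illustrate point (c) above, here is roughly the difference between a thin C-style binding, where every call hands back a Cryptoki status code you must remember to check, and a Pythonic layer that turns failures into exceptions. All names below are made up for illustration; this is not the python-pkcs11 API:

```python
# Illustrative sketch only: converting C-style CK_RV return codes into
# Python exceptions. CKR_PIN_INCORRECT's value (0xA0) matches the PKCS#11
# spec, but the surrounding classes and helpers are hypothetical.
CKR_OK = 0x00
CKR_PIN_INCORRECT = 0xA0

ERRORS = {CKR_PIN_INCORRECT: "PIN incorrect"}

class PKCS11Error(Exception):
    pass

def check(rv):
    """Raise a descriptive exception for any non-OK Cryptoki return value."""
    if rv != CKR_OK:
        raise PKCS11Error(ERRORS.get(rv, "unknown error 0x%02X" % rv))

check(CKR_OK)  # fine, returns None
try:
    check(CKR_PIN_INCORRECT)
except PKCS11Error as e:
    print(e)  # PIN incorrect
```

A wrapper built this way lets callers write normal try/except code instead of threading status checks through every call site.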
Posted 8 days ago
I found a text file named TIL.md lying around on my computer, with one section dated 17th February 2016. Apparently I’d planned to keep a log of the weird or interesting computer things I learned each day, but forgot after a day. I’d also forgotten all the facts in the file and was surprised afresh. Maybe you’ll be surprised too:

- Windows’ shell and user interface do not support filenames with trailing spaces, so if you have a directory called worstever.christmas˽ (where ˽ represents a space) on your Unix fileserver, and serve it to Windows over SMB, you’ll see a filename like CQHNYI~0. I think this is the DOS-style 8.3 compatibility filename, but I’m not sure where it gets generated in this case – Samba?
- TIFF files can contain multiple images. If you have a multi-subfile TIFF, multi.tiff, and run convert multi.tiff multi.jpeg, you will not get back a file called multi.jpeg; convert will silently assume you meant convert multi.tiff multi-%d.jpeg and give you back multi-0.jpeg, multi-1.jpeg, etc.

For some context: at the time, I was trying to work out why a script that imported a few tens of thousands of photographs into pan.do/ra – which doesn’t like TIFFs – had skipped some photographs and imported others as a blank white rectangle, and why a Windows application pointed at the same fileserver showed a different number of photographs again. This was also the first time I encountered an inadvertent homoglyph attack: x.jpg and х.jpg are indistinguishable in most fonts.
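The homoglyph pair is easy to demonstrate (my own example, not from the TIL file): the two names differ at the code-point and byte level even though they render identically in most fonts:

```python
import unicodedata

# The two filenames below render identically in most fonts, but the second
# starts with CYRILLIC SMALL LETTER HA (U+0445), not LATIN SMALL LETTER X.
latin, cyrillic = "x.jpg", "\u0445.jpg"

print(latin == cyrillic)              # False
print(unicodedata.name(latin[0]))     # LATIN SMALL LETTER X
print(unicodedata.name(cyrillic[0]))  # CYRILLIC SMALL LETTER HA
print(cyrillic.encode("utf-8"))       # b'\xd1\x85.jpg'
```

Sorting, globbing, and deduplication all treat these as two distinct files, which is exactly why the mismatched photo counts were so confusing.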
Posted 8 days ago
We have once again a set of accurate, up-to-date documentation for GJS. Find it at devdocs.baznga.org! Many thanks are due to Everaldo Canuto, Patrick Griffis, and Dustin Falgout for helping get the website back online, and to Nuveo for sponsoring the hosting. In addition, thanks to Patrick’s lead, we have a docker image if you want to run the documentation site yourself. If you find any inaccuracies in the documentation, please report a bug at this issue tracker.
Posted 8 days ago
One problem that I used to have was hiding the two main windows (Toolbox / Tool Options and Layers / Brushes), which left my GIMP with no accessible icons on it. Go to Windows -> Dockable Dialogs to get them back.

After cutting out the edges of the original picture using the scissors select tool, I added another layer with a full black rectangle (Edit -> Fill with FG Color, with black as the foreground colour), as well as another horizontal rectangle, and I used the Text tool to add words. To set it in black and white, go to Colors -> Components -> Channel Mixer and check the Monochrome option. To colour, I selected Colors -> Colorize. Blurring the edge of my hair gave it a nice look at the end. Last but not least, I added another layer with the GNOME logo to put it inside my eye.

* Zoom out was done with SHIFT and +, and ESC in case CTRL+Z is not working. ❤

Filed under: FEDORA, GNOME, τεχνολογια :: Technology Tagged: blanco negro gimp, colorear gimp, fedora, GIMP, GNOME, GNOME Peru Challenge, Julita Inca, Julita Inca Chiroque, letras GIMP
Posted 9 days ago
Quick post to let you know that I just released frogr 1.3. This is mostly a small update to incorporate a bunch of translation updates, a few changes aimed at improving the flatpak version (the desktop icon had been broken for a while until a few weeks ago), and the removal of some calls deprecated in recent versions of GTK+.

Ah! I’ve also officially dropped support for OS X via gtk-osx, as I had been systematically failing to update and use it for a long time (I only use frogr from GNOME these days), and so it did not make sense for me to keep pretending that the Mac version is something usable and maintained.

As usual, you can go to the main website for extra information on how to get frogr and/or how to contribute to it. Any feedback or help is more than welcome!
Posted 10 days ago
Having the discipline to manage workshops includes controlling time, besides organizing the topics, among other factors. One extension that GNOME offers is called GNOME Pomodoro. It is an easy piece of software that lets you set your breaks, helping you focus on a specific task to accomplish your goals.

1. Installing GNOME Pomodoro. Some dependencies are required, as follows: … et voilà!

2. Configuring GNOME Pomodoro. Now you can see the settings chart, where you can set the Pomodoro duration as well as the break time. Click on the upright screen to handle it. On a first attempt, I tested with the Pomodoro duration and the short break duration both set to 1 minute. I also changed the appearance to have a less stressful clock on my screen, and set the ticking sound with birds in the background. Finally, click ON at the top of GNOME Pomodoro to have the alarm ready!

From now on, during my 8-hour Python training classes, I am going to set 25 minutes to accomplish 3 Python exercises, with 5 minutes for a break.

Filed under: FEDORA, GNOME, τεχνολογια :: Technology Tagged: challenge, challenge Peru, fedora, GNOME, GNOME Perú, GNOME Pomodoro, Julita Inca, Julita Inca Chiroque, Pomodoro
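As a back-of-envelope check (my own illustration, nothing to do with the extension itself), the default 25-minute work / 5-minute break rhythm fits sixteen full cycles into an 8-hour class:

```python
# Back-of-envelope helper (my own illustration, not part of GNOME Pomodoro):
# how many full work+break cycles fit into a session of a given length?
def pomodoro_cycles(session_minutes, work=25, short_break=5):
    return session_minutes // (work + short_break)

print(pomodoro_cycles(8 * 60))  # 16 full cycles in an 8-hour class
```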
Posted 10 days ago
When we introduced HiDPI support in GNOME a few years ago, we took the simplest possible approach that was feasible to implement with the infrastructure we had available at the time. Some of the limitations:

- You either get 1:1 or 2:1 scaling, nothing in between
- The cut-off point is somewhat arbitrarily chosen and you don’t get a say in it
- In multi-monitor systems, all monitors share the same scale

Each of these limitations had technical reasons. For example, doing different scales per monitor is hard as long as you are only using a single, big framebuffer for all of them. And allowing scale factors such as 1.5 leads to difficult questions about how to deal with windows that have a size like 640.5×480.5 pixels.

Over the years, we’ve removed the technical obstacles one by one, e.g. introduced per-monitor framebuffers. One of the last obstacles was the display configuration API that mutter exposes to the control-center display panel, which was closely modeled on XRANDR and not suitable for per-monitor and non-integer scales. In the last cycle, we introduced a new, more suitable monitor configuration API, and the necessary support for it has just landed in the display panel. With this, all of the hurdles have been cleared away, and we are finally ready to get serious about fractional scaling!

Yes, a hackfest! Jonas and Marco happen to both be in Taipei in early June, so what better to do than get together and spend some days hacking on fractional scaling support: https://wiki.gnome.org/Hackfests/FractionalScaling2017

If you are a compositor developer (or plan on becoming one), or are just generally interested in helping with this work, and are in the area, please check the date and location by following the link. And, yes, this is a bit last-minute, but we still wanted to give others a chance to participate.
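To see why scale factors like 1.5 raise the half-pixel question mentioned above, here is a trivial illustration (my own arithmetic, not mutter code):

```python
# Quick illustration (not mutter code) of why fractional scale factors are
# awkward: scaling an odd logical window size by 1.5 lands on half pixels,
# while integer scale factors always produce whole device pixels.
def device_pixels(logical_w, logical_h, scale):
    return logical_w * scale, logical_h * scale

print(device_pixels(427, 321, 1.5))  # (640.5, 481.5) -- half pixels
print(device_pixels(427, 321, 2))    # (854, 642) -- whole pixels
```

The compositor has to decide what to do with that half pixel: round the window size, render larger and downscale, or something cleverer, which is exactly the kind of question a hackfest is for.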