
News

Posted 6 days ago
As I wrote a few weeks ago in my post about Gnocchi 3.1 being released, one of the new features available in this version is the S3 driver. Today I would like to show you how easy it is to use it and store millions of metrics into the simple, durable and massively scalable object storage provided by Amazon Web Services.

Installation

The installation of Gnocchi for this use case is no different from the standard installation procedure described in the documentation. Simply install Gnocchi from PyPI using the following command:

$ pip install gnocchi[s3,postgresql] gnocchiclient

This will install Gnocchi with the dependencies for the S3 and PostgreSQL drivers, plus the command-line interface to talk to Gnocchi.

Configuring Amazon RDS

Since you need a SQL database for the indexer, the easiest way to get started is to create a database on Amazon RDS. You can create a managed PostgreSQL database instance in just a few clicks: once you're on the homepage of Amazon RDS, pick PostgreSQL as the database engine and configure your instance. I've picked a dev/test instance with the basic options available within the RDS Free Tier, but you can pick whatever you think is needed for your production use. Set a username and a password and note them for later: we'll need them to configure Gnocchi.

The next step is to configure the database in detail. Just set the database name to "gnocchi" and leave the other options at their default values (I'm lazy). After a few minutes, your instance should be created and running. Note down the endpoint. In this case, my instance is gnocchi.cywagbaxpert.us-east-1.rds.amazonaws.com.

Configuring Gnocchi for S3 access

In order to give Gnocchi access to S3, you need to create access keys. The easiest way to create them is to go to IAM in your AWS console, pick a user with S3 access and click on the big gray button named "Create access key". Once you do that, you'll get the access key id and the secret access key. Note them down, we will need them later.

Creating gnocchi.conf

Now it's time to create the gnocchi.conf file. You can place it in /etc/gnocchi if you want to deploy it system-wide, or in any other directory and add the --config-file option to each Gnocchi command. Here are the values that you should retrieve and write in the configuration file:

- indexer.url: the PostgreSQL RDS instance endpoint and credentials (see above), combined into a connection URL.
- storage.s3_endpoint_url: the S3 endpoint URL – it depends on the region you want to use, and the endpoints are listed here.
- storage.s3_region_name: the S3 region name matching the endpoint you picked.
- storage.s3_access_key_id and storage.s3_secret_access_key: your AWS access key id and secret access key.
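Before dropping these values into the configuration file, it can be worth sanity-checking the S3 credentials and endpoint on their own. Here is a minimal sketch using boto3 – an extra dependency installed just for this check, not something Gnocchi itself requires – with placeholder credentials for you to replace:

# Sanity-check the S3 credentials and endpoint before configuring Gnocchi.
# boto3 is installed only for this check (pip install boto3); listing the
# buckets requires the s3:ListAllMyBuckets permission on the IAM user.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3-eu-west-1.amazonaws.com",
    region_name="eu-west-1",
    aws_access_key_id="<your access key id>",
    aws_secret_access_key="<your secret access key>",
)
print([b["Name"] for b in s3.list_buckets()["Buckets"]])

If this prints your bucket names, the same values can safely go into gnocchi.conf.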
Your gnocchi.conf file should then look like this:

[indexer]
url = postgresql://gnocchi:gn0cch1rul3z@gnocchi.cywagbaxpert.us-east-1.rds.amazonaws.com:5432/gnocchi

[storage]
driver = s3
s3_endpoint_url = https://s3-eu-west-1.amazonaws.com
s3_region_name = eu-west-1
s3_access_key_id =
s3_secret_access_key =

Once that's done, you can run gnocchi-upgrade in order to initialize the Gnocchi indexer (PostgreSQL) and storage (S3):

$ gnocchi-upgrade --config-file gnocchi.conf
2017-02-07 15:35:52.491 3660 INFO gnocchi.cli [-] Upgrading indexer
2017-02-07 15:36:04.127 3660 INFO gnocchi.cli [-] Upgrading storage

Then you can run the API endpoint using the test endpoint gnocchi-api, specifying its default port 8041:

$ gnocchi-api --port 8041 -- --config-file gnocchi.conf
2017-02-07 15:53:06.823 6290 INFO gnocchi.rest.app [-] WSGI config used: /Users/jd/Source/gnocchi/gnocchi/rest/api-paste.ini
********************************************************************************
STARTING test server gnocchi.rest.app.build_wsgi_app
Available at http://127.0.0.1:8041/
DANGER! For testing only, do not use in production
********************************************************************************

The best way to run the Gnocchi API is to use uwsgi as documented, but in this case, using the testing daemon gnocchi-api is good enough.

Finally, in another terminal, you can start the gnocchi-metricd daemon that will process metrics in the background:

$ gnocchi-metricd --config-file gnocchi.conf
2017-02-07 15:52:41.416 6262 INFO gnocchi.cli [-] 0 measurements bundles across 0 metrics wait to be processed.

Once everything is running, you can use Gnocchi's client to query it and check that everything is OK. The backlog should be empty at this stage, obviously.

$ gnocchi status
+-----------------------------------------------------+-------+
| Field                                               | Value |
+-----------------------------------------------------+-------+
| storage/number of metric having measures to process | 0     |
| storage/total number of measures to process         | 0     |
+-----------------------------------------------------+-------+

Gnocchi is ready to be used!
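The same check can be scripted over HTTP. A minimal sketch with requests, assuming the status endpoint is served at /v1/status (the path the gnocchi status command queries) and the default no-enforcement basic authentication:

# Poll the processing backlog over the REST API instead of the CLI.
# Assumptions: the test API runs on localhost:8041 and the default HTTP
# basic auth mode is in use (any username, no password enforcement).
import requests

resp = requests.get("http://localhost:8041/v1/status", auth=("admin", ""))
resp.raise_for_status()
print(resp.json())  # an empty backlog means gnocchi-metricd is keeping up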
$ # Create a generic resource "foobar" with a metric named "visitor"
$ gnocchi resource create foobar -n visitor
+-----------------------+-----------------------------------------------+
| Field                 | Value                                         |
+-----------------------+-----------------------------------------------+
| created_by_project_id |                                               |
| created_by_user_id    | admin                                         |
| creator               | admin                                         |
| ended_at              | None                                          |
| id                    | b4d568e4-7af1-5aec-ac3f-9c09fa3685a9          |
| metrics               | visitor: 05f45876-1a69-4a64-8575-03eea5b79407 |
| original_resource_id  | foobar                                        |
| project_id            | None                                          |
| revision_end          | None                                          |
| revision_start        | 2017-02-07T14:54:54.417447+00:00              |
| started_at            | 2017-02-07T14:54:54.417414+00:00              |
| type                  | generic                                       |
| user_id               | None                                          |
+-----------------------+-----------------------------------------------+

# Send the number of visitors at 2 different timestamps
$ gnocchi measures add --resource-id foobar -m 2017-02-07T15:56@23 visitor
$ gnocchi measures add --resource-id foobar -m 2017-02-07T15:57@42 visitor

# Check the average number of visitors
# (the --refresh option is given to be sure the measures are processed)
$ gnocchi measures show --resource-id foobar visitor --refresh
+---------------------------+-------------+-------+
| timestamp                 | granularity | value |
+---------------------------+-------------+-------+
| 2017-02-07T15:55:00+00:00 |       300.0 |  32.5 |
+---------------------------+-------------+-------+

# Now show the minimum number of visitors
$ gnocchi measures show --aggregation min --resource-id foobar visitor
+---------------------------+-------------+-------+
| timestamp                 | granularity | value |
+---------------------------+-------------+-------+
| 2017-02-07T15:55:00+00:00 |       300.0 |  23.0 |
+---------------------------+-------------+-------+

If you'd rather drive this from Python than from the CLI, see the short REST sketch after this post. And voilà! You're ready to store millions of metrics and measures on your Amazon Web Services cloud platform. I hope you'll enjoy it, and feel free to ask any question in the comment section or by reaching me directly!
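As mentioned above, the CLI calls can also be scripted. A minimal Python sketch using requests against the REST API; it assumes, as the CLI examples suggest, that the original resource id "foobar" can be used directly in the URL, and that the default basic authentication is in place:

# Push one measure to the "visitor" metric of the "foobar" resource.
# Assumptions: test API on localhost:8041, default basic auth, and the
# resource/metric created by the CLI commands above.
import requests

resp = requests.post(
    "http://localhost:8041/v1/resource/generic/foobar/metric/visitor/measures",
    json=[{"timestamp": "2017-02-07T15:58:00", "value": 51}],
    auth=("admin", ""),
)
resp.raise_for_status()  # Gnocchi replies 202 Accepted once measures are queued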
Posted 12 days ago
Knowing that collectd is a daemon that collects system and application metrics, and that Gnocchi is a scalable timeseries database, it sounds like a good idea to combine them. Cherry on the cake: you can easily draw charts using Grafana. While it's true that Gnocchi is well integrated with OpenStack, as it comes from this ecosystem, it actually works standalone by default. Starting with the 3.1 version, it is now easy to send metrics to Gnocchi using collectd.

Installation

What we'll need to install to accomplish this task is:

- collectd
- Gnocchi
- collectd-gnocchi

How you install them does not really matter. If they are packaged by your operating system, go ahead. For Gnocchi and collectd-gnocchi, you can also use pip:

# pip install gnocchi[file,postgresql]
[…]
Successfully installed gnocchi-3.1.0

# pip install collectd-gnocchi
Collecting collectd-gnocchi
  Using cached collectd-gnocchi-1.0.1.tar.gz
[…]
Installing collected packages: collectd-gnocchi
  Running setup.py install for collectd-gnocchi ... done
Successfully installed collectd-gnocchi-1.0.1

The installation procedure for Gnocchi is detailed in the documentation. It explains, among other things, which flavors are available – here I picked PostgreSQL and the file driver to store the metrics.

Configuration

Gnocchi

Gnocchi is simple to configure and is, again, documented. The default configuration file is /etc/gnocchi/gnocchi.conf – you can generate it with gnocchi-config-generator if needed. It is also possible to specify another configuration file by appending the --config-file option to any command line.

In Gnocchi's configuration file, you need to set the indexer.url option to point to an existing PostgreSQL database and set storage.file_basepath to an existing directory to store your metrics (the default is /var/lib/gnocchi). That gives something like:

[indexer]
url = postgresql://root:p4assw0rd@localhost/gnocchi

[storage]
file_basepath = /var/lib/gnocchi

Once done, just run the gnocchi-upgrade command to initialize the index and storage.

collectd

Collectd provides a default configuration file that loads a bunch of plugins by default, which will meter all sorts of metrics on your computer. You can check the documentation online to see how to disable or enable plugins. As the collectd-gnocchi plugin is written in Python, you'll need to enable the Python plugin and load the collectd_gnocchi module:

LoadPlugin python
<Plugin python>
  Import "collectd_gnocchi"
  <Module collectd_gnocchi>
    endpoint "http://localhost:8041"
  </Module>
</Plugin>

That is enough to enable the storage of metrics in Gnocchi.

Running the daemons

Once everything is configured, you can launch gnocchi-metricd and the gnocchi-api daemon:

$ gnocchi-metricd
2017-01-26 15:22:49.018 15971 INFO gnocchi.cli [-] 0 measurements bundles across 0 metrics wait to be processed.
[…]

# In another terminal
$ gnocchi-api --port 8041
[…]
STARTING test server gnocchi.rest.app.build_wsgi_app
Available at http://127.0.0.1:8041/
[…]

It's not recommended to run Gnocchi using gnocchi-api (as written in the documentation): using uwsgi is a better option. However, for rapid testing, the gnocchi-api daemon is good enough.

Once that's done, you can start collectd:

$ collectd
# Or to run in foreground with a different configuration file:
# $ collectd -C collectd.conf -f

If you have any problem launching collectd, check syslog for more information: there might be an issue loading a module or plugin (a minimal example of what a collectd Python plugin looks like follows below).
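As an aside on how such plugins work: collectd exposes a collectd module to the Python code it embeds, with register_* hooks and a Values class to dispatch metrics. A minimal, hypothetical read plugin looks like this (illustrative only – collectd-gnocchi itself registers a write callback instead):

# Minimal, hypothetical collectd Python read plugin reporting a constant gauge.
# The "collectd" module only exists inside collectd's embedded interpreter.
import collectd

def read(data=None):
    vals = collectd.Values(type="gauge")
    vals.plugin = "example"
    vals.dispatch(values=[42])

collectd.register_read(read)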
If no errors are printed, then everything's working fine, and you should soon see gnocchi-api printing some requests such as:

127.0.0.1 - - [26/Jan/2017 15:27:03] "POST /v1/resource/collectd HTTP/1.1" 409 113
127.0.0.1 - - [26/Jan/2017 15:27:03] "POST /v1/batch/resources/metrics/measures?create_metrics=True HTTP/1.1" 400 91

Enjoying the result

Once everything runs, you can access your newly created resources and metrics by using the gnocchiclient. It should have been installed as a dependency of collectd-gnocchi, but you can also install it manually using pip install gnocchiclient. If you need to specify a different endpoint, you can use the --endpoint option (which defaults to http://localhost:8041). Do not hesitate to check the --help option for more information.

$ gnocchi resource list --details
+--------------------------------------+----------+------------+---------+--------------------------------------+----------------------------------+----------+----------------------------------+--------------+---------+-----------+
| id                                   | type     | project_id | user_id | original_resource_id                 | started_at                       | ended_at | revision_start                   | revision_end | creator | host      |
+--------------------------------------+----------+------------+---------+--------------------------------------+----------------------------------+----------+----------------------------------+--------------+---------+-----------+
| dd245138-00c7-5bdc-94f8-263e236812f7 | collectd | None       | None    | dd245138-00c7-5bdc-94f8-263e236812f7 | 2017-01-26T14:21:02.297466+00:00 | None     | 2017-01-26T14:21:02.297483+00:00 | None         | admin   | localhost |
+--------------------------------------+----------+------------+---------+--------------------------------------+----------------------------------+----------+----------------------------------+--------------+---------+-----------+

$ gnocchi resource show collectd:localhost
+-----------------------+------------------------------------------------------------------+
| Field                 | Value                                                            |
+-----------------------+------------------------------------------------------------------+
| created_by_project_id |                                                                  |
| created_by_user_id    | admin                                                            |
| creator               | admin                                                            |
| ended_at              | None                                                             |
| host                  | localhost                                                        |
| id                    | dd245138-00c7-5bdc-94f8-263e236812f7                             |
| metrics               | interface-en0@if_errors-0: 5d60f224-2e9e-4247-b415-64d567cf5866  |
|                       | interface-en0@if_errors-1: 1df8b08b-555a-4cab-9186-f9b79a814b03  |
|                       | interface-en0@if_octets-0: 491b7517-7219-4a04-bdb6-934d3bacb482  |
|                       | interface-en0@if_octets-1: 8b5264b8-03f3-4aba-a7f8-3cd4b559e162  |
|                       | interface-en0@if_packets-0: 12efc12b-2538-45e7-aa66-f8b9960b5fa3 |
|                       | interface-en0@if_packets-1: 39377ff7-06e8-454a-a22a-942c8f2bca56 |
|                       | interface-en1@if_errors-0: c3c7e9fc-f486-4d0c-9d36-55cea855596a  |
|                       | interface-en1@if_errors-1: a90f1bec-3a60-4f58-a1d1-b3c09dce4359  |
|                       | interface-en1@if_octets-0: c1ee8c75-95bf-4096-8055-8c0c4ec8cd47  |
|                       | interface-en1@if_octets-1: cbb90a94-e133-4deb-ac10-3f37770e32f0  |
|                       | interface-en1@if_packets-0: ac93b1b9-da71-4876-96aa-76067b35c6c9 |
|                       | interface-en1@if_packets-1: 2f8528b2-12ae-4c4d-bec7-8cc987e7487b |
|                       | interface-en2@if_errors-0: ddcf7203-4c49-400b-9320-9d3e0a63c6d5  |
|                       | interface-en2@if_errors-1: b249ea42-01ad-4742-9452-2c834010df71  |
|                       | interface-en2@if_octets-0: 8c23013a-604e-40bf-a07a-e2dc4fc5cbd7  |
|                       | interface-en2@if_octets-1: 806c1452-0607-4b56-b184-c4ffd48f52c0  |
|                       | interface-en2@if_packets-0: c5bc6103-6313-4b8b-997d-01930d1d8af4 |
|                       | interface-en2@if_packets-1: 478ae87e-e56b-44e4-83b0-ed28d99ed280 |
|                       | load@load-0: 5db2248d-2dca-401e-b2e2-bbaee23b623e                |
|                       | load@load-1: 6f74ac93-78fd-4a74-a47e-d2add487a30f                |
|                       | load@load-2: 1897aca1-356e-4791-907f-512e516992b5                |
|                       | memory@memory-active-0: 83944a85-9c84-4fe4-b471-1a6cf8dce858     |
|                       | memory@memory-free-0: 0ccc7cfa-26a5-4441-a15f-9ebb2aa82c6d       |
|                       | memory@memory-inactive-0: 63736026-94c4-47c5-8d6f-a9d89d65025b   |
|                       | memory@memory-wired-0: b7217fd6-2cdc-4efd-b1a8-a1edd52eaa2e      |
| original_resource_id  | dd245138-00c7-5bdc-94f8-263e236812f7                             |
| project_id            | None                                                             |
| revision_end          | None                                                             |
| revision_start        | 2017-01-26T14:21:02.297483+00:00                                 |
| started_at            | 2017-01-26T14:21:02.297466+00:00                                 |
| type                  | collectd                                                         |
| user_id               | None                                                             |
+-----------------------+------------------------------------------------------------------+

% gnocchi metric show -r collectd:localhost load@load-0
+------------------------------------+------------------------------------------------------------------------+
| Field                              | Value                                                                  |
+------------------------------------+------------------------------------------------------------------------+
| archive_policy/aggregation_methods | min, std, sum, median, mean, 95pct, count, max                         |
| archive_policy/back_window         | 0                                                                      |
| archive_policy/definition          | - timespan: 1:00:00, granularity: 0:05:00, points: 12                  |
|                                    | - timespan: 1 day, 0:00:00, granularity: 1:00:00, points: 24           |
|                                    | - timespan: 30 days, 0:00:00, granularity: 1 day, 0:00:00, points: 30  |
| archive_policy/name                | low                                                                    |
| created_by_project_id              |                                                                        |
| created_by_user_id                 | admin                                                                  |
| creator                            | admin                                                                  |
| id                                 | 5db2248d-2dca-401e-b2e2-bbaee23b623e                                   |
| name                               | load@load-0                                                            |
| resource/created_by_project_id     |                                                                        |
| resource/created_by_user_id        | admin                                                                  |
| resource/creator                   | admin                                                                  |
| resource/ended_at                  | None                                                                   |
| resource/id                        | dd245138-00c7-5bdc-94f8-263e236812f7                                   |
| resource/original_resource_id      | dd245138-00c7-5bdc-94f8-263e236812f7                                   |
| resource/project_id                | None                                                                   |
| resource/revision_end              | None                                                                   |
| resource/revision_start            | 2017-01-26T14:21:02.297483+00:00                                       |
| resource/started_at                | 2017-01-26T14:21:02.297466+00:00                                       |
| resource/type                      | collectd                                                               |
| resource/user_id                   | None                                                                   |
| unit                               | None                                                                   |
+------------------------------------+------------------------------------------------------------------------+

$ gnocchi measures show -r collectd:localhost load@load-0
+---------------------------+-------------+--------------------+
| timestamp                 | granularity | value              |
+---------------------------+-------------+--------------------+
| 2017-01-26T00:00:00+00:00 |     86400.0 | 3.2705004391254193 |
| 2017-01-26T15:00:00+00:00 |      3600.0 | 3.2705004391254193 |
| 2017-01-26T15:00:00+00:00 |       300.0 | 2.6022800611413044 |
| 2017-01-26T15:05:00+00:00 |       300.0 |  3.561742940080275 |
| 2017-01-26T15:10:00+00:00 |       300.0 | 2.5605337960379466 |
| 2017-01-26T15:15:00+00:00 |       300.0 |  3.837517851142473 |
| 2017-01-26T15:20:00+00:00 |       300.0 | 3.9625948392427883 |
| 2017-01-26T15:25:00+00:00 |       300.0 | 3.2690042162698414 |
+---------------------------+-------------+--------------------+

As you can see, the command line works smoothly and can show you any kind of metric reported by collectd. In this case, it was just running on my laptop, but you can imagine it's easy enough to poll thousands of hosts with collectd and Gnocchi.

Bonus: charting with Grafana

Grafana, a charting software, has a plugin for Gnocchi, as detailed in the documentation. Once installed, you can configure Grafana to point to Gnocchi this way:

[Screenshot: the Grafana configuration screen]

You can then create a new dashboard by filling in the forms as you wish. See this other screenshot for a nice example:

[Screenshot: charts of my laptop's load average]

I hope everything is clear and easy enough.
If you have any questions, feel free to write something in the comment section!
Posted 15 days ago
Probably the biggest news of the last two weeks is that Boris's native HDMI audio driver is now on the mailing list for review. I'm hoping that we can get this merged for 4.12 (4.10 is about to be released, so we're too late for 4.11). We've tested stereo audio so far, no compressed audio (though I think it should Just Work), and >2 channel audio should be relatively small amounts of work from here. The next step on HDMI audio is to write the alsalib configuration snippets necessary to hide the weird details of HDMI audio (stereo IEC958 frames required) so that sound playback works normally for all existing userspace, which Boris should have a bit of time to work on still.

I've also landed the vc4 part of the DSI driver in the drm-misc-next tree, along with a fixup. Working with the new drm-misc-next trees for vc4 submission is delightful -- once I get a review, I can just push code to the tree and it will magically (through Daniel Vetter's pull requests and some behind-the-scenes tools) go upstream at the appropriate time. I am delighted with the work that Daniel has been doing to make the DRM subsystem more welcoming of infrequent contributors and reducing the burden on us all of participating in the Linux kernel process.

In 3D land, the biggest news is that I've fixed a kernel oops (producing a desktop lockup) when CMA returns out of memory. Unfortunately, VC4 doesn't have an MMU, so we require that memory allocations for graphics be contiguous (using the Contiguous Memory Allocator), and to make things worse we have a limit of 256MB for CMA due to an addressing bug of the V3D part, so CMA returning out of memory is a serious and unfortunately frequent problem. I had a bug where a CMA failure would put the failed allocation into the BO cache, so if you asked for a BO of the same size soon enough, you'd get the bad BO back and crash. I've been unable to construct a good minimal testcase for it, but the patch is on the mailing list and in the rpi-4.4.y tree now.

I've also fixed a bug in the downstream tree's "firmware KMS" mode (where I use the closed source firmware for display, but open source 3D) where fullscreen 3D rendering would get a single frame displayed and then freeze.

In userspace, I've fixed a bit of multisample rendering (copies of MSAA depth buffers) that gets more of our regression testing working, and worked on some potential rasterization problems (the texwrap regression tests are failing due to texture filtering troubles, and I'm not sure if we're actually doing anything wrong or not because we're near the cutoff for "how accurate does the filtering have to be?").

Coming up, I'm looking at fixing a cursor update bug that Michael Zoran has found, and fixing up the DSI panel driver so that it can hopefully get into 4.12. Also, after a recent discussion with Eben, I've realized that we can actually scan out tiled buffers from the GPU, so we may get big performance wins for un-composited X if I can have Mesa coordinate with the kernel on producing tiled buffers.
Posted 18 days ago
libinput has a couple of features that 'automagically' work on touchpads, such as disable-while-typing, the lid-switch-triggered disabling of touchpads, and disabling the touchpad when an external mouse is plugged in [1]. But not all of these features make sense on all touchpads. For example, an Apple Magic Trackpad doesn't need disable-while-typing because unless you have a creative arrangement of input devices [2], the touchpad won't be where your palm is likely to hit it. Likewise, a Logitech T650 connected over a unifying receiver shouldn't get disabled when the laptop lid closes.

For this to work, libinput has some code to figure out whether a touchpad is internal or external. Initially we had some code to detect this but eventually moved it to the ID_INPUT_TOUCHPAD_INTEGRATION property, now set by udev's hwdb (systemd 231 and later). Having it in the hwdb makes it quite trivial to override locally where the current rules are insufficient (and until the hwdb is fixed, thanks for filing a bug). We still have the fallback code though, in case the tag is missing. On a sufficiently modern distribution, udevadm info /sys/class/input/event4 for your touchpad device node should show something like ID_INPUT_TOUCHPAD_INTEGRATION=internal (the property can also be read programmatically, as the sketch after this post shows).

So for any feature that libinput adds for touchpads, we only enable it where it makes sense. That's why your external touchpad doesn't trigger disable-while-typing or the lid switch.

[1] ok, I admit, this is something we should've left to the client, but now we have the feature.
[2] yes, I'm sure there's at least one person out there that uses the touchpad upside down in front of the keyboard and is now angry that libinput doesn't allow arbitrary rotation of the device combined with configurable dwt. I think of you every night I cry myself to sleep.
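For completeness, here is how the property can be read programmatically, as a small sketch using pyudev (an assumption: pyudev is installed, and /dev/input/event4 is your touchpad's event node):

# Read the udev ID_INPUT_TOUCHPAD_INTEGRATION property for an input device.
# Assumptions: pyudev is installed and /dev/input/event4 is the touchpad.
import pyudev

ctx = pyudev.Context()
dev = pyudev.Devices.from_device_file(ctx, "/dev/input/event4")
# Prints "internal", "external", or None if the property is unset
# (in which case libinput falls back to its own detection code).
print(dev.get("ID_INPUT_TOUCHPAD_INTEGRATION"))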
Posted 22 days ago
Last week-end, I was in Brussels, Belgium for the 2017 edition of FOSDEM, one of the greatest open source developer conferences. This year, I decided to propose a talk about Gnocchi, which was accepted in the Python devroom. The track was very well organized (thanks to Stéphane Wirtel) and I was able to present Gnocchi to a room full of Python developers! I've explained why we created Gnocchi and how we did it, and finally briefly explained how to use it with the command-line interface or in a Python application using the SDK. You can check the slides and the video of the talk.
Posted 25 days ago
Seems that there was a rift in spacetime that sucked away the video of my LCA talk, but the awesome NextDayVideo team managed to pull it back out. And there’s still the writeup and slides available.
Posted 26 days ago
It's always difficult to know when to release, and we really wanted to do it earlier. But it seems that each week more awesome work was being done in Gnocchi, so we kept delaying it while having no pressure to push it out.

[Photo posted by Julien Danjou (@juldanjou) on Jan 22, 2017: "I've made my own gnocchis to celebrate!"]

But now that the OpenStack cycle is finishing, even if Gnocchi does not strictly follow it, it seemed to be a good time to cut the leash and let this release be.

There are again some major new changes coming from 3.0. The previous version 3.0 was tagged in October and had 90 changes merged from 13 authors since 2.2. This 3.1 version has 200 changes merged from 24 different authors. This is a great improvement of our contributor base and our rate of change – even if our delay to merge is very low. Once again, we pushed usage of release notes to document user-visible changes, and they can be read online. Therefore, I am going to quickly summarize the major changes:

- The REST API authentication mechanism has been modularized. It's now simple to provide any authentication mechanism for Gnocchi as a plugin. The default is now an HTTP basic authentication mechanism that does not implement any kind of enforcement. The Keystone authentication is still available, obviously.

- Batching has been improved and can now create metrics on the fly, reducing the latency needed when pushing measures to non-existing metrics. This is leveraged by the collectd-gnocchi plugin, for example (a small sketch follows at the end of this post).

- The performance of the Carbonara-based backends has been largely improved. This is not really listed as a change as it's not user-visible, but an amazing work of profiling and rewriting code from Pandas to NumPy has been done. While Pandas is very developer-friendly and generic, using NumPy directly offers way more performance and should decrease gnocchi-metricd CPU usage by a large factor.

- The storage has been split into two parts: the storage of incoming new measures to be processed, and the storage and archival of aggregated metrics. This allows using e.g. file to store new measures being sent, and once processed, storing them into e.g. Ceph. Before that change, all the new measures had to go into Ceph. While there's no specific driver yet for incoming measures, it's easy to envision a driver for systems like Redis or Memcached.

- A new Amazon S3 driver has been merged. It works in the same way as the file or OpenStack Swift drivers.

I will write more about some of these new features in the upcoming weeks, as they are very interesting for Gnocchi's users. We are planning to run a scalability test and benchmarks using the ScaleLab in a few weeks, if everything goes as planned. I will obviously share the results here, but we also submitted a talk for the next OpenStack Summit in Boston to present the results of our scalability and performance tests – hoping the session will be accepted. I will also be talking about Gnocchi this Sunday at FOSDEM.

We don't have a very determined roadmap for Gnocchi during the next weeks. Sure, we do have a few ideas on what we want to implement, but we are also very easily influenced by the requests of our users: therefore feel free to ask for anything!
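As promised in the batching paragraph, here is a small sketch of pushing measures through the batch endpoint with on-the-fly metric creation, using requests. The resource UUID and metric name are hypothetical, the exact payload schema should be double-checked against the batch documentation, and an archive-policy rule matching newly created metrics is assumed to exist:

# Sketch: batch measures with on-the-fly metric creation (Gnocchi 3.1).
# Assumptions: local test API, default basic auth, a hypothetical resource
# UUID, and an archive-policy rule that matches new metric names.
import requests

payload = {
    "5a301761-aaaa-46e2-8900-8b4f6fe6675a": {  # hypothetical resource id
        "cpu.util": [{"timestamp": "2017-01-26T15:30:00", "value": 23.0}],
    },
}
resp = requests.post(
    "http://localhost:8041/v1/batch/resources/metrics/measures",
    params={"create_metrics": "true"},
    json=payload,
    auth=("admin", ""),
)
resp.raise_for_status()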
Posted 27 days ago
I merged a patchset from James Ye today to add support for switch events to libinput, specifically: lid switch events. This feature is scheduled for libinput 1.7.

First, what are switches and how are they different from keys? A key's state is transient, with a neutral state of "key is up". The state itself is expected to change frequently. Switches don't always have a defined logical neutral state, and the state changes only infrequently. This requires different handling in applications, and thus libinput exposes a new interface (and capability) for switches.

The interface itself is trivial. A switch event has two properties, the switch type (e.g. "lid") and the switch state (on/off). See the libinput-debug-events source code for simple code that prints the state and type (and the sketch after this post for watching the raw evdev state). In libinput, we generally try to restrict ourselves to the cases we know how to handle. So in the first iteration, we'll support a single switch event: the lid switch. This is the toggle that changes when you close the lid on your laptop.

But libinput uses this internally too: touchpads are disabled automatically whenever the lid is closed. Indeed, this functionality was the main motivation for this patchset. On a number of devices, we get ghost touches when the lid is closed. Even though the touchpad is unreachable by the user, interference with the screen still causes events, moving the pointer in unexpected ways and generally being a nuisance. Some trackpoints suffer from the same issue. But now that libinput knows about the lid switch, it can transparently disable the touchpad whenever the lid is closed and thus discard the events.

Lid switches on some devices are unreliable. There are some devices where the lid is permanently closed and other devices where the lid can be closed, but we'll never see the open event. So we change behaviour based on a few factors. After all, no-one likes a dysfunctional touchpad because the lid switch is broken (if you do, seek help). For one, whenever we detect keyboard events while in the logically closed state, we'll assume that the lid is open after all and adjust state accordingly. Unless the lid switch is reliable, we don't sync the initial state. That's annoying for those who start libinput in closed mode, but it filters out all devices that set the lid switch to "on" and then never change again. On the Surface 3 devices we go even further: we know those devices need a bit of hand-holding. So whenever we detect activity on the keyboard, we also write the EV_SW/SW_LID state to the device node, thus updating the kernel to be correct again (and thus helping everyone else who may be listening).

The exact behaviours will likely change slightly over time as we have to deal with corner-cases one-by-one. But meanwhile, it's even easier for compositors to listen to switch events, and users don't have to deal with ghost touches anymore. Many thanks to James Ye for implementing this.
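If you want to peek at what your lid switch reports at the evdev layer, below libinput, the python-evdev bindings make this a few lines. A sketch – the event node path is an assumption, pick the device that advertises EV_SW on your machine:

# Watch lid switch events at the evdev layer (below libinput).
# Assumptions: python-evdev is installed, /dev/input/event0 is the device
# exposing EV_SW/SW_LID, and you have permission to open it.
import evdev
from evdev import ecodes

dev = evdev.InputDevice("/dev/input/event0")
for event in dev.read_loop():
    if event.type == ecodes.EV_SW and event.code == ecodes.SW_LID:
        print("lid closed" if event.value else "lid open")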
Posted 29 days ago
Most of last week was spent switching my development environment over to a setup with no SD cards involved at all. This was triggered by yet another card failing, and I spent a couple of days off and on trying to recover it. I now have three scripts that build and swap my test environment between upstream 64-bit Pi3, upstream 32-bit Pi3, and downstream 32-bit Pi3, using just the closed source bootloader without u-boot or an SD card. Previously I was on Pi2 only (much slower for testing), and running downstream kernels was really difficult.

Once I got the new netboot system working, I tested and landed the NEON part of my tiling work (the big 208% download and 41% upload performance improvements). I'm looking forward to fixing up the clever tiling math parts soon.

I also tested and landed a few little compiler improvements. The nicest compiler improvement was turning on a switch that Marek added to gallium: we now run the GLSL compiler optimization loop exactly once (because it's required to avoid GLSL linker regressions), and rely on NIR to do the actual optimization after the GLSL linker has run. The GLSL IR is a terrible IR for doing optimization on (only a bit better than Mesa or TGSI IRs), and it's made worse by the fact that I wrote a bunch of its optimizations back before we had good datastructures available in Mesa and before we had big enough shaders that using good datastructures mattered. I'm horrified by my old code and can't wait to get it deleted (Timothy Arceri has been making progress on that front). Until we can actually delete it, though, cutting down the number of times we execute my old optimization passes should improve our compile times on complicated shaders.

Now that I have a good way to test the downstream kernel, I went ahead and made a giant backport of our current vc4 kernel code to the 4.9 branch. I hope the Foundation can get that branch shipped soon -- backporting to 4.9 is so much easier for me than old 4.4, and the structure of the downstream DT files makes it much clearer what there is left to be moved upstream.

Meanwhile, Michael Zoran has continued hacking on the staging VCHI code, and kernel reviewers were getting bothered by edits to code with no callers. Michael decided to solve that by submitting the old HDMI audio driver that feeds through the closed source firmware (I'm hoping we can delete this soon once Boris's work lands, though), and I pulled the V4L2 camera driver out of rpi-4.9.y and submitted that to staging as well. I unfortunately don't have the camera driver quite working yet, because when I modprobe it the network goes down. There are a ton of variables that have changed since the last time I ran the camera (upstream vs downstream, 4.10 vs 4.4, pi3 vs pi2), so it's going to take a bit of debugging before I have it working again.

Other news: kraxel from RH has resubmitted the SDHOST driver upstream, so maybe we can have wifi by default soon. Baruch Siach has submitted some fixes that I suspect get BT working. I've also passed libepoxy off to Emmanuele Bassi (long time GNOME developer), who has fixed it up to be buildable and usable on Linux again and converted it to the Meson build system, which appears to be really promising.
Posted 29 days ago
In order to read events and modify devices, libinput needs a file descriptor to the /dev/input/event node. But those files are only accessible by the root user. If libinput were to open these directly, we would force any process that uses libinput to have sufficient privileges to open those files. But these days everyone tries to reduce a process's privileges wherever possible, so libinput simply delegates opening and closing the file descriptors to the caller (a conceptual sketch of this pattern follows at the end of this post).

The functions to create a libinput context take a parameter of type struct libinput_interface. This is a non-opaque struct with two function pointers: "open_restricted" and "close_restricted". Whenever libinput needs to open or close a file, it calls the respective function. For open_restricted(), libinput expects the caller to return an fd with the given flags.

In the simplest case, a caller can merely call open() and close(). This is what the debugging tools do (and the test suite). But obviously this means you have to run those as root. The main wayland compositors (weston, mutter, kwin, ...) instead forward the request to systemd-logind. That then opens the event node and returns the fd, which is passed to libinput. And voila, the compositors don't need to run as root, libinput doesn't have to know how the fd is opened, and everybody wins. Plus, logind will mute the fd on VT-switch, so we can't leak keyboard events.

In the X.org case it's a combination of the two. When the server runs with systemd-logind enabled, it will open the fd before the driver initialises the device. During the init stage, libinput asks the xf86-input-libinput driver to open the device node. The driver forwards the request to the server, which simply returns the already-open fd. When the server runs without systemd-logind, the server opens the file normally with a standard open() call.

So in summary: you can easily run libinput without systemd-logind, but you'll have to figure out how to get the required privileges to open device nodes. For anything more than a test or debugging program, I recommend using systemd-logind.
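To make the delegation concrete, here is a conceptual Python sketch of the pattern. It is illustrative only: the real interface is struct libinput_interface in C, and the names below merely mirror its two function pointers:

# Conceptual sketch of libinput's open/close delegation (illustrative only;
# the real API is struct libinput_interface, a C struct of function pointers).
import os

class SimpleInterface:
    """The simplest caller: open()/close() directly, which requires root.
    A compositor would instead forward open_restricted() to systemd-logind."""

    def open_restricted(self, path, flags):
        return os.open(path, flags)

    def close_restricted(self, fd):
        os.close(fd)

iface = SimpleInterface()
fd = iface.open_restricted("/dev/input/event0", os.O_RDONLY | os.O_NONBLOCK)
print("got fd", fd)
iface.close_restricted(fd)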