News

Posted almost 5 years ago by Krita News
About a year ago, we created the ask.krita.org website. We wanted to have a Stack Exchange-like place where people could report problems, after searching whether their problems had already been discussed, and where people could help each other. Maybe it was the platform we were using, maybe it’s that people who are using Krita have a different mindset from people for whom Stack Exchange-like sites work, but we came to realize that ask.krita.org did not work out. Nobody seemed to be searching whether their problems had already been discussed and maybe solved, so the same questions were being asked again and again. Nobody seemed to stay around and engage with the people who were trying to help them, and nobody seemed to stay around to help other people. In the end, it was the same small group of people (Tiar from Reddit, Ahabgreybeard from the forum, and Scott, Wolthera and Boud from the Krita developer community) who answered nearly all questions. The Ask website had simply become yet another place where the same questions were asked all the time.

We still have a problem, though. Krita is growing by leaps and bounds. There are so many people using Krita that it’s becoming impossible for the Krita team to do proper user support. The bug reporting system is overflowing, not with bugs, but with support questions. We’re getting personal emails from people asking for help, which really should not happen. People complain the forum is outdated and hard to use, yet it also sees a lot of activity. Perhaps fortunately, nearly nobody is using the mailing list…

We’re not sure what we will be putting in place of the ask.krita.org website, but to our mind, the following considerations are important:

- We want to consolidate the places where we are giving support, i.e., fewer channels
- The new “support channel” must be accessible world-wide (e.g. Reddit is not accessible in China)
- Krita users must be encouraged to help each other
- Search must be easy, and it must be easy to tell people that their question has, in fact, already been answered, so questions must be consolidated into one item
- The solution must look familiar to most people

We haven’t found our holy grail yet, but we’re looking for it!
Posted almost 5 years ago by Tomaz Canabrava (tomaz)
Konsole has been ready for many, many years, and went almost 10 years without anything really exciting being added, mostly because the software is ready: why should we modify something that works to add experimental features? But then reality kicked in. Konsole was missing some features that are quite nice and existed in Terminator, Tilix and other new terminals, but Tilix now lacks a developer, and Terminator is also not being actively developed (its last release was on 26 February 2017). Konsole is thought of as a powerhouse among terminal emulators, having many things that are quite specific to certain use cases, and it’s really tricky to remove a feature (even when it’s half-broken) because some people do rely on that behavior. So my first idea was to modernize the codebase so I could actually start to code. I have now been sending patches regularly to Konsole for around a year, and I don’t plan to stop.
Posted almost 5 years ago by Sirgienko Nikita (sirgienko)
Hello everyone! I'm participating in Google Summer of Code 2019, working on the KDE Cantor project. The GSoC project is mentored by Alexander Semke, one of the core developers of LabPlot, Knights and Cantor. First, let me introduce Cantor and my GSoC project.

Cantor is a KDE application providing a graphical interface to different open-source computer algebra systems and programming languages, like Octave, Maxima, Julia, Python, etc. The main idea of this application is to provide one single, common and user-friendly interface for different systems instead of providing different GUIs for different systems. The details specific to the different languages are transparent to the end user and are handled internally in the language-specific parts of Cantor's code.

There is another project following this idea: Jupyter. As a result of its very big popularity, its user base and the community around this project, there is a lot of content available for it, created and contributed by users from different scientific and educational areas, as documented in the gallery of interesting Jupyter Notebooks.

At the moment, Cantor has its own format for projects. Though this format is good enough to manage Cantor projects, there is not a lot of content created and published by Cantor users, and the user base is still not at the level which this application would deserve. Furthermore, sharing content stored in Cantor's native format requires the availability of Cantor on the target system, and Cantor is available for Linux only at the moment. All this complicates the attempts to make Cantor more popular and known to a broader user base. Adding the possibility to import/export Jupyter Notebook worksheets in Cantor will address the problems described above. If you are interested in a more technical and detailed description of the project, you can check out my proposal.

Actually, this is not my first contribution to Cantor. I have been contributing to this project for roughly one year already. As a developer interested in C++, Qt and applications relevant for scientific purposes, I started to contribute to Cantor last year by working on smaller bug fixes first. With time and with more understanding of the overall architecture of Cantor, I could work on bigger topics like new features, more complicated bug fixes and refactorings in the code, and this year I'm happy to contribute yet another big and very important piece of functionality to Cantor as part of GSoC.

To start, I selected a couple of well-structured Jupyter notebooks from the gallery of interesting Jupyter Notebooks. Those notebooks were selected based on three criteria:

- they should be self-sufficient
- they should contain commands and results of different types
- they should have a reasonable size sufficient for testing the new code and for demoing the results

Below you can see the screenshots of the notebooks I decided to use. The notebooks will be used for testing functionality and also for showing the progress of this project, and in the final post I will summarize and report on Cantor being able to successfully process such files. In the next post I plan to show a working first version of the Jupyter importer.
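Cantor's importer is written in C++, but the on-disk format it has to consume is plain JSON (nbformat 4), so its shape is easy to illustrate. Here is a minimal Python sketch of walking that structure; the `summarize_notebook` helper and the tiny hand-written sample notebook are purely illustrative, not part of Cantor's code:

```python
import json

def summarize_notebook(nb_json: str):
    """Parse a Jupyter notebook (nbformat 4 JSON) and summarize its cells.

    An importer has to walk exactly this structure: each cell has a
    cell_type ("code", "markdown", or "raw"), a source (a list of lines
    or a single string), and, for code cells, a list of outputs.
    """
    nb = json.loads(nb_json)
    summary = []
    for cell in nb.get("cells", []):
        source = cell.get("source", "")
        if isinstance(source, list):  # nbformat allows both forms
            source = "".join(source)
        entry = {"type": cell["cell_type"], "source": source}
        if cell["cell_type"] == "code":
            # output_type is one of: stream, execute_result,
            # display_data, error
            entry["outputs"] = [o["output_type"] for o in cell.get("outputs", [])]
        summary.append(entry)
    return summary

# A minimal hand-written notebook for demonstration:
nb = json.dumps({
    "nbformat": 4, "nbformat_minor": 5,
    "metadata": {},
    "cells": [
        {"cell_type": "markdown", "metadata": {}, "source": ["# Title"]},
        {"cell_type": "code", "metadata": {}, "execution_count": 1,
         "source": ["1 + 1"],
         "outputs": [{"output_type": "execute_result", "execution_count": 1,
                      "data": {"text/plain": ["2"]}, "metadata": {}}]},
    ],
})
print(summarize_notebook(nb))
```

The variety of output types (plain text, rich display data, errors) is exactly what makes the notebooks with "results of different types" above good test material.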
Posted almost 5 years ago by Qt Dev Loop
When working on Qt, we need to write code that builds and runs on multiple platforms, with various compiler versions and platform SDKs, all the time. Building code, running tests, reproducing reported bugs, or testing packages is at best cumbersome and time consuming without easy access to the various machines locally. Keeping actual hardware around is an option that doesn’t scale particularly well. Maintaining a bunch of virtual machines is often a better option – but we still need to set those machines up, and find an efficient way to build and run our local code on them. Building my local Qt 5 clone on different platforms to see if my latest local changes work (or at least compile) should be as simple as running “make”, perhaps with a few more options needed. Something like

qt5 $ minicoin run windows10 macos1014 ubuntu1804 build-qt

should bring up three machines, configure them using the same steps that we ask Qt developers to follow when they set up their local machines (or that we use in our CI system Coin – hence the name), and then run the build job for the code in the local directory. This (and a few other things) is possible now with minicoin. We can define virtual machines in code that we can share with each other like any other piece of source code. Setting up a well-defined virtual machine within which we can build our code takes just a few minutes. minicoin is a set of scripts and conventions on top of Vagrant, with the goal of making building and testing cross-platform code easy. It is now available under the MIT license at https://git.qt.io/vohilshe/minicoin.

A small detour through engineering of large-scale and distributed systems

While working with large-scale (thousands of hosts), distributed (globally) systems, one of my favourite, albeit somewhat gruesome, metaphors was that of “servers as cattle” vs “servers as pets”.
Pet-servers are those we groom manually: we keep them alive, and we give them nice names by which to remember and call (i.e., ssh into) them. However, once you are dealing with hundreds of machines, manually managing their configuration is no longer an option. And once you have thousands of machines, something will break all the time, and you need to be able to provision new machines quickly and automatically, without having to manually follow a list of complicated instructions. When working with such systems, we use configuration management systems such as CFEngine, Chef, Puppet, or Ansible to automate the provisioning and configuration of machines. When working in the cloud, the entire machine definition becomes “infrastructure as code”. With these tools, servers become cattle which – so the rather unvegetarian idea – are simply “taken behind the barn and shot” when they don’t behave like they should. We can simply bring a new machine, or an entire environment, up by running the code that defines it. We can use the same code to bring production, development, and testing environments up, and we can look at the code to see exactly what the differences between those environments are.

The tooling in this space is fairly complex, but even so there is little focus on developers writing native code targeting multiple platforms. For us as developers, the machine we write our code on is most likely a pet. Our primary workstation dying is the stuff of nightmares, and setting up a new machine will probably keep us busy for many days. But this amount of love and care is perhaps not required for those machines that we only need for checking whether our code builds and runs correctly. We don’t need our test machines to be around for a long time, and we want to know exactly how they are set up so that we can compare things.
Applying the concepts from cloud computing and systems engineering to this problem led me (back) to Vagrant, which is a popular tool to manage virtual machines locally and to share development environments.

Vagrant basics

Vagrant gives us all the mechanisms to define and manage virtual machines. It knows how to talk to a local hypervisor (such as VirtualBox or VMware) to manage the life-cycle of a machine, and how to apply machine-specific configurations. Vagrant is written in Ruby, and the way to define a virtual machine is to write a Vagrantfile, using Ruby code in a pseudo-declarative way:

Vagrant.configure("2") do |config|
    config.vm.box = "generic/ubuntu1804"
    config.vm.provision "shell",
        inline: "echo Hello, World!"
end

Running “vagrant up” in a directory with that Vagrantfile will launch a new machine based on Ubuntu 18.04 (downloading the machine image from the vagrantcloud first), and then run “echo Hello, World!” within that machine. Once the machine is up, you can ssh into it and mess it up; when done, just kill it with “vagrant destroy”, leaving no traces. For provisioning, Vagrant can run scripts on the guest, execute configuration management tools to apply policies and run playbooks, upload files, build and run docker containers, etc. Other configurations, such as network, file sharing, or machine parameters such as RAM, can be defined as well, in a more or less hypervisor-independent format. A single Vagrantfile can define multiple machines, and each machine can be based on a different OS image. However, Vagrant works on a fairly low level and each platform requires different provisioning steps, which makes it cumbersome and repetitive to do essentially the same thing in several different ways. Also, each guest OS has slightly different behaviours (for instance, where uploaded files end up, or where shared folders are located).
Some OSes don’t fully support all the capabilities (hello macOS), and of course running actual tasks is done differently on each OS. Finally, Vagrant assumes that the current working directory is where the Vagrantfile lives, which is not practical for developing native code.

minicoin status

minicoin provides various abstractions that try to hide many of the platform-specific details, works around some of the guest OS limitations, and makes the definition of virtual machines fully declarative (using a YAML file; I’m by no means the first one with that idea, so shout-out to Scott Lowe). It defines a structure for providing standard provisioning steps (which I call “roles”) for configuring machines, and for jobs that can be executed on a machine. I hope the documentation gets you going, and I’d definitely like to hear your feedback. Implementing roles and jobs to support multiple platforms and distributions is sometimes just as complicated as writing cross-platform C++ code, but it’s still a bit less complex than hacking on Qt. We can’t give access to our ready-made machine images for Windows and macOS, but there are some scripts in “basebox” that I collected while setting up the various base boxes, and I’m happy to share my experiences if you want to set up your own (it’s mostly about following the general Vagrant instructions about how to set up base boxes). Of course, this is far from done. Building Qt and Qt applications with the various compilers and toolchains works quite well, and saves me a fair bit of time when touching platform-specific code. However, working within the machines is still somewhat clunky, but it should become easier with more jobs defined. On the provisioning side, there is still a fair bit of work to be done before we can run our auto-tests reliably within a minicoin machine.
I’ve experimented with different ways of setting up the build environments: from a simple shell script to install things, to “insert CD with installed software”, to using docker images (for example for setting up a box that builds for WebAssembly, using Maurice’s excellent work with Using Docker to test WebAssembly). Given the amount of discussion we have on the mailing list about “how to build things” (including documentation, where my journey into this rabbit hole started), perhaps this provides a mechanism for us to share our environments with each other. Ultimately, I’d like coin and minicoin to converge, at least for the definition of the environments – there are already “coin nodes” defined as boxes, but I’m not sure if this is the right approach. In the end, anyone who wants to work with or contribute to Qt should be able to build and run their code in a way that is fairly close to how the CI system does things. The post Building and testing on multiple platforms – introducing minicoin appeared first on Qt Blog.
Posted almost 5 years ago by Johan Thelin
So, foss-north 2019 happened. 260 visitors. 33 speakers. Four days of madness. During my opening of the second day I mentioned some social media statistics: only 7 of our speakers had Mastodon accounts, but 30 had Twitter accounts.

“Day two of #fossnorth2019 is starting! @e8johan is giving a quick opening before the key notes.” pic.twitter.com/4fScLfY9Y6 – (((Niclas Zeising))) (@niclaszeising) April 9, 2019

Given the current situation, with an ongoing centralization of services to a few large providers, I feel that the Internet is moving in the wrong direction. The issue is that without these central repository-like services, it is hard to find content. For instance, Twitter would be boring if you had no one to tweet to. That is where federated services enter the picture. Here you get the best of two worlds. Take Mastodon, for instance: a federated micro-blogging network. This means that everyone can host their own instance (or simply join one), and all instances together create the larger Mastodon society. It even has the benefit of creating something of a micro-community at each server instance – so you can interact with the larger world (all federated Mastodon servers), and with your local community (the users of your instance). There are multiple federated solutions out there, everything from Nextcloud, WordPress, Pixelfed and Matrix to PeerTube. The last one, PeerTube, is a federated alternative to YouTube. It has similar benefits to Mastodon, so you have the larger federated space, but also your local community. Discussing the foss-north videos with Reg over Mastodon, we realized that there is a gap for a PeerTube instance for conference videos. (Sorry for the Swedish. We basically agree that a PeerTube instance for conference videos is a great idea.) I really hate to say that something should happen and not have the time to do something about it. There are so many things that really should happen.
Luckily, Reg reached out to me and said: what about me setting it up? Said and done, he went to Spacebear, created an instance of PeerTube, and got the domain conf.tube. I started migrating videos and tested it out. You can try it yourself; for instance, here is an embedded video from the lightning talks. If you help organize a conference and want to use federated video hosting, contact Reg to get an account at conf.tube. If you’re interested in free and open source, drop in at conf.tube and check out the videos.
Posted almost 5 years ago by Carl Schwan (ognarb)
I had a nice surprise last Monday: I learned that the city where I live, Saarbrücken (Germany), was hosting the 2019 edition of the Libre Graphics Meeting (LGM). So I took the opportunity to attend my first FOSS event. The event took place at the Hochschule der Bildenden Künste Saar from Wednesday 29.05 to Sunday 02.06. I really enjoyed it: I met a lot of other Free Software contributors (not only devs), and discovered some nice programming and artistic projects. There were some really impressive presentations and workshops.

Thursday, 30.05. GEGL (GIMP’s new ‘rendering engine’) maintainer Øyvind Kolås presented how to use GEGL effects from the command line, and how the same commands can be used directly from GIMP. This is helpful when we want to automate some workflow. In the afternoon, I discovered PraxisLIVE, an awesome live-coding IDE where you can create effects with Java and a graph editor, and see the effect instantly on, for example, a webcam stream or a music track. Ana Isabel Carvalho and Ricardo Lafuente presented their past workshop in Porto where the participants created pixel-art fonts with git and the GitLab CI.

Friday, 31.05. On Friday, I took part in two workshops. The first was the GIMP one, where I met a lot of GIMP/GEGL developers, but it was more of a development meeting than a workshop where I could get my hands dirty. I also took part in the Inkscape workshop, where I learned about all of the nice features coming in Inkscape 1.0 (a new alpha version was released during LGM 2019, and users are encouraged to report bugs and regressions). I also learned that Inkscape can be used to create nice wood work: the model is published in the Thingiverse under CC BY-NC-SA 3.0. After this productive day, most of the LGM participants went on the ‘Kneipentour’ (bar-hopping) and enjoyed some good Zwickel (the local beer).

Saturday, 01.06. After last night, it was a bit difficult to get up, but I managed to be only one minute late to Boudewijn Rempt’s talk “HDR Support in Krita”. In the afternoon, I took part in the Paged.js workshop, where we were able to create a book layout with CSS and HTML. Paged.js could be interesting for generating nice KDE handbooks with a professional look and feel, because it only uses web standards (not yet implemented in any web browser), and we could generate the PDF from the already existing HTML version.

Sunday, 02.06. On Sunday I took part in the Blender workshop, and Julian Eisel did an excellent job explaining the internals of how Blender’s “DNA and RNA system” achieves great backward compatibility for .blend files and makes it painless to write UIs in Python almost directly connected to the DNA.

Conclusion. In summary, LGM was a great event. I really enjoyed it, and I hope I will be able to attend the next edition in Rennes (France) and see all these nice people again. Oh, and I now have more stickers on my laptop. You can comment on this post on Mastodon. Many thanks to Drew DeVault for proofreading this blog post.
Posted almost 5 years ago by Piyush Aggarwal | brute4s99
This blog post recaps the last year, and how I ended up getting selected for GSoC 2019 as a student with an awesome project in KDE. Read more in the blog post here.
Posted almost 5 years ago by Nate Graham (ngraham)
Week 73 of the Usability & Productivity initiative is here! We have all sorts of cool stuff to announce, and eagle-eyed readers will see bits and pieces of Plasma 5.16’s new wallpaper “Ice Cold” in the background!

New Features

- Kate now has a menu item with the standard keyboard shortcut (Ctrl+0) to reset the font size to the default value (Kishore Gopalakrishnan, KDE Frameworks 5.59)
- On X11, when Dolphin is already running and another app asks it to display some folder, it now opens a new tab to show that folder rather than creating a whole new window (Alexander Saoutkin, Dolphin 19.08.0)

Bugfixes & Performance Improvements

- Spectacle can now take fullscreen screenshots on 4K screens (Vlad Zagorodniy, KDE Plasma 5.12.9)
- The Home button in Discover’s toolbar now activates on click-and-release, not on click (Björn Feber, KDE Plasma 5.16.0)
- More fixes and polish for the upcoming notifications rewrite in Plasma 5.16: the panel no longer shows a blue icon when there are active notifications on the screen; KDE Connect notifications are now configurable; notifications from the Snap version of the Discord app now appear properly; when plugging in the mouse is configured to disable the touchpad, the notification shown when this happens now removes itself when the mouse is unplugged; apps that send multiple notifications but don’t tell Plasma their app IDs are now correctly grouped in the history; “Show even in Do Not Disturb Mode” now works for Spectacle; play/pause/more-info buttons in file transfer notifications now have better spacing; and consecutive identical notifications are no longer discarded, which was causing various issues (Kai Uwe Broulik, KDE Plasma 5.16.0)
- Kate and other KTextEditor-based apps no longer reset the syntax highlighting method when saving remote files using the sftp:// or fish:// protocol (Nibaldo Gonzalez, KDE Frameworks 5.59)
- Horizontal separators in Kirigami and QML-based user interfaces now have an equal amount of space above and below them (Marco Martin and Filip Fila, KDE Frameworks 5.59)
- Kate’s “Quick Open” feature once again has the top item selected by default (Michal Humpula, Kate 19.08.0)
- Akonadi-using apps like KMail can now automatically and silently recover from the dreaded “Multiple Merge Candidates” error (Daniel Vrátil, KDE Applications 19.08.0)

User Interface Improvements

- The System Settings page to configure Baloo now has an improved user interface (Kishore Gopalakrishnan, KDE Plasma 5.16.0)
- Huge improvements for font display: slight RGB hinting is now the default, Plasma no longer overrides distro font rendering defaults, and the font settings page in System Settings now displays the actual value instead of a confusing “Vendor Default” label (Bhushan Shah and Julian Wolff, KDE Plasma 5.17.0)
- Menus and combobox pop-ups in QML- and Kirigami-based apps no longer animate their highlight effects when hovered over, bringing them into visual consistency with their QWidgets equivalents (Björn Feber, KDE Frameworks 5.59)
- When you use Kate or other KTextEditor-based apps to save over another file, they now delegate the confirmation prompt to the file dialog itself, so there’s never a double prompt or an overwrite without confirmation (Méven Car, KDE Frameworks 5.59)
- When using the Breeze Light or Dark themes, the panel now reads accent, highlight, and hover colors from the active color scheme rather than using hardcoded colors (Noah Davis, KDE Frameworks 5.59)
- File metadata in decimal form now limits the decimals to three significant figures for readability (Alexander Stippich, KDE Frameworks 5.59)
- KolourPaint now uses a better icon when using a dark theme (Noah Davis, KDE Frameworks 5.60)

Next week, your name could be in this list! Not sure how? Just ask! I’ve helped mentor a number of new contributors recently and I’d love to help you, too! You can also check out https://community.kde.org/Get_Involved, and find out how you can help be a part of something that really matters. You don’t have to already be a programmer. I wasn’t when I got started. Try it, you’ll like it! We don’t bite! If you find KDE software useful, consider making a donation to the KDE e.V. foundation.
Posted almost 5 years ago by Volker Krause
A lot has happened again around KDE Itinerary since the last two-month summary. A particular focus area at the moment is achieving “Akademy readiness”, that is, being able to properly support trips to KDE Akademy in Milan in early September; understanding the tickets of the Italian national railway is a first step in that direction.

New Features

The timeline view in KDE Itinerary now highlights the current element(s), to make it easier to find the relevant information. Active elements also got an “are we there yet?” indicator, a small bar showing the progress of the current leg of a trip, taking live data into account. (Trip element being highlighted and showing a progress indicator.) Another clearly visible addition can be found in the trip group summary elements in the timeline. Besides expanding or collapsing the trip, these elements now also show information concerning the entire trip when available, such as the weather forecast, or any power plug incompatibility you might encounter during the trip. (Trip group summary showing the weather forecast for the entire trip, and power plug compatibility warnings.) Less visible but much more relevant for “Akademy readiness” was adding support for Trenitalia tickets. That required some changes and additions to how we deal with barcodes, as well as an (ongoing) effort to decode the undocumented binary codes used on those tickets. More details can be found in a recent post on this subject.

Infrastructure Work

A lot has also happened behind the scenes. The ongoing effort to promote KContacts and KCalCore yields some improvements we benefit from directly as well, such as the Unicode diacritic normalization applied during the country name detection in KContacts (reducing the database size and detecting country names also with slight spelling variations) or the refactoring of KCalCore::Person and KCalCore::Attendee (which will make those types easily accessible by extractor scripts).
The train reservation data model now contains the booked class, which is particularly useful information when you don’t have a seat reservation and need to pick the right compartment. The RCT2 extractor (relevant e.g. for DSB, NS, ÖBB, SBB) got support for more variations of seat reservations and, more importantly, now preserves the ticket token for ticket validation with the mobile app. The train station knowledge database is now also indexed by UIC station codes, which became necessary to support the Trenitalia tickets. Extractor scripts got a new utility class for dealing with unaligned binary data in barcodes. We also finally found the so far elusive mapping table for the station identifiers used in SNCF barcodes, provided by Trainline as Open Data. This has yet to find its way into Wikidata though, together with more UIC station codes for train stations in Italy. Help welcome :)

Performance Optimizations

Keeping an eye on performance while the system becomes more complex is always a good idea, and a few things have been addressed in this area too. The barcode decoder was so far exposed more or less directly to the data extractors, resulting in possibly performing the expensive decoding work twice on the same document, e.g. when both the generic extractor and one or more custom extractors processed a PDF document. Additionally, each of those applied their own heuristics and optimizations to avoid expensive decoding attempts where they are unlikely to succeed. Those optimizations have now all moved into the barcode decoder directly, together with a positive and negative decoding result cache. That simplifies the code using this, and it speeds up extraction of PDF documents without a context (such as a sender address) by about 15%. Kirigami’s theme color change compression got further optimized, which in the case of KDE Itinerary avoids the creation of a few hundred QTimer objects.
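Itinerary's actual utility class for unaligned binary data is C++, but the core trick behind it, reading an integer that does not start on a byte boundary, can be sketched in a few lines of Python. The function name and sample data here are illustrative, not Itinerary's real code:

```python
def read_unaligned_uint(data: bytes, bit_offset: int, bit_count: int) -> int:
    """Read a big-endian unsigned integer of bit_count bits starting at an
    arbitrary bit offset. Packing values back to back like this avoids the
    padding that byte- or word-aligned storage would waste."""
    value = 0
    for i in range(bit_count):
        bit = bit_offset + i
        # Take bit (bit & 7) of byte (bit >> 3), most significant bit first.
        value = (value << 1) | ((data[bit >> 3] >> (7 - (bit & 7))) & 1)
    return value

# Three 20-bit values packed into 8 bytes (plus 4 trailing padding bits),
# instead of the 12 bytes that 32-bit-aligned storage would need:
packed = bytes.fromhex("ABCDE12345FFFFF0")
print([hex(read_unaligned_uint(packed, i * 20, 20)) for i in range(3)])
# ['0xabcde', '0x12345', '0xfffff']
```

Applied to a table of 24-bit IBNR or UIC station codes, dropping byte alignment in favor of this kind of dense packing is where the 25% size reduction mentioned below comes from.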
The compiled-in knowledge database got a more space-efficient structure for storing unaligned numeric values, cutting down the size of the 24-bit-wide IBNR and UIC station code indexes by 25%.

Fixes & Improvements

There are plenty of smaller changes that are noteworthy too, of course:

- We fixed a corner case in KF5::Prison’s Aztec encoder that UIC 918.3 tickets can trigger, producing invalid barcodes.
- The data extractors for Brussels Airlines, Deutsche Bahn and SNCF got fixes for various booking variants and corner cases.
- Network coverage for KPublicTransport increased, including operators in Ireland, Poland, Sweden, parts of Australia and more areas in Germany.
- More usage of emoji icons in KDE Itinerary got replaced by “real” icons, which fixes rendering glitches on Android and produces a more consistent look there.
- Lock inhibition during barcode scanning now also works on Linux.
- PkPass files are now correctly detected on Android again when opened as a content: URL.
- The current trip group in the KDE Itinerary timeline is now always expanded, which fixes various confusions in the app when “now” or “today” don’t exist due to being in a collapsed time range.
- Multi-day event reservations are now split into begin and end elements in the timeline, as already done for hotel bookings.
- Rental car bookings with a drop-off location different from the pick-up location are now treated as location changes in the timeline, which is relevant e.g. for the weather forecasts.
- Extracting times from PkPass boarding passes now converts those to the correct timezone.

Contribute

A big thanks to everyone who donated test data again; this continues to be essential for improving the data extraction. If you want to help in other ways than donating test samples, see our Phabricator workboard for what’s on the todo list, for coordinating work and for collecting ideas.
For questions and suggestions, please feel free to join us on the KDE PIM mailing list or in the #kontact channel on Matrix or Freenode.