
News

Posted over 4 years ago by Adriaan de Groot ([ade])
As part of migrating this blog from a defunct hosting company and a Wordpress installation to a non-defunct hosting company and Jekyll, I’m re-visiting a lot of old posts. Assuming the RSS generator is ok, that won’t bother the feed aggregators (the KDE planet in particular). The archives are slowly being filled in, and one entry from 2004 struck me:

Ok, my new machine is installed (an amd64 running FreeBSD -CURRENT, which puts me firmly at the forefront of things-unstable).

Not much has changed in 15 years, except maybe the “unstable” part. Oh, and I tend to run -STABLE now, because that’s more convenient for packaging.

Something else I spotted: in 2004 I was working on KPilot as a hobby project (alongside my PhD and whatever else was paying the bills then), so there are lots of links to the old site. The problem is, I let the domain registration expire long ago, when Palm, Inc., the Palm Pilot, and KDE 4 ceased to be a going concern. So that domain has been hijacked, or squatted, or whatever, with techno bla-bla-bla and recognizable scraps of text from the ancient website. Presumably downloading anything from there that pretends to be KPilot will saddle you with plenty of malware. In any case it’s a reminder that links from (very) old blog posts are not particularly to be trusted. Since the archives are being updated (from old Wordpress backups, and from the Internet Archive) I’ll try to fix links or point them somewhere harmless if I spot something, but no guarantees.
Posted over 4 years ago by Kuntal Majumder (hellozee)
The sprint officially ended yesterday and most of the participants have already left, except me, Ivan, Wolthera and Jouni. Well, I would have also left as planned, but I read my flight timings wrong: the flight would leave three hours after what I thought the departure time was.
Posted over 4 years ago by Kate News
The default configuration for the Kate LSP client now supports more than just C/C++ and Python out of the box. In addition to the recently added Rust support, we now support Go and LaTeX/BibTeX, too.

Configuration

The default supported servers are configured via a JSON settings file we embed in our plugin resources. Currently this looks like:

{
    "servers": {
        "bibtex": {
            "use": "latex"
        },
        "c": {
            "command": ["clangd", "-log=error", "--background-index"],
            "commandDebug": ["clangd", "-log=verbose", "--background-index"],
            "url": "https://clang.llvm.org/extra/clangd/"
        },
        "cpp": {
            "use": "c"
        },
        "latex": {
            "command": ["texlab"],
            "url": "https://texlab.netlify.com/"
        },
        "go": {
            "command": ["go-langserver"],
            "commandDebug": ["go-langserver", "-trace"],
            "url": "https://github.com/sourcegraph/go-langserver"
        },
        "python": {
            "command": ["python3", "-m", "pyls", "--check-parent-process"],
            "url": "https://github.com/palantir/python-language-server"
        },
        "rust": {
            "command": ["rls"],
            "rootIndicationFileNames": ["Cargo.lock", "Cargo.toml"],
            "url": "https://github.com/rust-lang/rls"
        }
    }
}

The file is located at kate.git/addons/lspclient/settings.json. Merge requests to add additional languages are welcome. I assume we still need to improve what we allow to specify in the configuration.

Currently supported configuration keys

At the moment, the following keys inside the per-language object are supported:

- use — Tell the LSP client to use the LSP server for the given language for this one, too. Useful to dispatch stuff to a server supporting multiple languages, like clangd for C and C++.

- command — Command line to start the LSP server.

- commandDebug — Command line to start the LSP server in debug mode. This is used by Kate if the LSPCLIENT_DEBUG environment variable is set to 1. If this variable is set, the LSP client itself will output debug information on stdout/stderr, and the commandDebug command line should try to trigger the same for the LSP server, e.g. by using -log=verbose for clangd.

- rootIndicationFileNames — For the Rust rls LSP server we added the possibility to specify a list of file names that indicate which folder is the root for the language server. Our client will search upwards for the given file names, based on the file path of the document you edit. For Rust that means we first try to locate some Cargo.lock; if that fails, we do the same for Cargo.toml.

- url — URL of the home page of the LSP server implementation. At the moment this is not used internally; later it should be shown in the UI to give people hints where to find further documentation for the matching LSP server (and how to install it).

Current State

For C/C++ with clangd the experience is already good enough for day-to-day work. What is possible can be seen in one of my previous posts, video included. I and some colleagues use the master version of Kate at work for daily coding. Sometimes Kate confuses clangd during saving of files, but otherwise no larger hiccups occur.

For Rust with rls many things work, too. We now discover the root directory for it more reliably thanks to the hints to look for the Cargo files. We adapted the client to support the Hover message type rls emits, too.

For the other languages: beside some initial experiments confirming that the servers start and you get some completion/…, not much work went into that. Help is welcome to improve their configuration and our client code to get a better experience. Just give Kate from the master branch a test drive; here is our build-it how-to. We are open for feedback on [email protected] or directly via patches on invent.kde.org. Btw., if you think our how-to or other stuff on this website is lacking, patches are welcome for that, too! The complete page is available via our GitHub instance; to try changes locally, see our README.md.
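The upward search that rootIndicationFileNames describes — try every directory up to the root for the first file name, then fall back to the next one — can be sketched in a few lines. This is a minimal stand-alone sketch, not the actual Kate LSP client code; the function name findRootDir is made up for illustration:

```cpp
#include <filesystem>
#include <optional>
#include <string>
#include <vector>

namespace fs = std::filesystem;

// Walk from the document's directory up towards the filesystem root and
// return the first directory containing one of the indicator files.
// All directories are tried for the first file name (e.g. Cargo.lock)
// before falling back to the next one (e.g. Cargo.toml).
std::optional<fs::path> findRootDir(const fs::path &documentPath,
                                    const std::vector<std::string> &indicatorFileNames)
{
    for (const std::string &name : indicatorFileNames) {
        fs::path dir = documentPath.parent_path();
        while (!dir.empty()) {
            if (fs::exists(dir / name)) {
                return dir;
            }
            if (!dir.has_parent_path() || dir.parent_path() == dir) {
                break; // reached the filesystem root
            }
            dir = dir.parent_path();
        }
    }
    return std::nullopt;
}
```

For a file src/main.rs inside a Cargo project, this returns the directory holding Cargo.lock (or, failing that, Cargo.toml), which then becomes the workspace root handed to the language server.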
Posted over 4 years ago by Albert Astals Cid (TSDgeos)
If you want an Akademy 2019 t-shirt, you have until Monday 12th Aug at 1100 CEST (i.e. in 2 days and a bit) to order it. Head over to https://akademy.kde.org/2019/akademy-2019-t-shirt and get yourself one of the exclusive t-shirts with Jen's awesome design :)
Posted over 4 years ago by Filip Fila
Previously: 1st GSoC post, 2nd GSoC post, 3rd GSoC post, 4th GSoC post. In this GSoC entry I’ll mention two things implemented since the last blog post: syncing of scaling and NumLock settings. Aside from that, I’ll reflect on syncing of locally-installed files. Even though I thought scaling would require changes on the SDDM side… Continue Reading →
Posted over 4 years ago by Thomas Fischer
So far, most of my blog postings that appeared on Planet KDE were release announcements for KBibTeX. Still, I had always planned to write more about what happens on the development side of KBibTeX. Well, here comes my first try to shed light on KBibTeX's internal workings…

Active development of KBibTeX happens in its master branch. There are other branches created from time to time, mostly for bug fixing, i.e. allowing bug reporters to compile and test a bug fix before the change is merged into master or a release branch. Speaking of release branches, those get forked from master every one to three years. At the time of writing, the most recent release branch is kbibtex/0.9. Actual releases, including alpha or beta releases, are tagged on those release branches.

KBibTeX is developed on Linux; personally I use the master branch on Gentoo Linux and Arch Linux. KBibTeX compiles and runs on Windows with the help of Craft (master better than kbibtex/0.9). It is on my mental TODO list to configure a free Windows-based continuous integration service to build binary packages and installers for Windows; suggestions and support are welcome. Craft supports macOS too, to some extent, so I gave KBibTeX a shot on this operating system (I happen to have access to an old Mac from time to time). Running Craft and installing packages caused some trouble, as macOS is the least tested platform for Craft. Also, it seems to be more difficult to find documentation on how to solve compilation or linking problems on macOS than it is for Windows (let alone Linux). However, with the help of the residents in #kde-craft and related IRC channels, I was eventually able to start compiling KBibTeX on macOS (big thanks!). The main issue that came up when crafting KBibTeX on macOS was the problem of linking against ICU (International Components for Unicode).
This library is shipped on macOS, as it is used in many other projects, but seemingly even if you install Xcode, you don't get any headers or other development files. Installing a different ICU version via Craft doesn't seem to work either. However, I am no macOS expert, so I may have gotten the details wrong…

Discussing in Craft's IRC channel how to get KBibTeX installed on macOS despite its dependency on ICU, I got asked why KBibTeX needs to use ICU in the first place, given that Qt ships QTextCodec, which covers most text encoding needs. My particular need is to transliterate a given Unicode text like ‘äåツ’ into a 7-bit ASCII representation. This is used, among others, to rewrite identifiers for BibTeX entries from whatever the user wrote or an imported BibTeX file contained into an as-close-as-possible 7-bit ASCII representation (which is usually the lowest common denominator supported on all systems), in order to reduce issues if the file is fed into an ancient bibtex or shared with people using a different encoding or keyboard layout. Such a transliteration is also useful in other scenarios, such as when filenames are supposed to be based on a person's name but still must be transcribed into ASCII to be accessible on any filesystem and for any user irrespective of keyboard layout. For example, if a filename needs to have some resemblance to the Scandinavian name ‘Ångström’, the name's transliteration could be ‘Angstrom’; thus a file could be named Angstrom.txt.

So, if ICU is not available, what are the alternatives? Before I adopted ICU for the transliteration task, I had used iconv. Now, my first plan to avoid hard-depending on ICU was to test for both ICU and iconv during the configuration phase (i.e. when CMake runs), use ICU if available, and fall back to iconv if no ICU was available. Depending on the chosen alternative, paths and defines (to enable or disable specific code via #ifdefs) were set.
See commit 2726f14ee9afd525c4b4998c2497ca34d30d4d9f for the implementation. However, using iconv has some disadvantages, which motivated my original move to ICU:

- There are different iconv implementations out there, and not all support transliteration.
- The result of a transliteration may depend on the current locale. For example, ‘ä’ may get transliterated to either ‘a’ or ‘ae’.
- Typical iconv implementations know fewer Unicode symbols than ICU. Results are acceptable for European or Latin-based scripts, but for everything else you far too often get ‘?’ back.

Is there a third option? Actually, yes. Qt's Unicode support covers only the first 2^16 symbols anyway, so it is technically feasible to maintain a mapping from Unicode character (essentially a number between 0 and 65535) to a short ASCII string like AE for ‘Æ’ (0x00C6). This mapping can be built offline with the help of a small program that does link against ICU, queries this library for a transliteration of every Unicode code point from 0 to 65535, and prints out a C/C++ source code fragment containing the mapping (almost like in the good old days with X PixMaps). This source code fragment can be included in KBibTeX to enable transliteration without requiring/depending on either ICU or iconv on the machines where KBibTeX is compiled or run. Disadvantages include the need to drag along this mapping, as well as to update it from time to time in order to keep up with updates in ICU's own transliteration mappings. See commit 82e15e3e2856317bde0471836143e6971ef260a9 where the mapping got introduced as the third option.

The solution I eventually settled on is to still test for ICU during the configuration phase and make use of it in KBibTeX as I did before. However, in case no ICU is available, the offline-generated mapping will be used to offer essentially the same functionality. Switching between both alternatives is a compile-time thing; both code paths are separated by #ifdefs.
Support for iconv has been dropped, as it became the least complete solution (see commit 47485312293de32595146637c96784f83f01111e).

Now, what does this generated mapping look like? In order to minimize the data structure's size, I came up with the following approach: First, there is a string called const char *unidecode_text that contains any occurring plain ASCII representation once, for example only one single a that can be used for ‘a’, ‘ä’, ‘å’, etc. This string is about 28800 characters long for 65536 Unicode code points, where a code point's ASCII representation may be several characters long. So, quite efficient. Second, there is an array const unsigned int unidecode_pos[] that holds a number for each of the 65536 Unicode code points. Each number contains both a position and a length, telling which substring to extract from unidecode_text to get the ASCII representation. As the observed ASCII representations' lengths never exceed 31, the array's unsigned ints contain the representations' lengths in their lower (least significant) five bits; the remaining, more significant bits contain the positions. For example, to get the ASCII representation for ‘Ä’, use the following approach:

    const char16_t unicode = 0x00C4; ///< 'A' with two dots above (diaeresis)
    const int pos = unidecode_pos[unicode] >> 5;
    const int len = unidecode_pos[unicode] & 31;
    const char *ascii = strndup(unidecode_text + pos, len);

If you want to create a QString object, use this instead of the last line above:

    const QString ascii = QString::fromLatin1(unidecode_text + pos, len);

If you were to go through this code step by step with a debugger, you would see that unidecode_pos[unicode] has the value 876481 (this value may change if the generated source code changes). Thus, pos becomes 27390 and len becomes 1. Indeed, and not surprisingly, in unidecode_text at this position is the character A. BTW, the value 876481 is not just used for ‘Ä’, but also for ‘À’ or ‘Â’, for example.
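The packing scheme described above can be demonstrated end-to-end with a toy table. This is a self-contained sketch with made-up entries, not the real generated mapping; the names unidecode_text and unidecode_pos are reused here only as stand-ins:

```cpp
#include <string>

// Toy version of the generated mapping: each unidecode_pos entry packs
// (position << 5) | length, pointing into unidecode_text. The real table
// has 65536 entries; this one covers just three made-up "code points".
static const char *unidecode_text = "AAEO"; // "A" at 0, "AE" at 1, "O" at 3
static const unsigned int unidecode_pos[] = {
    (0u << 5) | 1u, // code point 0 -> "A"
    (1u << 5) | 2u, // code point 1 -> "AE"
    (3u << 5) | 1u, // code point 2 -> "O"
};

// Unpack one entry: the upper bits are the substring's position,
// the lowest five bits are its length (at most 31).
std::string asciiFor(unsigned int codePoint)
{
    const unsigned int packed = unidecode_pos[codePoint];
    const unsigned int pos = packed >> 5;
    const unsigned int len = packed & 31u;
    return std::string(unidecode_text + pos, len);
}
```

Note how the "A" needed by the second entry is shared with the first one: overlapping substrings are what keep the real unidecode_text down to roughly 28800 characters.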
The above solution can easily be adjusted to work with plain C99 or modern C++. It is in no way specific to Qt or KDE, so it should be possible to use it as a potential solution for musl (a libc implementation) to implement a //TRANSLIT feature in their iconv implementation (I have not checked their code to see if that is possible at all).
Posted over 4 years ago by Kubuntu News
As you may have been made aware by some news articles, blogs, and social media posts, a vulnerability in the KDE Plasma desktop was recently disclosed publicly. This occurred without the KDE developers/security team or distributions being informed of the discovered vulnerability, or being given any advance notice of the disclosure.

KDE have responded quickly and responsibly and have now issued an advisory with a ‘fix’ [1]. Kubuntu is now working on applying this fix to our packages. Packages in the Ubuntu main archive are having updates prepared [2], which will require a period of review before being released. Consequently, if users wish to get fixed packages sooner, packages with the patches applied have been made available in our PPAs.

Users of Xenial (out of support, but we have provided a patched package anyway), Bionic and Disco can get the updates as follows:

If you have our backports PPA [3] enabled: The fixed packages are now in that PPA, so all that is required is to update your system by your normal preferred method.

If you do NOT have our backports PPA enabled: The fixed packages are provided in our UPDATES PPA [4]:

    sudo add-apt-repository ppa:kubuntu-ppa/ppa
    sudo apt update
    sudo apt full-upgrade

As a precaution, to ensure that the update is picked up by all KDE processes, after updating their system users should at the very least log out and in again to restart their entire desktop session.

Regards,
Kubuntu Team

[1] – https://kde.org/info/security/advisory-20190807-1.txt
[2] – https://bugs.launchpad.net/ubuntu/+source/kconfig/+bug/1839432
[3] – https://launchpad.net/~kubuntu-ppa/+archive/ubuntu/backports
[4] – https://launchpad.net/~kubuntu-ppa/+archive/ubuntu/ppa
Posted over 4 years ago by Adriaan de Groot ([ade])
This is a short description of a workflow I apply in git repositories that I “own”; it mostly gets applied to Calamares, the Linux installer framework, because I spend most of my development hours on that. But it also goes into ARPA2 projects and home experiments. It’s a variation on “always summer in master”, and I call it the Git Alligator because when you draw the resulting tree in ASCII-art, horizontally (I realise that’s a pretty niche artform), you get something like this:

       /-o-o-\     /-o-o-o-\     /-o-\
    o--o-------o-o---------o-----o--o

To me, that looks like the bumps on an alligator’s back. If I were a bigger fan of Antoine de Saint-Exupéry, I would probably see it as a python that has eaten multiple elephants. Anyway, the idea is twofold:

- master is always in a good state
- I work on (roughly) one thing at a time

For each thing that I work on, I make a branch; if it’s attached to a Calamares issue, I’ll name it after the issue number. If it’s a different bit of work, I’ll name it more creatively. The branch is branched off of master (which is always in a good state). Then I go and work on the branch – commit early, commit often – until the issue is resolved or the feature implemented or whatever. In a codebase where I’m the only contributor, or the gatekeeper for it, I know that master remains unchanged and that a merge can go in painlessly. In a codebase with more contributors, I might merge upstream master into my branch right at the end as a sanity check (right at the end, because most of these branches are short-lived, a day or two at most for any given issue). The alligator effect comes in when merging back to master: I always use --no-ff and I try to write an additional summary description of the branch in the merge commit. Here’s a screenshot of Calamares history, from qgit, turned on its side like an alligator crawling to the right (cropped a little so you don’t see where I don’t follow my own precepts, and annotated with branch names).
Aside from the twofold ideas of “always summer in master” and “focus on one thing”, I see a couple of other benefits:

- History if desired: this approach preserves history (all the little steps; although I do rebase and fixup and amend stuff as I go along, I don’t materially squash things).
- Conciseness when needed: having all the history is nice, but if you follow the “alligator’s tummy branch” (that is, master, along the bottom of the diagrams) you get only merge nodes, each with a completed bugfix or feature and a little summary; in other words, following that line of commits gives you a squashed view of what happened.
- Visual progress: each “bump” on the alligator’s back is a unit of progress. If I were to merge without --no-ff, the whole thing would be smooth like a garter snake, and then it’s much harder to see the “things” that I’ve done. Instead I’d need to look at the log and untangle commit messages to see what I was working on. This has a “positivity” benefit: I can point and say “I did a thing!”

I won’t claim this approach works for everybody, or for larger teams, but it keeps me happy most days of the week, and as a side benefit I get to think about ol’ Albert the Alligator.
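One bump of the alligator, expressed as plain git commands, looks roughly like this — a sketch, with branch names and commit messages made up for illustration:

```shell
# Start a bump: branch off an always-good master
git checkout master
git checkout -b issue-1234

# Work on one thing; commit early, commit often
echo "fix" >> somefile
git add somefile
git commit -m "Work towards issue #1234"

# Optional sanity check in shared repositories: merge upstream
# master into the branch right at the end (a no-op here)
git merge master

# Merge back with --no-ff so the bump stays visible as a merge node,
# and put a summary of the branch in the merge commit
git checkout master
git merge --no-ff issue-1234 -m "Fix the thing (issue #1234)"
git branch -d issue-1234
```

Without --no-ff, git would fast-forward master onto the branch tip and the bump would flatten into the garter snake described above.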
Posted over 4 years ago by Andrea Del Sarto
…and I’ve made a new wallpaper! Yes, finally I’m back in my favourite application, Inkscape. I hope this is a cool presentation. I called this wallpaper Mountain, because… well, there are mountains, with a sun made from the KDE Neon logo. I hope you like it. You can find it HERE. See you soon with other wallpapers…