
News

Posted 2 days ago
If there's something that I hate, it's doing things manually when I know I could automate them. Am I alone in this situation? I doubt it. Nevertheless, every day, there are thousands of developers using GitHub who are doing the same thing over and over again: they click on this button: This does not make any sense. Don't get me wrong. It makes sense to merge pull requests. It just does not make sense that someone has to push this damn button every time.

It does not make any sense because every development team in the world has a known list of prerequisites before they merge a pull request. Those requirements are almost always the same, and they go something along these lines: Is the test suite passing? Is the documentation up to date? Does this follow our code style guideline? Have N developers reviewed this?

As this list gets longer, the merging process becomes more error-prone. "Oops, John just clicked on the merge button while not enough developers had reviewed the patch." Ring a bell?

In my team, we're like every team out there. We know what our criteria for merging code into our repository are. That's why we set up a continuous integration system that runs our test suite each time somebody creates a pull request. We also require the code to be reviewed by 2 members of the team before it's approved. When those conditions are all met, I want the code to be merged. Without clicking a single button. That's exactly how Mergify started.

Mergify is a service that pushes that merge button for you. You define rules in the .mergify.yml file of your repository, and when the rules are satisfied, Mergify merges the pull request. No need to press any button.

Take a random pull request, like this one: This comes from a small project that does not have a lot of continuous integration services set up, just Travis. In this pull request, everything's green: one of the owners reviewed the code, and the tests are passing.
Therefore, the code should already be merged: but there it is, hanging, chilling, waiting for someone to push that merge button. Someday. With Mergify enabled, you'd just have to put this .mergify.yml at the root of the repository:

    rules:
      default:
        protection:
          required_status_checks:
            contexts:
              - continuous-integration/travis-ci
          required_pull_request_reviews:
            required_approving_review_count: 1

With such a configuration, Mergify enables the desired restrictions, i.e., Travis passes and at least one project member reviewed the code. As soon as those conditions are satisfied, the pull request is automatically merged.

We built Mergify as a free service for open-source projects. The engine powering the service is also open-source. Now go check it out and stop letting those pull requests hang out one second more. Merge them! If you have any questions, feel free to ask us or write a comment below! And stay tuned, as Mergify offers a few other features that I can't wait to talk about!
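The merge criteria described above boil down to a simple predicate over the pull request's state. Here is an illustrative sketch of that idea in Python (not Mergify's actual engine; the dictionary fields are hypothetical, not Mergify's real schema):

```python
# Illustrative sketch: a pull request is ready to merge once every
# required CI check passed and enough reviewers approved it.

def ready_to_merge(pr, required_checks, required_approvals):
    """Return True when all required checks succeeded and the
    approval count meets the configured threshold."""
    checks_ok = all(pr["checks"].get(c) == "success" for c in required_checks)
    enough_reviews = pr["approvals"] >= required_approvals
    return checks_ok and enough_reviews

pr = {
    "checks": {"continuous-integration/travis-ci": "success"},
    "approvals": 1,
}
print(ready_to_merge(pr, ["continuous-integration/travis-ci"], 1))  # True
```

With the configuration above, this predicate flips to True exactly when Travis is green and one project member has approved, which is when Mergify pushes the button for you.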
Posted 2 days ago
For weeks I have had problems with Google Chrome. It would work a few times and then, for reasons I didn’t understand, would stop working. On the command line you would get several screens of text, but the Chrome window would never appear. So I tried the Beta, and it worked… once. Deleted all the cache and configuration and it worked… once. Every time, the process would be in an infinite loop listening to a Unix socket (fd 7), but no window for the second and subsequent starts of Chrome. By sheer luck, in the screenfuls of spam I noticed this: Gkr-Message: 21:07:10.883: secret service operation failed: The name org.freedesktop.secrets was not provided by any .service files Hmm, I noticed that every time I started a fresh new Chrome, I logged into my Google account. So, once again clearing things, I started Chrome, didn’t log in, and closed and reopened. I had Chrome running the second time! Alas, not with all the stuff synchronised. An issue for Mailspring put me onto the right path: installing gnome-keyring (or the dependencies p11-kit and gnome-keyring-pkcs11) fixed Chrome. So if Chrome starts but you get no window, especially if you use Cinnamon, try that trick.
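The telltale Gkr-Message line is easy to miss in the screenfuls of output. A trivial filter like this (just an illustration, not part of Chrome or gnome-keyring) would have surfaced it immediately:

```python
# Minimal sketch: surface Secret Service errors buried in Chrome's
# stderr output. The pattern matches the Gkr-Message line quoted above.
import re

PATTERN = re.compile(
    r"secret service operation failed"
    r"|org\.freedesktop\.secrets.*not provided by any \.service files"
)

def secret_service_errors(log_text):
    """Return the lines hinting that no Secret Service daemon is running."""
    return [line for line in log_text.splitlines() if PATTERN.search(line)]

log = """\
[1234:5678:0101/120000.000000:ERROR:something.cc(42)] harmless noise
Gkr-Message: 21:07:10.883: secret service operation failed: The name org.freedesktop.secrets was not provided by any .service files
"""
for line in secret_service_errors(log):
    print(line)
```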
Posted 2 days ago
Dates

I’m going to DebCamp18! I should arrive at NCTU around noon on Saturday, 2018-07-21.

My Agenda

DebConf Video: Research if/how MediaDrop can be used with existing Debian video archive backends (basically, just a bunch of files on http).

DebConf Video: Take a better look at PeerTube and prepare a summary/report for the video team so that we better know if/how we can use it for publishing videos.

Debian Live: I have a bunch of loose ideas that I’d like to formalize before then. At the very least I’d like to file a bunch of paper cut bugs for the live images that I just haven’t been getting to. The live team may also need some revitalization, and better co-ordination with packagers of the various desktop environments in terms of testing and release sign-offs. There’s a lot to figure out, and this is great to do in person (it might lead to a DebConf BoF as well).

Debian Live: Current live weekly images have Calamares installed, although it’s just a test and there’s no indication yet whether it will be available on the beta or final release images. We’ll have to do a good assessment of all the consequences and weigh up what will work out best. I want to put together an initial report with the live team members who are around.

AIMS Desktop: Get core AIMS meta-packages into Debian… no blockers on this, but I just haven’t had enough quiet time to do it. (And thanks to AIMS for covering my travel to Hsinchu!)

Get some help on ITPs that have been a little trickier than expected: gamemode – adjust power saving and CPU governor settings when launching games; notepadqq – a Linux clone of Notepad++, a popular text editor on Windows. Possibly finish up zram-tools, which I just haven’t had the time for. It aims to be a set of utilities to manage compressed RAM disks that can be used for temporary space, compressed in-memory swap, etc.

Debian Package of the Day series: If there’s time and interest, make some in-person videos with maintainers about their packages.
Get to know more Debian people, relax and socialize!
Posted 3 days ago
This is my fifth post in my Google Summer of Code 2018 series. Links to the previous posts can be found below:

Post 1: My Google Summer of Code 2018 project
Post 2: Setting up a local OBS development environment
Post 3: Running OBS Workers and OBS staging instance
Post 4: Notes on the OBS Documentation

My GSoC contributions can be seen at the following links:

https://github.com/athos-ribeiro/salt-obs
https://github.com/openSUSE/obs-docu/commits?author=athos-ribeiro

Debian builds on OBS

OBS supports building Debian packages. To do so, one must properly configure a project so that OBS knows it is building a .deb package, and have the packages needed to handle and build Debian packages installed. openSUSE’s OBS instance has repositories for Debian 8, Debian 9, and Debian testing. We will use base Debian projects in our OBS instance as Download on Demand projects and use subprojects to achieve our final goal (building packages against Clang).

By using the same configurations as the openSUSE public projects, we could perform builds for Debian 8 and Debian 9 in our local OBS deployments. However, builds for Debian Testing and Unstable were failing. On further investigation, we realized that the OBS version packaged in Debian cannot decompress control.tar.xz files in .deb packages, which has been the default compression format for the control tarball since dpkg 1.19 (it used to be control.tar.gz before that). This issue was reported on the OBS repositories and was fixed in a Pull Request that is not yet included in the current Debian OBS version. For now, we apply this patch to our OBS instance through our salt states.

After applying the patch, builds on Debian 8 and 9 still finish with success, but builds against Debian Testing and Unstable get stuck in a blocked state: dependencies are downloaded, the OBS scheduler stalls for a while, the downloaded packages get cleaned up, and then the dependencies are downloaded again.
The OBS backend enters a loop repeating this procedure and never assigns the build to a worker. No logs hint at a possible cause, giving us no clue about the problem. Although I am inclined to believe we have a problem with our dependency list, I am still debugging this issue this week and will bring more news in my next post.

Refactoring project configuration files

Reshabh opened a Pull Request in our salt repository with the OBS configuration files for Ubuntu, also based on openSUSE’s public OBS configurations. Based on Sylvestre’s comments, I have been refactoring the Debian configuration files based on the OBS documentation. One of the proposed improvements is to use debootstrap to set up the builder chroot, which will allow us to reduce the number of dependencies listed in the projects’ configuration files. The issue which led to debootstrap support in OBS is available at https://github.com/openSUSE/obs-build/issues/111 and may lead to more interesting resources on the matter.

Next steps (a TODO list to keep on the radar)

Fix OBS builds on Debian Testing and Unstable
Write patches for the OBS worker issue described in post 3
Change the default builder to perform builds with clang
Trigger new builds by using the dak/mailing list messages
Verify the rake-tasks.sh script’s idempotency and propose a patch to the opencollab repository
Separate salt recipes for workers and server (locally)
Properly set hostnames (locally)
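The control.tar.xz problem described above is easy to check by hand: a .deb is an ar(1) archive, and since dpkg 1.19 its control member is control.tar.xz rather than control.tar.gz. The following minimal parser (a sketch, not dpkg's or OBS's actual code) lists the member names of such an archive, which is enough to see which compression the control tarball uses:

```python
# Sketch: list the members of a .deb (an ar(1) archive). A .deb whose
# members include control.tar.xz trips up an OBS that only knows how
# to decompress control.tar.gz.

def ar_members(data):
    """Yield the member names of an ar archive given as bytes."""
    assert data[:8] == b"!<arch>\n", "not an ar archive"
    offset = 8
    while offset + 60 <= len(data):
        header = data[offset:offset + 60]           # fixed 60-byte header
        name = header[:16].decode("ascii").rstrip(" /")
        size = int(header[48:58].decode("ascii").strip())
        yield name
        offset += 60 + size + (size % 2)            # members are 2-byte aligned
```

Feeding it a real .deb would yield something like debian-binary, control.tar.xz, data.tar.xz for a package built with a recent dpkg.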
Posted 3 days ago
Several months ago, I gave the closing keynote address at LibrePlanet 2018. The talk was about the thing that scares me most about the future of free culture, free software, and peer production. A video of the talk is online on YouTube and available as a WebM video file (both links should skip the first 3m 19s of thanks and introductions).

Here’s a summary of the talk: App stores and the so-called “sharing economy” are two examples of business models that rely on techniques for the mass aggregation of distributed participation over the Internet and that simply didn’t exist a decade ago. In my talk, I argue that the firms pioneering these new models have learned and adapted processes from commons-based peer production projects like free software, Wikipedia, and CouchSurfing.

The result is an important shift: a decade ago, the kind of mass collaboration that made Wikipedia, GNU/Linux, or CouchSurfing possible was the exclusive domain of people producing freely and openly in commons. Not only is this no longer true; new proprietary, firm-controlled, and money-based models are increasingly replacing, displacing, outcompeting, and potentially reducing what’s available in the commons. For example, the number of people joining CouchSurfing to host others seems to have been in decline since Airbnb began its own meteoric growth.

In the talk, I discuss how this happened and what I think it means for folks who are committed to working in commons. I also talk a little bit about what the free culture and free software communities should do now that mass collaboration, these communities’ most powerful weapon, is being used against them.

I’m very much interested in feedback, provided any way you want to reach me: in person, over email, in comments on my blog, on Mastodon, on Twitter, etc.

Work on the research that is reflected and described in this talk was supported by the National Science Foundation (awards IIS-1617129 and IIS-1617468).
Some of the initial ideas behind this talk were developed while working on this paper (official link), which was led by Maximilian Klein and contributed to by Jinhao Zhao, Jiajun Ni, Isaac Johnson, and Haiyi Zhu.
Posted 3 days ago
Here’s what I’m planning to work on – please get in touch if you want to get involved with any of these items.

DebCamp work

Debian Policy rolling sprint
Adding pbuilder support to dgit
Any general package maintenance issues that may have arisen

Throughout DebCamp and DebConf

Debian Policy: sticky bugs; process; participation; translations
Helping people use dgit and git-debrebase
Writing up or following up on feature requests and bugs
Design work with Ian and others
Posted 3 days ago
Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In May, about 202 work hours were dispatched among 12 paid contributors. Their reports are available:

Abhijith PA did 6 hours (out of 10 hours allocated + 5 extra hours; he gave back the 9 remaining hours).
Antoine Beaupré did nothing (out of 12 hours allocated, thus keeping 12 extra hours for June).
Ben Hutchings did 15 hours.
Brian May did 10 hours.
Chris Lamb did 18 hours.
Emilio Pozuelo Monfort did 33.75 hours (out of 24 hours allocated + 9.75 remaining hours).
Holger Levsen did 6.5 hours (out of 32.75 remaining hours; the unused hours have been put back in the pool).
Hugo Lefeuvre did 24.25 hours.
Markus Koschany did 24.25 hours.
Ola Lundqvist did 9 hours (out of 14 hours allocated + 12.5 remaining hours, thus keeping 17.5 extra hours for June).
Roberto C. Sanchez did 6.5 hours (out of 18 hours allocated, thus keeping 11.5 extra hours for June).
Santiago Ruano Rincón did 1 hour (out of 8 hours allocated, thus keeping 7 extra hours for June).
Thorsten Alteholz did 24.25 hours.

Evolution of the situation

The number of sponsored hours increased to 190 hours per month thanks to a few new sponsors who joined to benefit from Wheezy’s Extended LTS support. We are currently in a transition phase: Wheezy is no longer supported by the LTS team, and the LTS team will soon take over security support of Debian 8 Jessie from Debian’s regular security team.

Thanks to our sponsors

New sponsors are in bold.
Platinum sponsors: TOSHIBA (for 32 months), GitHub (for 23 months).

Gold sponsors: The Positive Internet (for 48 months), Blablacar (for 47 months), Linode (for 37 months), Babiel GmbH (for 26 months), Plat’Home (for 26 months).

Silver sponsors: Domeneshop AS (for 48 months), Université Lille 3 (for 47 months), Trollweb Solutions (for 45 months), Nantes Métropole (for 42 months), Dalenys (for 38 months), Univention GmbH (for 33 months), Université Jean Monnet de St Etienne (for 33 months), Ribbon Communications, Inc. (for 27 months), maxcluster GmbH (for 21 months), Exonet B.V. (for 17 months), Leibniz Rechenzentrum (for 11 months), Vente-privee.com (for 8 months), CINECA.

Bronze sponsors: David Ayers – IntarS Austria (for 48 months), Evolix (for 48 months), Seznam.cz, a.s. (for 48 months), Freeside Internet Service (for 47 months), MyTux (for 47 months), Intevation GmbH (for 45 months), Linuxhotel GmbH (for 45 months), Daevel SARL (for 44 months), Bitfolk LTD (for 42 months), Megaspace Internet Services GmbH (for 42 months), NUMLOG (for 42 months), Greenbone Networks GmbH (for 41 months), WinGo AG (for 41 months), Ecole Centrale de Nantes – LHEEA (for 37 months), Sig-I/O (for 35 months), Entr’ouvert (for 32 months), Adfinis SyGroup AG (for 30 months), GNI MEDIA (for 24 months), Laboratoire LEGI – UMR 5519 / CNRS (for 24 months), Quarantainenet BV (for 24 months), RHX Srl (for 21 months), Bearstech (for 16 months), LiHAS (for 16 months), People Doc (for 12 months), Catalyst IT Ltd (for 10 months), Supagro (for 6 months), Demarcq SAS (for 4 months), TrapX Security.
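The per-contributor numbers in the individual reports above all follow the same carry-over rule, which is easy to sanity-check (this is just arithmetic restating the report, not an official LTS tool):

```python
# Bookkeeping rule used in the LTS reports:
# hours kept for next month = allocated + carried over - worked.

def hours_kept(allocated, carried_over, worked):
    return allocated + carried_over - worked

# Ola Lundqvist: 9 of (14 allocated + 12.5 remaining) -> 17.5 extra for June.
print(hours_kept(14, 12.5, 9))   # 17.5
# Roberto C. Sanchez: 6.5 of 18 allocated -> 11.5 extra for June.
print(hours_kept(18, 0, 6.5))    # 11.5
```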
Posted 3 days ago
I got spammed again by SciencePG (“Science Publishing Group”), one of many (usually Chinese or Indian) fake publishers that will publish anything as long as you pay their fees. Unfortunately, once you have published a few papers, you inevitably land on their spam list: they scrape the websites of good journals for email addresses, and you do want your contact email address on your papers. This one, however, is particularly hilarious: they have a spelling error right at the top of their home page! Fail.

Speaking of fake publishers, here is another fun example: Kim Kardashian, Satoshi Nakamoto, Tomas Pluskal Wanion: Refinement of RPCs. Drug Des Int Prop Int J 1(3) - 2018. DDIPIJ.MS.ID.000112. Yes, that is a paper in the “Drug Designing & Intellectual Properties” International (Fake) Journal. And the content is a typical SCIgen-generated paper that throws around random computer buzzwords and makes absolutely no sense. Not even the abstract. The references are also just made up. And so are the first two authors, VIP Kim Kardashian and missing Bitcoin inventor Satoshi Nakamoto… In the PDF version, the first headline is “Introductiom”, with an “m”… So Lupine Publishers is another predatory publisher that does not peer review, nor check whether the article is on topic for the journal. Via Retraction Watch.

Conclusion: just because it was published somewhere does not mean it is real, or correct, or peer reviewed…
Posted 3 days ago
Here’s what happened in the Reproducible Builds effort between Sunday June 10 and Saturday June 16 2018:

Tom Yates published a post titled Toward a fully reproducible Debian (NB. non-subscriber “guest” link) on LWN, based on Chris Lamb’s keynote presentation at FLOSSUK 2018 in Edinburgh, Scotland earlier this year. On Wednesday 13th June, Chris Lamb presented at foss-backstage.de in Berlin, Germany on reproducible builds and how they prevent developers from being targets for malicious attacks (links). Elio Qoshi of Ura Design wrote about the new reproducible builds style guide on their blog (preview). Chris Lamb made a number of changes to the reproducible-builds.org website, including importing presentations from the Debian wiki, adding a missing SEAGL talk and updating the contribution page to link to our Debian Installer tracking issue. Paul Wise filed Debian bug #901300 (bls: warn about strip-nondeterminism output in build logs) requesting that the scanner detect when strip-nondeterminism locates some non-determinism and warn about it in the build logs. Chris Lamb filed wishlist bug #901473 to request that the Reproducible Builds testing framework varies on a merged /usr when comparing packages.

This week, 15 package reviews were added, 16 were updated and 19 were removed, adding to our knowledge about identified issues.

strip-nondeterminism version 0.042-1 was uploaded to Debian unstable by Chris Lamb. It included contributions already covered in previous weeks as well as new ones, such as respecting the nocheck build profile in DEB_BUILD_OPTIONS.

diffoscope development

diffoscope is our in-depth “diff-on-steroids” utility which helps us diagnose reproducibility issues in packages. This week, version 96 was uploaded to Debian unstable by Chris Lamb. It includes contributions already covered by posts in previous weeks as well as new ones from: Chris Lamb: Drop dependency on pdftk as it relies on GCJ, relying on the pdftotext fallback.
(Closes: #893702) Xavier Briand: Add merge request info to contributing documentation.

tests.reproducible-builds.org development

There were a number of changes to our Jenkins-based testing framework that powers tests.reproducible-builds.org, including:

Chris Lamb: Correct “diffscope” typo on the HTML breakage page. Eli Schwartz: Update Arch Linux’s reproducibility status. Move old artifact cleanup to before the results HTML is generated. Holger Levsen: Update copyright year. Update a URL now that the security-tracker has moved to Salsa. Jelle van der Waa: Mention the Arch Linux Reproducible Builds IRC channel. Mattia Rizzolo: Stop the worker and don’t try to build anything if either node of a pair is offline. Don’t start the worker if a node is marked as offline in the “black file”. Bring up nodes on demand. Configure Apache to serve the Reproducible Builds style guide and add a job to build it from Git. A huge number of changes splitting reproducible_common.py into a separate Python module, including making a slew of attribute evaluations lazy, moving the UDD and bug-gathering logic into a separate module, removing the NotedPkg class and attaching the notes to Build instead, and moving various helper functions.

Packages reviewed and fixed, and bugs filed

Bernhard M. Wiedemann: curl (merged, FTBFS-2025) cardpeek (.tar.gz, orphaned package) enigmail (sort readdir(2)) ncftp (uname) nant (date) Chris Lamb: #901307 filed against sphinx-gallery (forwarded upstream). #901428 filed against pyraf. #901481 filed against cpl-plugin-uves. #901587 filed against allegro4.4. #901611 filed against enigmail. #901615 filed against log4cxx.

Misc.

This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.
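Several of the fixes listed above (e.g. enigmail’s “sort readdir(2)”) address one classic source of non-reproducibility: the order in which a directory is listed is filesystem-dependent, so any build step that iterates over a directory must sort the entries. A minimal illustration:

```python
# Directory entries come back in filesystem-dependent order; any build
# step that iterates over a directory must sort to be reproducible.
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    for name in ("b.c", "a.c", "c.c"):
        open(os.path.join(d, name), "w").close()
    arbitrary = os.listdir(d)          # order is not guaranteed
    deterministic = sorted(arbitrary)  # always the same
    print(deterministic)  # ['a.c', 'b.c', 'c.c']
```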
Posted 4 days ago
In the past month I have been working on my GSoC project in Debian’s Distro Tracker. This project aims at designing and implementing new features in Distro Tracker to better support Debian teams in tracking the health of their packages and prioritizing their work efforts. In this post, I will describe the current status of my contributions, highlight the main challenges, and outline the next steps.

Work Management and Communication

I communicate with Lucas Kanashiro (my mentor) constantly via IRC and in person at least once a week, as we live in the same city. We have a weekly meeting with Raphael Hertzog in the #debian-qa IRC channel to report advances, collect feedback, resolve technical doubts, and plan the next steps. I created a new repository in Salsa to keep the logs of our IRC meetings and to track my tasks through the repository’s issue tracker. Besides that, once a month I’ll post a new status update on my blog, such as this one, with more details regarding my contributions.

Advances

When GSoC officially started, Distro Tracker already had some team-related features. Briefly, a team is an entity composed of one or more users that are interested in the same set of packages. Teams are created manually by users, and anyone may join public teams. The team page aggregates some basic information about the team and the list of packages of interest. Distro Tracker offers a page that enables users to browse public teams, showing a paginated, sorted list of names. It used to be hard to find a team based on this list, since Distro Tracker has more than 110 teams distributed over 6 pages. For this reason, I created a new search field with auto-complete at the top of the teams page to enable users to find a team’s page faster, as shown in the following figure:

Also, I have been working on improving the current teams infrastructure to enable Debian’s teams to better track the health of their packages.
Initially, we decided to use the data currently available in Distro Tracker to create the first version of a new team’s page based on PET. Presenting a team’s package data in a table on the team’s page would be a relatively trivial task. However, the Distro Tracker architecture aims to provide a generic core which can be extended through distro-specific applications, such as Kali Linux’s. The core source code provides generic infrastructure to import data related to deb packages and to present it in HTML pages. Therefore, we had to take this requirement into account to provide an extensible infrastructure for showing package data in tables, so that it is easy to add new table fields and to change the default behavior of the existing columns provided by the core source code.

So, based on the previously existing panels feature and on Hertzog’s suggestions, I designed and developed a framework to create customizable package tables for teams. This framework is composed of two main classes:

BaseTableField - A base class representing fields to be displayed on package tables. Among other things, it must define the column name and a template to render the cell content for a package.

BasePackageTable - A base class representing package tables which are displayed on a team page. It may have several BaseTableFields to display package information. Different tables may show different lists of packages based on their scope.

We have been discussing my implementation in an open Merge Request, and we are very close to the version that should be incorporated. The following figures show the comparison between the earlier PET table and our current implementation.

PET Packages Table

Distro Tracker Packages Table

Currently, the team’s page has only one table, which displays all packages related to that team. We are already presenting a very similar set of data to PET’s table.
More specifically, the following columns are shown:

Package - displays the package name in the cell. It is implemented by the core’s GeneralInformationTableField class.

VCS - by default, displays the type of the package’s repository (i.e. GIT, SVN) or Unknown. It is implemented by the core’s VcsTableField class. However, the Debian app extends this behavior by adding the changelog version of the latest repository tag and displaying issues identified by Debian’s VCSWatch.

Archive - displays the package version in the distro archive. It is implemented by the core’s ArchiveTableField class.

Bugs - displays the total number of bugs of a package. It is implemented by the core’s BugsTableField class. Ideally, each third-party app should extend this field to add links to its bug tracker system.

Upstream - displays the latest upstream version available. This is a specific table field implemented by the Debian app, since this data is imported through Debian-specific tasks; it is therefore not available for other distros.

As the table’s cells are too small to present detailed information, we have added Popper.js, a JavaScript library to display popovers. Some columns therefore show a popover with more details regarding their content, displayed on mouse hover. The following figure shows the popover for the Package column:

In addition to designing the table framework, the main challenge was to avoid the N+1 problem, which introduces performance issues: for a set of N packages displayed in a table, each field element performs 1 or more lookups for additional data for a given package. To solve this problem, each subclass of BaseTableField must define a set of Django’s Prefetch objects to enable BasePackageTable objects to load all the required data in batch, in advance, through prefetch_related, as listed below.
    class BasePackageTable(metaclass=PluginRegistry):

        @property
        def packages_with_prefetch_related(self):
            """
            Returns the list of packages with prefetched relationships
            defined by table fields.
            """
            package_query_set = self.packages
            for field in self.table_fields:
                for lookup in field.prefetch_related_lookups:
                    package_query_set = package_query_set.prefetch_related(lookup)
            additional_data, implemented = vendor.call(
                'additional_prefetch_related_lookups'
            )
            if implemented and additional_data:
                for lookup in additional_data:
                    package_query_set = package_query_set.prefetch_related(lookup)
            return package_query_set

        @property
        def packages(self):
            """
            Returns the list of packages shown in the table. One may define
            this based on the scope.
            """
            return PackageName.objects.all().order_by('name')


    class ArchiveTableField(BaseTableField):
        prefetch_related_lookups = [
            Prefetch(
                'data',
                queryset=PackageData.objects.filter(key='general'),
                to_attr='general_archive_data'
            ),
            Prefetch(
                'data',
                queryset=PackageData.objects.filter(key='versions'),
                to_attr='versions'
            )
        ]

        @cached_property
        def context(self):
            try:
                info = self.package.general_archive_data[0]
            except IndexError:
                # There is no general info for the package
                return
            general = info.value
            try:
                info = self.package.versions[0].value
                general['default_pool_url'] = info['default_pool_url']
            except IndexError:
                # There is no versions info for the package
                general['default_pool_url'] = '#'
            return general

Finally, it is worth noting that we also improved the team’s management page by moving all team management features to a single page and improving its visual structure:

Next Steps

Now, we are moving towards adding other tables with different scopes, such as the tables presented by PET. To this end, we will introduce a Tag model class to categorize packages based on their characteristics, and we will create an additional task responsible for tagging packages based on their available data. The relationship between packages and tags should be ManyToMany.
In the end, we want to perform a simple query to define the scope of a new table, such as the following example to query all packages with Release Critical (RC) bugs:

    class RCPackageTable(BasePackageTable):
        @property
        def packages(self):
            tag = Tag.objects.get(name='rc-bugs')
            return tag.packages.all()

We will probably also need to work on Debian’s VCSWatch to enable it to receive updates through Salsa’s webhooks, especially for real-time monitoring of repositories. Let’s get moving on! \m/
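The Tag-based scoping sketched above can be prototyped without Django. The names below are illustrative only; the real implementation will use Django models with a ManyToMany field, but the core idea is just a lookup from a tag to its set of packages:

```python
# Plain-Python prototype of the proposed Tag <-> Package relationship:
# a table's scope is just "all packages carrying a given tag".
tag_index = {
    "rc-bugs": {"dpkg", "glibc"},
    "no-vcs": {"hello"},
}

def packages_for(tag_name):
    """Return the (sorted) packages in the scope of a tag-based table."""
    return sorted(tag_index.get(tag_name, set()))

print(packages_for("rc-bugs"))  # ['dpkg', 'glibc']
```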