Fedora People

Renewing the Modularity objective

Posted by Fedora Community Blog on September 18, 2019 07:18 PM

Now that Modularity is available for all Fedora variants, it’s time to address issues discovered and improve the experience for packagers and users. The Modularity team identified a number of projects that will improve the usefulness of Modularity and the experience of creating modules for packagers. We are proposing a renewed objective to the Fedora Council.

You can read the updated objective in pull request #61. Please provide feedback there or on the devel mailing list. The Council will vote on this in two weeks.

The post Renewing the Modularity objective appeared first on Fedora Community Blog.

Epiphany Technology Preview Users: Action Required

Posted by Michael Catanzaro on September 18, 2019 02:19 PM

Epiphany Technology Preview has moved from https://sdk.gnome.org to https://nightly.gnome.org. The old Epiphany Technology Preview is now end-of-life. Action is required to update. If you installed Epiphany Technology Preview prior to a couple of minutes ago, uninstall it using GNOME Software and then reinstall using this new flatpakref.

Apologies for this disruption.

The main benefit to end users is that you’ll no longer need separate remotes for nightly runtimes and nightly applications, because everything is now hosted in one repo. See Abderrahim’s announcement for full details on why this transition is occurring.

[F31] Take part in the test day dedicated to GNOME 3.34

Posted by Charles-Antoine Couret on September 18, 2019 06:00 AM

Today, Wednesday, September 18, is a day dedicated to a specific test: the GNOME desktop environment. During the development cycle, the quality assurance team dedicates a few days to certain components or new features in order to surface as many issues as possible on the topic.

It also provides a list of specific tests to run. Simply follow them, compare your result with the expected one, and report it.

What does this test involve?

We are just past the release of the Fedora 31 Beta. The GNOME desktop environment has been Fedora's default since the very beginning.

The goal is to make sure that the whole environment and its applications work properly.

Today's tests cover:

  • Detection of the Fedora upgrade by GNOME Software;
  • Proper operation of the web browser;
  • Logging in, logging out, and switching users;
  • Sound, in particular detecting when earphones or headsets are plugged in or unplugged;
  • Launching graphical applications from the menu;
  • And much more.

As you can see, these tests are fairly simple and can even be carried out effortlessly by simply using GNOME as usual. So don't hesitate to take a few minutes to check these behaviors and report what does or does not work as expected.

How can I take part?

You can go to the test day page to list the available tests and report your results. The wiki page sums up how the day is organized.

If you need help while running the tests, feel free to drop by IRC for a hand on the #fedora-test-days and #fedora-fr channels (in English and French respectively) on the Freenode server.

If you hit a bug, it needs to be reported on Bugzilla. If you don't know how, have a look at the corresponding documentation.

Also, even though a specific day is dedicated to these tests, you can still run them a few days later without any problem! The results will by and large remain relevant.

Announcing Kanidm - A new IDM project

Posted by William Brown on September 17, 2019 10:00 PM

Announcing Kanidm - A new IDM project

Today I’m starting to talk about my new project - Kanidm. Kanidm is an IDM project designed to be correct, simple and scalable. As an IDM project we should be able to store the identities and groups of people, authenticate them securely to various other infrastructure components and services, and much more.

You can find the source for kanidm on GitHub.

For more details about what the project is planning to achieve, and what we have already implemented, please see the GitHub page.

What about 389 Directory Server

I’m still part of the project, and working hard on making it the best LDAP server possible. Kanidm and 389-ds have different goals. 389 Directory Server is a globally scalable, distributed database that can store huge amounts of data and process thousands of operations per second. 389-ds lets you build a system on top, in any way you want. If you want an authentication system today, use 389-ds. We are even working on a self-service web portal (one of our most requested features!). Besides myself, no one on the (amazing) 389 DS team has any association with kanidm (yet?).

Kanidm is an opinionated IDM system, and has strong ideas about how authentication and users should be processed. We aim to be scalable, but that’s a long road ahead. We also want to have more web integrations, client tools and more. We’ll eventually write a kanidm to 389-ds sync tool.

Why not integrate something with 389? Why something new?

There are a lot of limitations with LDAP when it comes to modern web-focused auth processes such as WebAuthn. Because of this, I wanted to make something that didn’t have the same limitations, and that had different ideas about data storage and APIs. That’s why I wanted to make something new in parallel. It was a really hard decision to build something outside of 389 Directory Server (because I really do love the project, and take great pride in the team), but I felt it would be more productive to build in parallel than on top.

When will it be ready?

I think that a single-server deployment will be usable for small installations early 2020, and a fully fledged system with replication would be late 2020. It depends on how much time I have and what parts I implement in what order. Current rough work order (late 2019) is indexing, RADIUS integration, claims, and then self-service/web ui.

Towards a UX Strategy for GNOME (Part 3)

Posted by Allan Day on September 17, 2019 02:41 PM

This post is part of a series on UX strategy. In my previous two posts, I described what I hope are the beginnings of a UX strategy for GNOME. In the first post, I described some background research and analysis. In the second post, I introduced what I think ought to be the high-level goals and principles for the UX strategy.

Now it’s time for the fun bit! For this instalment, I’m going to go over recent work that the GNOME design team has been doing. I’m doing this for two reasons. First: I want to show off some of the great work that the design team has been doing! Second, I want to show how this design work fits into the strategic approach that I’ve previously described. A key element of that plan was to prioritise areas which will have the biggest impact, and I’m going to be using the prioritisation word a lot in what follows.

This post is intended as an overview and, as such, I’m not going to go into the designs in great detail. However, there are detailed designs behind everything I’m presenting, and there’s a list of links at the end of the post, for those who want to learn more.

Core System

In my previous post, I argued that prioritisation ought to be a key part of our UX strategy. This is intended to help us drive up quality, as well as deliver impactful improvements. One way that we can prioritise is by focusing on those parts of GNOME that people use all the time.

The obvious place to start with this is the core elements of the GNOME system: those parts of the software that make up the most basic and common interactions, like login, app launching, window switching, notifications, and so on. I believe that improving the level of polish of these basic features would go a long way to elevating the standing of the entire platform.

Unlock and Login

Login screen mockup

The design team has longstanding ambitions to update GNOME’s unlock and login experience. The designs have continued to evolve since I last blogged about them, and we continue to return to and improve them.

System unlock is a classic example of a touchstone experience. People unlock their computers all the time and, as the entry point to the system, it comes to define the overall experience. It’s the face of the system. It is therefore critical that unlock and login leave a good impression.

The new designs are intended to reduce the amount of friction that people experience when they unlock. They require users to take fewer steps and involve going through fewer transitions, so that people can get to their session faster and more seamlessly. This will in turn make the experience feel more comfortable.

The designs are also intended to be beautiful! As an emblematic part of the GNOME UX, we want unlock and login to look and feel fantastic.


Notifications popover mockup

The design team has been systematically reviewing almost all parts of the core GNOME system, with a view to polish and refine them. Some of this work has already landed in GNOME 3.34, where you will see a collection of visual style improvements.

One area where we want to develop this work is the calendar and notifications list. The improvements here are mostly visual – nicer layout, better typography, and so on – but there are functional improvements too. Our latest designs include a switch for do not disturb mode, for example.

There are other functional improvements that we’d like to see in subsequent iterations to the notification list, such as grouping notifications by application, and allowing notification actions to be accessed from the list.

Notifications are another great example where we can deliver clear value for our users: they’re something that users encounter all the time, and which are almost always relevant, irrespective of the apps that someone uses.

System Polish

System menu and dialog mockup

Our core system polish and refinement drive knows no bounds! We have designs for an updated system menu, which are primarily intended to resolve some long-standing discoverability issues. We’ve also systematically gone through all of the system dialogs, in order to ensure that each one is consistent and beautiful (something that is sadly lacking at the moment).

These aren’t the only parts of the core system that the design team is interested in improving. One key area that we are planning on working on in the near future is application launching. We’ve already done some experimental work in this area, and are planning on building on the drag and drop work that Georges Stavracas landed for GNOME 3.34.


The principle of prioritisation can also be applied to GNOME’s applications. The design team already spends a lot of time on the most essential applications, like Settings, Software and Files. Following the principle of prioritisation, we’ve also been taking a fresh look at some of the really basic apps that people use every day.

Two key examples of this are the document and image viewers. These are essential utilities that everyone uses. Such basic features ought to look and feel great and be firmly part of the GNOME experience. If we can’t get them right, then people won’t get a great impression.

Today our document and image viewers do their jobs reasonably well, but they lack refinement in some areas and they don’t always feel like they belong to the rest of the system. They also lack a few critical features.

Document viewer mockup

Image viewer mockup

This is why the design team has created updated designs for both the document and image viewers. These use the same design patterns, so they will feel like they belong together (as well as to the rest of the system). They also include some additional important features, like basic image editing (from talking to GNOME users, we know that this is a sorely missed feature).

It would be great to extend this work to look at some of the other basic, frequently-used apps, like the Text Editor and Videos.

There’s a lot of other great application design work that I could share here, but am not going to, because I do think that focusing on these core apps first makes the most strategic sense.

Development Platform

Another way that we can prioritise is by working on the app development platform. Improvements in this area make it easier for developers to create apps. They also have the potential to make every GNOME app look and behave better, and can therefore be an extremely effective way to improve the GNOME UX.

Again, this is an area where the design team has already been doing a lot of work, particularly around our icon system. This is part of the application platform, and a lot of work has recently gone into making it easier than ever to create new icons as well as consume the ones that GNOME provides out of the box. If you’re interested in this topic, I’d recommend Jakub’s GUADEC talk on the subject.

Mockups for menus, dropdown lists, reorderable lists, and in-app notifications

Aside from the icon system, we have also been working to ensure that all the key design patterns are fully supported by the application development platform. The story here is patchy: not all of the design patterns have corresponding widgets in GTK and, in some cases it can be a lot of work to implement standard GNOME application designs. The result can also lack the quality that we’d like to see.

This is why the design team has been reviewing each of our design patterns, with a view to ensuring that each one is both great quality, and is fully supported. We want each pattern to look great, function really well, and be easy for application developers to use. So far, we have new designs for menus, dropdown lists, listboxes and in-app notifications, and there’s more to come. This initiative is ongoing, and we need help from platform and toolkit developers to drive it to completion.

What Next?

UX is more than UI: it is everything that makes up the user’s experience. As such, what I’ve presented here only represents a fraction of what would need to be included in a comprehensive UX strategy. That said, I do think that the work I’ve described above is of critical importance. It represents a programme to drive up the quality of the experience we provide, in a way that I believe would really resonate with users, because it focuses on features that people use every day, and aims to deliver tangible improvements.

As an open, upstream project, GNOME doesn’t have direct control over who works on what. However, it is able to informally influence where resources go, whether it’s by advertising priorities, encouraging contributions in particular areas, or tracking progress towards goals. If we are serious about wanting to compete in the marketplace, then doing this for the kind of UX programme that I’ve described seems like it could be an important step forward.

If there’s one thing I’d like to see come out of this series, it would be a serious conversation about how GNOME can be more strategic in its outlook and organisation.

This post marks the end of the “what” part of the series. In the next and final part, I’ll be moving onto the “how”: rather than talking about what we should be working on and what our priorities should be, I’ll set out how we ought to be working. This “how” part of the equation is critical: you can have the best strategy in the world, but still fail due to poor processes. So, in the final instalment, we’ll be discussing development process and methodology!

Further Reading

More information about the designs mentioned in this post:

Fedora 31 Beta is out

Posted by Charles-Antoine Couret on September 17, 2019 02:22 PM

On this Tuesday, September 17, users of the Fedora Project will be delighted to learn that the Fedora 31 Beta is available.

Despite the stability risks inherent in a Beta release, it is important to test it! By reporting bugs now, you will discover the new features before everyone else, while improving the quality of Fedora 31 and reducing the risk of delays at the same time. Development releases lack the testers and feedback needed to achieve their goals.

The final release is currently scheduled for October 22 or 29. Here are the new features announced for this version:

User experience

  • Upgrade to GNOME 3.34.
  • The wheel turns for Xfce with version 4.14.
  • DeepinDE updated to 15.11.
  • Firefox uses Wayland natively by default, provided of course that the desktop session supports it.
  • Likewise, Qt applications will use Wayland in a GNOME session running on Wayland.
  • RPM packages use the zstd compression format instead of xz. Decompression is much faster, by a factor of three or four for the Firefox package for instance, but building a package takes slightly longer.

Hardware support

  • The i686 Linux kernel is no longer built and the associated repositories have been removed. As a result there will be no more Fedora images for this architecture, nor an upgrade path from Fedora 30 for those users. Some i686 packages may remain in the repositories, solely for users of the x86_64 architecture.
  • The Fedora Xfce spin gets an image for the AArch64 architecture.
  • On machines with the UEFI Secure Boot feature enabled, GRUB can now use its security modules natively.


  • The langpacks packages are split, with a langpacks-core part that provides only the default font and the corresponding locale. Users therefore have more flexibility here.
  • IBus updated to 1.5.21.
  • Google Noto Variable fonts now take priority over the non-variable fonts from the same vendor.

System administration

  • The /usr/bin/python binary now points to Python 3 rather than Python 2. Python 2 will no longer be supported upstream after January 2020, so the Fedora Project is following PEP 394 to begin the transition. If this causes problems, you can create the symbolic link ~/.local/bin/python for a single user, or /usr/local/bin/python for the whole system, to restore the usual behavior.
  • Consequently, Python 2 packages are being removed en masse, essentially keeping only the last projects not yet ported to Python 3.
  • The security policies feature, gradually introduced into Fedora over the past few years, now lets administrators customize the rules, such as which security protocols may or may not be used on the system.
  • The kernel exposes cgroups v2 instead of the v1 used until now.
  • OpenSSH rejects password authentication for the root account by default.
  • All user groups can natively ping over the network without a setuid binary. This is mainly aimed at container environments and Fedora Silverblue.
  • RPM reaches version 4.15.
  • DNF will raise an error by default when a repository is unreachable, instead of merely issuing a warning. This mainly targets third-party repositories that did not necessarily enable this option in their configuration.
  • YUM 3 bows out; only a symbolic link to DNF remains. Its API is no longer available either.
  • The packages related to 389-console are removed in favor of a new web interface.
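The per-user workaround for the /usr/bin/python change can be sketched as follows. This is a minimal illustration, assuming python2 is still installed at /usr/bin/python2 (the symlink can be created even before it is) and that ~/.local/bin is on $PATH, as it is by default on Fedora.

```python
# Hypothetical sketch: restore "python" -> Python 2 for a single user,
# as described in the bullet above.
import os

home_bin = os.path.expanduser("~/.local/bin")
os.makedirs(home_bin, exist_ok=True)

link = os.path.join(home_bin, "python")
if os.path.lexists(link):       # replace any existing link or file
    os.remove(link)

# The symlink target does not have to exist at creation time.
os.symlink("/usr/bin/python2", link)
print(os.readlink(link))
```

The system-wide variant is the same idea with /usr/local/bin/python as the link path.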


  • The glibc C library is updated to version 2.30.
  • Gawk moves to the 5.0 branch.
  • Node.js reaches its 12th node with version 12.
  • The Sphinx documentation generator moves to version 2 and drops support for Python 2.
  • The Python test modules move from the python3-libs package to the python3-test package.
  • The Go language races ahead to version 1.13.
  • The Perl language shines at version 5.30.
  • The Erlang language and OTP are updated to version 22.
  • Meanwhile, the Haskell compiler GHC and Stackage LTS move to versions 8.6 and 13 respectively.
  • The free .NET stack Mono gets version 5.20.
  • The MinGW environment and toolchain shift into version 6.
  • The Fedora Project offers an alternative linker configuration, making it easy to switch between GNU LD and LLVM's LLD and back without changing the development environment.
  • The GOLD linker from binutils, developed by Google but now maintained by GNU, gets its own binutils-gold package, making it easy to drop if its maintenance ever stops; the project is no longer actively developed.

Fedora Project

  • The Fedora Cloud edition will get a new image every month.
  • Continuing the effort to make Rawhide more stable and improve quality assurance, Bodhi is now enabled for Rawhide. This means a package must go through the same update process on Rawhide as on a stable release.
  • RPM sources can have their build-time dependencies generated dynamically. Indeed, more and more languages, such as Rust and Go, manage the dependencies needed to build a project themselves. For such projects, the packager no longer has to copy over dependencies the project has already declared.
  • New packaging guidelines for projects using Go have been laid down.
  • Fedora's build environment, the buildroot, uses a minimal gdb to be more efficient; it no longer includes XML or Python support.
  • Dependencies around the R language can now be resolved automatically.
  • The i686 glibc package needed for the Fedora buildroot gets an improved build, making it more maintainable and guaranteeing LGPL compliance.
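The dynamic build dependencies mentioned above can be pictured with a spec-file fragment. This is an illustrative sketch only, assuming the %generate_buildrequires section introduced with RPM 4.15 and a Rust macro along the lines of Fedora's cargo tooling; the package name is made up.

```
# Hypothetical spec fragment: let the build system declare its own
# BuildRequires instead of the packager copying them by hand.
Name:           rust-hello
Version:        0.1.0

%generate_buildrequires
# Emits one BuildRequires line per crate dependency read from Cargo.toml
%cargo_generate_buildrequires

%build
%cargo_build
```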


During the development of a new Fedora release, such as this Beta, the project holds test days almost every week. The goal is to spend a day testing a specific feature, such as the kernel, Fedora Silverblue, upgrades, GNOME, internationalization, and so on. The quality team designs and proposes a series of tests that are generally simple to run. Just follow them and report whether the result is the expected one. If not, a bug should be filed so a fix can be worked on.

It is very easy to follow and usually takes little time (15 minutes to an hour at most) if you have a usable Beta at hand.

The tests to run and the reports are handled through the following page. I regularly announce here when a test day is scheduled.

If the adventure interests you, the images are available via Torrent or from the official site.

If you already have Fedora 30 or 29 on your machine, you can upgrade to the Beta. This amounts to one big update; your applications and data are preserved.

In both cases, we recommend backing up your data beforehand.

If you hit a bug, don't forget to reread the documentation on reporting issues on Bugzilla, or contribute to the translation on Zanata.

Happy testing, everyone!

Announcing the release of Fedora 31 Beta

Posted by Fedora Magazine on September 17, 2019 01:47 PM

The Fedora Project is pleased to announce the immediate availability of Fedora 31 Beta, the next step towards our planned Fedora 31 release at the end of October.

Download the prerelease from our Get Fedora site:

Or, check out one of our popular variants, including KDE Plasma, Xfce, and other desktop environments, as well as images for ARM devices like the Raspberry Pi 2 and 3:

Beta Release Highlights

GNOME 3.34 (almost)

The newest release of the GNOME desktop environment is full of performance enhancements and improvements. The beta ships with a prerelease, and the full 3.34 release will be available as an update. For a full list of GNOME 3.34 highlights, see the release notes.

Fedora IoT Edition

Fedora Editions address specific use-cases the Fedora Council has identified as significant in growing our userbase and community. We have Workstation, Server, and CoreOS — and now we’re adding Fedora IoT. This will be available from the main “Get Fedora” site when the final release of F31 is ready, but for now, get it from iot.fedoraproject.org.

Read more about Fedora IoT in our Getting Started docs.

Fedora CoreOS

Fedora CoreOS remains in a preview state, with a generally-available release planned for early next year. CoreOS is a rolling release which rebases periodically to a new underlying Fedora OS version. Right now, that version is Fedora 30, but soon there will be a “next” stream which will track Fedora 31 until that’s ready to become the “stable” stream.

Other updates

Fedora 31 Beta includes updated versions of many popular packages like Node.js, the Go language, Python, and Perl. We also have the customary updates to underlying infrastructure software, like the GNU C Library and the RPM package manager. For a full list, see the Change set on the Fedora Wiki.

Farewell to bootable i686

We’re no longer producing full media or repositories for 32-bit Intel-architecture systems. We recognize that this means newer Fedora releases will no longer work on some older hardware, but the fact is there just hasn’t been enough contributor interest in maintaining i686, and we can provide greater benefit for the majority of our users by focusing on modern architectures. (The majority of Fedora systems have been 64-bit x86_64 since 2013, and at this point that’s the vast majority.)

Please note that we’re still making userspace packages for compatibility when running 32-bit software on a 64-bit system — we don’t see the need for that going away anytime soon.

Testing needed

Since this is a Beta release, we expect that you may encounter bugs or missing features. To report issues encountered during testing, contact the Fedora QA team via the mailing list or in #fedora-qa on Freenode. As testing progresses, common issues are tracked on the Common F31 Bugs page.

For tips on reporting a bug effectively, read how to file a bug.

What is the Beta Release?

A Beta release is code-complete and bears a very strong resemblance to the final release. If you take the time to download and try out the Beta, you can check and make sure the things that are important to you are working. Every bug you find and report doesn’t just help you, it improves the experience of millions of Fedora users worldwide! Together, we can make Fedora rock-solid. We have a culture of coordinating new features and pushing fixes upstream as much as we can. Your feedback improves not only Fedora, but Linux and free software as a whole.

More information

For more detailed information about what’s new on Fedora 31 Beta release, you can consult the Fedora 31 Change set. It contains more technical information about the new packages and improvements shipped with this release.

Changing code to use fedora-messaging instead of fedmsg

Posted by Karsten Hopp on September 17, 2019 01:40 PM

A good start to find out about the required code changes to get rid of fedmsg in favour of fedora-messaging is to check the changes that have already been made in other components — for example, waiverdb.

Apart from some changes to the tests, the required code change boils down to creating a message with the new format and then pushing it to the message bus with 'publish' from fedora_messaging.api instead of 'fedmsg.publish':

-                 fedmsg.publish(topic='waiver.new', msg=marshal(row, waiver_fields))
+                 msg = Message(
+                     topic='waiverdb.waiver.new',
+                     body=marshal(row, waiver_fields)
+                 )
+                 publish(msg)

Of course, the exception handling needs to be modified slightly too, but altogether this seems pretty straightforward.

-             except Exception:
-                 _log.exception('Couldn\'t publish message via fedmsg')
+             except PublishReturned as e:
+                 _log.exception('Fedora Messaging broker rejected message %s: %s', msg.id, e)
+                 monitor.messaging_tx_failed_counter.inc()
+             except ConnectionException as e:
+                 _log.exception('Error sending message %s: %s', msg.id, e)
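Put together, the publish path looks roughly like the sketch below. Since fedora_messaging may not be installed everywhere, the Message, publish, and exception classes here are minimal stand-ins for illustration; the real ones live in fedora_messaging.api and fedora_messaging.exceptions, and waiverdb-specific pieces such as its monitoring counter are omitted.

```python
# Stand-ins for the fedora_messaging API, so the pattern from the diff
# can be shown self-contained. Names mirror the real library.
class PublishReturned(Exception):
    pass

class ConnectionException(Exception):
    pass

class Message:
    def __init__(self, topic, body):
        self.topic = topic
        self.body = body
        self.id = "stub-id"   # the real library assigns a UUID

published = []

def publish(msg):
    # The real publish() sends msg to the AMQP broker; here we record it.
    published.append(msg)

def notify(row):
    # Mirrors the waiverdb change: build a Message, publish it, and
    # handle the two failure modes separately.
    msg = Message(topic='waiverdb.waiver.new', body=row)
    try:
        publish(msg)
    except PublishReturned as e:
        print('Fedora Messaging broker rejected message %s: %s' % (msg.id, e))
    except ConnectionException as e:
        print('Error sending message %s: %s' % (msg.id, e))
    return msg

notify({'username': 'alice', 'result_id': 123})
```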

Fedocal and Nuancier are looking for new maintainers

Posted by Fedora Community Blog on September 17, 2019 06:53 AM

Recently the Community Platform Engineering (CPE) team announced that we need to focus on key areas and thus let some of our applications go. So we started Friday with Infra to find maintainers for some of those applications. Unfortunately, the first few sessions did not raise as much interest as we had hoped. As a result, we are still looking for new maintainers for Fedocal and Nuancier.

What will be the responsibilities of the new maintainer?

The new maintainer will be completely responsible for the code base and the Communishift instance:

  • Managing application life cycle
  • Fixing bugs
  • Implementing new features
  • Managing OpenShift playbooks
  • Maintaining running pods in OpenShift
  • Deployment of new versions in OpenShift

In other words the application will belong completely to you.

What will you get as a reward?

Taking maintainership of an application is not without its rewards. When you choose to take one over, you will gain:

  • Learning useful and marketable programming skills (Python, PostgreSQL, Ansible)
  • Learning how to write, deploy, and manage applications in OpenShift!
  • Making significant contributions to the Fedora Community (and often others)
  • Good feeling for helping the Fedora Community and the open source world
  • Experience with managing open source applications
  • Large user base (Fedocal is used by almost every team in Fedora, Nuancier is used by plenty of Fedora users and contributors to vote for new wallpapers)
  • A warm glow of accomplishment 🙂

What role does CPE play in this?

This can look like plenty of work at first glance, but we are here to help you get started. The CPE team will provide guidance and, as part of Friday with Infra, we will help you get everything set up and fix the most urgent issues. More information can be found on the Friday with Infra wiki.

Sounds interesting, where can I sign up?

If you think you are the right person for this work, send an email to Fedora Infrastructure mailing list or ask in #fedora-apps channel on Freenode. See you soon!

The post Fedocal and Nuancier are looking for new maintainers appeared first on Fedora Community Blog.

Permanent Record: the life of Edward Snowden

Posted by Kushal Das on September 17, 2019 04:44 AM

book cover

The personal life and thinking of the ordinary person who did an extraordinary thing.

A fantastic personal narrative of his life and thinking process. The book does not get into technical details, but, it will make sure that people relate to the different events mentioned in the book. It tells the story of a person who is born into the system and grew up to become part of the system, and then learns to question the same system.

I bought the book at midnight on Kindle (I also ordered physical copies), slept for 3 hours in between, and finished it in the morning. Anyone born in the 80s will find many similarities with their own childhood: the Commodore 64 as the first computer we saw, or BASIC as the first-ever programming language to try. The lucky ones also got Internet access and learned to roam around on their own, building their adventures over busy telephone lines (which many times made the family members unhappy).

If you are someone from the technology community, I don't think you will find Ed's life much different from yours. It has a different scenario and different key players, but you will be able to match the progress of his life with that of many other tech workers like ourselves.

Maybe you are reading the book just to learn what happened, or maybe you want to know why. Either way, I hope this book will help you think about the decisions you make in your life and how they affect the rest of the world, whether it is a group picture posted on Facebook or the next new tool written for the intelligence community.

Go ahead and read the book, and when you are finished, make sure you pass it along to a friend, or buy them a new copy. If you have some free time, you may consider running a Tor relay or a bridge; a simple step that will help many around the world.

On a side note, the book mentions the SecureDrop project at the very end, and today also marks the release of SecureDrop 1.0.0 (the same day as the book release).

Fedora 30 : Interactive learning and reinventing the wheel in programming.

Posted by mythcat on September 16, 2019 07:49 PM
Today I returned from an activity that prompted me to find a solution to display logos.
I found this GitHub repo that I read and then turned it into a single script.
It took about an hour.

EPEL Bug: Bash errors on recent EL-8 systems.

Posted by Stephen Smoogen on September 16, 2019 05:47 PM
Last week, I was asked about a problem with using EPEL-8 on Oracle Enterprise Linux 8, where trying to install packages failed because of a bad GPG key path. I duplicated the problem on RHEL-8, which had not happened before some recent updates.

[smooge@localhost ~]$ repoquery
bash: repoquery: command not found...
Failed to search for file: Failed to download gpg key for repo 'epel': Curl error (37): Couldn't read a file:// file for file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-8.0 [Couldn't open file /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-8.0]
The problem seems to be that the EPEL release package uses the string $releasever as a short-cut in various places. Take for example:

name=Extra Packages for Enterprise Linux $releasever - Playground - $basearch

The problem is that when I wrote new versions of the EPEL-8 repo files, I replaced the old key phrase gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 with gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-$releasever. On EL-8 systems $releasever expands to "8.0" rather than "8", so the resulting path points at a key file that does not exist. When I tested things with the dnf command it worked fine, but I didn't check how things like bash completion would behave.

Moving back to the format that EPEL-6 and EPEL-7 used fixes the problem, so I will be pushing an updated release file out this week. My apologies to everyone who saw the errors.
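
For illustration only (the actual file shipped in the updated epel-release package may differ), a stanza in the EPEL-6/EPEL-7 style hardcodes the key path instead of deriving it from $releasever:

```ini
[epel-playground]
name=Extra Packages for Enterprise Linux $releasever - Playground - $basearch
enabled=0
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-8
```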

Onboarding Fedora Infrastructure

Posted by Karsten Hopp on September 16, 2019 02:02 PM

I've been using and working on Fedora since FC-1 and just recently joined the Infrastructure team.

My first task was getting up to speed with Ansible, as all of the team's repeating tasks are automated. Fortunately there are lots of tutorials and documents available on the web; as usual some of them are outdated or incorrect, but where's the fun in having everything work out of the box? A good starting point is Ansible's own documentation, i.e. https://docs.ansible.com/ansible/latest/user_guide/intro_getting_started.html , but I've also read some German docs like https://jaxenter.de/multi-tier-deployment-mit-ansible-76731 or the very basic intro https://www.biteno.com/tutorial/ansible-tutorial-einfuehrung/

There are a couple of hundred playbooks in the infrastructure ansible git repository (https://infrastructure.fedoraproject.org/cgit/ansible.git/) with an inventory of almost 900 hosts. I ran into problems when I tried to fix https://pagure.io/fedora-infrastructure/issue/8156 , which is about cleaning up ansible_distribution conditionals. Many playbooks already check for ansible_distribution == "RedHat" or ansible_distribution == "Fedora", but as a newbie in Fedora-Infrastructure I have no idea which distribution a certain host is currently running and whether a given playbook will ever be run for that host. Does adding checks for the distribution even make sense when the conditions may never become true? What seems to be missing (at least I haven't found it) is a huge map with all our machines together with a description of the OS they are running and which playbooks apply to them. There is an ancient (3 year old) issue open (https://pagure.io/fedora-infrastructure/issue/5290) that already requests and implements part of this, but unfortunately there has been no progress for some time now.
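
As a hypothetical illustration of the kind of conditional the ticket asks to clean up (the task and package name here are made up, not taken from the Fedora playbooks):

```yaml
- name: install a package that only exists on RHEL/CentOS
  package:
    name: some-el-only-package
    state: present
  when: ansible_distribution in ["RedHat", "CentOS"]
```

Writing such a check is easy; the hard part, as described above, is knowing whether the condition can ever be true for the hosts the playbook actually runs on.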


Looking at the nagios map of all our hosts, there already seem to be host groups 'RedHat', 'CentOS' and 'Fedora', so at least part of the required information is already available.

Copying large files with Rsync, and some misconceptions

Posted by Fedora Magazine on September 16, 2019 08:00 AM

There is a notion that a lot of people working in the IT industry often copy and paste from internet howtos. We all do it, and the copy-and-paste itself is not a problem. The problem is when we run things without understanding them.

Some years ago, a friend who used to work on my team needed to copy virtual machine templates from site A to site B. They could not understand why the file they copied was 10GB on site A but became 100GB on site B.

The friend believed that rsync is a magic tool that should just “sync” the file as it is. However, what most of us forget is to understand what rsync really is, how it is used, and, most important in my opinion, where it comes from. This article provides some further information about rsync, and an explanation of what happened in that story.

About rsync

rsync is a tool created by Andrew Tridgell and Paul Mackerras, who were motivated by the following problem:

Imagine you have two files, file_A and file_B. You wish to update file_B to be the same as file_A. The obvious method is to copy file_A onto file_B.

Now imagine that the two files are on two different servers connected by a slow communications link, for example, a dial-up IP link. If file_A is large, copying it onto file_B will be slow, and sometimes not even possible. To make it more efficient, you could compress file_A before sending it, but that would usually only gain a factor of 2 to 4.

Now assume that file_A and file_B are quite similar, and to speed things up, you take advantage of this similarity. A common method is to send just the differences between file_A and file_B down the link and then use this list of differences to reconstruct the file on the remote end.

The problem is that the normal methods for creating a set of differences between two files rely on being able to read both files. Thus they require that both files are available beforehand at one end of the link. If they are not both available on the same machine, these algorithms cannot be used. (Once you had copied the file over, you don’t need the differences). This is the problem that rsync addresses.

The rsync algorithm efficiently computes which parts of a source file match parts of an existing destination file. Matching parts then do not need to be sent across the link; all that is needed is a reference to the part of the destination file. Only parts of the source file which are not matching need to be sent over.

The receiver can then construct a copy of the source file using the references to parts of the existing destination file and the original material.

Additionally, the data sent to the receiver can be compressed using any of a range of common compression algorithms for further speed improvements.

The rsync algorithm addresses this problem in a lovely way as we all might know.
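
Rsync's real delta algorithm uses a rolling checksum so matching blocks can be found at any offset; the following shell sketch (assuming GNU coreutils) only compares fixed 4 KiB blocks, but it illustrates why a one-byte change need not mean resending the whole file:

```shell
set -e
tmp=$(mktemp -d)
# make two 64 KiB files that differ in exactly one byte
head -c 65536 /dev/zero > "$tmp/file_A"
cp "$tmp/file_A" "$tmp/file_B"
printf 'Y' | dd of="$tmp/file_B" bs=1 seek=20000 conv=notrunc status=none
# split both files into 4 KiB blocks
split -b 4096 -d "$tmp/file_A" "$tmp/a_"
split -b 4096 -d "$tmp/file_B" "$tmp/b_"
changed=0; total=0
for a in "$tmp"/a_*; do
  b="$tmp/b_${a##*/a_}"
  total=$((total+1))
  # blocks that compare equal would not need to cross the link
  cmp -s "$a" "$b" || changed=$((changed+1))
done
echo "$changed of $total blocks changed"   # 1 of 16 blocks changed
rm -rf "$tmp"
```

Only the one differing block (plus block references) would need to be sent; rsync does the same thing, but with checksums instead of byte-for-byte comparison, so the receiver never needs the sender's copy.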

After this introduction to rsync, back to the story!

Problem 1: Thin provisioning

There were two things that would help the friend understand what was going on.

The problem with the file getting significantly bigger on the other side was caused by Thin Provisioning (TP) being enabled on the source system — a method of optimizing the efficiency of available space in Storage Area Networks (SAN) or Network Attached Storage (NAS).

The source file was only 10GB because of TP being enabled, and when transferred over using rsync without any additional configuration, the target destination was receiving the full 100GB. rsync could not do the magic automatically; it had to be configured.

The flag that does this work is -S or --sparse, and it tells rsync to handle sparse files efficiently. And it will do what it says! It will only send the non-sparse data, so source and destination will both have a 10GB file.
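
You can see the difference between apparent and allocated size for yourself; this is a small sketch assuming GNU coreutils and a filesystem with sparse file support:

```shell
# create a 100 MiB sparse file: large apparent size, (almost) no blocks allocated
truncate -s 100M sparse.img
du -k --apparent-size sparse.img   # 102400 KiB apparent size
du -k sparse.img                   # ~0 KiB actually allocated
rm sparse.img
```

Without -S, rsync reads and transmits the holes as literal zeros, which is exactly how a thin 10GB image balloons to its full 100GB on the destination.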

Problem 2: Updating files

The second problem appeared when sending over an updated file. The destination was now receiving just the 10GB, but the whole file (containing the virtual disk) was always transferred, even when just a single configuration file on that virtual disk had changed. In other words, only a small portion of the file changed.

The command used for this transfer was:

rsync -avS vmdk_file syncuser@host1:/destination

Again, understanding how rsync works would help with this problem as well.

The above is the biggest misconception about rsync. Many of us think rsync will simply send the delta updates of the files, and that it will automatically update only what needs to be updated. But this is not the default behaviour of rsync.

As the man page says, the default behaviour of rsync is to create a new copy of the file in the destination and to move it into the right place when the transfer is completed.

To change this default behaviour of rsync, you have to set the following flags and then rsync will send only the deltas:

--inplace               update destination files in-place
--partial               keep partially transferred files
--append                append data onto shorter files
--progress              show progress during transfer

So the full command that would do exactly what the friend wanted is:

rsync -av --partial --inplace --append --progress vmdk_file syncuser@host1:/destination

Note that the sparse flag -S had to be removed, for two reasons. The first is that you can not use --sparse and --inplace together when sending a file over the wire. And second, once you have sent a file over with --sparse, you can't update it with --inplace anymore. Note that versions of rsync older than 3.1.3 will reject the combination of --sparse and --inplace.

So even though the friend ended up copying 100GB over the wire, that only had to happen once. All the following updates copied only the differences, making the transfer extremely efficient.

Fedora 31 Gnome Test Day 2019-09-18

Posted by Fedora Community Blog on September 16, 2019 07:26 AM

Wednesday, 2019-09-18 is the Fedora 31 Gnome Test Day! As part of the change to Gnome 3.34 in Fedora 31, we need your help to test that everything runs smoothly!

Why Gnome Test Day?

We try to make sure that all the Gnome features are performing as they should, so this is a chance to see whether everything works well enough and to catch any remaining issues. It's also pretty easy to join in: all you'll need is Fedora 31 (which you can grab from the wiki page).

We need your help!

All the instructions are on the wiki page, so please read through and come help us test! As always, the event will be in #fedora-test-day on Freenode IRC.

Share this!

Help promote the Test Day and share the article in your own circles! Use any of the buttons below to help spread the word.

The post Fedora 31 Gnome Test Day 2019-09-18 appeared first on Fedora Community Blog.

Wrestling with the register allocator: LuaJIT edition

Posted by Siddhesh Poyarekar on September 16, 2019 12:20 AM

For some time now, I kept running into one specific piece of code in luajit repeatedly for various reasons, and last month I came across a fascinating register allocator pitfall that I had never encountered before. But then, as is the norm after fixing such bugs, I concluded that it was too trivial to write about and that I was just stupid not to have found it sooner; all bugs are trivial once they're fixed.


After getting over my imposter syndrome (yeah I know, it took a while) I finally have the courage to write about it, so like a famous cook+musician says, enough jibber-jabber…

Looping through a hash table

One of the key data structures that luajit uses to implement metatables is a hash table. Due to its extensive use, the lookup loop into such hash tables is optimised in the JIT using an architecture-specific asm_href function. Here is what a snippet of the arm64 version of the function looks like. A key thing to note is that the assembly code is generated backwards, i.e. the last instruction is emitted first.

  /* Key not found in chain: jump to exit (if merged) or load niltv. */
  l_end = emit_label(as);
  as->invmcp = NULL;
  if (merge == IR_NE)
    asm_guardcc(as, CC_AL);
  else if (destused)
    emit_loada(as, dest, niltvg(J2G(as->J)));

  /* Follow hash chain until the end. */
  l_loop = --as->mcp;
  emit_n(as, A64I_CMPx^A64I_K12^0, dest);
  emit_lso(as, A64I_LDRx, dest, dest, offsetof(Node, next));
  l_next = emit_label(as);

  /* Type and value comparison. */
  if (merge == IR_EQ)
    asm_guardcc(as, CC_EQ);
  else
    emit_cond_branch(as, CC_EQ, l_end);

  if (irt_isnum(kt)) {
    if (isk) {
      /* Assumes -0.0 is already canonicalized to +0.0. */
      if (k)
        emit_n(as, A64I_CMPx^k, tmp);
      else
        emit_nm(as, A64I_CMPx, key, tmp);
      emit_lso(as, A64I_LDRx, tmp, dest, offsetof(Node, key.u64));
    } else {
      Reg ftmp = ra_scratch(as, rset_exclude(RSET_FPR, key));
      emit_nm(as, A64I_FCMPd, key, ftmp);
      emit_dn(as, A64I_FMOV_D_R, (ftmp & 31), (tmp & 31));
      emit_cond_branch(as, CC_LO, l_next);
      emit_nm(as, A64I_CMPx | A64F_SH(A64SH_LSR, 32), tisnum, tmp);
      emit_lso(as, A64I_LDRx, tmp, dest, offsetof(Node, key.n));
    }
  } else if (irt_isaddr(kt)) {
    Reg scr;
    if (isk) {
      int64_t kk = ((int64_t)irt_toitype(irkey->t) << 47) | irkey[1].tv.u64;
      scr = ra_allock(as, kk, allow);
      emit_nm(as, A64I_CMPx, scr, tmp);
      emit_lso(as, A64I_LDRx, tmp, dest, offsetof(Node, key.u64));
    } else {
      scr = ra_scratch(as, allow);
      emit_nm(as, A64I_CMPx, tmp, scr);
      emit_lso(as, A64I_LDRx, scr, dest, offsetof(Node, key.u64));
    }
    rset_clear(allow, scr);
  } else {
    Reg type, scr;
    lua_assert(irt_ispri(kt) && !irt_isnil(kt));
    type = ra_allock(as, ~((int64_t)~irt_toitype(ir->t) << 47), allow);
    scr = ra_scratch(as, rset_clear(allow, type));
    rset_clear(allow, scr);
    emit_nm(as, A64I_CMPw, scr, type);
    emit_lso(as, A64I_LDRx, scr, dest, offsetof(Node, key));
  }

  *l_loop = A64I_BCC | A64F_S19(as->mcp - l_loop) | CC_NE;

Here, the emit_* functions emit assembly instructions and the ra_* functions allocate registers. In the normal case everything is fine and the table lookup code is concise and effective. When there is register pressure however, things get interesting.

As an example, here is what a typical type lookup would look like:

0x100	ldr x1, [x16, #52]
0x104	cmp x1, x2
0x108	beq -> exit
0x10c	ldr x16, [x16, #16]
0x110	cmp x16, #0
0x114	bne 0x100

Here, x16 is the table that the loop traverses. x1 is a key, which if it matches, results in an exit to the interpreter. Otherwise the loop moves ahead until the end of the table. The comparison is done with a constant stored in x2.

The value of x2 is loaded later (i.e. earlier in the code; we are emitting code backwards, remember?) whenever that register is needed for reuse, through a process called restoration or spilling. In the restore case, it is loaded into the register as a constant or expressed in terms of another constant (look up constant rematerialisation), and in the case of a spill, the register is restored from a slot on the stack. If there is no register pressure, all of this restoration happens at the head of the trace, which is why, if you study a typical trace, you will notice a lot of constant loads at the top of the trace.

Like the Spill that ruined your keyboard the other day…

Things get interesting when the allocation of x2 in the loop above results in a restore. Looking at the code a bit closer:

    type = ra_allock(as, ~((int64_t)~irt_toitype(ir->t) << 47), allow);
    scr = ra_scratch(as, rset_clear(allow, type));
    rset_clear(allow, scr);
    emit_nm(as, A64I_CMPw, scr, type);
    emit_lso(as, A64I_LDRx, scr, dest, offsetof(Node, key));

The x2 here is type, which is a constant. If a register is not available, we have to make one available by either rematerializing or by restoring the register, which would result in something like this:

0x100   ldr x1, [x16, #52]
0x104   cmp x1, x2
0x108	mov x2, #42
0x10c   beq -> exit
0x110   ldr x16, [x16, #16]
0x114   cmp x16, #0
0x118   bne 0x100

This ends up breaking the loop because the allocator restore/spill logic assumes that the code is linear and the restore will affect only code that follows it, i.e. code that got generated earlier. To fix this, all of the register allocations should be done before the loop code is generated.

Making things right

The result of this analysis was this fix in my LuaJIT fork that allocates registers for operands that will be used in the loop before generating the body of the loop. That is, if the registers have to spill, they will do so after the loop (we are generating code in reverse order) and leave the loop compact. The fix is also in the luajit2 repository in the OpenResty project. The work was sponsored by OpenResty as this wonderfully vague bug could only be produced by some very complex scripts that are part of the OpenResty product.

Episode 161 - Human nature and ad powered open source

Posted by Open Source Security Podcast on September 16, 2019 12:00 AM
Josh and Kurt start out discussing human nature and how it affects how we view security. A lot of things that look easy are actually really hard. We also talk about the npm library Standard showing command line ads. Are ads part of the future of open source?

<iframe allowfullscreen="" height="90" mozallowfullscreen="" msallowfullscreen="" oallowfullscreen="" scrolling="no" src="https://html5-player.libsyn.com/embed/episode/id/11260388/height/90/theme/custom/thumbnail/yes/direction/backward/render-playlist/no/custom-color/6e6a6a/" style="border: none;" webkitallowfullscreen="" width="100%"></iframe>

Show Notes

    It's time to talk about post-RMS Free Software

    Posted by Matthew Garrett on September 14, 2019 11:57 AM
    Richard Stallman has once again managed to demonstrate incredible insensitivity[1]. There's an argument that in a pure technical universe this is irrelevant and we should instead only consider what he does in free software[2], but free software isn't a purely technical topic - the GNU Manifesto is nakedly political, and while free software may result in better technical outcomes it is fundamentally focused on individual freedom and will compromise on technical excellence if otherwise the result would be any compromise on those freedoms. And in a political movement, there is no way that we can ignore the behaviour and beliefs of that movement's leader. Stallman is driving away our natural allies. It's inappropriate for him to continue as the figurehead for free software.

    But I'm not calling for Stallman to be replaced. If the history of social movements has taught us anything, it's that tying a movement to a single individual is a recipe for disaster. The FSF needs a president, but there's no need for that person to be a leader - instead, we need to foster an environment where any member of the community can feel empowered to speak up about the importance of free software. A decentralised movement about returning freedoms to individuals can't also be about elevating a single individual to near-magical status. Heroes will always end up letting us down. We fix that by removing the need for heroes in the first place, not attempting to find increasingly perfect heroes.

    Stallman was never going to save us. We need to take responsibility for saving ourselves. Let's talk about how we do that.

    [1] There will doubtless be people who will leap to his defense with the assertion that he's neurodivergent and all of these cases are consequences of that.

    (A) I am unaware of a formal diagnosis of that, and I am unqualified to make one myself. I suspect that basically everyone making that argument is similarly unqualified.
    (B) I've spent a lot of time working with him to help him understand why various positions he holds are harmful. I've reached the conclusion that it's not that he's unable to understand, he's just unwilling to change his mind.

    [2] This argument is, obviously, bullshit

    comment count unavailable comments

    Cascade – a turn-based text arcade game

    Posted by Richard W.M. Jones on September 14, 2019 09:00 AM


    I wrote this game about 20 years ago. Glad to see it still compiled out of the box on the latest Linux distro! Download it from here. If anyone can remember the name or any details of the original 1980s MS-DOS game that I copied the idea from, please let me know in the comments.

    PulseCaster 0.9 released!

    Posted by Paul W. Frields on September 14, 2019 02:57 AM

    The post PulseCaster 0.9 released! appeared first on The Grand Fallacy.

    It says… It says, uh… “Virgil Brigman back on the air”.

    The Abyss, 1989 (J. Cameron)

    OK, I feel slightly guilty using a cheesy quote from James Cameron for this post. But not as guilty as I feel for leaving development of this tool hanging for so long.

    That’s right, there’s a brand new release of PulseCaster available out there — 0.9 to be exact. There are multiple fixes and enhancements in this version.

    (By the way… I don’t have experience packaging for Debian or Ubuntu. If you’re maintaining PulseCaster there and have questions, don’t hesitate to get in touch. And thank you for helping make PulseCaster available for users!)

    For starters, PulseCaster is now ported to Python 3. I used Python 3.6 and Python 3.7 to do the porting; nothing in the code should be particular to either version, though. But you’ll need to have Python 3 installed to use it, as most Linux distros do these days.

    Another enhancement is that PulseCaster now relies on the excellent pulsectl library for Python, by George Filipkin and Mike Kazantsev. Hats off to them for doing a great job, which allowed me to remove many, many lines of code from this release.

    Also, due to the use of PyGObject3 in this release, there are numerous improvements that make it easier for me to hack on. Silly issues with the GLib mainloop and other entrance/exit stupidity are hopefully a bit better now.

    Also, the code for dealing with temporary files is now a bit less ugly. I still want to do more work on the overall design and interface, and have ideas. I’ve gotten way better at time management since the last series of releases and hope to do some of this over the USA holiday season this late fall and winter (but no promises).

    A new release should be available in Fedora’s Rawhide release by the time you read this, and within a few days in Fedora 31. Sorry, although I could bring back a Fedora 30 package, I’m hoping this will entice one or two folks to get on Fedora 31 sooner. So grab that when the Beta comes out and I’ll see you there!

    If you run into problems with the release, please file an issue on GitHub. I have fixed my mail filters so that I’ll be more responsive to them in the future.

    Photo by neil godding on Unsplash.

    FPgM report: 2019-37

    Posted by Fedora Community Blog on September 13, 2019 08:41 PM
    Fedora Program Manager weekly report on Fedora Project development and progress

    Here’s your report of what has happened in Fedora Program Management this week. Fedora 31 Beta is go!

    I have weekly office hours in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else.


    Help wanted

    Upcoming meetings & test days

    Fedora 31


    • 17 September — Beta release
    • 8 October — Final freeze begins
    • 22 October — Final release preferred target


    Blocker bugs

    Bug ID    Blocker status    Component             Bug status
    1749433   Accepted (final)  gnome-control-center  NEW
    1750394   Accepted (final)  gnome-control-center  NEW
    1749086   Accepted (final)  kde                   MODIFIED
    1751438   Accepted (final)  LiveCD                NEW
    1728240   Accepted (final)  sddm                  NEW
    1750805   Proposed (final)  gnome-control-center  NEW
    1751673   Proposed (final)  gnome-session         NEW
    1749868   Proposed (final)  gnome-software        NEW
    1747408   Proposed (final)  libgit2               NEW
    1750036   Proposed (final)  selinux-policy        ASSIGNED
    1750345   Proposed (final)  webkit2gtk3           NEW
    1751852   Proposed (final)  xfdesktop             ASSIGNED

    Fedora 32



    Submitted to FESCo


    The post FPgM report: 2019-37 appeared first on Fedora Community Blog.

    GNOME Firmware 3.34.0 Release

    Posted by Richard Hughes on September 13, 2019 01:12 PM

    This morning I tagged the newest fwupd release, 1.3.1. There are a lot of new things in this release and a whole lot of polishing, so I encourage you to read the release notes if this kind of thing interests you.

    Anyway, to the point of this post. With the new fwupd 1.3.1 you can now build just the libfwupd library, which makes it easy to build GNOME Firmware (old name: gnome-firmware-updater) in Flathub. I tagged the first official release 3.34.0 to celebrate the recent GNOME release, and to indicate that it’s ready for use by end users. I guess it’s important to note this is just a random app hacked together by 3 engineers and not something lovingly designed by the official design team. All UX mistakes are my own :)

    GNOME Firmware is designed to be a not-installed-by-default power-user tool to investigate, upgrade, downgrade and re-install firmware.
    GNOME Software will continue to be used for updates as before. Vendor helpdesks can ask users to install GNOME Firmware rather than getting them to look at command line output.

    We need to polish up GNOME Firmware going forwards, and add the last few features we need. If this interests you, please send email and I’ll explain what needs doing. We also need translations, although that can perhaps wait until GNOME Firmware moves to GNOME proper, rather than just being a repo in my personal GitLab. If anyone does want to translate it before then, please open merge requests, and be sure to file issues if any of the strings are difficult to translate or ambiguous. Please also file issues (or even better merge requests!) if it doesn’t build or work for you.

    If you just want to try out a new application, it takes 10 seconds to install it from Flathub.

    PHP version 7.2.23RC1 and 7.3.10RC1

    Posted by Remi Collet on September 13, 2019 08:10 AM

    Release Candidate versions are available in testing repository for Fedora and Enterprise Linux (RHEL / CentOS) to allow more people to test them. They are available as Software Collections, for a parallel installation, perfect solution for such tests, and also as base packages.

    RPM of PHP version 7.3.10RC1 are available as SCL in remi-test repository and as base packages in the remi-test repository for Fedora 30-31 or remi-php73-test repository for Fedora 29 and Enterprise Linux.

    RPM of PHP version 7.2.23RC1 are available as SCL in remi-test repository and as base packages in the remi-test repository for Fedora 29 or remi-php72-test repository for Enterprise Linux.


    emblem-notice-24.pngPHP version 7.1 is now in security mode only, so no more RC will be released.

    emblem-notice-24.pngInstallation : read the Repository configuration and choose your version.

    Parallel installation of version 7.3 as Software Collection:

    yum --enablerepo=remi-test install php73

    Parallel installation of version 7.2 as Software Collection:

    yum --enablerepo=remi-test install php72

    Update of system version 7.3:

    yum --enablerepo=remi-php73,remi-php73-test update php\*

    or, the modular way (Fedora and RHEL 8):

    dnf module enable php:remi-7.3
    dnf --enablerepo=remi-modular-test update php\*

    Update of system version 7.2:

    yum --enablerepo=remi-php72,remi-php72-test update php\*

    or, the modular way (Fedora and RHEL 8):

    dnf module enable php:remi-7.2
    dnf --enablerepo=remi-modular-test update php\*

    Notice: version 7.3.10RC1 in Fedora rawhide for QA.

    emblem-notice-24.pngEL-7 packages are built using RHEL-7.6.

    emblem-notice-24.pngPackages of 7.4.0RC1 are also available

    emblem-notice-24.pngRC version is usually the same as the final version (no changes accepted after RC, except for security fixes).

    Software Collections (php72, php73)

    Base packages (php)

    nbdkit supports exportnames

    Posted by Richard W.M. Jones on September 13, 2019 08:02 AM

    (You’ll need the very latest version of libnbd and nbdkit from git for this to work.)

    The NBD protocol lets the client send an export name string to the server. The idea is a single server can serve different content to clients based on a requested export. nbdkit has largely ignored export names, but we recently added basic support upstream.

    One consequence of this is you can now write a shell plugin which reflects the export name back to the client:

    $ cat export.sh
    #!/bin/bash -
    case "$1" in
        open) echo "$3" ;;
        get_size) LC_ALL=C echo ${#2} ;;
        pread) echo "$2" | dd skip=$4 count=$3 iflag=skip_bytes,count_bytes ;;
        *) exit 2 ;;
    esac
    $ chmod +x export.sh
    $ nbdkit -f sh export.sh

    The size of the disk is the same as the export name:

    $ nbdsh -u 'nbd://localhost/fooooo' -c 'print(h.get_size())'
    6

    The content and size of the disk is the exportname:

    │ f o o o o o │

    Not very interesting in itself. But we can now pass the content of small disks entirely in the export name. Using a slightly more advanced plugin which supports base64-encoded export names (so we can pass in NUL bytes):

    $ cat export-b64.sh
    #!/bin/bash -
    case "$1" in
        open) echo "$3" ;;
        get_size) echo "$2" | base64 -d | wc -c ;;
        pread) echo "$2" | base64 -d |
               dd skip=$4 count=$3 iflag=skip_bytes,count_bytes ;;
        can_write) exit 0 ;;
        pwrite) exit 1 ;;
        *) exit 2 ;;
    esac
    $ chmod +x export-b64.sh
    $ nbdkit -f sh export-b64.sh

    We can pass in an entire program to qemu:

    qemu-system-x86_64 -fda 'nbd:localhost:10809:exportname=uBMAzRD8uACgjtiOwLQEo5D8McC5SH4x//OriwVAQKuIxJK4AByruJjmq7goFLsQJbELq4PAFpOr/sST4vUFjhWA/1x167+g1LEFuAQL6IUBg8c84vW+lvyAfAIgci3+xYD9N3SsrZetPCh0CzwgdQTGRP4o6F4Bgf5y/XXbiPAsAnLSNAGIwojG68qAdAIIRYPlB1JWVXUOtADNGjsWjPx09okWjPy+gPy5BACtPUABl3JD6BcBge9CAYoFLCByKVZXtAT25AHGrZfGBCC4CA7oAgFfXusfrQnAdC09APCXcxTo6ACBxz4BuAwMiXz+gL1AAQt1BTHAiUT+gD0cdQbHBpL8OAroxgDizL6S/K0IwHQMBAh1CLQc/g6R/HhKiUT+izzorgB1LrQCzRaoBHQCT0+oCHQCR0eoA3QNgz6A/AB1Bo1FCKOA/Jc9/uV0Bz0y53QCiQRdXlqLBID6AXYKBYACPYDUchvNIEhIcgODwARQ0eixoPbx/syA/JRYcgOAzhaJBAUGD5O5AwDkQDz8cg2/gvyDPQB0A6/i+Ikd6cL+GBg8JDx+/yQAgEIYEEiCAQC9234kPGbDADxa/6U8ZmYAAAAAAAAAAHICMcCJhUABq8NRV5xQu6R9LteTuQoA+Ij4iPzo4f/Q4+L1gcdsAlhAqAd14J1fWcNPVao='


    GNOME 3.34 released — coming soon in Fedora 31

    Posted by Fedora Magazine on September 13, 2019 08:00 AM

    Today the GNOME project announced the release of GNOME 3.34. This latest release of GNOME will be the default desktop environment in Fedora 31 Workstation. The Beta release of Fedora 31 is currently expected in the next week or two, with the Final release scheduled for late October.

    GNOME 3.34 includes a number of new features and improvements. Congratulations and thank you to the whole GNOME community for the work that went into this release! Read on for more details.

    <figure class="wp-block-image"><figcaption> GNOME 3.34 desktop environment at work</figcaption></figure>

    Notable features

    The desktop itself has been refreshed with a pleasing new background. You can also compare your background images to see what they’ll look like on the desktop.

    There’s a new custom application folder feature in the GNOME Shell Overview. It lets you combine applications in a group to make it easier to find the apps you use.

    You already know that Boxes lets you easily download an OS and create virtual machines for testing, development, or even daily use. Now you can find sources for your virtual machines more easily, and boot from CD or DVD (ISO) images more easily too. The Express Install feature now also supports Windows versions.

    Now that you can save states when using GNOME Games, gaming is more fun. You can snapshot your progress without getting in the way of the fun. You can even move snapshots to other devices running GNOME.

    More details

    These are not the only features of the new and improved GNOME 3.34. For an overview, visit the official release announcement. For even more details, check out the GNOME 3.34 release notes.

    The Fedora 31 Workstation Beta release is right around the corner. Fedora 31 will feature GNOME 3.34 and you’ll be able to experience it in the Beta release.

    Update on Easy PXE boot testing post: minus PXELINUX

    Posted by Dusty Mabe on September 13, 2019 12:00 AM
    Introduction This is an update to my previous post about easily testing PXE booting by using libvirt + iPXE. Several people have notified me (thanks Lukas Zapletal and others) that instead of leveraging PXELINUX that I could just use an iPXE script to do the same thing. I hadn’t used iPXE much so here’s an update on how to achieve the same goal using an iPXE script instead of a PXELINUX binary+config.

    Insider 2019-09: syslog-ng basics; relays; NGINX; Tic-Tac-Toe; sudo; Elastic stack 7; GitHub;

    Posted by Peter Czanik on September 12, 2019 11:02 AM

    Dear syslog-ng users,

    This is the 75th issue of syslog-ng Insider, a monthly newsletter that brings you syslog-ng-related news.


    Building blocks of syslog-ng

    Recently I gave a syslog-ng introductory workshop at Pass the SALT conference in Lille, France. I got a lot of positive feedback, so I decided to turn all that content into a blog post. Naturally, I shortened and simplified it, but still managed to get enough material for multiple blog posts.

    This one gives you an overview of syslog-ng, its major features and an introduction to its configuration.


    What syslog-ng relays are good for

    While there are some users who run syslog-ng as a stand-alone application, the main strength of syslog-ng is central log collection. In this case the central syslog-ng instance is called the server, while the instances sending log messages to the central server are called the clients. There is a (somewhat lesser known) third type of instance called the relay, too. The relay collects log messages via the network and forwards them to one or more remote destinations after processing (but without writing them onto the disk for storage). A relay can be used for many different use cases. We will discuss a few typical examples below.
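    As an illustration only (the host name and ports are made up), a minimal relay configuration might look like this: it accepts syslog messages from the network and forwards them to the central server without writing them to disk:

    ```
    source s_network {
        syslog(transport("tcp") port(514));
    };

    destination d_server {
        syslog("central.example.com" transport("tcp") port(514));
    };

    log {
        source(s_network);
        destination(d_server);
    };
    ```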


    Visualizing NGINX or Apache access logs in Kibana

    This tutorial shows you how to parse NGINX or Apache access logs with syslog-ng and create ECS compatible data in Elasticsearch.


    syslog-ng Tic-Tac-Toe

    You can play the game of Tic-Tac-Toe using syslog-ng 3.22.1 or later. Learn how to configure syslog-ng for that:


    Alerting on sudo events using syslog-ng

    Why use syslog-ng to alert on sudo events? At the moment, alerting in sudo is limited to E-mail. Using syslog-ng, however, you can send alerts (more precisely, selected logs) to a wide variety of destinations. Logs from sudo are automatically parsed by recent (3.13+) syslog-ng releases, enabling fine-grained alerting. There is a lot of hype around our new Slack destination, so that is what I’ll show here. Naturally, there are many others available as well, including Telegram and, of course, good old E-mail. If something is not yet directly supported by syslog-ng, you can often utilize an HTTP API or write some glue code in Python.

    From this blog post you can learn how to build up a syslog-ng configuration step by step and how to use different filters to make sure that you only receive logs (i.e. alerts) that are truly relevant for you.


    GitHub and syslog-ng

    As many of you know, the source code of syslog-ng is available on GitHub, just like its issue tracker. We just learned that GitHub itself is running syslog-ng as part of its stack: https://help.github.com/en/enterprise/2.18/admin/installation/log-forwarding

    syslog-ng with Elastic Stack 7

    For many years, anything I wrote about syslog-ng and Elasticsearch was valid for all available versions. Well, not anymore. With version 7 of Elasticsearch, there are some breaking changes. These changes are mostly related to the fact that Elastic is phasing out type support. This affects mapping (as the _default_ keyword is no longer used) and the syslog-ng configuration as well (even though type() is a mandatory parameter, you should leave it empty).
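    As a sketch only (the URL and index name are placeholders), an elasticsearch-http() destination with an empty type() might look like:

    ```
    destination d_elasticsearch {
        elasticsearch-http(
            url("http://localhost:9200/_bulk")
            index("syslog-ng")
            type("")
        );
    };
    ```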

    This blog post is a rewrite of one of my earlier blog posts (about creating a heat map using syslog-ng + Elasticsearch + Kibana), focusing on the changes and the new elasticsearch-http() destination:




    Your feedback and news, or tips about the next issue are welcome. To read this newsletter online, visit: https://syslog-ng.com/blog/

    GSoC summer 2019: Fedora Gooey Karma

    Posted by Fedora Community Blog on September 12, 2019 06:41 AM
    Fedora Summer Coding 2019

    This blog post summarises my journey through Google Summer of Code (GSoC) with the Fedora community. The journey started the day I mailed my mentor about the project, and it was a hell of a ride for sure. Let’s get started.

    About Me

    Name – Zubin Choudhary



    I am a 3rd-year B.Tech student at Bennett University (India). I have been interested in making and breaking stuff for as long as I can remember. Gaming got me into programming, and I have been writing small scripts to automate things for the last couple of years. I stumbled upon GitHub only to be amazed by the concept of sharing your code with strangers on the internet for no direct profit whatsoever. Later, realising the beauty of open source made me fall in love with it: I switched to Linux and to open source alternatives for all the software I used at the time. Years later, I decided to give back to the community, and GSoC was the perfect opportunity for that.

    Getting started

    The day the GSoC projects list was published, I started sorting out the organizations I would enjoy working with. Being a Linux user and enthusiast, I filtered the list down to a bunch of Linux distributions and desktop environments. After sorting through the projects, Fedora Gooey Karma seemed to be the one that best suited my skills.

    Once I was sure that Fedora Gooey Karma was a project I would love to work on during the summer, I mailed @sumantro, and we discussed the project over email.


    My project was Fedora Gooey Karma. The aim of this project was to provide a simple, fast user interface for testers to submit karma on Bodhi.

    Using the web interface for Bodhi isn’t very efficient: we all end up opening a bunch of tabs, and verifying our packages multiple times, cross-checking, and submitting takes a ton of extra effort, which eventually keeps a lot of people from testing packages.

    This application has an intuitive UI that lets users look for packages to test, look up their details and bugs, and post karma on the Bodhi platform.

    The user interface looks like this. Thanks to the Fedora Design team for the mockup.

    <figure class="wp-block-image"></figure>

    This project was previously built on Python 2 and the old Bodhi bindings. The original plan was to port the application from Python 2 to Python 3 and make it work again, but after looking at the codebase I decided that rewriting the code would be much easier.


    The decided aims for this project were:

    1. Application ported to Bodhi2 API and Python3
    2. Revamped UI built with QT
    3. Application ready to be packaged and ship

    But now that I had decided to rewrite everything from scratch, there were different challenges to face.

    Some of them would be:

    1. The Bodhi bindings changed a lot: a number of features that the Gooey Karma project relied on had been added or removed.
    2. Being new to Qt UI development, it took me some time to get used to it.


    A rough timeline for the project goes like this:

    Week 1 and Week 2

    • Initialising python3 port
    • Learning about the bodhi and FAS bindings

    Week 3 and Week 4

    • Initialising the Qt user interface
    • Writing the bodhi bindings

    Week 5 and Week 6

    • Writing login controller
    • Writing functions to fetch all data and display

    Week 7 and Week 8

    • Writing functions to post karma
    • Fixing bugs

    Week 9 and Week 10


    The project isn’t fully completed yet; I will keep working on it for a while, implementing features like fetching related packages, listing only installed packages, and so on.

    Also, while presenting the project at the showcase, Stephen Gallagher requested that the project be ported to Cockpit as well.


    I’ve had a wonderful time in Fedora since I began contributing this summer. I was later invited to the Flock to Fedora conference, where I met a lot of contributors. This was a summer I will never forget.

    The post GSoC summer 2019: Fedora Gooey Karma appeared first on Fedora Community Blog.

    Unit-testing static functions in C

    Posted by Peter Hutterer on September 12, 2019 04:21 AM

    An annoying thing about C code is that there are plenty of functions that cannot be unit-tested by some external framework - specifically anything declared as static. Any larger code-base will end up with hundreds of those functions, many of which are short and reasonably self-contained but complex enough to not trust them by looks only. But since they're static I can't access them from the outside (and "outside" is defined as "not in the same file" here).

    The approach I've chosen in the past is to move the more hairy ones into separate files or at least declare them normally. That works but is annoying for some cases, especially those that really only get called once. In case you're wondering whether you have at least one such function in your source tree: yes, the bit that parses your commandline arguments is almost certainly complicated and not tested.

    Anyway, this week I've finally found the right combination of hacks to make testing static functions easy, and it's:

    • #include the source file in your test code.
    • Mock any helper functions you'd need to trick the called functions
    • Instruct the linker to ignore unresolved symbols
    And boom, you can write test cases to only test a single file within your source tree. And without any modifications to the source code itself.
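    As a toy end-to-end sketch (the file names and the trivial function are made up for illustration), the whole trick fits in a few lines of shell:

    ```shell
    # Work in a scratch directory so nothing in the real tree is touched.
    cd "$(mktemp -d)"

    # A static function that would normally be invisible to a test harness.
    cat > example.c <<'EOF'
    static int add(int a, int b) { return a + b; }
    EOF

    # The test file #includes the source file, so the static function is
    # compiled into the same translation unit and becomes callable.
    cat > test-example.c <<'EOF'
    #include <assert.h>
    #include "example.c"

    int main(void) {
        assert(add(2, 3) == 5);
        return 0;
    }
    EOF

    cc -o test-example test-example.c && ./test-example && echo PASS
    # prints "PASS"
    ```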

    A more detailed writeup is available in this github repo.

    For the impatient, the meson snippet for a fictional source file example.c would look like this:

    test_example = executable('test-example',
                              'example.c', 'test-example.c',
                              dependencies: [dep_ext_library],
                              link_args: ['-Wl,--unresolved-symbols=ignore-all'],
                              install: false)

    There is no restriction on which test suite you can use. I've started adding a few test cases based on this approach to libinput and so far it's working well. If you have a better approach or improvements, I'm all ears.

    Wherefore Art Thou CentOS 8?

    Posted by Scott Dowdle on September 11, 2019 09:04 PM

    UPDATE: CentOS announced on their twitter account that CentOS 8 will be released on Sept. 24th.

    IBM's Red Hat Enterprise Linux 8 (and I'm not sure if Red Hat likes me putting IBM in front of it or not) was released on May 7th, 2019.  I write this on Sept. 11th, 2019 and CentOS 8 still isn't out.  RHEL 7.7 came out on August 6, 2019.  In an effort to be transparent, CentOS does have wiki pages for both Building_8 and Building_7 where they enumerate the various steps they have to go through to get the final product out the door.

    Up until early August they were making good progress on CentOS 8.  In fact, they had made it to the last step, titled "Release work", which had a Started date of "YYYY-MM-DD", an Ended date of "YYYY-MM-DD", and a Status of "NOT STARTED YET".  That was fine for a while, but then almost a month passed with the NOT STARTED YET status.  If you are like me, when they had completed every step but the very last you were thinking that the GA release would be available Real-Soon-Now; after waiting a month, not so much.

    It was also obvious that CentOS had started work on the 7.7 update and the status indicators for that have progressed nicely but they still have a ways to go.  Of course one of the hold ups is that they have quite a few arches to support (more than Red Hat themselves) even though their most used platform (x86_64) had its Continuous Release (CR) repository populated and released on August 30th, 2019.  There is still a ways to go on 7.7 but they are generally much quicker with the point update releases.

    Users started complaining on the CentOS Devel mailing list harkening back to an earlier time in CentOS' history where they lagged way behind.  There were lots of responses to that thread, many thanking the CentOS developers for all of their hard work, some name calling, and a lot of back and forth with plenty of repetition.  Everyone understands that it takes a while for a major new release to come out and it'll be done when it is good and ready... however... the main complaint was that the development team (which long-time CentOS developer Johnny Hughes Jr. said numbered 3 people) wasn't being transparent enough given the fact that the wiki pages hadn't been updated in some time.  Johnny Hughes finally explained the reason 8 has stalled:

    WRT CentOS 8 .. it has taken a back seat to 7.7.1908.  Millions of users already use CentOS Linux 7.  Those people needs updates.

    That totally makes sense, doesn't it?  Everyone was happy with that answer... and I updated the Building_8 wiki page to reflect that by changing the status to, "Deferred for 7.7 work" and adding a note that said, "2019-09-10 According to this thread, work was stopped on CentOS 8 after upstream released 7.7. Since so many more users have CentOS 7.x in production, and no one has 8 yet, priority has been given to the 7.7 update... and once it is done, work will continue on 8."

    Someone asked JH Jr. if they could use some help, and he said that building the packages was easy enough and there wasn't really a way to speed it up... but testing all of the packages, especially across all of the various arches, was a way the greater community could help.  That is a rough summary, so if you're interested I encourage you to read the full thread.

    While I'm definitely looking forward to the release of CentOS 8, I understand the 7.7 release takes priority and I now better know what to expect.  As has been said so many times, thanks for all of the hard work devs, it is appreciated.


    Please welcome Acer to the LVFS

    Posted by Richard Hughes on September 11, 2019 11:44 AM

    Acer has now officially joined the LVFS, promoting the Aspire A315 firmware to stable.

    Acer has been testing the LVFS for some time and now all the legal and technical checks have been completed. Other models will follow soon!

    Announcing Linux Autumn 2019

    Posted by Rafał Lużyński on September 11, 2019 09:59 AM

    Summer is not yet over (in my climate zone) but it’s time to think about the autumn. Yes, I mean the Linux Autumn, the annual Polish conference of Linux and free software enthusiasts organized by PLUG. I have written about this event many times in the past, so I don’t want to bore you with the same things again. This year we hope to invite more foreign guests and make the conference more international, possibly with one day full of English talks.

    It’s going to be the 17th edition. The venue is exactly the same as last year: the Gwarek Hotel in Ustroń, southern Poland, and the dates are, as usual, a weekend: November 29th until December 1st. Attendee registration is open until November 21st, but talk proposals must be submitted by September 15th. That is little time, so you must hurry.

    Remember that the conference charges attendees a fee. The money is spent on accommodation and food for everyone. Why would I write, in an article for Fedora Planet, about a paid and not strictly Fedora-oriented event? First of all, participation (including accommodation and food) is fully refunded for speakers. I’m not encouraging you to attend a paid event, although if you want to, you are most welcome. I’m encouraging you to give a talk and take part in a three-day-long event for free. Second, this is a Linux event and Fedora is still a Linux distribution. Third, as we all know, many Fedora contributors live and work in the Czech Republic, especially in Brno, and this event is organized in Poland just across the Czech border. It could not be closer.

    How to arrive

    There are more details on the organizer’s page but here is a small summary:

    • From Poland: Ustroń is a tourist resort so it has good public transport connection with the rest of the country. It’s very easy to arrive by train or by bus from Katowice. Alternatively you can reach Bielsko-Biała or Cieszyn first and then continue by some local public transport. Of course, if you travel by your own car nothing limits you.
    • From the Czech Republic: first you must reach Český Těšín, there is only 25 km (16 miles) left to Ustroń. It’s easy to find a local public transport. You can also ask organizers for help (e.g., maybe someone gives a ride by car, that’s not a problem indeed).
    • From Slovakia: you may choose to travel via the Czech Republic and Český Těšín but you can also choose Skalité/Zwardoń border crossing.
    • From other countries: if you travel by plane you should choose Krakow or Katowice airport. Both are located about 120 km (75 miles) from Ustroń. Actually the nearest airport is Ostrava but it’s a small airport and I doubt you will find a convenient flight. Of course, you may also prefer to fly to Prague, Brno, or even Vienna and travel across the Czech Republic by train or by bus according to the guidelines above.

    Please come, because it’s really worth it. Those who have participated before said the conference was really interesting, and that it’s too bad so few people knew about it.

    In order to register please visit the organizers’ website.

    How to set up a TFTP server on Fedora

    Posted by Fedora Magazine on September 11, 2019 08:00 AM

    TFTP, or Trivial File Transfer Protocol, allows users to transfer files between systems using the UDP protocol. By default, it uses UDP port 69. The TFTP protocol is extensively used to support remote booting of diskless devices. So, setting up a TFTP server on your own local network can be an interesting way to do Fedora installations, or other diskless operations.

    TFTP can only read and write files to or from a remote system. It doesn’t have the capability to list files or make any changes on the remote server. There are also no provisions for user authentication. Because of security implications and the lack of advanced features, TFTP is generally only used on a local area network (LAN).

    TFTP server installation

    The first thing you will need to do is install the TFTP client and server packages:

    dnf install tftp-server tftp -y

    This creates a tftp service and socket file for systemd under /usr/lib/systemd/system.


    Next, copy and rename these files to /etc/systemd/system:

    cp /usr/lib/systemd/system/tftp.service /etc/systemd/system/tftp-server.service
    cp /usr/lib/systemd/system/tftp.socket /etc/systemd/system/tftp-server.socket

    Making local changes

    You need to edit these files from the new location after you’ve copied and renamed them, to add some additional parameters. Here is what the tftp-server.service file initially looks like:

    [Unit]
    Description=Tftp Server
    Requires=tftp.socket
    Documentation=man:in.tftpd

    [Service]
    ExecStart=/usr/sbin/in.tftpd -s /var/lib/tftpboot
    StandardInput=socket

    [Install]
    Also=tftp.socket

    Make the following change to the [Unit] section, so that the service requires the renamed socket:

    Requires=tftp-server.socket

    Make the following changes to the ExecStart line:

    ExecStart=/usr/sbin/in.tftpd -c -p -s /var/lib/tftpboot

    Here are what the options mean:

    • The -c option allows new files to be created.
    • The -p option is used to have no additional permissions checks performed above the normal system-provided access controls.
    • The -s option is recommended for security as well as compatibility with some boot ROMs which cannot be easily made to include a directory name in its request.

    The default upload/download location for transferring the files is /var/lib/tftpboot.

    Next, make the following changes to the [Install] section, so that the service can be enabled and pulls in the renamed socket:

    WantedBy=multi-user.target
    Also=tftp-server.socket

    Don’t forget to save your changes!

    Here is the completed /etc/systemd/system/tftp-server.service file:

    [Unit]
    Description=Tftp Server
    Requires=tftp-server.socket
    Documentation=man:in.tftpd

    [Service]
    ExecStart=/usr/sbin/in.tftpd -c -p -s /var/lib/tftpboot
    StandardInput=socket

    [Install]
    WantedBy=multi-user.target
    Also=tftp-server.socket

    Starting the TFTP server

    Reload the systemd daemon:

    systemctl daemon-reload

    Now start and enable the server:

    systemctl enable --now tftp-server

    To change the permissions of the TFTP server to allow upload and download functionality, use this command. Note TFTP is an inherently insecure protocol, so this may not be advised on a network you share with other people.

    chmod 777 /var/lib/tftpboot

    Configure your firewall to allow TFTP traffic:

    firewall-cmd --add-service=tftp --perm
    firewall-cmd --reload

    Client Configuration

    Install the TFTP client:

    dnf install tftp -y

    Run the tftp command to connect to the TFTP server. Here is an example that enables the verbose option:

    [client@thinclient:~ ]$ tftp
    tftp> verbose
    Verbose mode on.
    tftp> get server.logs
    getting from to server.logs [netascii]
    Received 7 bytes in 0.0 seconds [inf bits/sec]
    tftp> quit
    [client@thinclient:~ ]$ 

    Remember, TFTP does not have the ability to list file names. So you’ll need to know the file name before running the get command to download any files.

    Photo by Laika Notebooks on Unsplash.

    [F31] Take part in the test day dedicated to internationalization

    Posted by Charles-Antoine Couret on September 10, 2019 09:41 PM

    This week, starting September 9, is a week dedicated to a specific kind of testing: Fedora's internationalization. During the development cycle, the Quality Assurance team dedicates a few days to certain components or new features in order to uncover as many problems as possible in that area.

    It also provides a list of specific test cases to run. All you have to do is follow them, compare your result with the expected result, and report it.

    What does this testing involve?

    As with every Fedora release, updated tools often bring new strings to translate and new tools related to language support (Asian languages in particular).

    To encourage the use of Fedora in every country of the world, it is important to make sure that everything related to Fedora's internationalization is tested and works. Notably because part of it must already work on the installation Live CD (that is, without updates).

    Today's tests cover:

    • ibus working correctly for keyboard input handling;
    • Font customization;
    • Automatic installation of language packs for installed software, based on the system language;
    • Working default translations of applications;
    • The new language-pack dependencies that install the required fonts and input methods.

    Of course, given these criteria, unless you know a Chinese language not all of the tests can be carried out. But as French speakers, many of these issues concern us, and reporting problems matters: no other language community will identify integration problems with the French language for us.

    How can you take part?

    You can visit the test day page to list the available tests and report your results. The wiki page summarizes how the day is organized.

    If you need help while running the tests, feel free to drop by IRC for a hand on the #fedora-test-days and #fedora-fr channels (in English and French respectively) on the Freenode server.

    If you hit a bug, you need to report it on Bugzilla. If you don't know how, feel free to consult the corresponding documentation.

    Finally, even though a single day is dedicated to these tests, it is perfectly possible to run them a few days later! The results will still be broadly relevant.

    Towards a UX Strategy for GNOME (Part 2)

    Posted by Allan Day on September 10, 2019 03:43 PM

    This post is a part of a short series, in which I’m setting out what I think could be the beginnings of a UX strategy for GNOME. In this, the second post, I’m going to describe a potential GNOME UX strategy in high-level terms. These goals are a response to the research and analysis that was described in the previous post and, it is hoped, point the way forward for how GNOME can achieve new success in the desktop market.

    Strategic goals

    For me, the main goals of a GNOME UX strategy could be:

    1. Deliver quality

    If GNOME is going to succeed in today’s desktop market, UX quality has to be job #1.

    UX quality includes what the software looks like and how it is designed, but it also refers to how the software functions. Performance and bugs (or the lack of them) are both aspects of UX!

    More than anything else, people are looking for a desktop that Just Works: they want a solution that allows them to get their work done without getting in their way. This means having a desktop that is reliable, stable, which does what people want, and which is easy to use.

    People value solutions that Just Work. They’re also prepared to abandon them when they don’t Just Work.

    To its credit, the GNOME project has historically recognised the importance of Just Works, and it has delivered huge improvements in this area. However, there is still a lot of work to be done.

    My sense is that driving up quality is one of the key strategic challenges that the GNOME project needs to face up to; I’ll be returning to this topic!

    2. Factor in the cloud

    In my previous post, I wrote about how the cloud has reconfigured the landscape in which GNOME operates. Accordingly, it’s important for our UX strategy to account for the cloud. There are various ways we can do this:

    • Focus on those bits of the desktop that are used by all users, even if they mainly use a web browser. This includes all the parts of the core system, as well as the most essential desktop apps.
    • Enable and encourage native cloud applications (including Electron and Progressive Web Apps)
    • Add value with high-quality native apps.
    • Integrate with existing cloud services, when it is safe to do so.

    The last point might seem counter-intuitive, but it makes sense: in a world where the web is dominant, a fantastic set of native apps can be a powerful differentiator.

    At the same time, GNOME needs to be careful when it comes to directly competing with sophisticated web apps and services, and it needs to recognise that, nowadays, many apps aren’t worth doing if they don’t have a cloud/cross-device component.

    3. Grow the app ecosystem

    The primary purpose of a platform like GNOME is to run apps, so it stands to reason that the number and quality of the apps that are available for the platform is of critical importance.

    Recently, Flatpak has allowed the GNOME project to make great progress around application distribution, and this is already positively impacting app availability for GNOME. However, there is a lot of work still to be done, particularly around GNOME’s application development platform. This includes work for both designers and developers.

    4. Support modern hardware

    One of the things that my research revealed is that, for most users, their choice of desktop OS is thoroughly entwined with hardware purchasing choices, with hardware and software typically being seen as part of the same package. Attracting users to GNOME therefore requires that GNOME be available for, work well with, and be associated with high-quality hardware.

    A lot of hardware enablement work is done by distros, but a lot also happens in GNOME, including things like high-definition display support, touchscreen support, screen casting, and more. This is important work!

    Do less, prioritise

    Any UX strategy should address the question of prioritisation: it ought to be able to determine how resources can be directed in order to have maximum impact. This is particularly important for the GNOME project, because its resources are limited: the core community is fairly small, and there’s a lot of code to maintain.

    The idea of prioritisation has therefore both influenced the goals I’ve set out above, as well as how I’ve been trying to put them into practice.

    When thinking about prioritisation in the context of GNOME UX, there are various principles that we can follow, including:

    • User exposure, both in terms of the proportion of people that use a feature, and also the frequency with which they use it. Improvements to features that everyone uses all the time have a bigger impact than improvements to features that are only used occasionally by a subset of the user base.
    • User needs and desires: features that are viewed as being highly attractive by a lot of people are more impactful than those which are only interesting to a small subset.
    • Common underpinnings: we can prioritise by focusing on common subsystems and technical components. The key example here is something like GTK, where improvements can surface themselves in all the apps that use the toolkit.

    When we decide which design and development initiatives we want to focus on (either by working on them ourselves, or advertising them to potential contributors), principles like these, along with the high-level goals that I’ve described above, can be very helpful.

    I also believe that, in some cases, the GNOME project needs to have some hard conversations, and think about giving up some of its existing software. If quality is job #1, one obvious answer is to reduce the amount of software we care about, in order to increase the overall quality of everything else. This is particularly relevant for those parts of our software that don’t have great quality today.

    Of course, these kinds of conversations need to be handled delicately. Resources aren’t fungible in an upstream project like GNOME, and contributors can and should be free to work on what they want.

    What’s next

    In my previous post, I described the research and analysis that serves as inputs to the strategy I’m setting out. In this post I’ve translated that background into a high-level plan: four strategic goals, and an overarching principle of prioritisation.

    In the next post, I’m going to introduce a raft of design work which I think fits into the strategy that I’ve started to lay out. GNOME is lucky to have a great design team at the moment, which has been pumping out high-quality design work over the past year, so this is a great opportunity to showcase what we’ve been doing, but what I want to do is also show how it fits into context.

    Flock to Fedora ’19

    Posted by Fedora Community Blog on September 10, 2019 06:37 AM

    Attending a tech conference was not something I had experienced before, but I'm sure I'll keep doing it from now on. Flock '19 was an amazing one to start with; meeting a flock with the same interests always makes for a great time. I'll share some of the things I took away from Flock to Fedora '19.

    The community planned a tonne of talks for everyone to attend; unfortunately, it was impossible to attend all of them. These are the talks I decided to attend.

    State of Fedora

    This talk gave me insight into the current state of Fedora: what's coming up and what's going wrong.

    Facebook loves Fedora

    During this one, I got to learn how corporations look at an open-source operating system, what they look for, and how they implement it.

    Improving Packaging Experience with Automation

    This talk was about automating the build process on Koji. It turned out to be something of a debate about storage management and the way Koji works.

    Fedora CoreOS

    I'd heard about CoreOS before, but this talk introduced me to how CoreOS works and what it is used for.

    Fedora IoT

    Being interested in IoT, I was automatically drawn to this talk and got to know about Fedora's effort in this space.

    Fedora RISC-V

    In this talk, we discussed the state of Fedora running on the RISC-V architecture, and I got to know about the work that goes into porting a system to another architecture.

    Fedora Summer Coding 2019 Project Showcase and Meetup

    This was the last event where all the summer interns showcased their project.

    Other than the talks, I met a bunch of great developers and designers and a lot of people I really loved hanging out with. I really like the concept of lanyards and stickers to find out who's fine with talking to you, and the fact that you can simply go talk to anyone with a green sticker on their badge.

    It was definitely an awesome experience, I’d really love to attend another Flock conference.

    The post Flock to Fedora ’19 appeared first on Fedora Community Blog.

    Exciting few weeks in the SecureDrop land

    Posted by Kushal Das on September 10, 2019 04:52 AM

    Eric Trump tweet

    Last week there was an interesting tweet from Eric Trump, son of US President Donald Trump, in which he points out how Mr. David Fahrenthold, a journalist at the Washington Post, did some old-school journalism and made sure that every Trump Organization employee knows how to securely leak information or talk to a journalist via SecureDrop.

    I want to say thank you to him for this excellent advertisement for our work. Many people on Twitter cheered him for this tweet.

    julian and matt's tweet Parker's tweet Harlo's tweet

    If you don't know what SecureDrop is, it is an open-source whistleblower submission system that media organizations and NGOs can install to securely accept documents from anonymous sources. It was originally created by the late Aaron Swartz and is now managed by Freedom of the Press Foundation. It is mostly written in Python and uses a lot of Ansible. Jennifer Helsby, the lead developer of SecureDrop, and I took part in this week's Python podcast along with our host Tobias. You can listen to it to learn about many upcoming features and plans.

    If you are interested in contributing to the SecureDrop project, come over to our gitter channel and say hello.


    Last month, during DEF CON 27, there was a panel about DEF CON helping hackers anonymously submit bugs to the government; interestingly, the major suggestion in that panel was to use SecureDrop (hosted by DEF CON) so that researchers can safely submit vulnerabilities to the US government. Watch the full panel discussion to learn more.

    Inkscape 1.0 Beta

    Posted by Gwyn Ciesla on September 09, 2019 06:33 PM

    Fresh and hot in f32! Come test and enjoy!

    Gthree – ready to play

    Posted by Alexander Larsson on September 09, 2019 09:07 AM

    Today I made a new release of Gthree, version 0.2.0.

    Newly added in this release is support for Raycaster, which is important if you’re making interactive 3D applications. For example, it’s used if you want clicks on the window to pick a 3D object from the scene. See the interactive demo for an example of this.

    Also new is support for shadow maps. This allows objects between a light source and a target to cast shadows on the target. Here is an example from the demos:

    I've been looking over the list of features that we support, and in this release I think all the major things you might want to do in a 3D app are supported at least at a basic level.

    So, if you ever wanted to play around with 3D graphics, now would be a great time to do so. Maybe just build the code and study/tweak the code in the examples subdirectory. That will give you a decent introduction to what is possible.

    If you just want to play, I added a couple of new features to gnome-hexgl based on the new release. Check out how the track casts shadows on the buildings!

    Firefox 69 available in Fedora

    Posted by Fedora Magazine on September 09, 2019 08:00 AM

    When you install the Fedora Workstation, you’ll find the world-renowned Firefox browser included. The Mozilla Foundation underwrites work on Firefox, as well as other projects that promote an open, safe, and privacy respecting Internet. Firefox already features a fast browsing engine and numerous privacy features.

    A community of developers continues to improve and enhance Firefox. The latest version, Firefox 69, was released recently and you can get it for your stable Fedora system (30 and later). Read on for more details.

    New features in Firefox 69

    The newest version of Firefox includes Enhanced Tracking Protection (or ETP). When you use Firefox 69 with a new (or reset) settings profile, the browser makes it harder for sites to track your information or misuse your computer resources.

    For instance, less scrupulous websites use scripts that cause your system to do lots of intense calculations to produce cryptocurrency results, called cryptomining. Cryptomining happens without your knowledge or permission and is therefore a misuse of your system. The new standard setting in Firefox 69 prevents sites from this kind of abuse.

    Firefox 69 has additional settings to prevent sites from identifying or fingerprinting your browser for later use. These improvements give you additional protection from having your activities tracked online.

    Another common annoyance is videos that start in your browser without warning. Video playback also uses extra CPU power and you may not want this happening on your laptop without permission. Firefox already stops this from happening using the Block Autoplay feature. But Firefox 69 also lets you stop videos from playing even if they start without sound. This feature prevents unwanted sudden noise. It also solves more of the real problem — having your computer’s power used without permission.

    There are numerous other new features in the new release. Read more about them in the Firefox release notes.

    How to get the update

    Firefox 69 is available in the stable Fedora 30 and pre-release Fedora 31 repositories, as well as Rawhide. The update is provided by Fedora’s maintainers of the Firefox package. The maintainers also ensured an update to Mozilla’s Network Security Services (the nss package). We appreciate the hard work of the Mozilla project and Firefox community in providing this new release.

    If you’re using Fedora 30 or later, use the Software tool on Fedora Workstation, or run the following command on any Fedora system:

    $ sudo dnf --refresh upgrade firefox

    If you’re on Fedora 29, help test the update for that release so it can become stable and easily available for all users.

    Firefox may prompt you to upgrade your profile to use the new settings. To take advantage of new features, you should do this.

    Episode 160 - Disclosing security issues is insanely complicated: Part 2

    Posted by Open Source Security Podcast on September 09, 2019 12:06 AM
    Josh and Kurt talk about disclosing security flaws in open source. This is part two of a discussion around how to disclose security issues. This episode focuses on some expectations and behaviors for open source projects as well as researchers trying to disclose a problem to a project.

    <iframe allowfullscreen="" height="90" mozallowfullscreen="" msallowfullscreen="" oallowfullscreen="" scrolling="no" src="https://html5-player.libsyn.com/embed/episode/id/11171180/height/90/theme/custom/thumbnail/yes/direction/backward/render-playlist/no/custom-color/6e6a6a/" style="border: none;" webkitallowfullscreen="" width="100%"></iframe>

    Show Notes

      Monitoring OpenWrt with collectd, InfluxDB and Grafana

      Posted by Christopher Smart on September 08, 2019 11:16 PM

      In my previous blog post I showed how to set up InfluxDB and Grafana (and Prometheus). This is how I configured my OpenWrt devices to provide monitoring and graphing of my network.

      OpenWrt includes support for collectd (and even graphing inside Luci web interface) so we can leverage this and send our data across the network to the monitoring host.

      <figure class="wp-block-image"><figcaption>OpenWrt stats in Grafana</figcaption></figure>

      Install and configure packages on OpenWrt

      Log into your OpenWrt devices and install the required packages.

      opkg update
      opkg install luci-app-statistics collectd collectd-mod-cpu \
      collectd-mod-interface collectd-mod-iwinfo \
      collectd-mod-load collectd-mod-memory collectd-mod-network collectd-mod-uptime
      /etc/init.d/luci_statistics enable
      /etc/init.d/collectd enable

      Next, log into your device’s OpenWrt web interface and you should see a new Statistics menu at the top. Hover over this and click on Setup so that we can configure collectd.

      In the Hostname field, enter the device's hostname (or whatever name you want).

      <figure class="wp-block-image is-resized"></figure>

      Click on General plugins and make sure that Processor, System Load, Memory and Uptime are all enabled. Hit Save & Apply.

      <figure class="wp-block-image is-resized"></figure>

      Under Network plugins, ensure Interfaces is enabled and select the interfaces you want to monitor (lan, wan, wifi, etc).

      <figure class="wp-block-image is-resized"></figure>

      Still under Network plugins, also ensure Wireless is enabled but don’t select any interfaces (it will work it out). Hit Save & Apply (I don’t bother with the Ping plugin).

      <figure class="wp-block-image is-resized"></figure>

      Click on Output plugins and ensure Network is enabled so that we can stream metrics to InfluxDB. All you need to do is add an entry under server interfaces that points to the IP address of your monitor server (which is running InfluxDB with the collectd listener enabled). Hit Save & Apply.

      <figure class="wp-block-image is-resized"></figure>

      Finally, you can leave the RRDTool plugin as it is, or disable it if you want to (it will stop showing graphs in Luci if you do, but we're using Grafana anyway and you'll have less load on your router). If you do enable it, make sure it is writing data to tmpfs to avoid wearing out your flash (this is the default configuration).

      That’s your OpenWrt configuration done!
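      Behind the scenes, the Output plugin settings you just saved become a standard collectd network plugin stanza in the generated collectd configuration, roughly like this (a sketch; the IP address is a stand-in for your monitor host):

```
LoadPlugin network
<Plugin network>
	Server "192.168.1.10" "25826"
</Plugin>
```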

      Loading a dashboard in Grafana

      Still in your web browser, log into Grafana on your monitor node (port 3000 by default).

      <figure class="wp-block-image is-resized"></figure>

      Import a new dashboard.

      <figure class="wp-block-image is-resized"></figure>

      We will use an existing dashboard by contributor vooon341, so simply type in the number 3484 and hit Load.

      <figure class="wp-block-image is-resized"></figure>

      This will download the dashboard from Grafana and prompt for settings. Enter whatever Name you like, select InfluxDB as your data source (configured in the previous blog post), then hit Import.

      <figure class="wp-block-image is-resized"></figure>

      Grafana will now go and query InfluxDB and present your dashboard with all of your OpenWrt devices.

      <figure class="wp-block-image is-resized"></figure>

      OpenWrt also supports a LUA Prometheus node exporter, so if you wanted to add those as well, you could. However, I think collectd does a reasonable job.

      github notifications

      Posted by David Cantrell on September 08, 2019 09:07 PM
      I have noticed that a number of projects I have on github have stopped notifying me.  I have not yet found a pattern or missed setting, but it is possible I am looking in the wrong place.  I do get notifications for some projects, just not all.  And this goes across projects that are under my personal account as well as projects that I am a member of but exist elsewhere.

      Has anyone seen this with github?  Any tips on receiving consistent notifications?  I want to receive notifications for Issues (both new issues and comments posted), Pull Requests, commits, tags, and releases.

      WebKit Vulnerabilities Facilitate Human Rights Abuses

      Posted by Michael Catanzaro on September 08, 2019 05:32 PM

      Chinese state actors have recently abused vulnerabilities in the JavaScriptCore component of WebKit to hack the personal computing devices of Uighur Muslims in the Xinjiang region of China. Mass digital surveillance is a key component of China’s ongoing brutal human rights crackdown in the region.

      This has resulted in a public relations drama that is largely a distraction to the issue at hand. Whatever big-company PR departments have to say on the matter, I have no doubt that the developers working on WebKit recognize the severity of this incident and are grateful to Project Zero, which reported these vulnerabilities and has previously provided numerous other high-quality private vulnerability reports. (Many other organizations deserve credit for similar reports, especially Trend Micro’s Zero Day Initiative.)

      WebKit as a project will need to reassess certain software development practices that may have facilitated the abuse of these vulnerabilities. The practice of committing security fixes to open source long in advance of corresponding Safari releases may need to be reconsidered.

      Sadly, Uighurs should assume their personal computing devices have been compromised by state-sponsored attackers, and that their private communications are not private. Even if not compromised in this particular incident, similar successful attacks are overwhelmingly likely in the future.

      Setting up a monitoring host with Prometheus, InfluxDB and Grafana

      Posted by Christopher Smart on September 08, 2019 12:18 PM

      Prometheus and InfluxDB are powerful time series database monitoring solutions, both of which are natively supported with graphing tool, Grafana.

      Setting up these simple but powerful open source tools gives you a great base for monitoring and visualising your systems. We can use agents like node-exporter to publish metrics on remote hosts which Prometheus will scrape, and other tools like collectd which can send metrics to InfluxDB’s collectd listener (more on that later!).

      <figure class="wp-block-image is-resized"><figcaption>Prometheus’ node exporter metrics in Grafana</figcaption></figure>

      I’m using CentOS 7 on a virtual machine, but this should be similar to other systems.

      Install Prometheus

      Prometheus is the trickiest to install, as there is no Yum repo available. You can either download the pre-compiled binary or run it in a container; I'll do the latter.

      Install Docker and pull the image (I’ll use Quay instead of Dockerhub).

      sudo yum install docker
      sudo systemctl start docker
      sudo systemctl enable docker
      sudo docker pull quay.io/prometheus/prometheus

      Create a basic configuration file for Prometheus which we will pass into the container. This is also where we configure clients for Prometheus to pull data from, so let’s add a localhost target for the monitor node itself.

      cat << EOF | sudo tee /etc/prometheus.yml
      global:
        scrape_interval:     15s
        evaluation_interval: 15s

      scrape_configs:
        - job_name: 'prometheus'
          static_configs:
          - targets: ['localhost:9090']
        - job_name: 'node'
          static_configs:
          - targets:
            - localhost:9100
      EOF

      Now we can start a persistent container. We’ll pass in the config file we created earlier but also a dedicated volume so that the database is persistent across updates. We use host networking so that Prometheus can talk to localhost to monitor itself (not required if you want to configure Prometheus to talk to the host’s external IP instead of localhost).

      sudo docker run -dit \
      --network host \
      --name prometheus \
      --restart always \
      -p 9090:9090 \
      --volume prometheus:/prometheus \
      -v /etc/prometheus.yml:/etc/prometheus/prometheus.yml:Z \
      quay.io/prometheus/prometheus \
      --config.file=/etc/prometheus/prometheus.yml \
      --web.enable-lifecycle

      Check that the container is running properly; the log should say that it is ready to receive web requests. You should also be able to browse to the endpoint on port 9090 (you can run queries here, but we'll use Grafana).

      sudo docker ps
      sudo docker logs prometheus

      Updating Prometheus config

      Updating and reloading the config is easy: just edit /etc/prometheus.yml and send a message to Prometheus to reload (this was enabled by the web.enable-lifecycle option above). This is useful when adding new nodes to scrape metrics from.

      curl -s -XPOST localhost:9090/-/reload

      In the container log (as above) you should see that it has reloaded the config.

      Installing Prometheus node exporter

      You’ll notice in the Prometheus configuration above we have a job called node and a target for localhost:9100. This is a simple way to start monitoring the monitor node itself! Installing the node exporter in a container is not recommended, so we’ll use the Copr repo and install with Yum.

      sudo curl -Lo /etc/yum.repos.d/_copr_ibotty-prometheus-exporters.repo \
      sudo yum install node_exporter
      sudo systemctl start node_exporter
      sudo systemctl enable node_exporter

      It should be listening on port 9100 and Prometheus should start getting metrics from http://localhost:9100/metrics automatically (we’ll see them later with Grafana).
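      Since the exposition format is plain text with one sample per line, it's easy to inspect or post-process by hand. A minimal sketch (the sample lines below are made up; real node exporter output has many more metric families, plus labels):

```python
# Minimal sketch of parsing the Prometheus text exposition format.
# Only bare "name value" samples are handled here; labels, histograms
# and summaries are left out for brevity. SAMPLE is made-up output.
SAMPLE = """\
# HELP node_load1 1m load average.
# TYPE node_load1 gauge
node_load1 0.52
node_memory_MemFree_bytes 1.073741824e+09
"""

def parse_metrics(text):
    metrics = {}
    for line in text.splitlines():
        if not line or line.startswith("#"):
            continue  # skip blank lines and HELP/TYPE metadata
        name, _, value = line.partition(" ")
        metrics[name] = float(value)
    return metrics

print(parse_metrics(SAMPLE)["node_load1"])  # 0.52
```

      With the exporter running, `curl -s http://localhost:9100/metrics` produces text in exactly this shape.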

      Install InfluxDB

      Influxdata provides a yum repository so installation is easy!

      cat << \EOF | sudo tee /etc/yum.repos.d/influxdb.repo
      sudo yum install influxdb

      The defaults are fine, other than enabling collectd support so that other clients can send metrics to InfluxDB. I’ll show you how to use this in another blog post soon.

      sudo sed -i 's/^\[\[collectd\]\]/#\[\[collectd\]\]/' /etc/influxdb/influxdb.conf
      cat << EOF | sudo tee -a /etc/influxdb/influxdb.conf
      [[collectd]]
        enabled = true
        bind-address = ":25826"
        database = "collectd"
        retention-policy = ""
        typesdb = "/usr/local/share/collectd"
        security-level = "none"
      EOF

      This should open a number of ports, including InfluxDB itself on TCP port 8086 and collectd receiver on UDP port 25826.

      sudo ss -ltunp |egrep "8086|25826"
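      For a sense of what that collectd listener accepts on UDP 25826, here is a rough sketch of collectd's binary network protocol, just enough to hand-craft a single gauge sample. The part type codes follow the collectd protocol documentation; the host/plugin names and the commented send are illustrative assumptions:

```python
import struct
import time

# Part type codes from the collectd binary network protocol.
TYPE_HOST, TYPE_TIME, TYPE_PLUGIN, TYPE_TYPE, TYPE_VALUES = 0, 1, 2, 4, 6

def string_part(ptype, s):
    payload = s.encode() + b"\x00"            # strings are NUL-terminated
    return struct.pack("!HH", ptype, 4 + len(payload)) + payload

def number_part(ptype, n):
    return struct.pack("!HHQ", ptype, 12, n)  # big-endian uint64

def gauge_part(value):
    # values part: 4-byte header, value count, one type byte per value
    # (1 = GAUGE), then the values; gauges are little-endian doubles.
    return struct.pack("!HHH", TYPE_VALUES, 15, 1) + b"\x01" + struct.pack("<d", value)

def gauge_packet(host, plugin, vtype, value):
    return (string_part(TYPE_HOST, host)
            + number_part(TYPE_TIME, int(time.time()))
            + string_part(TYPE_PLUGIN, plugin)
            + string_part(TYPE_TYPE, vtype)
            + gauge_part(value))

# Fire-and-forget over UDP to the listener configured above, e.g.:
# import socket
# socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(
#     gauge_packet("myhost", "custom", "gauge", 42.0), ("127.0.0.1", 25826))
```

      In practice you'd let collectd itself do the sending (as we'll see with OpenWrt), but a hand-rolled packet is handy for smoke-testing the listener.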

      Create InfluxDB collectd database

      Finally, we need to connect to InfluxDB and create the collectd database. Just run the influx command.

      influx

      And at the prompt, create the database and exit.

      CREATE DATABASE collectd

      Install Grafana

      Grafana has a Yum repository so it’s also pretty trivial to install.

      cat << EOF | sudo tee /etc/yum.repos.d/grafana.repo
      sudo yum install grafana

      Grafana pretty much works out of the box and can be configured via the web interface, so simply start and enable it. The server listens on port 3000 and the default username is admin with password admin.

      sudo systemctl start grafana-server
      sudo systemctl enable grafana-server
      sudo ss -ltnp |grep 3000

      Now you’re ready to log into Grafana!

      Configuring Grafana

      Browse to the IP of your monitoring host on port 3000 and log into Grafana.

      <figure class="wp-block-image is-resized"></figure>

      Now we can add our two data sources. First, Prometheus, pointing to localhost on port 9090…

      <figure class="wp-block-image is-resized"></figure>

      …and then InfluxDB, pointing to localhost on port 8086 and to the collectd database.

      <figure class="wp-block-image is-resized"></figure>
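      If you prefer configuration files to clicking through the UI, Grafana can also provision the same two data sources at startup from a YAML file (a sketch; the file path and names here are assumptions, not something from this setup):

```yaml
# /etc/grafana/provisioning/datasources/monitoring.yaml (hypothetical path)
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://localhost:9090
  - name: InfluxDB
    type: influxdb
    access: proxy
    url: http://localhost:8086
    database: collectd
```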

      Adding a Grafana dashboard

      Make sure both data sources test OK, and we're well on our way. Next we just need to create some dashboards, so let's get a dashboard to show node exporter metrics, and we'll hopefully at least see the monitoring host itself.

      Go to Dashboards and hit import.

      <figure class="wp-block-image"></figure>

      Type the number 1860 in the dashboard field and hit load.

      <figure class="wp-block-image is-resized"></figure>

      This should automatically download and load the dash, all you need to do is select your Prometheus data source from the Prometheus drop down and hit Import!

      <figure class="wp-block-image is-resized"></figure>

      Next you should see the dashboard with metrics from your monitor node.

      <figure class="wp-block-image is-resized"></figure>

      So there you go, you’re on your way to monitoring all the things! For anything that supports collectd, you can forward metrics to UDP port 25826 on your monitor node. More on that later…

      Fedora 30 : A general purpose 3D CAD modeler.

      Posted by mythcat on September 07, 2019 07:33 PM
      I don't normally use computer-aided design (CAD) solutions, but today I tested a good one on Fedora 30.
      The FreeCAD application is an open-source option for product design.
      It is available on Linux, Windows, and Mac OS X.
      The full list of features can be found on this wiki page.
      [root@desk mythcat]# dnf search freecad
      Last metadata expiration check: 0:14:20 ago on Sb 07 sep 2019 21:54:39 +0300.
      ======================== Name Exactly Matched: freecad =========================
      freecad.x86_64 : A general purpose 3D CAD modeler
      ======================= Name & Summary Matched: freecad ========================
      freecad-data.noarch : Data files for FreeCAD
      [root@desk mythcat]# dnf install freecad.x86_64
      Last metadata expiration check: 0:19:37 ago on Sb 07 sep 2019 21:54:39 +0300.
      Dependencies resolved.
      Package Arch Version Repo Size
      freecad x86_64 1:0.18.3-1.fc30 updates 38 M

      [root@desk mythcat]# exit
      [mythcat@desk ~]$ FreeCAD
      FreeCAD 0.18, Libs: 0.18RUnknown
      © Juergen Riegel, Werner Mayer, Yorik van Havre 2001-2019
      (FreeCAD's ASCII-art banner prints here)
      It seems to work very well.

      Avocent PM webinterface issues

      Posted by Andreas "ixs" Thienemann on September 07, 2019 05:27 PM

      At bawue.net we are using several Avocent PM 3000 power distribution units to connect our servers.
      A nice feature of these PDUs is that they work great with our existing Cyclades console servers.

      In addition to the serial console and the SSH interface, these devices also offer a web interface. This interface never worked with Chrome or Chromium, where it only shows a blank page. It did, however, work with Firefox, or so I thought at least.
      I recently needed to verify something and found out that with the latest Firefox, the web interface on these devices is now broken as well. Empty page, same as with Chrome.

      As there is no firmware upgrade, I tried figuring out what is going on.

      It turns out the web interface was written using the JavaScript document.load() function to fetch content from the device. Unfortunately, this function was never standardized, was never supported in Chrome or Safari, and has by now been removed from Firefox as well.

      But thanks to Greasemonkey or Tampermonkey it is possible to make the web interface work again. We just need to provide a document.load() function that uses AJAX/XHR Requests to load data from the device and all is good.
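      The shim boils down to something like the sketch below (function names are mine, not from the actual gist). The removed document.load() returned a parsed XML document, so we rebuild that on top of XHR; this version loads synchronously for simplicity, although the original API could also work asynchronously:

```javascript
// Build a document.load()-style helper from an XMLHttpRequest-like
// constructor. In a userscript you would install the result on document.
function makeDocumentLoad(XHR) {
  return function load(url) {
    const xhr = new XHR();
    xhr.open("GET", url, false); // synchronous for simplicity
    xhr.send(null);
    return xhr.responseXML;      // parsed XML document, as callers expect
  };
}

// Installed in the userscript as:
// document.load = makeDocumentLoad(XMLHttpRequest);
```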

      Such a userscript can be found on my public github gist.

      New Computer for 2020, 2021...

      Posted by Remi Collet on September 07, 2019 02:27 PM

      My new computer is operational :

      • Corsair Graphite series 760T box
      • Gigabyte Z370 AORUS Ultra Gaming motherboard
      • Intel(R) Core(TM) i9-9900K CPU @ 3.60GHz processor
      • WaterCooling
      • Memory: 2x16GiB

      Retrieved from my previous computer:

      • SSD disks, 180GB and 480GB
      • SATA disks 320GB, 500GB and 4000GB

      First tests: boot time < 5", and a PHP 7.4 build takes 0'55" instead of 2'11". Nice.

      I plan to extend it quickly:

      • 32GiB memory

      Thanks to all who want to participate; see the Merci/Thanks page.

      Of course, my previous machine, from 2013, will be re-used.

      How to design a status icon library in 2019

      Posted by Patrick Griffis on September 07, 2019 04:00 AM

      So the year is 2019 and people still use system trays on their desktops. I'm not going to get into whether that is good or bad, but let's discuss an implementation that uses them sanely, and what existing ones get wrong. I hope this is somewhat informative for developers using or implementing status icons.

      Let's start with some high-level goals:

      • Don’t use X11

        I'll start with this because it's simple and everybody knows it by now. If you want to show a tray icon you can't use X11, because not everybody uses X11. So you need a different solution, and on Linux that is a DBus service that applications talk to.

      • Safely handle and expose if the tray works

        There are always going to be situations where the tray will not work, such as using a desktop that doesn't support it, or bugs like the service crashing and going away. This should always be reported to the application as soon as possible, as this avoids issues like an application having a hide-window feature but no tray from which to show the window again.

      • Be designed for a sandboxed future

        We are now at a point where desktop software is getting sandboxed so that means certain actions are limited such as reading/writing files on the host, talking to or owning arbitrary DBus names, having a PID that means anything to the host, etc. On top of that applications should expect a permission system where they are denied using a tray.

      • Recommended features

        This is more opinionated but I’d say you need to support:

        • Showing an icon and changing it (with scaling ideally)
        • Reacting to clicks and maybe scrolling
        • Exporting a menu to be shown

      Ok, I think that covers the basics; let's look at the real-world solutions:

      - GtkStatusIcon (GTK)

      • ❌ Uses X11
      • ✔️ It does expose if the tray is “embedded” or not (almost nobody listens to this).
      • ❌ Sandbox friendly (its X11)
      • ✔️ Features

      Nobody should use this API in new code if you want to target anything other than X11, but this is the classic API everybody has lived with for years, and it got the job done. Since it no longer exists in GTK4, due to not being portable, a replacement had to come.

      - AppIndicator (Canonical)

      • ✔️ Uses DBus (but falls back to X11)
      • ❌ Failure not exposed by default
      • ❓ It is kinda sandbox friendly
      • ✔️ Features

      While nobody should be using the DBus service anymore, many use the libappindicator library, which behind the scenes uses StatusNotifier these days. It has a complete feature set for the most part, though menu handling is overly complex.

      There are quite a few problems with this implementation, however. It is designed to fail silently by default: it falls back to X11 automatically, and unless you override the fallback and handle XEmbed failure yourself (maybe literally nobody does), you will not know it did. It does tell you if the DBus portion is connected, but as long as it falls back you don't know what is going on.

      It isn't quite sandbox friendly because it does require talking to an insecure service and it allows referencing icons by path. However, it doesn't require bus name ownership, which is a positive.

      Lastly, the library has been entirely unmaintained for many years and libdbusmenu is just disgusting, so it's not something I'd recommend in new code.

      - StatusNotifier (KDE)

      • ✔️ Uses DBus
      • ✔️ You can detect failure if you try
      • ❌ Not sandbox friendly
      • ✔️ Features

      This is the de-facto standard at the moment that many people are using now. It has its positives and its negatives.

      The biggest problem with this API is that it is clearly not designed with a sandbox in mind. Quoting the spec:

      Each application can register an arbitrary number of Status Notifier Items by registering on the session bus the service org.kde.StatusNotifierItem-PID-ID, where PID is the process id of the application and ID is an arbitrary numeric unique identifier between different instances registered by the same application.

      So this is bad on a few levels. For one, it requires name ownership in the KDE namespace, which I personally do not understand. Flatpak blocks all ownership except for your app-id. I'm not even sure a well-known name is required at all.

      Secondly, it requires putting your PID in the name, which is a no-go with PID namespaces on Linux, as processes will almost always see the same PID values inside the sandbox.
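      To make that concrete, here's a tiny sketch (the naming follows the spec quote above; the instance id and PID values are illustrative) of how two sandboxed apps end up computing the same "unique" bus name:

```python
# Per the spec, the bus name is org.kde.StatusNotifierItem-PID-ID.
def sni_bus_name(pid, instance_id=1):
    return f"org.kde.StatusNotifierItem-{pid}-{instance_id}"

# On the host, PIDs differ. Inside two separate PID namespaces (say, two
# Flatpak sandboxes), each app sees its own small PID, so both can easily
# compute the identical name and the second registration collides:
app_a = sni_bus_name(2)  # app A, PID as seen inside its own sandbox
app_b = sni_bus_name(2)  # app B, different sandbox, same PID value
```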

      Lastly, it has the same issues AppIndicator exposed, such as talking to an insecure service, referencing icons by path, etc. It also reuses the DBusMenu spec for exporting menus, which is still completely awful and unmaintained.

      I’d say it is the best solution we have but it needs improving.

      - StatusIcon (Linux Mint)

      • ✔️ Uses DBus (but falls back to X11)
      • ❌ Failure not exposed
      • ❌ Not sandbox friendly
      • ❌ Features

      So last comes Mint with a new interface which repeats the issues of the past.

      Failure is not exposed at all so applications have no idea if it works.

      It has all of the sandbox problems of StatusNotifier nearly copy paste. It requires name ownership, it uses your PID in said ownership, etc.

      In the end it does not expose as many features as the previous solutions, as you can only set an icon, label, and tooltip. I understand why to a degree for menus, because libdbusmenu is seriously evil, but they could have taken a more minimal approach and reused GMenu from GLib, which handles exporting a menu over DBus perfectly well.

      This announcement is clearly why I am writing this, as I hope they will consider some of the sandboxing cases and better expose whether a tray is working in the future. I just do not want to see past mistakes repeated, or adoption to happen before they are solved.

      P.S. Don’t use the org.x namespace and don’t do blocking IO.

      - What I want to see (or maybe create)

      • Supports features mentioned above
      • Backed by DBus service
        • This must be a robust service, validating the sent icon data in a sandbox
          • All icons must be sent as data or icon-names, no paths allowed
        • Not require well-known name ownership outside of app-id namespace
          • Uniqueness is up to the application already
        • Support more minimal menus than DBusMenu (via GMenu IMO, but anything similar)
        • Be exposed by xdg-desktop-portal which handles permissions
      • Library that exposes if the tray is working or not
        • Also, don’t fall back to XEmbed, at least not silently
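
As a purely hypothetical illustration of what a portal-backed interface along these lines could look like (the interface name, methods, and property below are all invented for this sketch; no such portal exists today):

```xml
<!-- Hypothetical D-Bus introspection sketch; every name is invented. -->
<node>
  <interface name="org.freedesktop.portal.StatusIcon">
    <!-- Icons travel as a themed icon name or raw data bytes;
         filesystem paths are deliberately not representable. -->
    <method name="SetIcon">
      <arg type="s" name="icon_name" direction="in"/>
      <arg type="ay" name="icon_data" direction="in"/>
    </method>
    <method name="SetTooltip">
      <arg type="s" name="text" direction="in"/>
    </method>
    <!-- Lets a client library report honestly whether a tray exists. -->
    <property name="IsSupported" type="b" access="read"/>
  </interface>
</node>
```

The points the wishlist asks for map directly: icons as data only, no well-known name outside the app-id (the portal owns the bus name), and an explicit `IsSupported` property so failure is exposed rather than silent.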

      FPgM report: 2019-36

      Posted by Fedora Community Blog on September 06, 2019 08:41 PM
      Fedora Program Manager weekly report on Fedora Project development and progress

      Here’s your report of what has happened in Fedora Program Management this week. The Beta freeze is underway and the Go/No-Go meeting is Thursday!

      I have weekly office hours in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else.


      Help wanted

      Upcoming meetings & test days

      Fedora 31


      • 17 September — Beta release preferred target
      • 8 October — Final freeze begins
      • 22 October — Final release preferred target



      Blocker bugs

      | Bug ID  | Blocker status  | Component           | Bug status |
      | ------- | --------------- | ------------------- | ---------- |
      | 1734179 | Accepted (beta) | dracut-modules-olpc | NEW        |
      | 1744266 | Accepted (beta) | desktop-backgrounds | ASSIGNED   |
      | 1745554 | Accepted (beta) | gdm                 | NEW        |
      | 1748003 | Accepted (beta) | webkit2gtk3         | VERIFIED   |
      | 1749107 | Proposed (beta) | firefox             | MODIFIED   |
      | 1749086 | Proposed (beta) | kde                 | NEW        |
      | 1743585 | Proposed (beta) | NetworkManager      | ASSIGNED   |
      | 1734184 | Proposed (beta) | python-jmespath     | ON_QA      |
      | 1728240 | Proposed (beta) | sddm                | NEW        |

      Fedora 32



      The post FPgM report: 2019-36 appeared first on Fedora Community Blog.