Fedora People

PSA: System update fails when trying to remove rtkit-0.11-19.fc29

Posted by Kamil Páral on October 15, 2018 11:20 AM
Recently a bug in rtkit packaging has been fixed, but the update will fail on all Fedora 29 pre-release installations that have rtkit installed (Workstation has it for sure). The details and the workaround are described here:

 

Announcing Linux Autumn 2018

Posted by Rafał Lużyński on October 15, 2018 11:07 AM

Autumn

If you have ever wondered, like I have, whether there will be an autumn (the Linux Autumn) this year then the answer is: yes.

Linux Autumn is an annual meeting of Free Software and Linux enthusiasts from Poland, organized since 2003, which means this year marks its 16th edition. This year it will be held in Ustroń, in southern Poland, from 9 to 11 November. The town is the same as last year, but the venue is a different hotel.

As the venue is located near the Czech and Slovak borders, we would like to invite more people, both speakers and attendees, from other countries. We are aware of the strong presence of Fedora contributors in Brno and other nearby cities just across the border.

This conference has always been mostly Polish (in terms of language), but there has always been at least one foreign speaker giving a talk in English. It has always been a chicken-and-egg problem: there are not many English talks because there are not many foreign attendees, and there are not many foreign attendees because there are not many English talks. I think we will all be happy to change this. We already have one foreign speaker confirmed; others are in progress.

Registration is currently open, and the organizers are still accepting talk proposals; the CfP deadline has been extended to 19 October. Please hurry with your talk proposal!

If you don’t know what Linux Autumn is about, please see my articles about the event in 2017 and 2016, or see the organizers’ website.

Fedora/RISC-V now mirrored as a Fedora “alternative” architecture

Posted by Richard W.M. Jones on October 15, 2018 09:48 AM

Fedora/RISC-V packages are now available at https://dl.fedoraproject.org/pub/alt/risc-v/repo/fedora/29/latest/. These packages now get mirrored further by the Fedora mirror system, e.g. to https://mirror.math.princeton.edu/pub/alt/risc-v/repo/fedora/29/latest/

If you grab the latest nightly Fedora builds you can get the mirrors by editing the /etc/yum.repos.d/*.repo file.
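
For example, assuming the repo file's baseurl still points at dl.fedoraproject.org, a one-line edit like the following switches it to the Princeton mirror (just a sketch; check the file contents before and after, since the exact repo file name may differ):

$ sudo sed -i 's|dl.fedoraproject.org/pub/alt|mirror.math.princeton.edu/pub/alt|' /etc/yum.repos.d/*.repo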

Also we got some additional help so we now have loads more build hosts! These were provided by Facebook with hosting by Oregon State University Open Source Lab (see cfarm), so thanks to them.

Thanks to David Abdurachmanov and Laurent Guerby for doing all the work (I did nothing).

Running Linux containers as a non-root with Podman

Posted by Fedora Magazine on October 15, 2018 08:00 AM

Linux containers are processes with certain isolation features provided by a Linux kernel — including filesystem, process, and network isolation. Containers help with portability — applications can be distributed in container images along with their dependencies, and run on virtually any Linux system with a container runtime.

Although container technologies have existed for a long time, Linux containers were widely popularized by Docker. The word “Docker” can refer to several different things, including the container technology and tooling, the community around it, or the Docker Inc. company. However, in this article, I’ll be using it to refer to the technology and the tooling that manages Linux containers.

What is Docker

Docker is a daemon that runs on your system as root, and manages running containers by leveraging features of the Linux kernel. Apart from running containers, it also makes it easy to manage container images — interacting with container registries, storing images, managing container versions, etc. It basically supports all the operations you need to run individual containers.

But even though Docker is a very handy tool for managing Linux containers, it has two drawbacks: it is a daemon that needs to run on your system, and it needs to run with root privileges, which might have certain security implications. Both of those, however, are addressed by Podman.

Introducing Podman

Podman is a container runtime providing very similar features to Docker. And as already hinted, it doesn’t require any daemon to run on your system, and it can also run without root privileges. So let’s have a look at some examples of using Podman to run Linux containers.

Running containers with Podman

One of the simplest examples could be running a Fedora container, printing “Hello world!” in the command line:

$ podman run --rm -it fedora:28 echo "Hello world!"

Building an image using the common Dockerfile works the same way as it does with Docker:

$ cat Dockerfile
FROM fedora:28
RUN dnf -y install cowsay

$ podman build . -t hello-world
... output omitted ...

$ podman run --rm -it hello-world cowsay "Hello!"

To build containers, Podman calls another tool called Buildah in the background. You can read a recent post about building container images with Buildah — not just using the typical Dockerfile.
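
If you want to try Buildah directly, the same build can be done with its bud ("build-using-dockerfile") subcommand, roughly like this:

$ buildah bud -t hello-world .

The resulting image lands in the same local storage that Podman uses, so podman run can start it afterwards.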

Apart from building and running containers, Podman can also interact with container registries. To log in to a container registry, for example the widely used Docker Hub, run:

$ podman login docker.io

To push the image I just built, I need to tag it so it refers to the specific container registry and my personal namespace, and then simply push it.

$ podman tag hello-world docker.io/asamalik/hello-world
$ podman push docker.io/asamalik/hello-world

By the way, have you noticed how I run everything as a non-root user? Also, there is no big fat daemon running on my system!

Installing Podman

Podman is available by default on Silverblue — a new generation of Linux Workstation for container-based workflows. To install it on any Fedora release, simply run:

$ sudo dnf install podman
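
One quick way to convince yourself the containers really run rootless is to compare user IDs inside and outside the container (a small sanity check using the same fedora:28 image as above):

$ id -u
$ podman run --rm fedora:28 id -u

The first command prints your regular UID, while the second prints 0: the process is root inside its user namespace, yet it still runs under your unprivileged account on the host.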

Episode 118 - Cloudflare's IPFS and onion service

Posted by Open Source Security Podcast on October 15, 2018 01:39 AM
Josh and Kurt talk about Cloudflare's new IPFS and Onion services. One brings distributed blockchain files to the masses, the other lets you host your site on tor easily.


Show Notes

Cleaning up systemd journal logs on Fedora

Posted by Josef Strzibny on October 14, 2018 03:13 PM

systemd journal logs take a lot of space after a while. Let’s wipe them out!

First, you might be interested in how much space the journal actually takes:

# journalctl --disk-usage
Archived and active journals take up 72.0M in the file system.

Now you know whether that’s too much or not. In case it is, use the --vacuum-size option to limit the size of the logs (everything above that limit will be deleted). Here I am running the vacuum with a 10MB limit:

# journalctl --vacuum-size=10M
Vacuuming done, freed 0B of archived journals from /var/log/journal/d0c1c31ca63b4654a92792c004b69295.

As you can see, no space was freed up in my case. Why is that? Reading the man page reveals that running --vacuum-size= has only an indirect effect on the output shown by --disk-usage, as the latter includes active journal files, while the vacuuming operation only operates on archived journal files.

We also learn about the --vacuum-time option, which limits the vacuum by time (it can be combined with the previous option):

# journalctl --vacuum-size=10M --vacuum-time=2d
Deleted archived journal /var/log/journal/d0c1c31ca63b4654a92792c004b69295/user-1000@0005746cd1587966-78bd9d00691c4f53.journal~ (8.0M).
Vacuuming done, freed 8.0M of archived journals from /var/log/journal/d0c1c31ca63b4654a92792c004b69295.

Above I am deleting entries older than 2 days.

But what about those active files you ask? We need to rotate the log first with journalctl --rotate:

# journalctl --rotate
# journalctl --vacuum-size=10M --vacuum-time=1s

Using --rotate in combination with 1s (retaining only logs from the last second) brings the disk usage down almost to zero; however, it probably won’t be exactly zero. If we want to be sure all log files are gone, we need to remove them manually from /var/log/journal. They always end with .journal. (I do not recommend removing them this way, but it’s the only way --disk-usage shows exactly 0B…)

After the cleanup we might want to prevent excessive log sizes in the future. For that we can set the SystemMaxUse option in the /etc/systemd/journald.conf configuration file.

# cat /etc/systemd/journald.conf
...
SystemMaxUse=50M
...

Setting SystemMaxUse to 50M limits the journal to a maximum of 50MB on disk.

After editing the journald.conf file, restart the systemd-journald service:

# systemctl restart systemd-journald.service

FEDORA WOMEN'S DAY 2018

Posted by Solanch69 on October 13, 2018 08:07 PM

On October 22, Fedora Women's Day was celebrated in Lima. It is an annual event that seeks to bring women into the world of Free Software. This year, I had the opportunity to be one of the organizers; the venue was the Pontificia Universidad Catolica del Peru, Lima, Peru. Behind… Continue reading FEDORA WOMEN'S DAY 2018

Fedora 28: Fix XFS internal error XFS_WANT_CORRUPTED_GOTO

Posted by mythcat on October 13, 2018 08:00 PM
It is a common kernel error created under certain predefined conditions.
...kernel: XFS internal error XFS_WANT_CORRUPTED_GOTO...
# xfs_repair -L /dev/mapper/fedora-root
# xfs_repair -L /dev/mapper/fedora-swap
# reboot
This will fix this error.

NeuroFedora update: week 41

Posted by Ankur Sinha "FranciscoD" on October 13, 2018 05:11 PM

In week 41, we finally announced NeuroFedora to the community on the mailing list and on the Fedora Community Blog. So, it is officially a thing!

There is a lot of software available in NeuroFedora already. You can see the list here. If you use software that is not on our list, please suggest it to us using the suggestion form.

In week 41:

  • NEST was updated to version 2.16.0. It is in testing for both Fedora 28 and Fedora 29. They should both move to the stable repositories in a few days. This new version does not support 32 bit architectures, so I've had to drop support for those.
  • libneurosim has now been submitted for review. NEST must be built with libneurosim support for PyNN to work with it properly. So PyNN will have to wait until this review is approved and NEST rebuilt.

I am hoping to spend some time on NeuroFedora every week, and I will provide regular updates as I do. Feedback is always welcome. You can get in touch with us here.

-d in go get

Posted by Sayan Chowdhury on October 13, 2018 03:20 PM

On Saturday, I am sitting at a Starbucks in Bangalore, trying my hand at a Golang project. I come across this -d argument in go get:

The go cli help says:

The -d flag instructs get to stop after downloading the packages; that is,
it instructs get not to install the packages.

Wonderful! So, if you just want to download the golang project for the sake of contributing, you can use:

go get -d k8s.io/kubernetes

... and it will download the package for you, after which you can start working on the project.
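
The source lands under your GOPATH, so a typical next step looks roughly like this (a sketch; the default GOPATH is ~/go unless you have changed it):

go get -d k8s.io/kubernetes
cd $(go env GOPATH)/src/k8s.io/kubernetes
# hack away, then build or test the way the project prescribes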

Using ZRAM as swap on Fedora

Posted by Peter Robinson on October 13, 2018 12:33 PM

One of the changes I did for Fedora 29 was adding the use of ZRAM as swap on ARM. Using compressed RAM for swap on constrained single board computer devices has performance advantages, because RAM is an order of magnitude faster than most of the attached storage, and in the case of SD/eMMC and related flash storage it also saves on the wear and tear of the flash, extending the life of the storage device.

The use of ZRAM as swap isn’t limited to constrained SBCs, though; I also use it on my x86 laptop to great effect. It’s also very simple to set up.

# dnf install zram
# systemctl enable zram-swap.service
# reboot

And that’s it! Simple, right? To see how it’s being used, there are three useful commands:

# systemctl status zram-swap.service
● zram-swap.service - Enable compressed swap in memory using zram
   Loaded: loaded (/usr/lib/systemd/system/zram-swap.service; enabled; vendor preset: disabled)
   Active: active (exited) since Tue 2018-10-09 22:13:24 BST; 3 days ago
 Main PID: 1177 (code=exited, status=0/SUCCESS)
    Tasks: 0 (limit: 4915)
   Memory: 0B
   CGroup: /system.slice/zram-swap.service

Oct 09 22:13:24 localhost zramstart[1177]: Setting up swapspace version 1, size = 7.4 GiB (7960997888 bytes)
Oct 09 22:13:24 localhost zramstart[1177]: no label, UUID=d79b7cf6-41e7-4065-90a9-000811c654b4
Oct 09 22:13:24 localhost zramstart[1177]: Activated ZRAM swap device of 7961 MB
Oct 09 22:13:24 localhost systemd[1]: Started Enable compressed swap in memory using zram.
# swapon
NAME       TYPE      SIZE   USED PRIO
/dev/zram0 partition 7.4G 851.8M   -2
# zramctl
NAME       ALGORITHM DISKSIZE   DATA  COMPR  TOTAL STREAMS MOUNTPOINT
/dev/zram0 lz4           7.4G 848.3M 378.4M 389.9M       8 [SWAP]
#

When I was researching the use of ZRAM, there was a lot of information online. A lot of implementations sliced the zram device up into multiple slices to enable balancing them across CPUs, but this is outdated information: zram support in recent kernels is multi-threaded, so there is no longer a performance advantage to having multiple smaller swap devices, and having a single larger swap space allows the kernel to use it more effectively.

In Fedora, all the pieces of the implementation are stored in the package source repo, so those interested in using zram for other use cases are free to test it. Bugs and RFEs can be reported as issues in Pagure or in RHBZ like any other package.

FPgM report: 2018-41

Posted by Fedora Community Blog on October 12, 2018 09:04 PM
Fedora Program Manager weekly report on Fedora Project development and progress

Here’s your report of what has happened in Fedora Program Management this week. The Fedora 29 Final Go/No-Go and Release Readiness meetings are next week.

I’ve set up weekly office hours in #fedora-meeting. Drop by if you have any questions or comments about the schedule, Changes, elections or anything else.

Help requests

Announcements

Upcoming meetings

Fedora 29 Status

Fedora 30 Status

Fedora 30 includes a Change that will cause ambiguous python shebangs to error.  A list of failing builds is available on Taskotron.

Accepted changes

The post FPgM report: 2018-41 appeared first on Fedora Community Blog.

Phantom hard drive

Posted by Casper on October 12, 2018 08:05 PM
I was running commands at random (well, not really: I was trying to produce a list of the connected hard drives), when suddenly an anomaly appeared.

When I run ls /dev/sd*, I get the following output:

/dev/sda  /dev/sda1  /dev/sdb  /dev/sdb1  /dev/sdb2  /dev/sdb3  /dev/sdc

The problem is that I only have 2 hard drives connected over SATA, sda and sdb, identified with smartctl, and nothing else. So where does this sdc disk come from?

# smartctl -i /dev/sdc
smartctl 6.5 2016-05-07 r4318 [x86_64-linux-4.16.9-200.fc27.x86_64] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

/dev/sdc: Unknown USB bridge [0x05e3:0x0723 (0x9451)]
Please specify device type with the -d option.

Use smartctl -h to get a usage summary


Smartctl indicates that this is not a regular hard drive, so it is something else. The useful piece of information smartctl provided is the device identifier 0x05e3:0x0723, which will serve as a filter with grep.
A quick search through lspci turned up nothing. I then looked in lsusb, and oh joy:

# lsusb|grep 05e3
Bus 003 Device 003: ID 05e3:0723 Genesys Logic, Inc. GL827L SD/MMC/MS Flash Card Reader


And the mystery is solved! sdc is present even when the SD card reader is empty!

[F29] Take part in the test day dedicated to Modularity

Posted by Charles-Antoine Couret on October 12, 2018 08:20 AM

Today, Friday 12 October, is a day dedicated to a specific test: Modularity. During the development cycle, the quality assurance team dedicates a few days to certain components or new features in order to report as many problems on the subject as possible.

It also provides a list of specific tests to perform. All you have to do is follow them, compare your result with the expected result, and report it.

What does this test consist of?

Modularity is the result of the Fedora.next initiative, started in 2014. The objective is to decouple the life cycle of applications from that of Fedora, so that a Fedora user benefits from more flexibility. For example, it would be possible to choose a native version of Python that is newer or older than the one available by default. Previously this was only possible by choosing another version of Fedora, which could be constraining.

Modules behave like additional repositories offering a set of packages intended to replace those of the official repositories. But of course, all of this is managed by the Fedora Project with the same quality and the same maintainers.

The major change for Fedora 29 is that this feature is natively available in the other Fedora editions; until now only Server benefited from it.

For the moment Fedora offers a few modules such as Docker, Django, NodeJS and the Go language.

Today's tests cover:

  • Listing the available and installed modules;
  • Installing a new module;
  • Enabling a new module;
  • Updating a module.

As you can see, these tests are fairly simple and should only take a few minutes. You will obviously need to install the modular repository first (the fedora-repos-modular package).
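
A rough sketch of the corresponding dnf commands follows (the nodejs module and its stream are only illustrative examples; use whatever dnf module list shows on your system):

$ sudo dnf install fedora-repos-modular
$ dnf module list
$ sudo dnf module enable nodejs:10
$ sudo dnf module install nodejs:10
$ sudo dnf module update nodejs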

How to take part?

You can go to the test day page to list the available tests and report your results. The wiki page summarizes how the day is organized.

If you need help while running the tests, don't hesitate to drop by IRC to get a hand on the #fedora-test-days and #fedora-fr channels (in English and French respectively) on the Freenode server.

If you find a bug, you need to report it on Bugzilla. If you don't know how, feel free to consult the corresponding documentation.

Moreover, even though a specific day is dedicated to these tests, it is still possible to perform them a few days later without any problem! The results will still be broadly relevant.

Kiwi TCMS team updates

Posted by Kiwi TCMS on October 12, 2018 08:10 AM

I am happy to announce that our team is steadily growing! As we work through our roadmap, status update here, and on-board new team members I start to feel the need for a bit more structure and organization behind the scenes. I also wish for consistent contributions to the project (commit early, commit often) so I can better estimate the resources that we have!

I am also actively discussing Kiwi TCMS with lots of people at various conferences and generating many ideas for the future. The latest SEETEST in Belgrade was particularly fruitful. Some of these ideas are pulling in different directions and I need help to keep them under control!

Development-wise sometimes I lose track of what's going on and who's doing what between working on Kiwi TCMS, preparing for conferences and venues to promote the project, doing code review of other team members, trying not to forget to check-in on progress (especially by interns), recruiting fresh blood and thinking about the overall future of the project. Our user base is growing and there are days where I feel like everything is happening at once or that something needs to be implemented ASAP (which is usually true anyway)!

Meet Rayna Stankova in the role of our team coach! Reny is a director for Women Who Code Sofia, senior QA engineer at VMware, mentor with CoderDojo Bulgaria and a long-time friend of mine. Although she is an experienced QA in her own right she will be contributing to the people side of Kiwi TCMS and less so technically!

Her working areas will be planning and organization:

  • help us (re)define the project vision and goals
  • work with us on roadmaps and action plans so we can meet the project goals faster
  • help us (self) organize so that we are more efficient, including checking progress and blockers (aka enforcer) and meet the aforementioned consistency point
  • serve as our professional coach, motivator and somebody who will take care of team health (yes I really suck at motivating others)

and generally serving as another very experienced member of the team!

We did a quick brainstorming yesterday and started to produce results (#smileyface)! We do have a team docs space to share information (non-public for now, will open it gradually as we grow) and came up with the idea to use Kiwi TCMS as a check-list for our on-boarding/internship process!

I don't know how it will play out but I do expect from the team to self-improve, be inspired, become more focused and more productive! All of this also applies to myself, even more so!

Existing team members progress

Last year we started with 2 existing team members (Tony and myself) and 3 new interns (Ivo, Kaloyan and Tseko) who built this website!

Tony is the #4 contributor to Kiwi TCMS in terms of number of commits and is on track to surpass one of the original authors (before Kiwi TCMS was forked)! He's been working mostly on internal refactoring and resolving the thousands of pylint errors that we had (down to around 500 I think). This summer Tony and I visited the OSCAL conference in Tirana and hosted an info booth for the project.

Ivo is the #5 contributor in terms of numbers of commits. He did learn very quickly and is working on getting rid of the remaining pylint errors. His ability to adapt and learn is quite impressive actually. Last month he co-hosted a git workshop at HackConf, a 1000+ people IT event in Sofia.

Kaloyan did most of the work on our website initially (IIRC). Now he is studying in the Netherlands and not active on the project. We are working to reboot his on-boarding and I'm hoping he will find the time to contribute to Kiwi TCMS regularly.

From the starting team only Tseko decided to move on to other ventures after he contributed to the website.

Internship program

At Kiwi TCMS we have a set of training programs that teach all the necessary technical skills before we let anyone actively work on the project, let alone become a team member.

Our new interns are Denitsa Uzunova and Desislava Koleva. Both of them are coming from Vratsa Software Community and were mentors at the recently held CodeWeek hackathon in their home city! I wish them fast learning and good luck!

Happy testing!

Command line quick tips: Reading files different ways

Posted by Fedora Magazine on October 12, 2018 08:00 AM

Fedora is delightful to use as a graphical operating system. You can point and click your way through just about any task easily. But you’ve probably seen there is a powerful command line under the hood. To try it out in a shell, just open the Terminal application in your Fedora system. This article is one in a series that will show you some common command line utilities.

In this installment you’ll learn how to read files in different ways. If you open a Terminal to do some work on your system, chances are good that you’ll need to read a file or two.

The whole enchilada

The cat command is well known to terminal users. When you cat a file, you’re simply displaying the whole file to the screen. Really what’s happening under the hood is the file is read one line at a time, then each line is written to the screen.

Imagine you have a file with one word per line, called myfile. To make this clear, the file will contain the word equivalent for a number on each line, like this:

one
two
three
four
five

So if you cat that file, you’ll see this output:

$ cat myfile
one
two
three
four
five

Nothing too surprising there, right? But here’s an interesting twist. You can also cat that file backward. For this, use the tac command. (Note that Fedora takes no blame for this debatable humor!)

$ tac myfile
five
four
three
two
one

The cat command also lets you ornament the file in different ways, in case that’s helpful. For instance, you can number lines:

$ cat -n myfile
     1 one
     2 two
     3 three
     4 four
     5 five

There are additional options that will show special characters and other features. To learn more, run the command man cat, and when done just hit q to exit back to the shell.

Picking over your food

Often a file is too long to fit on a screen, and you may want to be able to go through it like a document. In that case, try the less command:

$ less myfile

You can use your arrow keys as well as PgUp/PgDn to move around the file. Again, you can use the q key to quit back to the shell.

There’s actually a more command too, an older UNIX pager. If it’s important to you to still see the file contents when you’re done, you might want to use it. The less command brings you back to the shell the way you left it, and clears the display of any sign of the file you looked at.
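
Using it is just as simple (a minimal example with the same file):

$ more myfile

Press the space bar to advance a page, and q to quit.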

Just the appetizer (or dessert)

Sometimes the output you want is just the beginning of a file. For instance, the file might be so long that when you cat the whole thing, the first few lines scroll past before you can see them. The head command will help you grab just those lines:

$ head -n 2 myfile
one
two

In the same way, you can use tail to just grab the end of a file:

$ tail -n 3 myfile
three
four
five

Of course these are only a few simple commands in this area. But they’ll get you started when it comes to reading files.

PHP version 7.1.23 and 7.2.11

Posted by Remi Collet on October 11, 2018 03:24 PM

RPMs of PHP version 7.2.11 are available in the remi repository for Fedora 28-29 and in the remi-php72 repository for Fedora 26-27 and Enterprise Linux 6 (RHEL, CentOS).

RPMs of PHP version 7.1.23 are available in the remi repository for Fedora 26-27 and in the remi-php71 repository for Enterprise Linux (RHEL, CentOS).

No security fix this month, so no update for versions 5.6.38 and 7.0.32.

PHP version 5.5 has reached its end of life and is no longer maintained by the PHP project.

These versions are also available as Software Collections in the remi-safe repository.

Version announcements:

Installation: use the Configuration Wizard and choose your version and installation mode.

Replacement of default PHP by version 7.2 installation (simplest):

yum-config-manager --enable remi-php72
yum update php\*

Parallel installation of version 7.2 as Software Collection (x86_64 only):

yum install php72

Replacement of default PHP by version 7.1 installation (simplest):

yum-config-manager --enable remi-php71
yum update

Parallel installation of version 7.1 as Software Collection (x86_64 only):

yum install php71

And soon in the official updates:

To be noted:

  • EL7 RPMs are built using RHEL-7.5
  • EL6 RPMs are built using RHEL-6.10
  • a lot of new extensions are also available, see the PECL extension RPM status page

Information, read:

Base packages (php)

Software Collections (php56 / php70 / php71 / php72)

Moving away from the 1.6 freedesktop runtime

Posted by Alexander Larsson on October 11, 2018 02:08 PM

A flatpak runtime contains the basic dependencies that an application needs. It is shared by applications so that application authors don’t have to bother with complicated low-level dependencies, but also so that these dependencies can be shared and get shared updates.

Most flatpaks these days use the freedesktop runtime or one of its derivatives (like the Gnome and KDE runtimes). Historically, these have been using the 1.6 version of the freedesktop runtime, which is based on Yocto.

The 1.6 runtime has served its purpose of kickstarting flatpak and flathub well, but it is getting quite long in the tooth. We still fix security issues in it now and then, but it has not seen a lot of maintenance recently. Additionally, not a lot of people know enough Yocto to work on it, so we were never able to build a larger community around it.

However, earlier this summer a complete reimplementation, version 18.08, was announced, and starting with version 3.30 the Gnome runtime is now based on it as well, with a KDE version in the works. This runtime is based on BuildStream, making it much easier to work with, which has resulted in a much larger team working on it. Partly this is due to the awesome fact that Codethink has several people paid to work on this, but there is also lots of community support.

The result is a better supported, easier to maintain runtime with more modern content. What we need to do now is to phase out the old runtime and start using the new one in apps.

So, this is a call to action!

Anyone who maintains a flatpak application, especially on flathub, please try to move to a runtime based on 18.08. And if you have any problems, please report them to the upstream freedesktop-sdk project.
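
For a typical flatpak-builder manifest, the change usually amounts to bumping the runtime version (a sketch; the exact runtime and SDK IDs depend on your application):

"runtime": "org.freedesktop.Platform",
"runtime-version": "18.08",
"sdk": "org.freedesktop.Sdk",

Then rebuild, test, and submit the update to Flathub as usual.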

Introducing SkipTheLine

Posted by Sayan Chowdhury on October 11, 2018 11:51 AM

2013

That is the year I graduated from a third-tier college in Durgapur. I did not expect much from the college, mostly because of its poor placement record.

To get myself a job, I was primarily going through hasjob.co and any job opportunities that came through the Bangpypers mailing list.

Finally, I landed an internship, which later turned into a full-time position at HackerEarth, through hasjob.co.


2018

Five years later, finding a job at a suitable company is still tough. In the last five years, I have come to know about more options, be it AngelList, Hasjob, StackOverflow, Facebook job groups, etc.

But I think the problem is still the same: even if you apply through these websites, a lot of the time you don't get a reply from the company you applied to. On the other hand, referrals work. I've referred good people to companies I know. This way, there is a chance of getting an interview at the company. Cracking the interview is a different story altogether, which I believe depends a lot on the candidate.

Recently, I got to know that Prashant, one of my school seniors and a close friend, who is famously known in the tech community for his "Bitcoin Wedding", has started an effort called "SkipTheLine".

SkipTheLine is a newsletter where Prashant publishes profiles of three developers. These developers are from the community: people who are active in open source or competitive programming, or just good at the technologies they work on. He then introduces the developers to the companies' points of contact via email, and the candidate and the POC take the discussion forward.

I personally love the initiative he took, because at the end of the day, if people I know come asking for a job referral, I can just direct them to SkipTheLine. Prashant has a strong foothold in the startup community and does great work connecting folks with some really good startups across the country.

I personally know people who got hired within a few days of their newsletter being published, so if you are looking for a job, do fill out the SkipTheLine form.

If you are looking to hire, do drop me an email at gmail AT yudocaa DOT in.

Modularity Test Day 2018-10-12

Posted by Fedora Community Blog on October 11, 2018 09:31 AM
Fedora 29 Modularity Test Day

Friday, 2018-10-12 is the Fedora 29 Modularity Test Day!
We need your help to test if everything runs smoothly!

Why Modularity Test Day?

Many of you will have read the amazing article which came out months ago!
As one of the major Changes[1] of Fedora 29, we want to test to make sure that all the functionality is performing as it should.
Modularity is testable today on any Workstation, Lab or Spin, and we will focus on testing the functionality.
It’s also pretty easy to join in: all you’ll need is Fedora 29 (which you can grab from the wiki page).

We need your help!

All the instructions are on the wiki page, so please read through and come help us test! As always, the event will be in #fedora-test-day on Freenode IRC.

Share this!

Help promote the Test Day and share the article in your own circles! Use any of the buttons below to help spread the word.

The post Modularity Test Day 2018-10-12 appeared first on Fedora Community Blog.

Updated packages of varnish-6.0.1 with matching vmods, for el6 and el7

Posted by Ingvar Hagelund on October 11, 2018 09:24 AM

Recently, the Varnish Cache project released an updated upstream version 6.0.1 of Varnish Cache. This is a maintenance and stability release of varnish 6.0. I have updated the fedora rawhide package, and also updated the varnish 6.0 copr repo with packages for el6 and el7 based on the fedora package. A selection of matching vmods is also included in the copr repo.

Packages are available at https://copr.fedorainfracloud.org/coprs/ingvar/varnish60/
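
If you have the dnf copr plugin installed (from dnf-plugins-core), enabling the repo and installing the packages is roughly this simple (a sketch; on el6/el7 use yum with the copr plugin instead):

$ sudo dnf copr enable ingvar/varnish60
$ sudo dnf install varnish varnish-modules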

The following vmods are available:

Included in varnish-modules:
vmod-bodyaccess
vmod-cookie
vmod-header
vmod-saintmode
vmod-tcp
vmod-var
vmod-vsthrottle
vmod-xkey

Packaged separately:
vmod-curl
vmod-digest
vmod-geoip
vmod-memcached
vmod-querystring
vmod-uuid

Please test and report bugs. If there is enough interest, I may consider pushing these to fedora as well.

Varnish Cache is a powerful and feature rich front side web cache. It is also very fast, and that is, fast as in powered by The Dark Side of the Force. On steroids. And it is Free Software.

Redpill Linpro is the market leader for professional Open Source and Free Software solutions in the Nordics, though we have customers from all over. For professional managed services, all the way from small web apps, to massive IPv4/IPv6 multi data center media hosting, and everything through container solutions, in-house, data center, and cloud, contact us at www.redpill-linpro.com.

Kiwi TCMS winter conference presence

Posted by Kiwi TCMS on October 11, 2018 08:53 AM

We are happy to announce that OpenFest, the biggest open source conference in Bulgaria, has provided an info booth for our project. This year the event will be held on 3rd and 4th of November at Sofia Tech Park!

Last time the team went to a conference together we had a lot of fun! Join us at OpenFest to learn more about Kiwi TCMS and have fun with us!


In case you are unable to visit Sofia, which you totally should, you can catch up with us in Russia until the end of the year:

Feel free to ping us at @KiwiTCMS or look for the kiwi bird logo and come to say hi. Happy testing!

Fedora Women’s Day 2018 – Mexico City

Posted by Fedora Community Blog on October 11, 2018 08:30 AM
Fedora Women's Day 2018 event report

Fedora Women’s Day (FWD) is a day to celebrate and bring visibility to female contributors in open source projects including Fedora. The initiative is led by Fedora’s Diversity and Inclusion team. The number of women in tech has been increasing year over year, further highlighting the importance of a more inclusive culture in tech.

On September 21, we had our first Fedora Women’s Day at UAM Azcapotzalco (Mexico City), and we loved doing it.

Taken during the first Fedora Women’s Day at UAM Azcapotzalco (Mexico City)

Activities

The agenda was:

Vivana Nava presenting on Women in Tech during Fedora Women’s Day 2018 at UAM Azcapotzalco (Mexico City)

Fedora Women’s Day in numbers

16 attendees

  • 14 women.
  • 2 men.
  • 8 pizzas.

All the attendees are in Science, Technology, Engineering, and Math (STEM) Careers


Thanks D&I Team

The girls who participated in this event are very grateful for this initiative and want to host the event again next year. Thanks!

Group photo with all attendees at the end of the Fedora Women’s Day 2018 event in Mexico City

Photo by Arièle Bonte on Unsplash

The post Fedora Women’s Day 2018 – Mexico City appeared first on Fedora Community Blog.

Report from Embedded and Kernel Recipes 2018

Posted by Charles-Antoine Couret on October 10, 2018 09:12 PM

I had already mentioned that I was attending Embedded and Kernel Recipes 2018.

Kernel-recipes-entry.jpg

It was an enriching experience, even if quite packed. The talks followed one another, and the breaks gave rise to many instructive conversations. I really appreciated the format: you didn't have to run around everywhere or choose between two talks, because everything is designed around a single room and a single topic. And afterwards you can discuss the talks you saw, including with the speakers. The fact that we were relatively few (about a hundred) makes exchanges easier and helps the organization run smoothly.

Mozilla's offices were indeed superb, even if it was a bit chilly overall. It's a shame they are leaving this place soon. The setup for the conference was really high quality. I understand better why this room has been used for so many events.

It was an opportunity to run into quite a few people, including some I already knew, such as a former colleague from my Marseille days and Benjamin Tissoire, co-maintainer of the kernel input subsystem. A few kernel personalities were present, of course. And being in Paris, we were able to grab a bite with Fedora contributors. That too was pleasant.

Draw-Embedded-Recipes-Couret.jpg

I was able to give my talk on updating embedded systems in good conditions. And an illustrator present during the event drew a nice portrait. Very amusing and original. The video of the talk, a first for me in English, is available here.

We also received a board from Libre Computer (Le Potato), which is always appreciated. Not having a personal use for it (I have an RPi 1 lying around in a drawer), my workplace took it for workshops and other tests.

It really was a very good week. Thanks to the organizers for their work and impeccable quality. The atmosphere, the comfort, and also the quality of the topics covered were all excellent. Thanks also to my employer for giving me this opportunity. It makes me want to go back; it's a shame it is so hard to get in, for lack of available seats, but that is also what makes this event so special.

Couret-Swupdate-Conf.jpeg

Hopefully until next time.

Fedora 28 : Testing Blender 2.80 .

Posted by mythcat on October 10, 2018 08:09 AM
I tested the new Blender 2.80 alpha 2 version and it is working well.
You can download it from the official download page.
The next step: unpack the tar.bz2 archive and run blender from the newly created folder.
I tried to create a package, but it seems the tool is not working with the .spec file.
This is a screenshot of the Blender 3D software running on my Fedora 28.

Design faster web pages, part 1: Image compression

Posted by Fedora Magazine on October 10, 2018 08:00 AM

Lots of web developers want to achieve fast loading web pages. As more page views come from mobile devices, making websites look better on smaller screens using responsive design is just one side of the coin. Browser Calories can make the difference in loading times, which satisfies not just the user but search engines that rank on loading speed. This article series covers how to slim down your web pages with tools Fedora offers.

Preparation

Before you start to slim down your web pages, you need to identify the core issues. For this, you can use Browserdiet. It’s a browser add-on available for Firefox, Opera, Chrome, and other browsers. It analyzes the performance values of the currently open web page, so you know where to start slimming down.

Next you’ll need some pages to work on. The example screenshot shows a test of getfedora.org. At first it looks very simple and responsive.

Browser Diet – values of getfedora.org

However, BrowserDiet’s page analysis shows there are 1.8MB in files downloaded. Therefore, there’s some work to do!

Web optimization

There are over 281 KB of JavaScript files, 203 KB more in CSS files, and 1.2 MB in images. Start with the biggest issue — the images. The tool set you need for this is GIMP, ImageMagick, and optipng. You can easily install them using the following command:

sudo dnf install gimp imagemagick optipng

For example, take the following file which is 6.4 KB:

First, use the file command to get some basic information about this image:

$ file cinnamon.png
cinnamon.png: PNG image data, 60 x 60, 8-bit/color RGBA, non-interlaced

The image — which is only in grey and white — is saved in 8-bit/color RGBA mode. That’s not as efficient as it could be.

Start GIMP so you can set a more appropriate color mode. Open cinnamon.png in GIMP. Then go to Image>Mode and set it to greyscale. Export the image as PNG with compression factor 9. All other settings in the export dialog should be the default.

$ file cinnamon.png
cinnamon.png: PNG image data, 60 x 60, 8-bit gray+alpha, non-interlaced

The output shows the file’s now in 8bit gray+alpha mode. The file size has shrunk from 6.4 KB to 2.8 KB. That’s already only 43.75% of the original size. But there’s more you can do!

You can also use the ImageMagick tool identify to provide more information about the image.

$ identify cinnamon.png
cinnamon.png PNG 60x60 60x60+0+0 8-bit Grayscale Gray 2831B 0.000u 0:00.000

This tells you the file is 2831 bytes. Jump back into GIMP, and export the file. In the export dialog disable the storing of the time stamp and the alpha channel color values to reduce this a little more. Now the file output shows:

$ identify cinnamon.png
cinnamon.png PNG 60x60 60x60+0+0 8-bit Grayscale Gray 2798B 0.000u 0:00.000

Next, use optipng to losslessly optimize your PNG images. There are other tools that do similar things, including advdef (which is part of advancecomp), pngquant and pngcrush.

Run optipng on your file. Note that this will replace your original:

$ optipng -o7 cinnamon.png 
** Processing: cinnamon.png
60x60 pixels, 2x8 bits/pixel, grayscale+alpha
Reducing image to 8 bits/pixel, grayscale
Input IDAT size = 2720 bytes
Input file size = 2812 bytes

Trying:
 zc = 9 zm = 8 zs = 0 f = 0 IDAT size = 1922
 zc = 9 zm = 8 zs = 1 f = 0 IDAT size = 1920
 
Selecting parameters:
 zc = 9 zm = 8 zs = 1 f = 0 IDAT size = 1920

Output IDAT size = 1920 bytes (800 bytes decrease)
Output file size = 2012 bytes (800 bytes = 28.45% decrease)

The option -o7 is the slowest to process, but provides the best end results. You’ve knocked 800 more bytes off the file size, which is now 2012 bytes.

To optimize all of the PNGs in a directory, use this command:

$ optipng -o7 -dir=<directory> *.png

The option -dir lets you give a target directory for the output. If this option is not used, optipng would overwrite the original images.

Choosing the right file format

When it comes to pictures for use on the internet, you have the choice between:

JPG-LS and JPG 2000 are not widely used. Only a few digital cameras support these formats, so they can be ignored. aPNG is an animated PNG, and not widely used either.

You could save a few more bytes by changing the compression rate or choosing another file format. The first option isn’t available in GIMP, as it’s already using the highest compression rate. Since there is no alpha channel in the picture, you can choose JPG as the file format instead. For now, use the default value of 90% quality (you could go down to 85%, but then aliasing effects become visible). This saves a few more bytes:

$ identify cinnamon.jpg
cinnamon.jpg JPEG 60x60 60x60+0+0 8-bit sRGB 2676B 0.000u 0:00.000
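
If you prefer the command line over GIMP for this conversion step, ImageMagick can do roughly the same thing (a sketch; the 90% quality matches the value discussed above):

$ convert cinnamon.png -quality 90 cinnamon.jpg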

This conversion to the right color space and choosing JPG as the file format alone brought the file size down from 23 KB to 12.3 KB, a reduction of nearly 50%.

PNG vs. JPG: quality and compression rate

So what about the rest of the images? This method would work for all the other pictures, except the Fedora “flavor” logos and the logos for the four foundations. Those are presented on a white background.

One of the main differences between PNG and JPG is that JPG has no alpha channel. Therefore it can’t handle transparency. If you rework these images by using a JPG on a white background, you can reduce the file size from 40.7 KB to 28.3 KB.
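
Flattening a transparent PNG onto a white background can also be done from the command line with ImageMagick (a sketch; the file names here are placeholders):

$ convert logo.png -background white -flatten -quality 90 logo.jpg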

Now there are four more images you can rework: the backgrounds. For the grey background, set the mode to greyscale again. With this bigger picture, the savings are also bigger. It shrinks from 216.2 KB to 51.0 KB — it’s now barely 25% of its original size. All in all, you’ve shrunk 481.1 KB down to 191.5 KB — only 39.8% of the starting size.

Quality vs. Quantity

Another difference between PNG and JPG is the quality. PNG is a losslessly compressed raster graphics format. JPG, on the other hand, reduces file size through lossy compression, which affects quality. That doesn’t mean you shouldn’t use JPG, though. But you have to find a balance between file size and quality.

Achievement

This is the end of Part 1. After following the techniques described above, here are the results:

You brought the image size down to 488.9 KB versus 1.2 MB at the start. That’s only about a third of the size, just through optimizing with optipng. This page can probably be made to load faster still. On the scale from snail to hypersonic, it hasn’t reached racing car speed yet!

Finally you can check the results in Google Insights, for example:

In the Mobile area, the page gained 10 points in scoring, but is still in the Medium sector. It looks totally different for Desktop, which went from 62/100 to 91/100, up to Good. As mentioned before, this test isn’t the be-all and end-all. Consider scores such as these as helping you go in the right direction. Keep in mind you’re optimizing for the user experience, and not for a search engine.

Bodhi 3.10.1 released

Posted by Bodhi on October 09, 2018 08:23 PM

This release fixes a crash while composing modular repositories (#2631).

NeuroFedora SIG: Call For Participation

Posted by Fedora Community Blog on October 09, 2018 08:17 PM

I’ve recently resurrected the NeuroFedora SIG. Many thanks to Igor and the others who’d worked on it in the past and have given us a firm base.

NeuroFedora: The goal

The (current) goal of the NeuroFedora SIG is to make Fedora an easy to use platform for neuroscientists.

Neuroscience is an extremely multidisciplinary field. It brings together mathematicians, chemists, biologists, physicists, psychologists, engineers (electrical and others), computer scientists, and more. A lot of software is used in Neuroscience nowadays:

  • data collection, analysis, and sharing
  • lots of image processing (a lot of ML is used here, think Data Science)
  • simulation of brain networks (https://neuron.yale.edu/neuron/, http://nest-simulator.org/)
  • dissemination of scientific results (peer reviewed and otherwise, think LaTeX)

Neuroscience isn’t just about understanding how the brain functions; we also want to understand how it processes information, how it “computes”. (Some of you will already be aware of the Human Brain Project, a flagship EU project.) Now, given that a large proportion of neuroscientists are not trained in computer science, a lot of time and effort is spent setting up systems and installing software (often from source). This can be hard for people not well-versed in build systems and so on. So, at NeuroFedora, we will try to provide a ready-to-use Fedora-based system for neuroscientists to work with, so they can quickly get their environment set up and work on the science.

Please join us!

If you are interested in neuroscience, please consider joining the SIG. Packaging software is only one way in which one can contribute. Writing docs and answering questions about the software in NeuroFedora are other ways too, for example. You can get in touch with us here.

What is in it for you?

In general, it will increase your awareness of neuroscience (which is a fascinating field—but of course, I am biased). We also hope to use the Fedora classroom sessions to host beginner level classes on using the software we package. If you’d like to get into neuroscience research work, it is an excellent opportunity to learn.

Fedora and Science

In general, furthering Open Science is quite in line with our goal of furthering FOSS; Open Science shares the philosophy of FOSS. The data, the tools, the results, should be accessible to all to understand, use, learn from, and develop. I’ve just written to the Mindshare team asking if we can get the various Science related SIGs together and do more. You can find my e-mail here.

Comments/suggestions/feedback/questions are all welcome!

NeuroFedora logo designed by Terezahl from the Fedora Design team

(This is based on an e-mail that was initially sent to the devel mailing list).

The post NeuroFedora SIG: Call For Participation appeared first on Fedora Community Blog.

Fedora at LinuxDays 2018 in Prague

Posted by Jiri Eischmann on October 09, 2018 01:53 PM

LinuxDays, the biggest Linux event in the Czech Republic, took place at the Faculty of Information Technology of the Czech Technical University in Prague. The number of registered attendees was a bit lower this year, which could be caused by the municipal and senate elections happening on Friday and Saturday, but the number almost reached the 1300 mark anyway.

Besides a busy schedule of talks and workshops the conference also has a pretty large booth area and as every year I organized the Fedora one. I drove by car to Prague with Carlos Soriano and Felipe Borges from the Red Hat desktop team on Saturday morning and we were joined by František Zatloukal (Fedora QA) at the booth.

František and me at the booth.

Our focus for this year was Silverblue and modularity. I prepared one laptop with an external monitor to showcase Silverblue, the atomic version of Fedora Workstation. I must say that the interest of people in Silverblue surprised me. There were even some coming next day saying: “It sounded so cool yesterday and I couldn’t resist and install it when I got home and played with it in the night…” With Silverblue comes distribution of applications in Flatpak and there was a lot of user interest in this direction as well.

Reasons to use Fedora.

I was hoping for more interest in modularity, but people don’t seem to be very aware of it. It doesn’t have the same reach outside the Fedora Project as Flatpak does, and it’s not so easy to explain its benefits and use cases. We as a project have to do a better job selling it.

The highlight of Saturday was when one of the sysadmins at National Library of Technology, which is on the same campus, took us to the library to show us public computers where they run Fedora Workstation. It’s 120 computers with over 1000 users (in the last 90 days). Those computers serve a very diverse group of users, from elderly people to computer science students. And they have received very few complaints since they switched from Windows to Fedora. Also they’ve hit almost no problems as sysadmins. They only mentioned one corner case bug in GDM which we promised to look into.

Carlos and Felipe checking out Fedora in the library.

It was also interesting to see the setup. They authenticate users against AD using the SSSD client and mount /home from a remote server using NFS. They enable several GNOME Shell extensions by default: AlternateTab (because of Windows users), Places (to show the Places menu next to Activities)… They also created one custom extension that replaces the “Power Off” button with a “Log Out” button in the user menu, because users are not supposed to power the computers off. They also create very useful stats of application usage, based on the “recently-used” XML files that GNOME creates to generate the menu of frequently used applications. All computers are administered using Ansible scripts.

Default wallpaper with instructions.

The only talk I attended on Saturday was “Why and How I Switched to Flatpak for App Distribution and Development in Sandbox” by Jiří Janoušek, who develops Nuvola apps. It was an interesting talk, and thanks to his experience developing and distributing apps on Linux, Jiří was able to name and describe all the problems with app distribution on Linux and explain why Flatpak helps here.

On Sunday, we organized a workshop to teach how to build flatpaks. It was the only disappointment of the weekend. Only 3 people showed up, and none of them really needed to learn to build flatpaks. We’ll have the same workshop at OpenAlt in Brno, and if the attendance is also low, we’ll know that a workshop aimed primarily at app developers is not a good fit for such conferences.
But it was not a complete waste of time; we discussed some questions around Flatpak and worked on flatpaking applications. The result is that GNOME Recorder is already available on Flathub and Datovka is in the review process.

The event is also a great opportunity to talk to many people from the Czech community and other global FLOSS projects. SUSE traditionally has a lot of people there, and there were also Xfce, FFMPEG, FreeBSD, Vim, LibreOffice…

 

Farewell, application menus!

Posted by Allan Day on October 09, 2018 01:25 PM

Application menus – or app menus, as they are often called – are the menu that you see in the GNOME 3 top bar, with the name and icon for the current app. These menus have been with us since the beginning of the GNOME 3.0 series, but we’re planning on retiring them for the next GNOME release (version 3.32). This post is intended to provide some background on this change, as well as information on how the transition will happen.

The development version of Web, whose app menu has been removed

Background

When app menus were first introduced, they were intended to play two main roles. First, they were supposed to contain application-level menu items, like Preferences, About and Quit. Secondly, they were supposed to indicate which app was focused.

Unfortunately, we’ve seen app menus not performing well over the years, despite efforts to improve them. People don’t always engage with them. Often, they haven’t realised that the menus are interactive, or they haven’t remembered that they’re there.

My feeling is that this hasn’t been helped by the fact that we’ve had a split between app menus and the menus in application windows. With two different locations for menu items, it becomes easy to look in the wrong place, particularly when one menu is more frequently visited than the other.

One of the other issues we’ve had with application menus is that adoption by third-party applications has been limited. This has meant that they’re often empty, other than the default quit item, and people have learned to ignore them.

As a result of these issues, there’s a consensus that they should be removed.

The plan

Software, which has also removed its app menu

We are planning on removing application menus from GNOME in time for the next release, version 3.32. The application menu will no longer be shown in the top bar (neither the menu nor the application name and icon will be shown). Each GNOME application will move the items from its app menu to a menu inside the application window (more detailed design guidelines are on the GNOME GitLab instance).

If an application fails to remove its app menu by 3.32, it will be shown in the app’s header bar, using the fallback UI that is already provided by GTK. This means that there’s no danger of menu items not being accessible, if an app fails to migrate in time.

We are aiming to complete the entire migration away from app menus in time for GNOME 3.32, and to avoid being in an awkward in-between state for the next release. The new menu arrangement should feel natural to existing GNOME users, and they hopefully shouldn’t experience any difficulties.

The technical changes involved in removing app menus are quite simple, but there are a lot of apps to convert (so far we’ve fixed 11 out of 63!). Therefore, help with this initiative would be most welcome, and it’s a great opportunity for new contributors to get involved.

App menus, it was nice knowing you…

Building Fedora Vagrant boxes for VirtualBox using Packer

Posted by Amit Saha on October 09, 2018 01:00 PM

In a previous post, I shared that we are going to have Fedora Scientific Vagrant boxes with the upcoming Fedora 29 release. A few weeks back, I wanted to try out a more recent build to script some of the testing I do on Fedora Scientific boxes, to make sure that the expected libraries/programs are installed. Unexpectedly, vagrant ssh would not succeed.
I filed an issue with rel-eng, where I was advised to check whether a package in Fedora Scientific was mucking around with the SSH config. To do so, I had to find a way to manually build Vagrant boxes.

The post here seems to be one way of doing it. Unfortunately, I was in a Windows environment where I wanted to build the box, so I needed to try something else. chef/bento uses Packer, and hence this approach looked promising.

After creating a config file for Fedora 29 and making sure I had my kickstart files right, the following command builds a VirtualBox Vagrant image:

$ packer build -force -only=virtualbox-iso .\fedora-29-scientific-x86_64.json
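For context, here is a trimmed sketch of what such a Packer template can contain; the ISO URL, checksum and kickstart path below are placeholders, not the values actually used:

{
  "builders": [{
    "type": "virtualbox-iso",
    "iso_url": "https://example.org/Fedora-Scientific-29.iso",
    "iso_checksum_type": "sha256",
    "iso_checksum": "...",
    "http_directory": "http",
    "boot_command": ["<tab> inst.ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/ks.cfg<enter>"],
    "ssh_username": "vagrant",
    "ssh_password": "vagrant",
    "shutdown_command": "sudo shutdown -P now"
  }],
  "post-processors": ["vagrant"]
}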

Once I had the box build environment ready, it was then a matter of manually commenting and uncommenting packages and package groups to find the culprit.

I am writing an introductory book to web application deployment

Posted by Josef Strzibny on October 09, 2018 10:23 AM

I decided to write a book (or at the very least attempt to). And yes, there will be some Fedora inside!

Who is the target audience?

Everybody who wants to start with system administration for the purposes of web server deployment. Beginners or false beginners. Ideally people with web development experience who are comfortable with the command line.

What will it be about?

The book will touch on some general topics in system administration and provide practical examples, including deployment of a Ruby on Rails & PostgreSQL application. I might add other stacks once I have this ready.

What will be most likely in the book?

I am slowly working on the final Table of Contents. Here is something that will be there:

  • Creating Virtual Private Server (VPS) on something like Digital Ocean (Fedora/CentOS)
  • Managing users, processes, services
  • Basic Nginx configuration
  • Running with SELinux
  • PostgreSQL database setup
  • SSL certificates with Let’s Encrypt for HTTPS
  • git-push deployment for convenience

In general it’s an intersection of various things that make up for a web application deployment on a VPS.

What will it be not about?

There will be no Ansible, no Chef, no Salt, no Terraform. I think it would be too much for this introductory book. I might include a configuration management chapter that discusses this topic in general, though.

How can I follow up with the progress on the book?

Check out vpsformakers.com. I will continuously update it as I progress and there is an option to join a mailing list for any book related news.

Firefox on Wayland update

Posted by Martin Stransky on October 09, 2018 09:38 AM

As a next step in the Wayland effort we have fresh new Firefox packages [1] with all the goodies from Firefox 63/64 (Nightly) for you. They come with better (and fixed) rendering, v-sync support, and working HiDPI. Support for hi-res displays is not perfect yet and more fixes are on the way – thanks to Jan Horak who wrote those patches.

The builds also ship a PipeWire WebRTC patch for desktop sharing created by Jan Grulich and Tomas Popela. Wayland applications are isolated from the desktop and don’t have access to other windows (as on X11), so PipeWire supplies the missing functionality alongside the browser sandbox.

I think the rendering is generally covered now and the browser should work smoothly with the Wayland backend. That’s also a reason why I’m making it the default on Fedora 30 (Rawhide), with the firefox-x11 package available as an X11 fallback. Fedora 29 and earlier stay with the default X11 backend, and Wayland is provided by the firefox-wayland package.

And there’s surely some work left to make Firefox perfect on Wayland – for instance correctly placing popups on GTK 3.24, updating WebRender/EGL, fixing the KDE compositor, and so on.

[1] Fedora 27 Fedora 28 Fedora 29

Goodbye JJB, Hello Jenkies Pipeline

Posted by Randy Barlow on October 08, 2018 09:21 PM

I've spent the last couple of weeks working on tidying up Bodhi's Continuous Integration story. This essentially comes down to two pieces: writing a new test running script for the CI system to use (and humans too in the development environment!), and switching from Jenkins Job Builder to Jenkies Pipeline …

Flatpak, after 1.0

Posted by Matthias Clasen on October 08, 2018 05:45 PM

Flatpak 1.0 happened a while ago, and we now have a stable base. We’re up to the 1.0.3 bug-fix release at this point, and hope to get 1.0.x adopted in all major distros before too long.

Does that mean that Flatpak is done? Far from it! We have just created a stable branch and started landing some queued-up feature work on the master branch. This includes things like:

  • Better life-cycle control with ps and kill commands (see the sketch after this list)
  • Logging and history
  • File copy-paste and DND
  • A better testsuite, including coverage reports
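Those life-cycle commands are expected to look roughly like this once released (a sketch; the application ID is just an example and the exact output may still change):

$ flatpak ps
$ flatpak kill org.example.App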

Beyond these, we have a laundry list of things we want to work on in the near future, including

  • Using host GL drivers (possibly with libcapsule)
  • Application renaming and end-of-life migration
  • A portal for dconf/gsettings (a stopgap measure until we have D-Bus container support)
  • A portal for webcam access
  • More tests!

We are also looking at improving the scalability of the Flathub infrastructure. The repository has grown to more than 400 apps, and buildbot is not really meant to be used the way we use it.

What about releases?

We have not set a strict schedule, but the consensus seems to be that we are aiming for roughly quarterly releases, with more frequent devel snapshots as needed. Looking at the calendar, that would mean we should expect a stable 1.2 release around the end of the year.

Open for contribution

One of the easiest ways to help Flatpak is to get your favorite applications onto Flathub, either by packaging them yourself or by convincing the upstream to do it.

If you feel like contributing to Flatpak itself, please do! Flatpak is still a young project, and there are plenty of small to medium-size features that can be added. The tests are also a nice place to stick your toe in and see if you can improve the coverage a bit and maybe find a bug or two.

Or, if that is more your thing, we have a nice design for improving the flatpak command-line user experience that is waiting to be implemented.

[F29] Take part in the test day dedicated to upgrading

Posted by Charles-Antoine Couret on October 08, 2018 10:01 AM

Today, Monday 8 October, is a day dedicated to one specific test: upgrading Fedora. During the development cycle, the quality assurance team dedicates a few days to certain components or new features in order to surface as many problems as possible on that topic.

The team also provides a list of specific tests to carry out. All you have to do is follow them, compare your results with the expected results, and report them.

What does this test involve?

We are close to the release of the final Fedora 29 edition. For this launch to be a success, we need to make sure the upgrade mechanism works correctly, that is, that your Fedora 27 or 28 becomes Fedora 29 without reinstalling, keeping your documents, settings and programs. A very big update, in short.

Today's tests cover:

  • Upgrading from Fedora 27 or 28, with or without an encrypted system;
  • The same as above but with KDE as the environment, or with any Spin;
  • Likewise with the Server edition instead of Workstation;
  • Using GNOME Software rather than dnf.

Fedora has for some time now offered the option of upgrading graphically with GNOME Software or on the command line with dnf. In both cases the download happens while you use your computer normally; once it is ready, the installation takes place during the next reboot.
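For reference, the command-line path uses the dnf system-upgrade plugin and typically looks like this (standard steps; adjust the release version if needed):

$ sudo dnf upgrade --refresh
$ sudo dnf install dnf-plugin-system-upgrade
$ sudo dnf system-upgrade download --releasever=29
$ sudo dnf system-upgrade reboot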

For those who want to enjoy F29 before its official release, take the opportunity to run this test so that the experience benefits everyone. :-)

How to take part?

You can go to the test day page to list the available tests and report your results. The wiki page summarizes how the day is organized.

If you need help while running the tests, feel free to drop by IRC to get a hand on the #fedora-test-days and #fedora-fr channels (in English and French respectively) on the Freenode server.

If you find a bug, you need to report it on Bugzilla. If you don't know how, don't hesitate to consult the corresponding documentation.

Moreover, even though one day is dedicated to these tests, it is still perfectly possible to run them a few days later! The results will remain broadly relevant.

Play Windows games on Fedora with Steam Play and Proton

Posted by Fedora Magazine on October 08, 2018 08:00 AM

Some weeks ago, Steam announced a new addition to Steam Play: Linux support for Windows games using Proton, a fork of WINE. This capability is still in beta, and not all games work. Here are some more details about Steam Play and Proton.

According to the Steam website, there are new features in the beta release:

  • Windows games with no Linux version currently available can now be installed and run directly from the Linux Steam client, complete with native Steamworks and OpenVR support.
  • DirectX 11 and 12 implementations are now based on Vulkan, which improves game compatibility and reduces performance impact.
  • Fullscreen support has been improved. Fullscreen games seamlessly stretch to the desired display without interfering with the native monitor resolution or requiring the use of a virtual desktop.
  • Improved game controller support. Games automatically recognize all controllers supported by Steam. Expect more out-of-the-box controller compatibility than even the original version of the game.
  • Performance for multi-threaded games has been greatly improved compared to vanilla WINE.

Installation

If you’re interested in trying out Steam with Proton, just follow these easy steps. (Note that you can skip the steps for enabling the Steam Beta if you already have the latest updated version of Steam installed; in that case you no longer need the Steam Beta to use Proton.)

Open up Steam and log in to your account. This example screenshot shows support for only 22 games before enabling Proton.

Now click on the Steam option at the top of the client. This displays a drop-down menu. Then select Settings.

Now the Settings window pops up. Select the Account option and, next to Beta participation, click on Change.

Now change None to Steam Beta Update.

Click on OK and a prompt asks you to restart.

Let Steam download the update. This can take a while depending on your internet speed and computer resources.

After restarting, go back to the Settings window. This time you’ll see a new option. Make sure the check boxes for Enable Steam Play for supported titles, Enable Steam Play for all titles and Use this tool instead of game-specific selections from Steam are enabled. The compatibility tool should be Proton.

The Steam client asks you to restart. Do so, and once you log back into your Steam account, your game library for Linux should be extended.

Installing a Windows game using Steam Play

Now that you have Proton enabled, install a game. Select the title you want and you’ll find the process is similar to installing a normal game on Steam, as shown in these screenshots.

After the game is done downloading and installing, you can play it.

Some games may be affected by the beta nature of Proton. The game in this example, Chantelise, had no audio and a low frame rate. Keep in mind this capability is still in beta, and Fedora is not responsible for the results. If you’d like to read further, the community has created a Google doc with a list of games that have been tested.
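If a game misbehaves, Proton can be asked for a verbose log by adding the following to the game’s Launch Options in its Steam properties (based on Proton’s upstream documentation; the log file ends up in your home directory):

PROTON_LOG=1 %command%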

Episode 117 - Will security follow Linus' lead on being nice?

Posted by Open Source Security Podcast on October 08, 2018 12:01 AM
Josh and Kurt talk about Linus' effort to work on his attitude. What will this mean for security and IT in general?


Show Notes

Pushing composed images to vSphere

Posted by Weldr on October 08, 2018 12:00 AM

Weldr (aka Composer) can generate images suitable for uploading to a VMWare ESXi or vSphere system and running as a virtual machine there. The images have the right format and include the necessary agents.

Prerequisites

We’ll use Fedora 29 as our OS of choice for running this. Run this in its own VM with at least 8 gigabytes of memory and 40 gigabytes of disk space. Lorax makes some changes to the operating system it’s running on.

First install Composer:

$ sudo yum install lorax-composer cockpit-composer cockpit composer-cli

Next make sure to turn off SELinux on the system. Lorax doesn’t yet work properly with SELinux running, as it installs an entire OS image in an alternate directory:

$ sudo setenforce 0
$ sudo sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config

Now start the lorax-composer system service:

$ sudo systemctl enable --now lorax-composer.socket

If you’re going to use Cockpit UI to drive Composer (see below), you can also enable it like this:

$ sudo systemctl enable --now cockpit.socket
$ sudo firewall-cmd --add-service=cockpit && sudo firewall-cmd --add-service=cockpit --permanent

Compose an image from the CLI

To compose an image in Composer from the command line, we first have to have a blueprint defined. This blueprint describes what goes into the image. For the purposes of this example we’ll use the example-http-server blueprint, which builds an image that contains a basic HTTP server.

Because VMWare deployments typically do not have cloud-init configured to inject user credentials into virtual machines, we must perform that task ourselves in the blueprint. Use the following command to extract the blueprint to an example-http-server.toml file in the current directory:

$ composer-cli blueprints save example-http-server

Add the following lines to the end of the example-http-server.toml file to set the initial root password to foobar. You can also use a crypted password string for the password or set an SSH key.

[[customizations.user]]
name = "root"
password = "foobar"
key = "..."
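If you would rather not keep a plain-text password in the blueprint, one way to generate a SHA-512 crypt string is Python’s standard crypt module (a sketch; replace "foobar" with your own password and paste the output into the password field):

$ python3 -c 'import crypt; print(crypt.crypt("foobar", crypt.mksalt(crypt.METHOD_SHA512)))'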

Now save the blueprint back into composer with the following command:

$ composer-cli blueprints push example-http-server.toml

We run the following command to start a compose. Notice that we pass the image type of vmdk which indicates we want an image appropriate for pushing to VMWare in the Virtual Machine Disk format.

$ sudo composer-cli compose start example-http-server vmdk
Compose 55070ff6-d637-40fe-80f9-9518f2ee0f21 added to the queue

Now check the status of the compose like this:

$ sudo composer-cli compose status
55070ff6-d637-40fe-80f9-9518f2ee0f21 RUNNING  Mon Oct  8 11:40:50 2018 example-http-server 0.0.1 vmdk

In order to diagnose a failure or look for more detailed progress, see:

$ sudo journalctl -fu lorax-composer
...

When it’s done you can download the resulting image into the current directory:

$ sudo composer-cli compose image 55070ff6-d637-40fe-80f9-9518f2ee0f21
55070ff6-d637-40fe-80f9-9518f2ee0f21-disk.vmdk: 4460.00 MB

Pushing and using the image

You can upload the image into vSphere via HTTP, or by pushing it into your shared VMWare storage. We’ll use the former mechanism. Click on “Upload Files” in the vCenter:

Upload files

When you create a VM, on the Device Configuration, delete the default New Hard Disk and use the drop down to select an Existing Hard Disk disk image:

Disk Image Selection

And lastly, make sure you use an IDE device as the Virtual Device Node for the disk you create. The default is SCSI, which will result in an unbootable virtual machine.

Disk Image Selection

Pushing composed images to Azure

Posted by Weldr on October 08, 2018 12:00 AM

Weldr (aka Composer) can generate images suitable for uploading to the Azure cloud and running an instance there. The images have the right format and include the necessary agents, as well as cloud-init.

Prerequisites

We’ll use Fedora 29 as our OS of choice for running this. Run this in its own VM with at least 8 gigabytes of memory and 40 gigabytes of disk space. Lorax makes some changes to the operating system it’s running on.

First install Composer:

$ sudo yum install lorax-composer cockpit-composer cockpit composer-cli

Next make sure to turn off SELinux on the system. Lorax doesn’t yet work properly with SELinux running, as it installs an entire OS image in an alternate directory:

$ sudo setenforce 0
$ sudo sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config

Now start the lorax-composer system service:

$ sudo systemctl enable --now lorax-composer.socket

If you’re going to use Cockpit UI to drive Composer (see below), you can also enable it like this:

$ sudo systemctl enable --now cockpit.socket
$ sudo firewall-cmd --add-service=cockpit && sudo firewall-cmd --add-service=cockpit --permanent

Install the Azure CLI tooling:

$ sudo rpm --import https://packages.microsoft.com/keys/microsoft.asc
$ sudo sh -c 'echo -e "[azure-cli]\nname=Azure CLI\nbaseurl=https://packages.microsoft.com/yumrepos/azure-cli\nenabled=1\ngpgcheck=1\ngpgkey=https://packages.microsoft.com/keys/microsoft.asc" > /etc/yum.repos.d/azure-cli.repo'
$ sudo yum install azure-cli

Now log into the Azure CLI like so:

$ az login
To sign in, use a web browser to open the page
https://microsoft.com/devicelogin and enter the code XXXXXXXXX to authenticate.
...

Make sure you have an appropriate resource group created in Azure. The one I’m using is called composer. Make sure you also have an appropriate storage account created in Azure. The one I’m using is called composerredhat. Next list the keys for that storage account:

$ GROUP=composer
$ ACCOUNT=composerredhat
$ az storage account keys list --resource-group $GROUP --account-name $ACCOUNT

Make note of "key1" in the above output, and assign it to an environment variable:

$ KEY1=....................
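If you prefer to script this step, the key can usually be pulled out directly with a JMESPath query instead of copying it by hand (this assumes a reasonably recent azure-cli):

$ KEY1=$(az storage account keys list --resource-group $GROUP --account-name $ACCOUNT --query '[0].value' --output tsv)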

Lastly create an appropriate storage container:

$ CONTAINER=composerredhat
$ az storage container create --account-name $ACCOUNT --account-key $KEY1 --name $CONTAINER

Compose an image from the UI

To compose an image in Composer, log into the Cockpit Web Console with your web browser. It’s running on port 9090 on the VM that you’re running Composer in. Use any admin or root Linux system credentials to log in. Select the Image Builder tab.

Cockpit Composer

We first have to have a blueprint defined. This blueprint describes what goes into the image. For the purposes of this example we’ll use the example-http-server blueprint, which builds an image that contains a basic HTTP server.

Click on the Create Image button and choose Azure from the dropdown to choose the Image Type:

Create Image AMI

If you click on the blueprint, you should see progress described on the Images tab:

Create Image Progress

Once it’s done, download the image.

Compose an image from the CLI

To compose an image in Composer from the command line, we first have to have a blueprint defined. This blueprint describes what goes into the image. For the purposes of this example we’ll use the example-http-server blueprint, which builds an image that contains a basic HTTP server.

We run the following command to start a compose. Notice that we pass the image type of vhd which indicates we want an image appropriate for pushing to Azure in the Virtual Hard Disk format.

$ sudo composer-cli compose start example-http-server vhd
Compose 25ccb8dd-3872-477f-9e3d-c2970cd4bbaf added to the queue

Now check the status of the compose like this:

$ sudo composer-cli compose status
25ccb8dd-3872-477f-9e3d-c2970cd4bbaf RUNNING  Mon Oct  8 09:45:52 2018 example-http-server 0.0.1 vhd

In order to diagnose a failure or look for more detailed progress, see:

$ sudo journalctl -fu lorax-composer
...

When it’s done you can download the resulting image into the current directory:

$ sudo composer-cli compose image 25ccb8dd-3872-477f-9e3d-c2970cd4bbaf
25ccb8dd-3872-477f-9e3d-c2970cd4bbaf-disk.vhd: 4460.00 MB

Pushing and using the image

So now you have an image created by Composer sitting in the current working directory. Here’s how you push it to Azure and create an instance from it:

$ VHD=25ccb8dd-3872-477f-9e3d-c2970cd4bbaf-disk.vhd
$ az storage blob upload --account-name $ACCOUNT --container-name $CONTAINER --file $VHD --name $VHD --type page
Alive[#####                                                           ]  9.1480%
...

Once the upload to the Azure BLOB completes, we can create an Azure image from it:

$ az image create --resource-group $GROUP --name $VHD --os-type linux --location eastus --source https://$ACCOUNT.blob.core.windows.net/$CONTAINER/$VHD
 - Running ...

Next create an instance either with the Azure portal, or a command similar to the following:

$ az vm create --resource-group $GROUP --location eastus --name $VHD --image $VHD --admin-username azure-user --generate-ssh-keys
 - Running ...

Use your private key via SSH to access the resulting instance as usual. The user to log in as is azure-user.
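For example (the public IP address is printed in the az vm create output, and this assumes the key pair generated by --generate-ssh-keys is in its default location):

$ ssh azure-user@<public-ip-address>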

Pushing composed images to AWS

Posted by Weldr on October 08, 2018 12:00 AM

Weldr (aka Composer) can generate images suitable for uploading to Amazon Web Services and starting an EC2 instance. The images have the right partition layout and include cloud-init.

Prerequisites

We’ll use Fedora 29 as our OS of choice for running this. Run this in its own VM with at least 8 gigabytes of memory and 40 gigabytes of disk space. Lorax makes some changes to the operating system it’s running on.

First install Composer:

$ sudo yum install lorax-composer cockpit-composer cockpit composer-cli

Next make sure to turn off SELinux on the system. Lorax doesn’t yet work properly with SELinux running, as it installs an entire OS image in an alternate directory:

$ sudo setenforce 0
$ sudo sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config

Now enable and start the lorax-composer system service:

$ sudo systemctl enable --now lorax-composer.socket

If you’re going to use Cockpit UI to drive Composer (see below), you can also enable it like this:

$ sudo systemctl enable --now cockpit.socket
$ sudo firewall-cmd --add-service=cockpit && sudo firewall-cmd --add-service=cockpit --permanent

Install the AWS client tooling:

$ sudo yum install python3-pip
$ sudo pip3 install awscli

Make sure you have an Access Key ID configured in AWS IAM account manager and use that info to configure the AWS command line client:

$ aws configure
AWS Access Key ID [None]: ............
AWS Secret Access Key [None]: .............
Default region name [None]: us-east-1
Default output format [None]:

Make sure you have an appropriate S3 bucket. We’ve called ours composerredhat, but yours must be globally unique, so you can’t choose the same name:

$ BUCKET=composerredhat
$ aws s3 mb s3://$BUCKET

S3 bucket screenshot

If you haven’t already, create a vmimport S3 Role in IAM and grant it permissions to access S3. This is how you do it from the command line:

$ printf '{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": "vmie.amazonaws.com" }, "Action": "sts:AssumeRole", "Condition": { "StringEquals":{ "sts:Externalid": "vmimport" } } } ] }' > trust-policy.json
$ printf '{ "Version":"2012-10-17", "Statement":[ { "Effect":"Allow", "Action":[ "s3:GetBucketLocation", "s3:GetObject", "s3:ListBucket" ], "Resource":[ "arn:aws:s3:::%s", "arn:aws:s3:::%s/*" ] }, { "Effect":"Allow", "Action":[ "ec2:ModifySnapshotAttribute", "ec2:CopySnapshot", "ec2:RegisterImage", "ec2:Describe*" ], "Resource":"*" } ] }' $BUCKET $BUCKET > role-policy.json
$ aws iam create-role --role-name vmimport --assume-role-policy-document file://trust-policy.json
$ aws iam put-role-policy --role-name vmimport --policy-name vmimport --policy-document file://role-policy.json

IAM vmimport role

Compose an image from the UI

To compose an image in Composer, log into the Cockpit Web Console with your web browser. It’s running on port 9090 on the VM that you’re running Composer in. Use any admin or root Linux system credentials to log in. Select the Image Builder tab.

Cockpit Composer

We first have to have a blueprint defined. This blueprint describes what goes into the image. For the purposes of this example we’ll use the example-http-server blueprint, which builds an image that contains a basic HTTP server.

Click on the Create Image button and choose Amazon Machine Image from the dropdown to choose the Image Type:

Create Image AMI

If you click on the blueprint, you should see progress described on the Images tab:

Create Image Progress

Once it’s done, download the image:

Download Image

Compose an image from the CLI

To compose an image in Composer from the command line, we first have to have a blueprint defined. This blueprint describes what goes into the image. For the purposes of this example we’ll use the example-http-server blueprint, which builds an image that contains a basic HTTP server.

We run the following command to start a compose. Notice that we pass the image type of ami which indicates we want an image appropriate for pushing to Amazon Web Services.

$ sudo composer-cli compose start example-http-server ami
Compose 8db1b463-91ee-4fd9-8065-938924398428 added to the queue

Now check the status of the compose like this:

$ sudo composer-cli compose status
8db1b463-91ee-4fd9-8065-938924398428 RUNNING  Mon Oct  8 08:11:33 2018 example-http-server 0.0.1 ami

In order to diagnose a failure or look for more detailed progress, see:

$ sudo journalctl -fu lorax-composer
...

When it’s done you can download the resulting image into the current directory:

$ sudo composer-cli compose image 8db1b463-91ee-4fd9-8065-938924398428
8db1b463-91ee-4fd9-8065-938924398428-disk.ami: 4460.00 MB

Pushing and using the image

So now you have an image created by Composer sitting in the current working directory. Here’s how you push it to S3 and start an EC2 instance:

$ AMI=8db1b463-91ee-4fd9-8065-938924398428-disk.ami
$ aws s3 cp $AMI s3://$BUCKET
Completed 24.2 MiB/4.4 GiB (2.5 MiB/s) with 1 file(s) remaining
...

Once the upload to S3 completes, we import it as a snapshot into EC2:

$ printf '{ "Description": "CentOS image", "Format": "raw", "UserBucket": { "S3Bucket": "%s", "S3Key": "%s" } }' $BUCKET $AMI > containers.json
$ aws ec2 import-snapshot --disk-container file://containers.json

You can track the status of the import using the following command:

$ aws ec2 describe-import-snapshot-tasks --filters Name=task-state,Values=active

Next create an image from the uploaded snapshot, by selecting the snapshot in the EC2 console, right clicking on it and selecting Create Image:

Select Snapshot

Make sure to select the Virtualization type of Hardware-assisted virtualization in the image you create:

Create Image
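If you prefer to stay on the command line for this step, registering an image from the snapshot can also be done roughly like this (a sketch; the snapshot ID and image name are placeholders):

$ aws ec2 register-image --name composer-image --virtualization-type hvm --root-device-name /dev/sda1 --block-device-mappings "DeviceName=/dev/sda1,Ebs={SnapshotId=snap-0123456789abcdef0}"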

Now you can run an instance using whatever mechanism you like (CLI or AWS Console) from the snapshot. Use your private key via SSH to access the resulting EC2 instance as usual. The user to log in as is ec2-user.

Banksy Shredder

Posted by Ismael Olea on October 07, 2018 10:00 PM



PS: After some reports about male nudity, this post has been edited to remove the portrait of my back. If you have reservations about male nudity, PLEASE DON'T FOLLOW THE LINK.

PPS: If you don't have a problem with male nudity, for your convenience here you'll find the Wiki Commons category «Nude men» of pictures.

«Software Quality Assurance, First Edition» PDF file

Posted by Ismael Olea on October 07, 2018 10:00 PM

Print ISBN:9781118501825, Online ISBN:9781119312451, DOI:10.1002/9781119312451

For your convenience I’ve compiled the book Software Quality Assurance by Claude Y. Laporte and Alain April into a single file. The book is provided for free download at the publisher’s website as separate files. Download the full book.

About the book: «This book introduces Software Quality Assurance (SQA) and provides an overview of standards used to implement SQA. It defines ways to assess the effectiveness of how one approaches software quality across key industry sectors such as telecommunications, transport, defense, and aerospace.»

Claude Y. Laporte is the editor of the ISO/IEC 29110 standard of software engineering for very small entities (VSE).

DrKonqi and QtWebEngine

Posted by Daniel Vrátil on October 07, 2018 11:33 AM

Here’s a little tip on how to get DrKonqi, the KDE crash handler, to work in applications that use QtWebEngine.

If your application uses QtWebEngine, you probably noticed that DrKonqi doesn’t pop up when the program crashes. This is because QtWebEngine installs its own crash handler, overriding the one DrKonqi has set up.

The workaround is quite simple but not trivial to find, because all of it is undocumented (and not everyone wants to dig into Chromium code…). The trick is to add --disable-in-process-stack-traces to the QTWEBENGINE_CHROMIUM_FLAGS environment variable before initializing QtWebEngine:

// Append the flag before the first QtWebEngine component is created,
// otherwise Chromium's own crash handler is already installed.
const auto chromiumFlags = qgetenv("QTWEBENGINE_CHROMIUM_FLAGS");
if (!chromiumFlags.contains("disable-in-process-stack-traces")) {
    qputenv("QTWEBENGINE_CHROMIUM_FLAGS", chromiumFlags + " --disable-in-process-stack-traces");
}
...
auto view = new QWebEngineView(this);
...

Here’s a full example of how we fixed this in Kontact.

A Fedora 28 Remix for Tegra using i3

Posted by Nicolas Chauvet on October 07, 2018 10:29 AM

This is dedicated to older Tegra chips such as Tegra20, Tegra30 and Tegra114. It can work on Tegra K1, but at this time using Fedora 29 is a better choice, especially as Fedora 29 on Tegra K1 and later has support for GPU acceleration with nouveau.

The image integrates the grate-driver, which provides a reverse-engineered Mesa driver (FLOSS, but not yet upstream). It only advertises OpenGL 1.4 so far, but it can at least run glxgears fine, which is not the case with the softpipe driver on Tegra20.

There is also a HW video decode driver using libvdpau_tegra (https://github.com/grate-driver/libvdpau-tegra#usage-example).

Please use mpv or vlc from RPM Fusion (https://rpmfusion.org) for video acceleration.
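For example, hardware decoding can be requested explicitly when testing (the file name is a placeholder):

$ mpv --hwdec=vdpau video.mkv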

This spin was only tested on Toshiba AC100 and Trimslice.

* Download the Tegra Fedora 28 Respin via torrent: http://dl.kwizart.net/pub/tegra/28/Fedora-remix-tegra-i3-28-20180918.n.0-sda.raw.xz.torrent

* Install the disk image on an SD card

> SDCARD=/dev/sdc (to be adapted)

> xzcat Fedora-remix-tegra-i3-28-20180918.n.0-sda.raw.xz | sudo dd of=${SDCARD} bs=4M

You need a recent u-boot version with extlinux support. I recommend the 2018.09 release or later.

To update the bootloader on Tegra20 devices, please see:
https://github.com/NVIDIA/tegra-uboot-flasher-scripts

Remember that the paz00 still lacks keyboard support in the upstream bootloader. To get keyboard support you can use this tree:
https://github.com/ac100-ru/u-boot-ac100-exp/tree/nvec-dev-mainline-master-2017-07-15
There is a pre-built u-boot binary based on this tree in the same directory.

If you want to install the packaged grate-driver on top of an official Fedora 28 image:
> sudo curl https://repos.fedorapeople.org/repos/kwizart/ac100/fedora-ac100.repo -o /etc/yum.repos.d/fedora-ac100.repo
This will replace Mesa and libdrm, but you can keep the Fedora kernel.

Interested in having an official i3 spin in Fedora?
For Tegra it will depend on the upstreaming of the grate-driver, but I've submitted a PR to have a generic i3 spin, since some ARM or AArch64 based devices can output to a display but may not have enough accelerated desktop capability (unless using a proprietary or downstream driver that won't be in Fedora). See the pull request if you want to help:
https://pagure.io/fedora-kickstarts/pull-request/428

Fedora 29 Upgrade Test Day 2018-10-08

Posted by Fedora Community Blog on October 07, 2018 08:57 AM

Monday, 2018-10-08, is the Fedora 29 Upgrade Test Day!
As part of this planned change for Fedora 29, we need your help to test if everything runs smoothly!

Why Upgrade Test Day?

As we approach the Final Release date for Fedora 29, most users will be upgrading to Fedora 29, and this test day will help us understand if everything is working perfectly. This test day will cover both a GNOME graphical upgrade and an upgrade done using DNF.

We need your help!

All the instructions are on the wiki page, so please read through and come help us test! As always, the event will be in #fedora-test-day on Freenode IRC.

Share this!

Help promote the Test Day and share the article in your own circles!


NeuroFedora: towards a ready to use Free/Open source environment for neuroscientists

Posted by Ankur Sinha "FranciscoD" on October 05, 2018 11:19 PM

I've recently resurrected the NeuroFedora SIG. Many thanks to Igor and the others who had worked on it in the past and have given us a firm base to build on.

The goal

The (current) goal of the NeuroFedora SIG is to make Fedora an easy to use platform for neuroscientists. We aim to do this by making commonly used Neuroscience software easily installable on a Fedora system.

Neuroscience is an extremely multidisciplinary field. It brings together mathematicians, chemists, biologists, physicists, psychologists, engineers (electrical and others), computer scientists and more. A lot of software is used nowadays in Neuroscience for:

  • data collection, analysis, and sharing
  • lots of image processing (a lot of ML is used here, think Data Science)
  • simulation of brain networks (NEURON, Nest, Moose, PyNN, Brian)
  • dissemination of scientific results (peer reviewed and otherwise, think LaTeX)

Given that a large proportion of neuroscientists are not trained in computer science, a lot of time and effort is spent setting up systems and installing software (often building whole dependency chains from source). This can be hard for people not well-versed in build systems and the like.

So, at NeuroFedora, we will provide a ready to use Fedora based system for neuroscientists to work with, so they can quickly get their environment set up and work on the science.

Why Fedora?

For one, I have been a contributor for a while and know the community and the infrastructure quite well. That applies to me and to others from the Fedora community who may work on this, not to the research community in general.

Technically, there are many advantages to using Fedora as a base. Fedora is closely linked to the Red Hat Enterprise Linux ecosystem, which CentOS is a part of and which Scientific Linux is based on too (recently, CoreOS also joined the Red Hat family). RPM based systems are commonly deployed on supercomputers and clusters, so making this software available on Fedora also makes it simpler to make it available on those systems. Additionally, the Fedora community is promoting Flatpaks and working to permit multiple versions of software via the modularity system. Fedora also supports Docker very well.

Join us!

Packaging software is only one way in which one can contribute. Writing docs and answering questions about the software in NeuroFedora are other ways too, for example. If you are interested in neuroscience and in promoting Open Science, please consider joining the SIG. You can get in touch with us via one of our many communication channels.

This invitation extends to all: undergraduates, post-graduates, trainee researchers (PhD candidates like me), professional researchers, hobbyists, and everyone else. If you work in the field already, it is a great way of supporting the research community. For others, it is a great place to learn about neuroscience, Free Software, and the various technical skills that go into developing software.

Current status

We track the software we are working on here. A lot of software is now ready to use in Fedora. This includes various Python libraries and simulators such as Nest and Moose. Neuron, Brian, and PyNN are all in the pipeline. All of TeX Live is also available in Fedora. If there is other Free/Open source software that you use which isn't on our list, please let us know. If you can help maintain it with us, that'll be even better.

Fedora/Free software and Science

Open science shares the philosophy of FOSS: the data, the tools, and the results should be accessible to all to understand, use, learn from, and develop. More and more researchers are making it a point to keep science as open as possible, whether that is to do with the tools or with the dissemination of their findings. NeuroFedora hopes to aid this movement. Come, join us!

GStreamer Conference 2018

Posted by Christian F.K. Schaller on October 05, 2018 05:08 PM

This year the GStreamer Conference will take place for the 9th time. It will be in Edinburgh, UK, right after the Embedded Linux Conference Europe, on the 25th and 26th of October. The GStreamer Conference is always a lot of fun with a wide variety of talks around Linux and multimedia, not all of them tied to GStreamer itself; for instance, in the past we had a lot of talks about PulseAudio, V4L, OpenGL and Vulkan, and new codecs. This year I am really looking forward to talks such as the DeepStream talk by NVidia, Bringing Deep Neural Networks to GStreamer by Pexip, and D3Dx Video Game Streaming on Windows by Bebo, to mention a few.

For a variety of reasons I missed the last couple of conferences, but this year I will be back in attendance and I am really looking forward to it. In fact it will be the first GStreamer Conference I am attending that I am not the organizer for, so it will be nice to really be able to just enjoy the conference and the hallway track this time.

So if you haven’t booked yourself in already I strongly recommend going to the GStreamer Conference website and getting yourself signed up to attend.

See you all in Edinburgh!

Also looking forward to seeing everyone attending the PipeWire Hackfest happening right after the GStreamer Conference.

Stigmatizing volunteers who miss an event

Posted by Daniel Pocock on October 05, 2018 08:35 AM

In various free software communities, I've come across incidents where people have been criticized inappropriately when they couldn't attend an event or didn't meet other people's expectations. This has happened to me a few times and I've seen it happen to other people too.

As it turns out, this is an incredibly bad thing to do. I'm not writing this to criticize any one person or group in return. Rather, I'm writing it in the hope that people who are still holding grudges like this might finally put them aside, and also to reassure other volunteers that you don't have to accept this type of criticism.

Here are some of the comments I received personally:

"Last year, you signed up for the conference but didn't attend, cancelling on the last minute, when you had already been ..."

"says the person who didn't attend any of the two ... he was invited to, because, well, he had no time"

"you didn't stayed at the booth enough at ..., never showed up at the booth at the ... and never joined ..."

Having seen this in multiple places, I don't want this blog to focus on any one organization, person or event.

In all these cases, the emails were sent to large groups on CC, one of them appeared on a public list. Nobody else stepped in to point out how wrong this is.

Out of these three incidents, one of them subsequently apologized and I sincerely thank him for that.

The emails these were taken from were quite negative and accusatory. In two of these cases, the accusation was being made after almost a year had passed. It leaves me wondering how many people in the free software community are holding grudges like this and for how long.

Personally, going to an event usually means giving talks and workshops. Where possible, I try to involve other people in my presentations too and may disappear for an hour or skip a social gathering while we review slides. Every volunteer, whether they are speakers, organizers or whatever else usually knows the most important place where they should be at any moment and it isn't helpful to criticize them months later without even asking, for example, about what they were doing rather than what they weren't doing.

Think about some of the cases where a volunteer might cancel their trip or leave an event early:

  • At the last minute they decided to go to the pub instead.
  • They never intended to go in the first place and wanted to waste your time.
  • They are not completely comfortable telling you their reason because they haven't got to know you well enough or they don't want to put it in an email.
  • There is some incredibly difficult personal issue that may well be impossible for them to tell you about because it is uncomfortable or has privacy implications. Imagine a sibling committing suicide, somebody or their spouse having a miscarriage, a child with a mental health issue, or a developer who is simply burnt out. A lot of people wouldn't tell you about tragedies in this category and they are entitled to their privacy.

When you think about it, the first two cases are actually really unlikely. You don't do that yourself, so why do you assume or imply that any other member of the community would behave that way?

So it comes down to the fact that when something like this happens, it is probably one of the latter two cases.

Even if it really was one of the first two cases, criticizing them won't make them more likely to show up next time, it has no positive consequences.

In the third case, if the person doesn't trust you well enough to tell you the reason they changed their plans, they are going to trust you even less after this criticism.

In the fourth case, your criticism is going to be extraordinarily hurtful for them. Blaming them, criticizing them, stigmatizing them and even punishing them and impeding their future participation will appear incredibly cruel from the perspective of anybody who has suffered from some serious tragedy: yet these things have happened right under our noses in respected free software projects.

What is more, the way the subconscious mind works and makes associations, they are going to be reminded about that tragedy or crisis when they see you (or one of your emails) in future. They may become quite brisk in dealing with you or go out of their way to avoid you.

Many organizations have adopted codes of conduct recently. In Debian, it calls on us to assume good faith. The implication is that if somebody doesn't behave the way you hope or expect, or if somebody tells you there is a personal problem without giving you any details, the safest thing to do and the only thing to do is to assume it is something in the most serious category and treat them with the respect that you would show them if they had fully explained such difficult circumstances to you.

Fedora Classroom Session: Fedora Modularity 101

Posted by Fedora Magazine on October 05, 2018 08:00 AM

Fedora Classroom sessions continue next week with a session on Fedora Modularity. The general schedule for sessions appears on the wiki. You can also find resources and recordings from previous sessions there. Here are details about this week’s session on Tuesday, October 09 at 1400 UTC. That link allows you to convert the time to your timezone.

Topic: Fedora Modularity 101

As the Fedora Modularity docs page explains, Modularity introduces a new optional repository to Fedora called Modular (often referred to as the “Application Stream” or AppStream for short). This repo ships additional versions of software on independent life cycles. This way users can keep their operating system up-to-date while having the right version of an application for their use case, even when the default version in the distribution changes.
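In practice, working with module streams from the command line looks roughly like this (the module name and stream here are only illustrative):

$ dnf module list nodejs
$ sudo dnf module install nodejs:10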

Here’s the agenda for the Classroom session:

Fedora Modularity 101
  1. Why Modularity?
  2. User demo.
  3. Main differences in packager workflows
  4.  A walk through the whole process of making a module in Fedora.

Instructor

Adam Samalik is an open source enthusiast using Linux since 2007. He started with many random distributions, then used Arch Linux for a few years, and then switched to Fedora 20 when he realized he needed more stability in his life to get work done.

Adam spends most of his time these days developing and advocating for Fedora Modularity, and also developing the build-side of the new Fedora Docs website.

Other than that, he enjoys giving talks, presenting demos, making graphical designs, and writing — all of that with various levels of proficiency.

Joining the session

This session takes place on Blue Jeans. The following information will help you join the session:

We hope you attend, learn from, and enjoy this session. Also, if you have any feedback about the sessions, have ideas for a new one, or want to host a session, feel free to comment on this post or edit the Classroom wiki page.