Fedora People

Remember the extra metadata if you change a desktop ID!

Posted by Richard Hughes on April 23, 2019 11:48 AM

This is important if you’re the upstream maintainer of an application: If you change the desktop ID then it’s like breaking API. Changing a desktop ID should be done carefully and in a development branch only — and then you need to communicate it and give the distros a chance to adapt to the new name.

If you’re just changing the desktop ID and not forking development, you also need to add something like this in your metainfo.xml file:

  <provides>
    <id>old-name.desktop</id>
  </provides>

GNOME Software gets lots of bugs about showing “duplicate” search results, but there’s no reliable way it can know that calibre-gui.desktop is the same app as com.calibre_ebook.calibre without some help. If you’re a packager building an application for something like Flathub you only need to include the extra provides line if you’re adding a new metainfo.xml file rather than just using rename-appdata-file in the JSON file.
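
If you want to sanity-check the result, the appstream-glib project ships a validator that will catch malformed metainfo files; for example (the file name here is just a placeholder):

appstream-util validate com.example.App.metainfo.xml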

Fedora Docs Translations FAD Report

Posted by Fedora Community Blog on April 23, 2019 07:37 AM

Last week Jean-Baptiste Holcroft and Adam Šamalík met in Strasbourg for a Docs Translations mini-FAD to prototype translations support for the Fedora Docs website. And we did a lot of work! This post is a report from the event, a status report, and a brief plan for how to move forward.

Our goal was to make sure we’re both on the same page about how it’s all going to work, to do some coding and publish a functional prototype, and to write a set of requirements for a potential production deployment.

The event took place at a co-working space, Le Shadok, and we are grateful we were able to use the space for free.

Pipeline architecture

The architecture we went for is flexible enough to support many different workflows and tools for translators while having as little impact on the Docs build pipeline as possible. To achieve that, we use the commonly used POT and PO files as an interface between the Docs build pipeline and the translators — so both worlds can operate at their own pace.
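
Under the hood this interface is just the standard gettext workflow. As a rough illustration (these are not the actual Fedora scripts, and the file names are hypothetical):

# refresh a French PO file against a regenerated POT
msgmerge --update fr/modularity.po pot/modularity.pot
# check how much of it is translated
msgfmt --statistics fr/modularity.po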

Because Antora doesn’t yet support multiple languages, we work around that by building a separate site for each language, and we include a piece of UI that allows users to choose which language they want to view. One benefit of this approach is that all the internal links keep functioning within a language out of the box. And we’ll be ready to switch to proper language support later.


Progress at the event

There were three main areas of focus: coordination, coding, and making a plan for the future.

The architecture is not terribly complicated, but the devil is in the details — so the ability to get together and make sure we both understand it the same way and that it works nicely for both the Fedora Docs build pipeline and the translators was very beneficial.

From the technology perspective, we have focused on automation and bug-fixes (read: a complete rewrite) of the existing code, writing new parts to make it all work together, and thinking about ways of production deployment in the future.

The most visible change has been the UI that allows users to choose a language on the site.

However, the most work went into the scripts that do the conversion between Antora sources and the POT and PO files. We had some very simple functionality before the event, but it required a lot of human intervention to keep working over time. Adding automation and fixing bugs turned into a complete rewrite of the scripts, and the result is one script that does it all. You can see it in the fedora-docs/translations-scripts repository on Pagure.

Finally, we have set up Jean-Baptiste’s Weblate instance to work with all the sources we’ve generated, and to support French, Czech, and Japanese.

The result

The website preview has been deployed to Fedora Docs Staging. Languages can be switched using the menu in the top-right corner.

The Antora build, including the languages, is now being built automatically in Fedora OpenShift. However, the Antora sources with the translated content have to be generated manually.

Jean-Baptiste’s Weblate instance is available for people to try — it supports FAS (Fedora Account System) login.

Fun statistics: we have converted over 1,500 source pages into translatable POT files. That resulted in 65,000 sentences to be translated into each language.

At the moment, all of this is just to demonstrate how the whole pipeline works and to test it. Production deployment is yet to be discussed.

Next steps

There are still a few things missing in the pipeline, such as support for images, and a few minor bugs. We feel it is good enough to be tested now, but we need to fix these before going to production.

To deploy this in production, there are two things missing: a service that will do the conversion between Antora sources and the POT and PO files (that is done manually now), and a translation platform.

To deploy the scripts into production, there is still some hardening to be done, but once that’s completed, it will only require an OpenShift service that will run periodically.

For the translation platform, there are still conversations to be had about which one to use and where to run it. We have used Weblate in this example because we believe it does a great job for what we need.

We invite people to try what we’ve built and send us feedback, either here in the comments or on the Fedora Docs mailing list.

The post Fedora Docs Translations FAD Report appeared first on Fedora Community Blog.

Fedora 30 Upgrade Test Day 2019-04-26

Posted by Fedora Community Blog on April 22, 2019 10:08 PM
Test Day: Fedora 30 Upgrade Test Day

Friday, 2019-04-26, is the Fedora 30 Upgrade Test Day!
As part of this planned change for Fedora 30, we need your help to test if everything runs smoothly!

Why Upgrade Test Day?

As we approach the final release date for Fedora 30, most users will be upgrading, and this test day will help us understand if everything is working properly. The test day will cover both a GNOME graphical upgrade and an upgrade done using DNF.

We need your help!

All the instructions are on the wiki page, so please read through and come help us test! As always, the event will be in #fedora-test-day on Freenode IRC.

Share this!

Help promote the Test Day and share the article in your own circles!

The post Fedora 30 Upgrade Test Day 2019-04-26 appeared first on Fedora Community Blog.

Pi-hole with DNS over TLS on Fedora

Posted by Christopher Smart on April 22, 2019 12:44 PM

Quick and dirty guide to using Pi-hole with Stubby to provide both advertisement blocking and DNS over TLS. I’m using Fedora 29 ARM server edition on a Raspberry Pi 3.

Download Fedora server ARM edition and write it to an SD card for the Raspberry Pi 3.

sudo fedora-arm-image-installer --resizefs --image=Fedora-Server-armhfp-29-1.2-sda.raw.xz --target=rpi3 --media=/dev/mmcblk0

Make sure your Raspberry Pi can already resolve DNS queries from some other source, such as your router or internet provider.

Log into the Fedora Server Cockpit web interface for the server (port 9090) and enable automatic updates from the Software tab. Otherwise, you can update manually:

sudo dnf -y update && sudo reboot

Install Stubby

Install Stubby to forward DNS requests over TLS.

sudo dnf install getdns bind-utils

Edit the Stubby config file.

sudo vim /etc/stubby/stubby.yml

Set listen_addresses to localhost 127.0.0.1 on port 53000 (also set your preferred upstream DNS providers, if you want to change the defaults, e.g. CloudFlare).

listen_addresses:
  - 127.0.0.1@53000
  - 0::1@53000

Start and enable Stubby, checking that it’s listening on port 53000.

sudo systemctl restart stubby
sudo ss -lunp |grep 53000
sudo systemctl enable stubby

Stubby should now be listening on port 53000, which we can test with dig. The following command should return an IP address for google.com.

dig @localhost -p 53000 google.com

Next we’ll use Pi-hole as a caching DNS service to forward requests to Stubby (and provide advertisement blocking).

Install Pi-hole

Sadly, Pi-hole doesn’t support SELinux at the moment so set it to permissive mode (or write your own rules).

sudo setenforce 0
sudo sed -i s/^SELINUX=.*/SELINUX=permissive/g /etc/selinux/config

Install Pi-hole from their Git repository.

sudo dnf install git
git clone --depth 1 https://github.com/pi-hole/pi-hole.git Pi-hole
cd "Pi-hole/automated install/"
sudo ./basic-install.sh

The installer will run, install dependencies, and prompt for configuration. When asked which DNS to use, select Custom from the bottom of the list.

[Image: Custom DNS servers]

Set the server to 127.0.0.1 (note that we cannot set the port here; we’ll do that later).

[Image: Use local DNS server]

In the rest of the installer, enable the web interface and web server if you like, and allow it to modify the firewall, otherwise this won’t work at all! 🙂 Make sure you take note of your admin password from the last screen, too.

Finally, add the port to our upstream (localhost) DNS server so that Pi-hole can forward requests to Stubby.

sudo sed -i '/^server=/ s/$/#53000/' /etc/dnsmasq.d/01-pihole.conf
sudo sed -i '/^PIHOLE_DNS_[1-9]=/ s/$/#53000/' /etc/pihole/setupVars.conf
sudo systemctl restart pihole-FTL

If you don’t want to muck around with localhost and ports you could probably add an IP alias and bind your Stubby to that on port 53 instead.
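
A rough sketch of that idea (the interface name and address are hypothetical; adjust them for your network):

# add a second address for Stubby to listen on
sudo ip addr add 192.168.1.53/32 dev eth0
# then set listen_addresses in stubby.yml to 192.168.1.53 (default port 53)
# and point Pi-hole's upstream DNS at 192.168.1.53 with no custom port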

Testing

On a machine on your network, set /etc/resolv.conf to point to the IP address of your Pi-hole server to use it for DNS.

On the Pi-hole, watch incoming DNS requests with tcpdump to ensure the services are listening and forwarding on the right ports.

sudo tcpdump -Xnn -i any port 53 or port 53000 or port 853

Back on your client machine, ping google.com and with any luck it will resolve.

For a new query, tcpdump on your Pi-hole box should show an incoming request from the client machine to your Pi-hole on port 53, a follow-up localhost request to port 53000, then an outward request from your Pi-hole to port 853, and finally the returned result back to your client machine.

You should also notice that the payloads of the internal DNS queries are plain text, while the remote ones are encrypted.

Web interface

Start browsing around and see if you notice any difference where you’d normally see ads. Then jump onto the web interface on your Pi-hole box and take a look around.

[Image: Pi-hole web interface]

If that all worked, you could get your DHCP server to point clients to your shiny new Pi-hole box (i.e. use DHCP options 6,<ip_address>).
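
If your DHCP server happens to be dnsmasq, that would look something like this (the address is an example for a Pi-hole at 192.168.1.10):

# hand out the Pi-hole as the DNS server (DHCP option 6)
dhcp-option=6,192.168.1.10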

If you’re feeling extra brave, you could redirect all unencrypted DNS traffic on port 53 back to your internal DNS before it leaves your network, but that might be another blog post…
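
As a teaser, on a router running iptables the redirect might look roughly like this (assuming the Pi-hole lives at 192.168.1.10 and br0 is the LAN interface; both are examples):

# send any non-Pi-hole port 53 traffic back to the Pi-hole
sudo iptables -t nat -A PREROUTING -i br0 -p udp --dport 53 ! -s 192.168.1.10 -j DNAT --to-destination 192.168.1.10:53
sudo iptables -t nat -A PREROUTING -i br0 -p tcp --dport 53 ! -s 192.168.1.10 -j DNAT --to-destination 192.168.1.10:53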


The Linux desktop is not in trouble

Posted by Ben Cotton on April 22, 2019 11:39 AM

Writing for ZDNet earlier this month, Steven J. Vaughan-Nichols declared trouble for the Linux desktop. He’s wrong.

Or maybe not. Maybe we’re just looking at different parts of the elephant. sjvn’s core argument, if I may sum it up, is that fragmentation is holding back the Linux desktop. Linux can’t gain significant traction in the desktop market because there are just so many options. This appeals to computer nerds, but leads to confusion for general users who don’t want to care about whether they’re running GNOME or KDE Plasma or whatever.

Fragmentation

I’m sympathetic to that argument. When I was writing documentation for Fedora, we generally wrote instructions for GNOME, since that was the default desktop. Fedora users can also choose from KDE Plasma, LXQt, and Xfce spins, and can install other desktop environments besides. If someone installs KDE Plasma because that’s what their friend gave them, will they be able to follow the documentation? If not, will they get frustrated and move back to Windows or MacOS?

Even if they stick it out, there are two large players in the GUI toolkit world: GTK and Qt. You can use an app written in one in a desktop environment written in the other, but it doesn’t always look very good. And the configuration settings may not be consistent between apps, which is also frustrating.

Corporate indifference

Apart from that, sjvn also laments the lack of desktop effort from major Linux vendors:

True, the broad strokes of the Linux desktop are painted primarily by Canonical and Red Hat, but the desktop is far from their top priority. Instead, much of the nuts and bolts of the current generation of the Linux desktop is set by vendor-related communities: Red Hat, Fedora, SUSE’s openSUSE, and Canonical’s Ubuntu.

I would argue that this is the way it should be. As he notes in the preceding paragraph, the focus of revenue generation is on enterprise servers and cloud. There are two reasons for that: that’s where the customer money is and enterprises don’t want to innovate on their desktops.

I’ll leave the first part to someone else, but I think the “enterprises don’t want to innovate on their desktops” part is important. I’ve worked at and in support of some large organizations and in all cases, they didn’t want anything more from their desktops than “it allows our users to run their business applications in a reliable manner”. Combine this with the tendency of the enterprise to keep their upgrade cycles long and it makes no sense to keep desktop innovation in the enterprise product.

Community distributions are generally more focused on individuals or small organizations who may be more willing to accept disruptive change as the paradigm is moved forward. This is true beyond the desktop, too. Consider changes like the adoption of systemd or replacing yum with dnf: these also appeared in the community distributions first, but I didn’t see that used as a case for “enterprise Linux distributions are in trouble.”

What’s the answer?

Looking ahead, I’d love to see a foundation bring together the Linux desktop community and have them hammer out a common desktop for everyone. Yes, I know, I know. Many hardcore Linux users love having a variety of choices. The world is not made up of desktop Linux users. For the million or so of us, there are hundreds of millions who want an easy-to-use desktop that’s not Windows, doesn’t require buying a Mac, and comes with broad software and hardware support.

Setting aside the XKCD #927 argument, I don’t know that this is an answer. Even if the major distros agreed to standardize on the same desktop (and with Ubuntu returning to GNOME, that’s now the case), that won’t stop effort on other desktops. If the corporate sponsors don’t invest any effort, the communities still will. People will use whatever is provided to them in the workplace, so presenting a single standard desktop to consumers will rely on the folks who make the community distributions to agree to that. It won’t happen.

But here’s the crux of my disagreement with this article. The facts are all correct, even if I disagree with the interpretation of some of them. The issue is that we’re not looking at the success of the Linux desktop in the same way.

If you define “Linux desktop” as “a desktop environment that runs the Linux kernel”, then ChromeOS is doing quite well, and will probably continue to grow (unless Google gets bored with it). In that case, the Linux desktop is not in trouble, it’s enjoying unprecedented success.

But when most people say “Linux desktop”, they think of a traditional desktop model. In this case, the threat to Linux desktops is the same as the threat to Windows and MacOS: desktops matter less these days. So much computing, particularly for consumers, happens in the web browser when done on a PC at all.

Rethinking the goal

This brings me back to my regular refrain: using a computer is a means, not an end. People don’t run a desktop environment to run a desktop environment, they run a desktop environment because it enables them to do the things they want to do. As those things are increasingly done on mobile or in the web browser, achieving dominant market share for desktops is no longer a meaningful goal (if, indeed, it ever was).

Many current Linux desktop users are (I guess) motivated at least in part by free software ideals. This is not a mainstream position. Consumers will need more practical reasons to choose any Linux desktop over the proprietary OS that was shipped by the computer’s manufacturer.

With that in mind, the answer isn’t standardization, it’s making the experience better. Fedora Silverblue and openSUSE Kubic are efforts in that direction. Using those as a base, with Flatpaks to distribute applications, the need for standardization at the desktop environment level decreases because the users are mostly interacting with the application level, one step above.

The usual disclaimer applies: I am a Red Hat employee who works on Fedora. The views in this post are my own and not necessarily the views of Red Hat, the Fedora Council, or anyone else. They may not even be my views by the time you read this.

The post The Linux desktop is not in trouble appeared first on Blog Fiasco.

TeleIRC v1.3.1 released with quality-of-life improvements

Posted by Justin W. Flory on April 22, 2019 08:30 AM
RITlug/teleirc development update

On April 20th, 2019, the TeleIRC development team released TeleIRC v1.3.1, the latest version after the final development sprint of the university semester. This release introduces minor improvements, scoped down to accommodate heavier workloads on our volunteer contributors, but it also gave us an opportunity to reduce technical debt. This blog post explains what’s new in TeleIRC v1.3.1 and offers a retrospective on how this last sprint went.

Special thanks and appreciation goes to Tim Zabel and Nic Hartley for their contributions this release cycle.

What’s new

  • Bold usernames in message prefixes (#134, Nic Hartley)
  • Include filetype in IRC string when a document is uploaded on Telegram (#139, Tim Zabel)
  • Include zero-width space in username for join/part messages to group (#139, Tim Zabel)

Additionally, contributor documentation improved: I added contributor guidelines and instructions for setting up a development environment. Also, our friends at Ura Design created our new project logo. Thanks to Ura, we have an awesome logo and stickers in time for Imagine RIT 2019 later this month!


TeleIRC v1.3.1: sprint retrospective

Originally, we planned to release v1.4 at the end of this sprint. For a number of reasons, this did not happen. We decided to reduce our scope and finish strong with a bugfix release instead of the originally-planned feature release. This retrospective summarizes “lessons learned” for future project sprints with a team of university students.

Extended holidays are sprint bookends

In the last sprint, our university had a week-long break from classes. Most students use this time to visit family or travel outside of Rochester. Originally, we agreed to pause the sprint and resume when we returned. In retrospect, it didn’t work out like that.

It was harder to start again when we returned from the break. Instead of an extended holiday acting as a pause in an ongoing sprint, extended holiday breaks should divide two separate sprints. The breaks from classes are personal time; working on projects is not possible for everyone. The interruption caused by a break impacts productivity of the team. Therefore, future sprint planning will take the university calendar into consideration.

Adjustable sprint length to semester

Sprints should have an adjustable length depending on where in the academic semester they fall. For example, earlier this semester, we released v1.3 in a two-week sprint. For this v1.3.1 release, it took over a month. What happened? Should sprints have a variable length?

When working with an academic crowd, variable sprint lengths are worth considering. The first half of a semester typically has less assigned coursework, and final projects are not yet in play. Therefore, team members usually have more time to invest in the project at the start of a semester. Towards the end of the semester, coursework and class projects pile up and make it difficult to find bandwidth for side projects like TeleIRC.

The compromise is keeping our sprints short at the start of a semester and stretching them out as a semester goes on. This gives students more flexibility to work at a pace that encourages quality work but isn’t overwhelming with other responsibilities of being a student. Going forward, we will try variable-length sprints in the Fall 2019 semester.

Get involved with TeleIRC!

More opportunities are coming to participate with TeleIRC! The team is happy for new people to join us. Opportunities are available for short-term and long-term contributions.

Come say hello in our developer chat rooms, either on IRC or in Telegram! Watch for TeleIRC development reports on my blog for more announcements.


Background photo by Daria Nepriakhina on Unsplash.

The post TeleIRC v1.3.1 released with quality-of-life improvements appeared first on Justin W. Flory's Blog.

2 new apps for music tweakers on Fedora Workstation

Posted by Fedora Magazine on April 22, 2019 08:00 AM

Linux operating systems are great for making unique customizations and tweaks to make your computer work better for you. For example, the i3 window manager encourages users to think about the different components and pieces that make up the modern Linux desktop.

Fedora has two new packages of interest for music tweakers: mpris-scrobbler and playerctl. mpris-scrobbler tracks your music listening history on a music-tracking service like Last.fm and/or ListenBrainz. playerctl is a command-line music player controller.

mpris-scrobbler records your music listening trends

mpris-scrobbler is a CLI application to submit play history of your music to a service like Last.fm, Libre.fm, or ListenBrainz. It listens on the MPRIS D-Bus interface to detect what’s playing. It connects with several different music clients like spotify-client, vlc, audacious, bmp, cmus, and others.

[Image: Last.fm last week in music report, generated from user-submitted listening history.]

Install and configure mpris-scrobbler

mpris-scrobbler is available for Fedora 28 or later, as well as the EPEL 7 repositories. Run the following command in a terminal to install it:

sudo dnf install mpris-scrobbler

Once it is installed, use systemctl to start and enable the service. The following command starts mpris-scrobbler and ensures it always starts after a system reboot:

systemctl --user enable --now mpris-scrobbler.service

Submit plays to ListenBrainz

This article explains how to link mpris-scrobbler with a ListenBrainz account. To use Last.fm or Libre.fm, see the upstream documentation.

To submit plays to a ListenBrainz server, you need a ListenBrainz API token. If you have an account, get the token from your profile settings page. When you have a token, run this command to authenticate with your ListenBrainz API token:

$ mpris-scrobbler-signon token listenbrainz
Token for listenbrainz.org:

Finally, test it by playing a song in your preferred music client on Fedora. The songs you play appear on your ListenBrainz profile.
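
If nothing shows up on your profile, the service journal is a good first place to look:

journalctl --user -u mpris-scrobbler.service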

[Image: Basic statistics and play history from a user profile on ListenBrainz. The current track is playing on a Fedora Workstation laptop with mpris-scrobbler.]

playerctl controls your music playback

playerctl is a CLI tool to control any music player implementing the MPRIS D-Bus interface. You can easily bind it to keyboard shortcuts or media hotkeys. Here’s how to install it, use it in the command line, and create key bindings for the i3 window manager.

Install and use playerctl

playerctl is available for Fedora 28 or later. Run the following command in a terminal to install it:

sudo dnf install playerctl

Now that it’s installed, you can use it right away. Open your preferred music player on Fedora. Next, try the following commands to control playback from a terminal.

To play or pause the currently playing track:

playerctl play-pause

If you want to skip to the next track:

playerctl next

For a list of all running players:

playerctl -l

To play or pause what’s currently playing, only on the spotify-client app:

playerctl -p spotify play-pause
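
Recent playerctl releases can also print metadata about the current track; the format string below is just an illustration:

playerctl metadata --format '{{ artist }} - {{ title }}'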

Create playerctl key bindings in i3wm

Do you use a window manager like the i3 window manager? Try using playerctl for key bindings. You can bind different commands to different key shortcuts, like the play/pause buttons on your keyboard. Look at the following i3wm config excerpt to see how:

# Media player controls
bindsym XF86AudioPlay exec "playerctl play-pause"
bindsym XF86AudioNext exec "playerctl next"
bindsym XF86AudioPrev exec "playerctl previous"

Try it out with your favorite music players

Need to know more about customizing the music listening experience on Fedora? The Fedora Magazine has you covered. Check out these five cool music players on Fedora:

5 cool music player apps: https://fedoramagazine.org/5-cool-music-player-apps/

Bring order to your music library chaos by sorting and organizing it with MusicBrainz Picard:

Picard brings order to your music library: https://fedoramagazine.org/picard-brings-order-music-library/

Photo by Frank Septillion on Unsplash.

Implicit function declarations: flex’s use of “reallocarray”

Posted by RHEL Developer on April 22, 2019 07:00 AM

Several months ago, I took over the maintenance of the flex package in Fedora and decided to kick the tires by rebasing the package in Fedora Rawhide. I downloaded and hashed the latest tarball at the time, flex-2.6.4, tweaked the spec file, and fired up a local build. Unfortunately, it failed with a SIGSEGV at build time:

./stage1flex -o stage1scan.c ./scan.l
make[2]: *** [Makefile:1695: stage1scan.c] Segmentation fault (core dumped)

Some debugging with gdb led me to the conclusion that the segmentation fault was the result of a block of memory returned from the reallocarray function being written to during flex initialization.  In this article, I’ll describe the issue further and explain changes made to address it.

Here is a simplified snippet of my gdb session:

(gdb) bt
#0 check_mul_overflow_size_t (right=1, left=2048, left@entry=0)
#1 __GI___libc_reallocarray (optr=0x0, nmemb=2048, elem_size=1)
#2 allocate_array at misc.c:147
#3 flexinit at main.c:974
#4 flex_main at main.c:168
#5 __libc_start_main
(gdb) fin
Run till exit from #0 check_mul_overflow_size_t
__GI___libc_reallocarray
33              return realloc (optr, bytes);
(gdb) fin
Run till exit from #0 __GI___libc_reallocarray
in allocate_array
147             mem = reallocarray(NULL, (size_t) size, element_size);
Value returned is $1 = (void *) 0x5555557c6420
(gdb) fin
Run till exit from #0 allocate_array
in flexinit
974             action_array = allocate_character_array (action_size);
Value returned is $2 = (void *) 0x557c6420
(gdb) n
975             defs1_offset = prolog_offset = action_offset = action_index = 0;
(gdb) n
976             action_array[0] = '\0';
(gdb) n
Program received signal SIGSEGV, Segmentation fault.

I didn’t notice anything off here right up to the point at which the segfault occurs, but maybe you already did. All I saw was that the returned pointer was non-NULL on line 974, but writing to it on line 976 resulted in a segfault. It began to look like a malloc bug.

On a whim, I built the same tarball outside of the Fedora build system. This time, the typical ./configure && make command line didn’t segfault at build time. So apparently the difference lay in the build options used by rpmbuild. Some trial and error led me to the cause: -pie, the linker flag that produces a position independent executable. Building with -pie caused the segmentation fault.

Armed with this “reproducer” and advice from my colleagues at Red Hat, I set about doing a git-bisect on the flex sources. HEAD was building cleanly on the upstream master branch at that point even with -pie, so it was just a matter of finding the commit that fixed the build. The commit in question was the fix for the following issue reported against flex upstream:

#241: “implicit declaration of function reallocarray is invalid in C99”

So, flex sources didn’t define _GNU_SOURCE, leading to the compiler’s seeing no declaration of the reallocarray function. In such cases, the compiler creates an implicit function declaration with the default return type (int) and generates code accordingly. On 64-bit Intel machines, the int type is only 32 bits wide while pointers are 64 bits wide. Going back and looking at the gdb session, it then became clear to me that the pointer gets truncated:

147             mem = reallocarray(NULL, (size_t) size, element_size);
Value returned is $1 = (void *) 0x5555557c6420
(gdb) fin
Run till exit from #0  allocate_array
in flexinit
974             action_array = allocate_character_array (action_size);
Value returned is $2 = (void *) 0x557c6420

This only happens in position independent executables because the heap gets mapped to a part of the address space where pointers are larger than INT_MAX, exposing the above flex bug. GCC actually warns of the presence of implicit function declarations via the -Wimplicit-function-declaration option. It appears that there was a fairly recent proposal to enable this warning in Fedora builds, but it was eventually shelved. If enabled, the warning would still cause the flex build to fail—but earlier and at a point where the problem was clear.
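
For illustration, promoting the warning to an error while building flex would have stopped the build at the real problem instead of at a runtime SIGSEGV. A sketch, run from the flex source tree:

./configure CFLAGS="-O2 -Werror=implicit-function-declaration"
make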

At this point, getting the build to compile successfully was a simple matter of backporting the corresponding flex patch that defines _GNU_SOURCE and exposes the reallocarray prototype to the compiler.

But we didn’t just stop there. One of my colleagues, Florian Weimer—a regular contributor to glibc—thought that all this could have been avoided if reallocarray had been exposed by glibc via the more general _DEFAULT_SOURCE feature test macro. The change has now been committed to glibc upstream and is available since glibc-2.29.

With this change, we hope to avoid similar situations in other components in Fedora and the glibc user community. glibc now provides the reallocarray function prototype unless the user explicitly requires stricter conformance to a given standard.

Share

The post Implicit function declarations: flex’s use of “reallocarray” appeared first on Red Hat Developer Blog.

Episode 142 - Hypothetical security: what if you find a USB flash drive?

Posted by Open Source Security Podcast on April 21, 2019 11:48 PM
Josh and Kurt talk about what one could do upon finding a USB drive. The context is based on the story where the Secret Service was rumored to have plugged a malicious USB drive into a computer. The purpose of the discussion is to explore how to handle a situation like this in the real world. We end the episode with a fantastic comparison of swim safety and security.



Show Notes


    4 cool new projects to try in COPR for April 2019

    Posted by Fedora Magazine on April 19, 2019 09:00 AM

    COPR is a collection of personal repositories for software that isn’t carried in Fedora. Some software doesn’t conform to standards that allow easy packaging. Or it may not meet other Fedora standards, despite being free and open source. COPR can offer these projects outside the Fedora set of packages. Software in COPR isn’t supported by Fedora infrastructure or signed by the project. However, it can be a neat way to try new or experimental software.

    Here’s a set of new and interesting projects in COPR.

    Joplin

    Joplin is a note-taking and to-do app. Notes are written in the Markdown format, and organized by sorting them into various notebooks and using tags.
    Joplin can import notes from any Markdown source or notes exported from Evernote. In addition to the desktop app, there’s an Android version with the ability to synchronize notes between devices — using Nextcloud, Dropbox or other cloud services. Finally, there’s a browser extension for Chrome and Firefox to save web pages and screenshots.


    Installation instructions

    The repo currently provides Joplin for Fedora 29 and 30, and for EPEL 7. To install Joplin, use these commands with sudo:

    sudo dnf copr enable taw/joplin
    sudo dnf install joplin

    Fzy

    Fzy is a command-line utility for fuzzy string searching. It reads from standard input and sorts the lines based on what is most likely the sought-after text, and then prints the selected line. In addition to the command line, fzy can also be used within vim. You can try fzy in this online demo.

    Installation instructions

    The repo currently provides fzy for Fedora 29, 30, and Rawhide, and other distributions. To install fzy, use these commands:

    sudo dnf copr enable lehrenfried/fzy
    sudo dnf install fzy
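
    Once installed, a quick way to try it is to pipe a file listing through it (this is just one common usage pattern):

    # fuzzy-pick a file from the current directory tree
    find . -type f | fzy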

    Fondo

    Fondo is a program for browsing photographs from the unsplash.com website. It has a simple interface that allows you to look for pictures in one of several themes, or all of them at once. You can then set a picture you find as your wallpaper with a single click, or share it.

    Installation instructions

    The repo currently provides Fondo for Fedora 29, 30, and Rawhide. To install Fondo, use these commands:

    sudo dnf copr enable atim/fondo
    sudo dnf install fondo

    YACReader

    YACReader is a digital comic book reader that supports many comic book and image formats, such as cbz, cbr, pdf and others. YACReader keeps track of reading progress, and can download comic metadata from Comic Vine. It also comes with YACReader Library for organizing and browsing your comic book collection.

    Installation instructions

    The repo currently provides YACReader for Fedora 29, 30, and Rawhide. To install YACReader, use these commands:

    sudo dnf copr enable atim/yacreader
    sudo dnf install yacreader

    PHP version 7.2.18RC1 and 7.3.5RC1

    Posted by Remi Collet on April 19, 2019 04:32 AM

    Release Candidate versions are available in the testing repository for Fedora and Enterprise Linux (RHEL / CentOS) to allow more people to test them. They are available as Software Collections, for parallel installation, a perfect solution for such tests (x86_64 only), and also as base packages.

    RPMs of PHP version 7.3.5RC1 are available as an SCL in the remi-test repository, and as base packages in the remi-test repository for Fedora 30 or in the remi-php73-test repository for Fedora 27-29 and Enterprise Linux.

    RPMs of PHP version 7.2.18RC1 are available as an SCL in the remi-test repository, and as base packages in the remi-test repository for Fedora 28-29 or in the remi-php72-test repository for Fedora 27 and Enterprise Linux.


    PHP version 7.1 is now in security mode only, so no more RC will be released.

    Installation: read the Repository configuration and choose your version.

    Parallel installation of version 7.3 as Software Collection:

    yum --enablerepo=remi-test install php73

    Parallel installation of version 7.2 as Software Collection:

    yum --enablerepo=remi-test install php72

    Update of system version 7.3:

    yum --enablerepo=remi-php73,remi-php73-test update php\*

    Update of system version 7.2:

    yum --enablerepo=remi-php72,remi-php72-test update php\*

    Notice: version 7.3.5RC1 in Fedora rawhide for QA.

    EL-7 packages are built using RHEL-7.6.

    The RC version is usually the same as the final version (no changes accepted after RC, except for security fixes).

    Software Collections (php72, php73)

    Base packages (php)

    FPgM report: 2019-16

    Posted by Fedora Community Blog on April 18, 2019 08:06 PM
    Fedora Program Manager weekly report on Fedora Project development and progress

    Here’s your report of what has happened in Fedora Program Management this week. The Fedora 30 final freeze is in effect. The Go/No-Go and release readiness meetings will be held on Thursday.

    I have weekly office hours in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else.

    Announcements and help wanted

    Help wanted

    Meetings and test days

    Fedora 30 Status

    Final freeze is in effect. The Fedora 30 GA is scheduled for 30 April 2019.

    Schedule

    • 2019-04-30 — Final preferred target
    • 2019-05-07 — Final target date #1

    Blocker bugs

    Bug ID    Blocker status      Component         Bug status
    1693409   Accepted (Final)    gdm               NEW
    1690429   Accepted (Final)    gnome-shell       ON_QA
    1688462   Accepted (Final)    libdnf            POST
    1697591   Proposed (Final)    xorg-x11-server   ASSIGNED
    1701279   Proposed (Final)    appstream-data    ON_QA
    1696270   Proposed (Final)    gnome-shell       ON_QA
    89216     Proposed (Final)    openssh           ASSIGNED

    Fedora 31 Status

    Changes

    Announced

    Submitted to FESCo

    The post FPgM report: 2019-16 appeared first on Fedora Community Blog.

    DBUS Server side library wish list

    Posted by Tony Asleson on April 18, 2019 05:35 PM

    Ideally it would be great if a DBUS server side library provided

    1. Fully implements the functionality needed for common interfaces (Properties, ObjectManager, Introspectable) in a sane and easy way and doesn’t require you to manually supply the interface XML.
    2. Allows you to register a number of objects simultaneously, so you can handle things like circular references. This avoids race conditions on the client.
    3. Ability to auto generate signals when object state changes and represent the state of the object separately for each interface on each object.
    4. Freeze/Thaw on objects or the whole model to minimize number of signals, but without requiring the user to manually code stuff up for signals.
    5. Configurable process/thread model and thread safe.
    6. Incrementally and dynamically add/remove an interface to an object without destroying the object and re-creating and while incrementally adding/removing the state as well.
    7. Handle object path construction issues, like having an object that needs an object path to another object that doesn’t yet exist.  This is alleviated if you have #8.
    8. Ability to create one or more objects without actually registering them with the service, so you can handle issues like #7 more easily; this is directly needed for #2. Thus you create 1 or more objects and register them together.
    9. Doesn’t require the use of code generating tools.
    10. Allow you to have multiple paths/name spaces which refer to the same objects. This would be useful for services that implement functionality in other services without requiring the clients to change.
    11. Allows you to remove a dbus object from the model while you are processing a method on it, e.g. the client is calling a remove method on the object it wants to remove.

    Stories from the amazing world of release-monitoring.org #4

    Posted by Fedora Community Blog on April 18, 2019 07:42 AM

    The Future chamber was lit by hundreds of candles, with strange symbols glowing on the walls. In the center of the chamber stood I, wearing the ceremonial robe and preparing for the task that lay before me.

    Somebody opened the doors; I turned around to see who that could be. “Oh, a pleasant surprise, traveler. Stand near the door and watch this, you will love it.” I focused back on my thoughts and added: “Today I will show you the future that is waiting for this realm, but first we need to see the current situation to understand the changes.”

    Current situation

    I started to cite the incantations, and above my head an image started to take form. “This is the manuscript to help you understand how release-monitoring.org works now. It’s simplified, so you won’t be bothered by details.”

    [Image: Manuscript of the current situation]

    “As you can see, the first thing that happens is letting Anitya know that there is a project it should track. This project is added either by Abstract Positive Intuition (API) or by some outside entity (User).”

    “When the project is added to Anitya, we start to send messengers to every project and watch for something new happening (a cron job checks every project for a new version). If we see something new, we send a message using the magical mirror (Fedora messaging).”

    “And here is when the-new-hotness comes into play. It needs to commune with the Great Oraculum (scm-requests pagure repository) and see if the Ever Vigilant Guard (package maintainer) wants to be notified about the news. In case he wants to be notified, we send a messenger to the land of Bugzilla. Optionally we send another to the realm of Koji, if the Ever Vigilant Guard wants this (create a scratch build in Koji).”

    “This is a simplified description of the current situation of release-monitoring.org. There are also plenty of additional smaller or larger issues that need to be addressed, but this is all for now. If you want to know more about the other issues, traveler, please visit the Bugcronomicon of Anitya and the-new-hotness.”

    Near future

    I broke my concentration and the image started to fade away. I started to collect my power again, this time to reveal more than just the current state of things. My mind must flow through the currents of time. “Now I will show you the future that lies in front of us.” I concentrated again and a new image started to take shape above my head.

    [Image: Manuscript of the near future]

    “What you can see here is the near future: things that are either being worked on or are already planned. So what is the difference from the current situation? Let’s see.”

    “There will be a new option for adding a new project: we will now welcome messengers from the far away realm of libraries.io. This new connection was requested by the mages from the realm of copr.”

    “The next change is to stop sending messengers periodically to every project; instead we will use a queue (replacing the cron job with a service) that will first send the messengers to the projects which denied access to previous messengers (rate limiting).”

    “We also want to establish a new connection between the-new-hotness and Flathub to help them in their journey. We want to use the Abstract Positive Intuition (API) of the realm of GitHub to notify them about news related to their projects.”

    “The Great Oraculum (scm-requests pagure repository) will no longer be bothered with our requests, and we will ask the Ever Vigilant Guards (package maintainers) directly. This will allow them to easily choose whether they want to be notified or not. We can thank my fellow mage Pierre-Yves Chibon for this change.”

    “Bugzilla will no longer be used; we will instead use a connection with the realm of Pagure, more specifically with the large island in Pagure known as dist-git or Fedora package sources. You can read more about this in a previous entry of my journal.”

    “There will also be other changes that are not visible at first sight. The most important of these changes is to look only for news that is really new. Right now our messengers always collect everything in the archive, and not all of it is really new. So we will add a few small changes to prevent this. The first is to remember when the last news was delivered and check against this date every time a messenger is sent (check the HTTP header field If-Modified-Since). The situation is slightly different in the realm of GitHub, where we must remember an identifier (tag id) instead of a date.”
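
    For the technically curious, this mechanism is an ordinary HTTP conditional request; a sketch with a placeholder URL (a 304 response means there is nothing new):

    curl -sI -H 'If-Modified-Since: Wed, 17 Apr 2019 00:00:00 GMT' \
        https://example.org/releases.atom | head -n 1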

    “Another change that isn’t visible is a mechanism to prevent the addition of duplicate projects. We will show the outside entity (user) projects that are similar, so he can check that he isn’t adding a project that is already in Anitya. Another change here will be some normalization of the project (use the normalized homepage as ecosystem instead of the one the user added).”

    “This is everything I will show you from the near future; now I need some rest before I go further.” The image above my head slowly faded away as I started to think about something else. I looked at the traveler to see if he was still by the doors. The traveler was still standing there, waiting for any new information that I could reveal, but for now I had to rest.

    Far away future

    When I returned from my rest and entered the Future chamber again, the traveler already stood there, waiting for me. I got to my position in the center of the room and started to concentrate again. “Now we will look further in time. We will see how release-monitoring.org could look one day. This is not a clear future, so there could be changes that will prevent this vision from coming true, but you will at least see what I’m talking about.” Another image started to form above my head.

    [Image: Manuscript of the far future]

    “As you can see, the far future is not that different from the near one, but don’t be fooled: there is more here than meets the eye. First I will talk about the visible change. When a new project is added by an Ever Vigilant Guard (package maintainer) to the Fedora universe and the Ever Vigilant Guard wants to be notified about it, it will be added to Anitya automatically. No need to do this manually anymore. There could even be something similar for Flathub.”

    “What about the other things that can’t be seen in the manuscript? One of them is allowing a project to be added simply by giving its address to Anitya (reading all metadata from the URL). This could really be a life changer for the many people who work with Anitya.”

    “There will also be a statistics collector, because every mage loves information (a new page containing statistics about recent runs, with information about failed, rate-limited, or successful checks), and we will use nice magic images (graphs) to show these to others.”

    “We will also allow the Ever Vigilant Guard (package maintainer) to show us where the news should be delivered in the land of Pagure (allow mapping a version prefix to a branch for creating a PR).”

    “In the future Anitya will send not only the latest news through the magical mirror (Fedora messaging), but all the news received by the latest messenger (the Fedora messaging message will contain all new versions retrieved in the latest check). Together with the previous change this will be really helpful to every Ever Vigilant Guard.”

    “To make projects simpler to read, we will introduce a division of news into categories specified by outside entities (users could create a version stream for different version prefixes like ‘3.’ or ‘4.’ and show them as tabs on the project page).”

    “This is all I will show you today. I hope you are as excited as I am about the future that lies before us, traveler. Things will probably change before we get there, but that is life: an ever-changing shapeshifter.” I slowly turned my mind away from the future and started to walk out of the Future chamber together with the traveler.

    Post scriptum

    This is all for now from the world of release-monitoring.org. Do you like this world and want to join our conclave of mages? Seek me (mkonecny) in the magical yellow pages (IRC freenode #fedora-apps) and ask how you can help. Or visit the Bugcronomicon (GitHub issues on Anitya or the-new-hotness) directly and pick something to work on.

    The post Stories from the amazing world of release-monitoring.org #4 appeared first on Fedora Community Blog.

    Our documentation will soon be translatable!

    Posted by Jean-Baptiste Holcroft on April 18, 2019 12:00 AM

    Two years ago, our community changed its documentation tooling, but in the process we lost the ability to translate into multiple languages. Last weekend, we built a prototype.

    On April 12 and 13, 2019, I invited Adam Samalik to Strasbourg, to the Shadok, to make translation of the Fedora documentation possible. The goal is to automate the conversion of 1,500 pages of documentation into a format dedicated to translation. These files receive the translators’ work and will be converted automatically into translated documentation pages. And to make the translators’ work easier, we set up a translation platform.

    The complexity of this automation comes down to three points:

    • the source content of these 1,500 documentation pages is stored in various places that we want to keep independent,
    • translators must be able to use the workflow and tools of their choice,
    • adding a new language must be automated.

    We now know how to generate this publication manually; the outcome of the weekend is the following:

    Now that we have this machinery, which makes it possible to translate any page of our documentation and gives flexibility both to documentation producers and to translators, we are waiting for the French, Czech, and Japanese translation teams to test it and identify any bugs.

    In any case, this work will continue for several more weeks before it is considered reliable enough for production.

    Valgrind 3.15.0 with improved DHAT heap profiler

    Posted by Mark J. Wielaard on April 17, 2019 09:00 PM

    Julian Seward released valgrind 3.15.0 which updates support for existing platforms and adds a major overhaul of the DHAT heap profiler.  There are, as ever, many refinements and bug fixes.  The release notes give more details.

    Nicholas Nethercote used the old experimental DHAT tool a lot while profiling the Rust compiler and then decided to write and contribute A better DHAT (which contains a screenshot of the new graphical viewer).

    CORE CHANGES

    • The XTree Massif output format now makes use of the information obtained when specifying --read-inline-info=yes.
    • amd64 (x86_64): the RDRAND and F16C insn set extensions are now supported.

    TOOL CHANGES

    DHAT

    • DHAT has been thoroughly overhauled, improved, and given a GUI.  As a result, it has been promoted from an experimental tool to a regular tool.  Run it with --tool=dhat instead of --tool=exp-dhat (see the usage sketch after this list).
    • DHAT now prints only minimal data when the program ends, instead writing the bulk of the profiling data to a file.  As a result, the --show-top-n and --sort-by options have been removed.
    • Profile results can be viewed with the new viewer, dh_view.html.  When a run ends, a short message is printed, explaining how to view the result.
    • See the documentation for more details.
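
    A minimal usage sketch (the program name is a placeholder; the profile is typically written to a dhat.out.<pid> file):

    valgrind --tool=dhat ./my-program
    # then open dh_view.html in a browser and load the dhat.out.<pid> file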

    Cachegrind

    • cg_annotate has a new option, --show-percs, which prints percentages next to all event counts.

    Callgrind

    • callgrind_annotate has a new option, --show-percs, which prints percentages next to all event counts.
    • callgrind_annotate now inserts commas in call counts, and sorts the caller/callee lists in the call tree.

    Massif

    • The default value for --read-inline-info is now yes on Linux/Android/Solaris. It is still no on other OSes.

    Memcheck

    • The option --xtree-leak=yes (to output leak result in xtree format) automatically activates the option --show-leak-kinds=all, as xtree visualisation tools such as kcachegrind can in any case select what kind of leak to visualise.
    • There has been further work to avoid false positives.  In particular, integer equality on partially defined inputs (C == and !=) is now handled better.

    OTHER CHANGES

    • The new option --show-error-list=no|yes displays, at the end of the run, the list of detected errors and the used suppressions.  Prior to this change, showing this information could only be done by specifying -v -v, but that also produced a lot of other possibly-non-useful messages.  The option -s is equivalent to --show-error-list=yes.

    New Japanese era

    Posted by Rafał Lużyński on April 17, 2019 10:44 AM

    On 1 May 2019 a new era of the Japanese calendar will begin. Fedora is ready for this change.

    What This Is All About

    On 1 December 2017, the Emperor of Japan, Akihito, officially announced that he would abdicate on 30 April 2019. From 1 May his successor Naruhito will rule, which will also begin a new era in the Japanese calendar. This is a rather unusual event, because so far emperors have ruled until their death. Obviously, that made the moment of an era change difficult to predict. The emperor’s decision will help the country prepare for the change.

    Although the Gregorian calendar (the same as in many countries around the world) is known and used in Japan, the traditional Japanese calendar is also used, with the years counted from the enthronement of an emperor. Each period of an emperor’s rule is called an era and has its own proper name. For example, the current era is named Heisei (平成); at the time of writing we are in the year 31 of the Heisei era.

    On 1 April, one month before the beginning of the new era, its name was announced: it will be named Reiwa (令和). Now that we know it, we can adapt computers and other devices that display dates automatically.

    How To Test

    In Unix systems dates are formatted by the strftime() function, and the easiest way to test it is to use the date command. Launched from the command line, it displays the current date and time in our own language using the default format:
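
    For example (the output depends on your locale):

    date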

    The change applies to the Japanese calendar, which is available only in Japanese locales:
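
    For instance, forcing the Japanese locale for a single command:

    LC_ALL=ja_JP.utf8 date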

    There is still nothing interesting, because the default date format in the Japanese locale displays the Gregorian calendar. A custom date format must be used:
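
    For example, asking for the year in the alternative calendar:

    LC_ALL=ja_JP.utf8 date +%EY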

    So here we have something interesting: the year 31 of the Heisei era has been displayed.

    So far we are in April, in the old Japanese era. How do we display a date from the new era? Let’s use commands that display a date one month ahead:
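
    For example (the exact output depends on when you run them):

    LC_ALL=ja_JP.utf8 date +%EY -d "1 month"
    LC_ALL=ja_JP.utf8 date +%Ec -d "1 month"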

    If we can still see the year 31 of the Heisei era, then this is a bug. An updated system should display:
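
    For the %EY format above (note there is no digit in it):

    令和元年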

    Why has the command LC_ALL=ja_JP.utf8 date +%EY -d "1 month" not displayed any number? The first year of an era is usually not written with the number 1 but with the word gannen (元年), which means “the initial year”.

    More Explanations

    Let’s explain what those magical symbols like +%Ec mean. The character "+" means that the following string is a date format which has to be used. The character "%" marks the beginning of the format, the character "E" means that the alternative calendar should be used. Here is a summary of the format specifiers which have been used for tests:

    %EC   the name of an era (e.g., Heisei, Reiwa)
    %EY   year number including the era name
    %Ey   year number in the Japanese calendar without an era name
    %Ec   full date and time in the Japanese calendar
    %Ex   full date (without the time) in the Japanese calendar

    Compare this with:

    %C   century number minus 1, or the initial digits of the year (currently 20—yes, not 21)
    %Y   year number in the Gregorian calendar (2019)
    %y   year number abbreviated to 2 digits (19)
    %c   full date and time
    %x   full date (without the time)

    Please note that the "E" character has no effect in English locales; it is simply ignored.

    The -d switch means that the date command should display a date other than the current one; for example, -d "1 month" means one month ahead from now.

    How To Update

    For the calendar to support the new Japanese era, the glibc library must be updated to at least the following version:

    Fedora 28   glibc-2.27-38
    Fedora 29   glibc-2.28-27
    Fedora 30   glibc-2.29-9
    Fedora Rawhide   glibc-2.29.9000-10
    RHEL 7/CentOS 7   glibc-2.17-260.el7_6.4
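    For example, on Fedora the update can be installed with dnf as usual (a sketch; the package will come from the updates repository once it is available):

    $ sudo dnf --refresh upgrade glibc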

    Fedora Rawhide is the base for the future Fedora 31 release (November 2019), but before that happens the new glibc 2.30 will be released, which will support the new Japanese era from the beginning.

    What syslog-ng relays are good for

    Posted by Peter Czanik on April 17, 2019 08:15 AM

    While there are some users who run syslog-ng as a stand-alone application, the main strength of syslog-ng is central log collection. In this case the central syslog-ng instance is called the server, while the instances sending log messages to the central server are called the clients. There is a (somewhat lesser known) third type of instance called the relay, too. The relay collects log messages via the network and forwards them to one or more remote destinations after processing (but without writing them onto the disk for storage). A relay can be used for many different use cases. We will discuss a few typical examples below.

    Note that the syslog-ng application has an open source edition (OSE) and a premium edition (PE). Most of the information below applies to both editions. Some features are only available in syslog-ng PE and some scenarios need additional licenses when implemented using syslog-ng PE.

    UDP-only source devices

    Typically, most network devices send log messages over UDP only. Even though some of them support TCP-based logging, vendors recommend against using it (as in many cases the TCP logging implementation is extremely buggy). UDP does not guarantee that all UDP packets will be delivered, so it is a weak point of the system. To ensure at least a best effort level of reliability, it is recommended to deploy a relay on the network, close to these source devices. With the fewest possible (and, more importantly, the most reliable) hops between the source and the relay, the risk of losing UDP packets can be minimized. Once the packet arrives at the relay, we can ensure the messages are delivered to the central server in a reliable manner, based on TCP/TLS and ALTP (Advanced Log Transfer Protocol, syslog-ng PE only).
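    As a sketch, a relay configuration for this scenario could look like the following (the host name, port numbers and CA directory are placeholders; consult the syslog-ng documentation for the options of your version):

    @version: 3.19

    # Accept messages from UDP-only devices on the local network
    source s_udp {
        udp(ip(0.0.0.0) port(514));
    };

    # Forward everything to the central server over TLS
    destination d_central {
        network("central.example.com"
            port(6514)
            transport("tls")
            tls(ca-dir("/etc/syslog-ng/ca.d"))
        );
    };

    log {
        source(s_udp);
        destination(d_central);
    };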

    Too many source devices

    Depending on the hardware and configuration, an average syslog-ng instance can usually handle the following number of concurrent connections:

    1. If the maximum message rate is lower than 200,000 messages per second:

    ◦ maximum ca. 5,000 TCP connections

    ◦ maximum ca. 1,000 TLS connections

    ◦ maximum ca. 1,000 ALTP connections

    2. If the message rate is higher than 200,000 messages per second, always contact One Identity.

    As a result, if you have more source devices, you need to deploy at least one relay machine per 5,000 sources and batch up all their logs into a single TCP connection between the relay and the server. If TLS or ALTP is used, deploy one relay per 1,000 source devices.

    Collecting logs from remote sites (especially over public WAN)

    It is quite common that companies need to collect log messages from geographically remote sites (sometimes at global distances), and sometimes over public WAN. In this case it is recommended to install at least one relay node per remote site. The relay can be the last outgoing hop for all the messages of the remote site, which has several benefits:

    • Maintenance: you only need to change the configuration of the relay if you want to re-route the logs of some or all sources of the remote site. Plus, you don't need to change each source’s configuration one by one.

    • Security: If you trust your internal network, it is not necessary to hold encrypted connections within the LAN of the remote site, as the messages can get to the relay without encryption. Naturally, messages should be sent encrypted over the public WAN, and it is enough to hold a single TCP/TLS connection between the sites (that is, between the remote relay and the central server). This avoids wasting resources, as holding several TLS connections directly from the clients is more costly than holding a single connection from the relay.

    • Reliability: It is possible to set up a 'main' disk-buffer on the relay. This main disk-buffer is responsible for buffering all the logs of the remote site if the central syslog-ng server is temporarily unavailable. Of course, it is easier to maintain this single large main disk-buffer than to set up disk-buffers on individual client machines.

    Separation / distribution / balancing of message processing tasks

    Most Linux applications have their own human-readable, but difficult to handle, log messages. Without parsing and normalization, it is difficult to alert and report on these log messages. Many syslog-ng users utilize the message parsing tools of syslog-ng to normalize their different log messages. Just like normalization, filtering can also be resource-heavy, depending on what the filtering is based on. In this case, it might be inefficient to perform all the message processing tasks on the server (which can result in decreased overall performance). It is a typical setup to deploy relays in front of the central server, operating as a receiver front-end. Most resource-heavy tasks (for example, parsing, filtering, etc.) are performed on this receiver layer. As all resource-heavy tasks are performed on the relay, the central server behind it only needs to get the messages from the relay and write them into the final text-based or tamper-proof format (logstore, PE only). As you have the means to run more relays, you can balance the resource-heavy tasks among several relays, and a single server behind them can still be fast enough to write all the messages to disk.

    Acting as a relay is a matter of configuration, not of dedicated hardware: a relay doesn't have to be a dedicated machine at all. In fact, it can be one of the clients, with a relay configuration for log collection. On the other hand, in a robust log collection infrastructure the relays have their own purpose, so it is recommended to run dedicated relay machines in such cases.

    When it comes to the commercial PE version of syslog-ng, relays are included in the price (at least up to the licensed LSH number). Hence, you can run several parallel relays to ensure horizontal redundancy. Say each of the relays has the very same configuration: if one goes down, another relay can take over processing. Distribution of the logs can be done by the built-in client-side failover functionality or by a general load-balancer. The latter is also used to serve N+1 redundant relay deployments (in this case, switching from one relay to another happens not only due to an outage, but for real load-balancing purposes, too).

    What syslog-ng relays are NOT good for

    The purpose of the relay is to buffer the logs during short-term outages (for example, a few minutes or a few hours long, depending on the log volume). It is not designed to buffer logs generated by the sources during a very long (for example, up to a few days long) server or connection outage.

    If you expect extended outages, we recommend that you deploy servers instead of relays. There are many deployments where long term storage and archiving are performed on the central syslog-ng server, but relays also do short-term log storage. From the syslog-ng PE point of view, these are servers, and thus need separate server licenses.

    If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or even chat with us. For a list of possibilities, check our GitHub page under the “Community” section at https://github.com/balabit/syslog-ng. On Twitter, I am available as @PCzanik.


    Managing RAID arrays with mdadm

    Posted by Fedora Magazine on April 17, 2019 08:00 AM

    Mdadm stands for Multiple Disk and Device Administration. It is a command line tool that can be used to manage software RAID arrays on your Linux PC. This article outlines the basics you need to get started with it.

    The following five commands allow you to make use of mdadm’s most basic features:

    1. Create a RAID array:
      # mdadm --create /dev/md/test --homehost=any --metadata=1.0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    2. Assemble (and start) a RAID array:
      # mdadm --assemble /dev/md/test /dev/sda1 /dev/sdb1
    3. Stop a RAID array:
      # mdadm --stop /dev/md/test
    4. Delete a RAID array:
      # mdadm --zero-superblock /dev/sda1 /dev/sdb1
    5. Check the status of all assembled RAID arrays:
      # cat /proc/mdstat

    Notes on features

    mdadm --create

    The create command shown above includes the following four parameters in addition to the create parameter itself and the device names:

    1. --homehost:
      By default, mdadm stores your computer’s name as an attribute of the RAID array. If your computer name does not match the stored name, the array will not automatically assemble. This feature is useful in server clusters that share hard drives because file system corruption usually occurs if multiple servers attempt to access the same drive at the same time. The name any is reserved and disables the homehost restriction.
    2. --metadata:
      mdadm reserves a small portion of each RAID device to store information about the RAID array itself. The metadata parameter specifies the format and location of the information. The value 1.0 indicates to use version-1 formatting and store the metadata at the end of the device.
    3. --level:
      The level parameter specifies how the data should be distributed among the underlying devices. Level 1 indicates each device should contain a complete copy of all the data. This level is also known as disk mirroring.
    4. --raid-devices:
      The raid-devices parameter specifies the number of devices that will be used to create the RAID array.

    By using level=1 (mirroring) in combination with metadata=1.0 (store the metadata at the end of the device), you create a RAID1 array whose underlying devices appear normal if accessed without the aid of the mdadm driver. This is useful in the case of disaster recovery, because you can access the device even if the new system doesn’t support mdadm arrays. It’s also useful in case a program needs read-only access to the underlying device before mdadm is available. For example, the UEFI firmware in a computer may need to read the bootloader from the ESP before mdadm is started.

    mdadm --assemble

    The assemble command above fails if a member device is missing or corrupt. To force the RAID array to assemble and start when one of its members is missing, use the following command:

    # mdadm --assemble --run /dev/md/test /dev/sda1

    Other important notes

    Avoid writing directly to any devices underlying an mdadm RAID1 array. That causes the devices to become out-of-sync, and mdadm won’t know that they are out-of-sync. If you access a RAID1 array through a device that’s been modified out-of-band, you can cause file system corruption. If you modify a RAID1 device out-of-band and need to force the array to re-synchronize, delete the mdadm metadata from the device to be overwritten and then re-add it to the array as demonstrated below:

    # mdadm --zero-superblock /dev/sdb1
    # mdadm --assemble --run /dev/md/test /dev/sda1
    # mdadm /dev/md/test --add /dev/sdb1

    These commands completely overwrite the contents of sdb1 with the contents of sda1.

    To specify any RAID arrays to automatically activate when your computer starts, create an /etc/mdadm.conf configuration file.
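    One common way to generate it (a sketch; review the result before relying on it) is to append the description of the currently assembled arrays:

    # mdadm --detail --scan >> /etc/mdadm.conf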

    For the most up-to-date and detailed information, check the man pages:

    $ man mdadm 
    $ man mdadm.conf

    The next article in this series will be a step-by-step guide on how to convert an existing single-disk Linux installation to a mirrored-disk installation that will continue running even if one of its hard drives suddenly stops working!

    Cockpit 192

    Posted by Cockpit Project on April 17, 2019 12:00 AM

    Cockpit is the modern Linux admin interface. We release regularly. Here are the release notes from version 192.

    Machines: Auto-detect guest operating system

    When creating a new VM, the “OS Vendor” and “Operating System” fields are now set automatically after specifying the installation sources for many vendors. This uses the libvirt osinfo-detect utility.

    Machines OS auto-detection

    Translation cleanup

    Instead of showing just a predetermined set of languages, the language dialog now offers all available translations, ordered alphabetically, each displayed in its own language:

    New language dialog

    Also, translations with less than 50% completion coverage were removed.

    Allow accounts with non-standard shells

    Previously, logging into Cockpit was restricted to users who have a shell specified in /etc/shells, enforced through pam_shells. This was found to be too strict, as it excludes users with, for example, the tlog-rec-session shell used for session recording.

    Cockpit now accepts shells that are unchangeable by non-administrators, yet still allow a user to log in. However, shells which do not allow a user to log in, such as /sbin/nologin or /bin/false, may still be used to deny access to Cockpit.

    Try it out

    Cockpit 192 is available now:

    Stories from the amazing world of release-monitoring.org #3

    Posted by Fedora Community Blog on April 16, 2019 07:39 AM

    The darkness is slowly falling on my wizard tower, where I spend most of my time working on new things in the realm of release-monitoring.org. But since you have already arrived, I can give you some of my time. You probably want to know what I’m working on. It’s not really a secret, and if you insist, I will tell you what happened in the realm in the past weeks. So take a chair and hear my words.

    Missing scribe

    This story began on 28 March. I was looking at the news arriving by carrier pigeons (the fedora-devel mailing list in Thunderbird 🙂 ) and there was one asking the question “Is Anitya not working again?”. I was terrified at first that something had happened to Anitya without me being aware of it. So I cast a teleportation spell and transported myself directly to Anitya (checked the release-monitoring.org OpenShift instance). Everything was in order.

    So I asked a few additional questions using the carrier pigeons. It turned out that the original sender of the message was using Abstract Positive Intuition (API) to introduce plenty of new projects to Anitya.

    I started to look for the root of this inconvenience, and it didn’t take long to find it. I arrived at the department that takes care of the Abstract Positive Intuition (API) and found out that there was no scribe sending messages when we get a new request for projects to be added to the Fedora universe (no message was being sent when a new mapping was added through the API). I fixed this quickly in my work-in-progress version by adding a new scribe (fixed in master on GitHub).

    But the damage had been done. I asked the original sender to provide me with a list of the Fedora universe projects (list of packages) he had added to Anitya and started working on a magical formula (Python script) to make things right. This took more time than I thought, but I’m now able to fix similar issues when they happen. I added the magical formula to my library to have it close at hand when something happens; you can look at it, if you dare. However, you need certain rights to be able to use it.

    After fixing the projects on the list the original sender sent me (the script deletes the latest version from each project and lets the cronjob check for them again), I was approached by two others who had similar problems. So I can say that this work has already paid off.

    Support for multiple prefixes

    Every time news (a new version) of some project arrives, Anitya needs to make some changes before it can be shown to everyone interested. One of the things Anitya does is strip the news of unimportant parts, such as the prefix. This prefix can be any text. Until recently Anitya only supported one prefix, which wasn’t enough for some projects. Now I did some magic and we support multiple prefixes.

    I will conjure a few images to show you how this works. In the first image you can see a project we are watching, without any changes.

    [Image: the project’s versions before the change]

    So what I do is edit the project and define two prefixes, separated by ‘;’.

    [Image: editing the project to set two prefixes]

    And now you can see the output.

    [Image: the project’s versions after the change]

    As you can see, this isn’t the best example, because here the change caused the order of the versions to change. In a standard situation you wouldn’t remove parts of the version itself, but things like ‘rel’, ‘release-’, ‘project-’ and similar.

    Hungry rat

    Recently we had a particularly hungry rat in Anitya, which always ate one of the workers when someone tried to change the known aliases of a project (package mappings). As a result, instead of the original worker, who knew something about the alias, we sent a fresh one that knew nothing (the first distribution was shown instead of the mapped one).

    It took some effort and a few ice spells (thank gods we don’t need to pay any health insurance) to finally find the rat and catch it. We are now using it to run in a wheel to create some magical energy for Anitya. It eats a lot, but it produces a vast amount of energy.

    The workers are now happy that nobody wants to eat them when they are running with their information. And anybody who wants to change known aliases is happy too, because the information is no longer lost.

    Post scriptum

    So, when will you see these changes in Anitya? Look for version 0.16.0, which is currently still a work in progress.

    This is all for now from the world of release-monitoring.org. Do you like this world and want to join our conclave of mages? Seek me (mkonecny) in the magical yellow pages (IRC freenode #fedora-apps) and ask how you can help. Or visit the Bugcronomicon (GitHub issues on Anitya or the-new-hotness) directly and pick something to work on.

    The post Stories from the amazing world of release-monitoring.org #3 appeared first on Fedora Community Blog.

    An Awesome Week

    Posted by Sirko Kemter on April 16, 2019 07:29 AM

    Last week was one of the most awesome weeks I ever had, even though it did not start so well: I tried to get my bicycle repaired, but unfortunately when I came to the bike store in my area, they were carrying out the furniture, as it had closed down. On Friday I finally got the bike repaired, which was quite an adventure and took the whole afternoon. I had to visit 5 shops to find the right size, but the ugly part came later: finding somebody to fit the tire on the wheel. Normally you find somebody who fixes tires every ten meters here, but nobody wanted to do it. It took 1.5 hours of searching to find one, but now I can use my bicycle again, which saves me a lot of time and money.
    The best day was by far Wednesday. First, I got confirmation that Open Development Cambodia will host the next Translation Sprint, and even better, they would host us month after month. After a short consultation with the most active translators, we will run bi-monthly Translation Sprints starting in May. Three weeks ago I had a meeting at Open Development Cambodia and they just wanted me to note down what Fedora is and what we do, just for their sponsors. The meeting took me the whole day; not the meeting itself, but getting there took an hour without a bike, and of course another hour back. But now, after 6 months of searching and dozens of unsuccessful meetings, it is done, and the next sprint can happen. I have already made all the necessary tickets, and after Khmer New Year we will announce it.


    But the best thing that happened was that my Moneypool for a new laptop got filled less than 12 hours after I blogged about it. That really makes me happy, but since then I am not sure whether I shall leave it open or close it. In the first case I could try to get a better used laptop, or even a new one if some more donations come in. I looked at what I could get: ThinkPads of the T series are over US$1,000, but some Edge models seem to be within reach. I would go for one, even though I have had bad experiences with an Edge before. So far I have decided to leave it open, as I can’t buy one right now during the New Year holidays anyway, and there are still 14 days until my birthday. So if you want to donate, you can, and you can even leave me a comment, as I am not sure about it.

    Now that I have found a place for the next Translation Sprint, and even for the following ones, I have more time and energy again to work on the next “Different Release Party” together with PNC. I hope this works out.

    Kubernetes on Fedora IoT with k3s

    Posted by Fedora Magazine on April 15, 2019 08:00 AM

    Fedora IoT is an upcoming Fedora edition targeted at the Internet of Things. It was introduced last year on Fedora Magazine in the article How to turn on an LED with Fedora IoT. Since then, it has continued to improve together with Fedora Silverblue to provide an immutable base operating system aimed at container-focused workflows.

    Kubernetes is an immensely popular container orchestration system. It is perhaps most commonly used on powerful hardware handling huge workloads. However, it can also be used on lightweight devices such as the Raspberry Pi 3. Read on to find out how.

    Why Kubernetes?

    While Kubernetes is all the rage in the cloud, it may not be immediately obvious why you would run it on a small single-board computer. But there are certainly reasons for doing it. First of all, it is a great way to learn and get familiar with Kubernetes without the need for expensive hardware. Second, because of its popularity, there are tons of applications that come pre-packaged for running in Kubernetes clusters. Not to mention the large community that can provide help if you ever get stuck.

    Last but not least, container orchestration may actually make things easier, even at the small scale of a home lab. This may not be apparent while tackling the learning curve, but these skills will help when dealing with any cluster in the future. It doesn’t matter if it’s a single-node Raspberry Pi cluster or a large-scale machine learning farm.

    K3s – a lightweight Kubernetes

    A “normal” installation of Kubernetes (if such a thing can be said to exist) is a bit on the heavy side for IoT. The recommendation is a minimum of 2 GB RAM per machine! However, there are plenty of alternatives, and one of the newcomers is k3s – a lightweight Kubernetes distribution.

    K3s is quite special in that it has replaced etcd with SQLite for its key-value storage needs. Another thing to note is that k3s ships as a single binary instead of one binary per component. This diminishes the memory footprint and simplifies the installation. Thanks to the above, k3s should be able to run with just 512 MB of RAM, perfect for a small single board computer!

    What you will need

    1. Fedora IoT in a virtual machine or on a physical device. See the excellent getting started guide here. One machine is enough but two will allow you to test adding more nodes to the cluster.
    2. Configure the firewall to allow traffic on ports 6443 and 8472. Or simply disable it for this experiment by running “systemctl stop firewalld”.
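    If you prefer to keep firewalld running, a sketch of the commands (assuming the default firewalld configuration) could be:

    firewall-cmd --permanent --add-port=6443/tcp
    firewall-cmd --permanent --add-port=8472/udp
    firewall-cmd --reload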

    Install k3s

    Installing k3s is very easy. Simply run the installation script:

    curl -sfL https://get.k3s.io | sh -

    This will download, install and start up k3s. After installation, get a list of nodes from the server by running the following command:

    kubectl get nodes

    Note that there are several options that can be passed to the installation script through environment variables. These can be found in the documentation. And of course, there is nothing stopping you from installing k3s manually by downloading the binary directly.

    While great for experimenting and learning, a single node cluster is not much of a cluster. Luckily, adding another node is no harder than setting up the first one. Just pass two environment variables to the installation script to make it find the first node and to avoid running the server part of k3s:

    curl -sfL https://get.k3s.io | K3S_URL=https://example-url:6443 \
    K3S_TOKEN=XXX sh -

    The example-url above should be replaced by the IP address or fully qualified domain name of the first node. On that node the token (represented by XXX) is found in the file /var/lib/rancher/k3s/server/node-token.

    Deploy some containers

    Now that we have a Kubernetes cluster, what can we actually do with it? Let’s start by deploying a simple web server.

    kubectl create deployment my-server --image nginx

    This will create a Deployment named “my-server” from the container image “nginx” (defaulting to docker hub as registry and the latest tag). You can see the Pod created by running the following command.

    kubectl get pods

    In order to access the nginx server running in the pod, first expose the Deployment through a Service. The following command will create a Service with the same name as the deployment.

    kubectl expose deployment my-server --port 80

    The Service works as a kind of load balancer and DNS record for the Pods. For instance, when running a second Pod, we will be able to curl the nginx server just by specifying my-server (the name of the Service). See the example below for how to do this.

    # Start a pod and run bash interactively in it
    kubectl run debug --generator=run-pod/v1 --image=fedora -it -- bash
    # Wait for the bash prompt to appear
    curl my-server
    # You should get the "Welcome to nginx!" page as output

    Ingress controller and external IP

    By default, a Service only gets a ClusterIP (only accessible inside the cluster), but you can also request an external IP for the service by setting its type to LoadBalancer. However, not all applications require their own IP address. Instead, it is often possible to share one IP address among many services by routing requests based on the host header or path. You can accomplish this in Kubernetes with an Ingress, and this is what we will do. Ingresses also provide additional features such as TLS encryption of the traffic without having to modify your application.

    Kubernetes needs an ingress controller to make the Ingress resources work and k3s includes Traefik for this purpose. It also includes a simple service load balancer that makes it possible to get an external IP for a Service in the cluster. The documentation describes the service like this:

    k3s includes a basic service load balancer that uses available host ports. If you try to create a load balancer that listens on port 80, for example, it will try to find a free host in the cluster for port 80. If no port is available the load balancer will stay in Pending.

    k3s README

    The ingress controller is already exposed with this load balancer service. You can find the IP address that it is using with the following command.

    $ kubectl get svc --all-namespaces
    NAMESPACE     NAME         TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
    default       kubernetes   ClusterIP      10.43.0.1       <none>        443/TCP                      33d
    default       my-server    ClusterIP      10.43.174.38    <none>        80/TCP                       30m
    kube-system   kube-dns     ClusterIP      10.43.0.10      <none>        53/UDP,53/TCP,9153/TCP       33d
    kube-system   traefik      LoadBalancer   10.43.145.104   10.0.0.8      80:31596/TCP,443:31539/TCP   33d

    Look for the Service named traefik. In the above example the IP we are interested in is 10.0.0.8.

    Route incoming requests

    Let’s create an Ingress that routes requests to our web server based on the host header. This example uses xip.io to avoid having to set up DNS records: it works by including the IP address in the domain name, so any subdomain of 10.0.0.8.xip.io resolves to the IP 10.0.0.8. In other words, my-server.10.0.0.8.xip.io is used to reach the ingress controller in the cluster. You can try this right now (with your own IP instead of 10.0.0.8). Without an ingress in place you should reach the “default backend”, which is just a page showing “404 page not found”.

    We can tell the ingress controller to route requests to our web server Service with the following Ingress.

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: my-server
    spec:
      rules:
      - host: my-server.10.0.0.8.xip.io
        http:
          paths:
          - path: /
            backend:
              serviceName: my-server
              servicePort: 80

    Save the above snippet in a file named my-ingress.yaml and add it to the cluster by running this command:

    kubectl apply -f my-ingress.yaml

    You should now be able to reach the default nginx welcoming page on the fully qualified domain name you chose. In my example this would be my-server.10.0.0.8.xip.io. The ingress controller is routing the requests based on the information in the Ingress. A request to my-server.10.0.0.8.xip.io will be routed to the Service and port defined as backend in the Ingress (my-server and 80 in this case).

    What about IoT then?

    Imagine the following scenario. You have dozens of devices spread out around your home or farm. It is a heterogeneous collection of IoT devices with various hardware capabilities, sensors and actuators. Maybe some of them have cameras, weather or light sensors. Others may be hooked up to control the ventilation, lights or blinds, or to blink LEDs.

    In this scenario, you want to gather data from all the sensors, maybe process and analyze it before you finally use it to make decisions and control the actuators. In addition to this, you may want to visualize what’s going on by setting up a dashboard. So how can Kubernetes help us manage something like this? How can we make sure that Pods run on suitable devices?

    The simple answer is labels. You can label the nodes according to capabilities, like this:

    kubectl label nodes <node-name> <label-key>=<label-value>
    # Example
    kubectl label nodes node2 camera=available

    Once they are labeled, it is easy to select suitable nodes for your workload with nodeSelectors. The final piece of the puzzle, if you want to run your Pods on all suitable nodes, is to use DaemonSets instead of Deployments. In other words, create one DaemonSet for each data collecting application that uses some unique sensor, and use nodeSelectors to make sure they only run on nodes with the proper hardware.
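    As a sketch, a DaemonSet for a hypothetical camera-collector application, restricted to the nodes carrying the camera=available label from the example above, could look like this (the image name and resource values are made up for illustration):

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: camera-collector
    spec:
      selector:
        matchLabels:
          app: camera-collector
      template:
        metadata:
          labels:
            app: camera-collector
        spec:
          nodeSelector:
            camera: available      # only schedule on nodes labeled earlier
          containers:
          - name: camera-collector
            image: example/camera-collector:latest  # hypothetical image
            resources:
              requests:            # resource requests help the scheduler
                cpu: 100m
                memory: 64Mi

    The resource requests at the end are the same mechanism discussed later in this article; they help Kubernetes decide where the Pods fit.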

    The service discovery feature that allows Pods to find each other simply by Service name makes it quite easy to handle these kinds of distributed systems. You don’t need to know or configure IP addresses or custom ports for the applications. Instead, they can easily find each other through named Services in the cluster.

    Utilize spare resources

    With the cluster up and running, collecting data and controlling your lights and climate control, you may feel that you are finished. However, there are still plenty of compute resources in the cluster that could be used for other projects. This is where Kubernetes really shines.

    You shouldn’t have to worry about where exactly those resources are or calculate if there is enough memory to fit an extra application here or there. This is exactly what orchestration solves! You can easily deploy more applications in the cluster and let Kubernetes figure out where (or if) they will fit.

    Why not run your own NextCloud instance? Or maybe gitea? You could also set up a CI/CD pipeline for all those IoT containers. After all, why would you build and cross-compile them on your main computer if you can do it natively in the cluster?

    The point here is that Kubernetes makes it easier to make use of the “hidden” resources that you often end up with otherwise. Kubernetes handles scheduling of Pods in the cluster based on available resources and fault tolerance so that you don’t have to. However, in order to help Kubernetes make reasonable decisions you should definitely add resource requests to your workloads.

    Summary

    While Kubernetes, or container orchestration in general, may not usually be associated with IoT, it certainly makes a lot of sense to have an orchestrator when you are dealing with distributed systems. Not only does it allow you to handle a diverse and heterogeneous fleet of devices in a unified way, but it also simplifies communication between them. In addition, Kubernetes makes it easier to utilize spare resources.

    Container technology made it possible to build applications that could “run anywhere”. Now Kubernetes makes it easier to manage the “anywhere” part. And as an immutable base to build it all on, we have Fedora IoT.

    Episode 141 - Timezones are hard, security is harder

    Posted by Open Source Security Podcast on April 15, 2019 12:18 AM
    Josh and Kurt talk about the difficulty of security. We look at the difficulty of the EU dropping daylight saving time, which is probably orders of magnitude easier than getting security right. We also hit on a discussion on Reddit about U2F that shows the difficulty. Security today is too hard, even for the experts.



    Show Notes


      Fedora 29 : Thonny editor for python.

      Posted by mythcat on April 14, 2019 08:10 PM
      Today I used the Flatpak Linux tool to install the latest version of Inkscape: 0.92.4 (5da689c313), released on 2019-01-14.
      Flatpak (formerly xdg-app) is a software utility for software deployment, package management, and application virtualization for Linux desktop computers. It provides a sandbox environment in which users can run applications in isolation from the rest of the system (see Wikipedia: Flatpak).
      The Flatpak tool is installed by default on Fedora Workstation. To install and run the latest version of Inkscape you need to use these commands:
      [mythcat@desk Downloads]$ flatpak install org.inkscape.Inkscape.flatpakref

      org.inkscape.Inkscape permissions:
      ipc x11 file access [1]

      [1] host


      ID Arch Branch Remote Download
      1. [✓] org.gnome.Platform.Locale x86_64 3.30 flathub 17.4 kB / 320.2 MB
      2. [✓] org.freedesktop.Platform.VAAPI.Intel x86_64 18.08 flathub 1.8 MB / 1.8 MB
      3. [✓] org.freedesktop.Platform.html5-codecs x86_64 18.08 flathub 4.8 MB / 4.9 MB
      4. [✓] org.inkscape.Inkscape x86_64 stable flathub 86.1 MB / 88.6 MB
      5. [✓] org.inkscape.Inkscape.Locale x86_64 stable flathub 8.5 kB / 18.6 MB

      Installation complete.

      [mythcat@desk Downloads]$ flatpak run org.inkscape.Inkscape
      Gtk-Message: 22:58:01.259: Failed to load module "pk-gtk-module"
      Gtk-Message: 22:58:01.259: Failed to load module "canberra-gtk-module"
      The Inkscape drawing tool works well.

      virt-install + nbdkit live install

      Posted by Richard W.M. Jones on April 13, 2019 09:31 AM

      This seems to be completely undocumented, which is why I’m writing this … It is possible to boot a Linux guest (Fedora in this case) from a live CD on a website without downloading it. I’m using our favourite flexible NBD server, nbdkit, together with virt-install.

      First of all we’ll run nbdkit and attach it to the Fedora 29 live workstation ISO. To make this work more efficiently I’m going to place a couple of filters on top — one is the readahead (prefetch) filter recently added to nbdkit 1.12, and the other is the cache filter. In combination these filters should reduce the load on the website and improve local performance.

      $ rm /tmp/socket
      $ nbdkit -f -U /tmp/socket --filter=readahead --filter=cache \
          curl https://download.fedoraproject.org/pub/fedora/linux/releases/29/Workstation/x86_64/iso/Fedora-Workstation-Live-x86_64-29-1.2.iso
      

      I actually replaced that URL with a UK-based mirror to make the process a little faster.

      Now comes the undocumented virt-install command:

      $ virt-install --name test --ram 2048 \
          --disk /var/tmp/disk.img,size=10 \
          --disk device=cdrom,source_protocol=nbd,source_host_transport=unix,source_host_socket=/tmp/socket \
          --os-variant fedora29
      

      After a bit of grinding that should boot into Fedora 29, and you never (not explicitly at least) had to download the ISO.


      To be fair qemu does also have a curl driver which virt-install could use, but nbdkit is better with the filters and plugins system giving you ultimate flexibility — check out my video about it.

      Create your First Application with Laravel

      Posted by Davis Álvarez on April 13, 2019 05:09 AM

      Since its initial launch in 2011, Laravel has experienced exponential growth. In 2015 it became the most outstanding PHP framework on GitHub.

      Laravel’s philosophy is to develop PHP code in an elegant and simple way, based on the MVC (Model-View-Controller) model. It has modular and extensible code through a package manager, and robust support for database management.

      This guide is written with people who are starting with Laravel in mind. A registration system will be developed to cover the basic operations of creating, reading, updating and deleting records, commonly known as CRUD.

      Server Requirements

      All the server requirements for following this guide are covered in the article Install Laravel with Apache and MySQL.

      Create Database

      Execute the following command in the terminal:

      mysql -u root -p

      It will ask you for the password of the root user; once you enter the password correctly it will show you the mysql> prompt.

      [Figure: Starting MySQL from the command line]

      You can also use phpmyadmin to create the database, if you have it installed.

      From the terminal create the database to use:

      CREATE DATABASE laravelcrud;

      Create the username and password, which will be assigned to the created database:

      CREATE USER 'laravel_user'@'127.0.0.1' IDENTIFIED BY 'LaB_101$';

      Assign the created user to the database:

      GRANT ALL ON laravelcrud.* TO 'laravel_user'@'127.0.0.1';

      To reload the privilege table, the following command is executed:

      FLUSH PRIVILEGES;

      Create Laravel Project

      Execute the command that will create a new project called laravelcrud, this command will generate the entire structure of folders, files and dependencies necessary to run the application:

      laravel new laravelcrud
      [Figure: Create new Laravel project]

      Open the Project in Your Preferred Editor

      Once the new Laravel project is generated, open it in your preferred editor; personally I like to use Visual Studio Code.

      From the terminal, access the project folder:

      cd laravelcrud

      Set Database Configuration

      Find the .env file and write in it the configuration of the database created above, which will be used by the application:

      DB_CONNECTION=mysql
      DB_HOST=127.0.0.1
      DB_PORT=3306
      DB_DATABASE=laravelcrud
      DB_USERNAME=laravel_user
      DB_PASSWORD=LaB_101$

      To be sure that there is a connection between our application and the database, execute the command:

      php artisan migrate

      This command will create 3 tables in the default database:

      1. migrations
      2. password_resets
      3. users

      In the terminal connected to the MySQL prompt run:

      mysql> use laravelcrud;
      mysql> show tables;

      Additionally, the migrations for those tables were automatically created in the generated folder app -> database -> migrations.

      Create Model and Migration

      In the terminal execute:

      php artisan make:model Registry -m

      Two files will be created:

      1. Registry.php the model.
      2. create_registries_table.php the migration.

      You need to create the structure (schema) of the registry table; to do so, modify the newly created migration file found in app -> database -> migrations -> create_registries_table.php

      In the class created for the migration, the up and down methods are automatically generated; modify them as follows:

      <?php
      
      use Illuminate\Support\Facades\Schema;
      use Illuminate\Database\Schema\Blueprint;
      use Illuminate\Database\Migrations\Migration;
      
      class CreateRegistriesTable extends Migration
      {
          /**
           * Run the migrations.
           *
           * @return void
           */
          public function up()
          {
              Schema::create('registries', function (Blueprint $table) {
                 
                  $table->integer('id')->unsigned();
                  $table->string('first_name',50);
                  $table->string('second_name',50)->nullable();
                  $table->string('surename',50);
                  $table->string('second_surename',50)->nullable();
                  $table->string('email',100);
                  $table->string('cell_phone',50)->nullable();
                  $table->string('phone',50)->nullable();
                  $table->string('coments',500)->nullable();
                  $table->integer('age');
                  $table->boolean('flosspainfo')->nullable();
                  $table->boolean('fedorainfo')->nullable();
                  $table->boolean('latansecinfo')->nullable();
      
                  $table->timestamps();
                  $table->primary(array('id', 'email'));
      
                  
              });
      
              DB::statement('ALTER TABLE registries MODIFY id INTEGER NOT NULL AUTO_INCREMENT');
          }
      
          /**
           * Reverse the migrations.
           *
           * @return void
           */
          public function down()
          {
              Schema::dropIfExists('registries');
          }
      }

      Re-execute the migration command:

      php artisan migrate

      To validate the creation of the table, run the database commands used previously; to validate the creation of the columns, run in the terminal:

      use laravelcrud;
      show columns from registries;

      Create The Register View

      Now generate the first view. To do this, create the registries folder inside app -> resources -> views, and then create the create.blade.php file inside this folder:

      <!DOCTYPE html>
      <html>
        <head>
          <meta charset="utf-8">
          <title>FLOSSPA Asistencia Evento </title>
          <link rel="stylesheet" href="{{asset('css/app.css')}}">
          <link rel="stylesheet" href="{{asset('css/registry.css')}}">
          
      
        </head>
        <body>
          <div class="container">
          <img alt="FLOSSPA" srcset="{{ URL::to('/images/logo-flosspa.svg') }}">
          
          @if ($errors->any())
            <div class="alert alert-danger">
                <ul>
                    @foreach ($errors->all() as $error)
                        <li>{{ $error }}</li>
                    @endforeach
                </ul>
            </div><br />
            @endif
            @if (\Session::has('success'))
            <div class="alert alert-success">
                <p>{{ \Session::get('success') }}</p>
            </div><br />
            @endif
      
          <form method="post" action="{{url('registries')}}" id="formRegistry">
          {{csrf_field()}}
              <div class="row">
                <div class="form-group col-xs-12 col-sm-6 col-md-6">
                  <label for="name">Primer Nombre:</label>
                  <input type="text" class="form-control" name="first_name"  value={{old('first_name')}}>
                </div>
                <div class="form-group col-xs-12 col-sm-6 col-md-6">
                  <label for="name">Segundo Nombre:</label>
                  <input type="text" class="form-control" name="second_name" value={{old('second_name')}}>
                </div>
              </div>
      
               <div class="row">
                <div class="form-group col-xs-12 col-sm-6 col-md-6">
                  <label for="name">Primer Apellido:</label>
                  <input type="text" class="form-control" name="surename" value={{old('surename')}}>
                </div>
                <div class="form-group col-xs-12 col-sm-6 col-md-6">
                  <label for="name">Segundo Apellido:</label>
                  <input type="text" class="form-control" name="second_surename" value={{old('second_surename')}}>
                </div>
              </div>
      
              <div class="row">
                <div class="form-group col-xs-12 col-sm-6 col-md-6">
                  <label for="name">Email:</label>
                  <input type="text" class="form-control" name="email"  value={{old('email')}}>
                </div>
                <div class="form-group col-xs-12 col-sm-6 col-md-6">
                  <label for="name">Edad:</label>
                  <input type="text" class="form-control" name="age" value={{old('age')}}>
                </div>
              </div>
      
              <div class="row">
                <div class="form-group col-xs-12 col-sm-6 col-md-6">
                  <label for="name">Télefono Residencial:</label>
                  <input type="text" class="form-control" name="phone" value={{old('phone')}}> 
                </div>
                <div class="form-group col-xs-12 col-sm-6 col-md-6">
                  <label for="name">Celular:</label>
                  <input type="text" class="form-control" name="cell_phone" value={{old('cell_phone')}}>
                </div>
              </div>
      
              <div class="row">
              
                <div class="form-group col-xs-12 col-sm-6 col-md-6">
                <label for="name">Desea recibir información de:</label>
                <div class="checkbox">
                
                  <label> <input type="checkbox" name="flosspainfo" @if(old('flosspainfo') !== NULL ){{ 'checked' }}@endif> FLOSSPA</label>
                </div>
                <div class="checkbox">
                  <label> <input type="checkbox" name="fedorainfo" @if(old('fedorainfo') !== NULL ){{ 'checked' }}@endif> FEDORA</label>
                </div>
                <div class="checkbox disabled">
                  <label> <input type="checkbox" name="latansecinfo" @if(old('latansecinfo') !== NULL ){{ 'checked' }}@endif> LATANSEC</label>
                </div>
                </div>
                
              </div>
              
              <div class="row">
                <div class="form-group col-xs-12 col-sm-12 col-md-12">
                  <label for="name">Comentarios:</label>
                  <textarea class="form-control" rows="5" name="coments" >{{old('coments')}}</textarea>
                </div>
              </div>
      
              <div class="row">
                
                <div class="form-group col-xs-12 col-sm-12 col-md-12">
                  <button type="submit" class="btn btn-success">Agregar Asistencia</button>
                </div>
              </div>
            </form>
          </div>
          <div id="toast-container" class="toast-top-right">
          </div>
        </body>
      </html>

      CSRF (cross-site request forgery) is a type of malicious exploit of a website in which unauthorized commands are transmitted by a user whom the website trusts. This vulnerability is also known by other names such as XSRF, hostile linking, session riding, and one-click attack. Laravel provides protection against it with {{csrf_field()}}.

      The information in the form is preserved by adding value={{old('first_name')}} to each of the controls.

      Now you need a controller (and a route to it) to display the view. Generate the controller by executing the following command:

      php artisan make:controller RegistryController --resource

      Inside the folder app -> Http -> Controllers you will find the controller, automatically generated with all the necessary methods for the CRUD.

      In order to have access to the created methods, a route referring to the controller must be added; modify the file app -> routes -> web.php:

      <?php
      
      /*
      |--------------------------------------------------------------------------
      | Web Routes
      |--------------------------------------------------------------------------
      |
      | Here is where you can register web routes for your application. These
      | routes are loaded by the RouteServiceProvider within a group which
      | contains the "web" middleware group. Now create something great!
      |
      */
      
      
      Route::resource('registries','RegistryController');
      

      To see the list of routes, execute in the terminal:

      php artisan route:list
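      For the resource controller created above you should see the seven RESTful routes, roughly like this (an abbreviated sketch; the exact table layout depends on your Laravel version):

      GET|HEAD    registries                   registries.index
      POST        registries                   registries.store
      GET|HEAD    registries/create            registries.create
      GET|HEAD    registries/{registry}        registries.show
      GET|HEAD    registries/{registry}/edit   registries.edit
      PUT|PATCH   registries/{registry}        registries.update
      DELETE      registries/{registry}        registries.destroy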

      In the create method of the controller, the following code is added:

      public function create()
      {  
         return view('registries.create');
      }

      Start the Laravel development server and you will see the first view:

      php artisan serve

      Load the next URL in the browser:

      http://localhost:8000/registries/create

      [Figure: Laravel application view]

      In the public -> css folder, custom CSS was added, and the images folder was created to hold the image displayed on the screen. You can download the code from my GitHub account.

      In the model you must handle the Mass Assignment vulnerability; modify the class found in app -> Registry.php:

      <?php
      
      namespace App;
      
      use Illuminate\Database\Eloquent\Model;
      
      class Registry extends Model
      {
         
          protected $fillable = ['first_name','second_name','surename','second_surename',
                              'email','age','phone','cell_phone','flosspainfo','fedorainfo','latansecinfo','coments']; 
      }

      To save the data sent from the view, the RegistryController.php controller must be modified to include the Registry class and the Input facade:

      use App\Registry;
      use Illuminate\Support\Facades\Input;

      Now you must modify the store method:

      public function store(Request $request)
          {
             
              $registry = $this->validate(request(), [
                  'first_name' => 'required',
                  'surename' => 'required',
                  'email' => 'required|email',
                  'age' => 'integer|min:0'
                ]);
      
              
                $flosspainfo = Input::get('flosspainfo') == 'on' ? true :false;
                $fedorainfo = Input::get('fedorainfo') == 'on' ? true :false;
                $latansecinfo = Input::get('latansecinfo') == 'on' ? true :false;
                
      
                Registry::create([
                  'first_name'=> Input::get('first_name'),
                  'second_name'=> Input::get('second_name'),
                  'surename'=> Input::get('surename'),
                  'second_surename'=> Input::get('second_surename'),
                  'email'=> Input::get('email'),
                  'cell_phone' => Input::get('cell_phone'),
                  'phone'=> Input::get('phone'),
                  'coments'=> Input::get('coments'),
                  'age'=> Input::get('age'),
                  'flosspainfo'=> $flosspainfo,
                  'fedorainfo'=>  $fedorainfo,
                  'latansecinfo'=>  $latansecinfo
                ]);
        
                return back()->with('success', 'Información almacenada con éxito');
          }

      Load the view and try to save without filling in the data; you will see the validation messages.

      To display the messages in Spanish you can use the following translation package.

      List View

      Now create a list of all the saved records. Create the view in

      app -> resources -> views -> registries -> index.blade.php

      <!DOCTYPE html>
      <html>
        <head>
          <meta charset="utf-8">
          <title>Index Page</title>
          <link rel="stylesheet" href="{{asset('css/app.css')}}">
        </head>
        <body>
          <div class="container">
          <img src="equilateral.png" alt="FLOSSPA" srcset="{{ URL::to('/images/logo-flosspa.svg') }}">
          <br />
          @if (\Session::has('success'))
            <div class="alert alert-success">
              <p>{{ \Session::get('success') }}</p>
            </div><br />
           @endif
          <table class="table table-striped">
          <thead>
            <tr>
              <th>ID</th>
              <th>Nombre</th>
              <th>Apellido</th>
              <th >Email</th>
              <th >Teléfono</th>
              <th >Celular</th>
              <th colspan="2">Acción</th>
            </tr>
          </thead>
          <tbody>
            @foreach($registries as $registry)
            <tr>
              <td>{{$registry['id']}}</td>
              <td>{{$registry['first_name']}}</td>
              <td>{{$registry['surename']}}</td>
              <td>{{$registry['email']}}</td>
              <td>{{$registry['phone']}}</td>
              <td>{{$registry['cell_phone']}}</td>
              <td><a href="{{action('RegistryController@edit', $registry['id'])}}" class="btn btn-warning">Edit</a></td>
              <td>
                <form  onsubmit="return confirm('Do you really want to delete?');" action="{{action('RegistryController@destroy', $registry['id'])}}" method="post">
                  {{csrf_field()}}
                  <input name="_method" type="hidden" value="DELETE">
                  <button class="btn btn-danger" type="submit">Delete</button>
                </form>
              </td>
            </tr>
            @endforeach
          </tbody>
        </table>
        </div>
        </body>
      </html>

      Modify the controller by adding the view load:

      public function index()
      {
          $registries = Registry::all()->toArray();
          return view('registries.index', compact('registries'));
      }

      Load the view http://localhost:8000/registries

      [Figure: View of the list of records]

      Edit View

      Create the editing functionality for a record. Start by creating the view, which is quite similar to the creation view, in app -> resources -> views -> registries -> edit.blade.php

      <!DOCTYPE html>
      <html>
        <head>
          <meta charset="utf-8">
          <title>FLOSSPA Editar Asistencia Evento </title>
          <link rel="stylesheet" href="{{asset('css/app.css')}}">
          <link rel="stylesheet" href="{{asset('css/registry.css')}}">
          
      
        </head>
        <body>
          <div class="container">
          <img src="equilateral.png" alt="FLOSSPA" srcset="{{ URL::to('/images/logo-flosspa.svg') }}">
          
          @if ($errors->any())
            <div class="alert alert-danger">
                <ul>
                    @foreach ($errors->all() as $error)
                        <li>{{ $error }}</li>
                    @endforeach
                </ul>
            </div><br />
            @endif
            @if (\Session::has('success'))
            <div class="alert alert-success">
                <p>{{ \Session::get('success') }}</p>
            </div><br />
            @endif
      
          <form method="post" action="{{action('RegistryController@update', $id)}}" id="formRegistry">
          {{csrf_field()}}
          <input name="_method" type="hidden" value="PATCH">
              <div class="row">
                <div class="form-group col-xs-12 col-sm-6 col-md-6">
                  <label for="name">Primer Nombre:</label>
                  <input type="text" class="form-control" name="first_name"  value="{{$registry->first_name}}" >
                </div>
                <div class="form-group col-xs-12 col-sm-6 col-md-6">
                  <label for="name">Segundo Nombre:</label>
                  <input type="text" class="form-control" name="second_name" value="{{$registry->second_name}}">
                </div>
              </div>
      
               <div class="row">
                <div class="form-group col-xs-12 col-sm-6 col-md-6">
                  <label for="name">Primer Apellido:</label>
                  <input type="text" class="form-control" name="surename" value="{{$registry->surename}}">
                </div>
                <div class="form-group col-xs-12 col-sm-6 col-md-6">
                  <label for="name">Segundo Apellido:</label>
                  <input type="text" class="form-control" name="second_surename" value="{{$registry->second_surename}}">
                </div>
              </div>
      
              <div class="row">
                <div class="form-group col-xs-12 col-sm-6 col-md-6">
                  <label for="name">Email:</label>
                  <input type="text" class="form-control" name="email"  value="{{$registry->email}}">
                </div>
                <div class="form-group col-xs-12 col-sm-6 col-md-6">
                  <label for="name">Edad:</label>
                  <input type="text" class="form-control" name="age" value="{{$registry->age}}">
                </div>
              </div>
      
              <div class="row">
                <div class="form-group col-xs-12 col-sm-6 col-md-6">
                  <label for="name">Teléfono Residencial:</label>
                  <input type="text" class="form-control" name="phone" value="{{$registry->phone}}"> 
                </div>
                <div class="form-group col-xs-12 col-sm-6 col-md-6">
                  <label for="name">Celular:</label>
                  <input type="text" class="form-control" name="cell_phone" value="{{$registry->cell_phone}}">
                </div>
              </div>
      
              <div class="row">
              
                <div class="form-group col-xs-12 col-sm-6 col-md-6">
                <label for="name">Desea recibir información de:</label>
                <div class="checkbox">
                
                  <label> <input type="checkbox" name="flosspainfo" @if($registry->flosspainfo == true ){{ 'checked' }}@endif> FLOSSPA</label>
                </div>
                <div class="checkbox">
                  <label> <input type="checkbox" name="fedorainfo" @if($registry->fedorainfo == true){{ 'checked' }}@endif> FEDORA</label>
                </div>
                <div class="checkbox disabled">
                  <label> <input type="checkbox" name="latansecinfo" @if($registry->latansecinfo == true ){{ 'checked' }}@endif> LATANSEC</label>
                </div>
                </div>
                
              </div>
              
              <div class="row">
                <div class="form-group col-xs-12 col-sm-12 col-md-12">
                  <label for="name">Comentarios:</label>
                  <textarea class="form-control" rows="5" name="coments" >{{$registry->coments}}</textarea>
                </div>
              </div>
      
              <div class="row">
                
                <div class="form-group col-xs-12 col-sm-12 col-md-12">
                  <button type="submit" class="btn btn-success">Update</button>
                </div>
              </div>
            </form>
          </div>
          <div id="toast-container" class="toast-top-right">
          </div>
        </body>
      </html>

      Modify the edit method, as well as the update method of the controller:

      /**
           * Show the form for editing the specified resource.
           *
           * @param  int  $id
           * @return \Illuminate\Http\Response
           */
          public function edit($id)
          {
              $registry = Registry::find($id);
              return view('registries.edit',compact('registry','id'));
          }
      
          /**
           * Update the specified resource in storage.
           *
           * @param  \Illuminate\Http\Request  $request
           * @param  int  $id
           * @return \Illuminate\Http\Response
           */
          public function update(Request $request, $id)
          {
            
             $registry = Registry::find($id);
              $this->validate(request(), [
                  'first_name' => 'required',
                  'surename' => 'required',
                  'email' => 'required|email',
                  'age' => 'integer|min:0'
                ]);
      
              
                $registry->first_name = Input::get('first_name');
                $registry->second_name =  Input::get('second_name');
                $registry->surename = Input::get('surename');
                $registry->second_surename =  Input::get('second_surename');
                $registry->email = Input::get('email');
                $registry->cell_phone =  Input::get('cell_phone');
                $registry->phone = Input::get('phone');
                $registry->coments = Input::get('coments');
                $registry->age = Input::get('age');
                $registry->flosspainfo = Input::get('flosspainfo') == 'on' ? true : false;
                $registry->fedorainfo = Input::get('fedorainfo') == 'on' ? true : false;
                $registry->latansecinfo = Input::get('latansecinfo') == 'on' ? true : false;
                $registry->save();
                
        
                return back()->with('success', 'Registry updated successfully');
                
          }

      Deleting Record

      To delete a record, add the following in the destroy method of the controller:

      public function destroy($id)
      {
        $registry = Registry::find($id);
        $registry->delete();
        return redirect('registries')->with('success','Registry has been deleted');
      }

      You can find the fully functional project in my GitHub account.

      The post Create your First Application with Laravel appeared first on Davis Álvarez.

      FPgM report: 2019-15

      Posted by Fedora Community Blog on April 12, 2019 07:15 PM
      Fedora Program Manager weekly report on Fedora Project development and progress

      Here’s your report of what has happened in Fedora Program Management this week.

      I have weekly office hours in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else.

      Announcements and help wanted

      Help wanted

      Meetings and test days

      Fedora 30 Status

      Fedora 30 Beta is released. The Fedora 30 GA is scheduled for 30 April 2019.

      Schedule

      • 2019-04-16 — Final freeze begins
      • 2019-04-30 — Final preferred target
      • 2019-05-07 — Final target date #1

      Blocker bugs

      Bug ID  | Blocker status   | Component       | Bug Status
      1683197 | Accepted (Final) | gdm             | ASSIGNED
      1691909 | Accepted (Final) | gdm             | NEW
      1690429 | Accepted (Final) | gnome-shell     | NEW
      1688462 | Accepted (Final) | libdnf          | NEW
      1666920 | Accepted (Final) | systemd         | POST
      1699179 | Proposed (Final) | anaconda        | ASSIGNED
      1693409 | Proposed (Final) | gdm             | NEW
      1693409 | Proposed (Final) | selinux-policy  | NEW
      1699099 | Proposed (Final) | selinux-policy  | NEW
      1698550 | Proposed (Final) | shim            | NEW
      1697591 | Proposed (Final) | xorg-x11-server | NEW

      Fedora 31 Status

      Changes

      Approved by FESCo

      Submitted to FESCo

      Rejected by FESCo

      The post FPgM report: 2019-15 appeared first on Fedora Community Blog.

      Fedora 29 : Thonny editor for python.

      Posted by mythcat on April 12, 2019 02:34 PM
      Thonny is a Python IDE for beginners: a simple editor with Python 3.7 built in.
      The official webpage can be found here and the GitHub project page is this.
      The development team is from the University of Tartu, Estonia, with help from the open-source community. Thonny grew up in the University of Tartu (https://www.ut.ee), Institute of Computer Science (https://www.cs.ut.ee).
      I tested it today on Fedora 29 and it works well.
      Let's start with the first step:
      [mythcat@desk ~]$ pip3 install thonny --user
      Collecting thonny
      ...
      Successfully installed astroid-2.2.5 asttokens-1.1.13 docutils-0.14 isort-4.3.17 jedi-0.13.3 lazy-object-proxy-1.3.1
      mccabe-0.6.1 mypy-0.700 mypy-extensions-0.4.1 parso-0.4.0 pylint-2.3.1 pyperclip-1.7.0 pyserial-3.4 thonny-3.1.2
      typed-ast-1.3.1
      ...
      [root@desk mythcat]# dnf install python3-tkinter.x86_64
      Last metadata expiration check: 0:21:20 ago on Tue 09 Apr 2019 09:57:24 PM EEST.

      Installed:
      python3-tkinter-3.7.2-5.fc29.x86_64 tk-1:8.6.8-1.fc29.x86_64

      Complete!
      This editor can also be found in the Fedora repo, but I used the latest released version.
      [root@desk mythcat]# dnf search thonny
      Last metadata expiration check: 0:36:55 ago on Tue 09 Apr 2019 09:57:24 PM EEST.
      ========================= Name Exactly Matched: thonny =========================
      thonny.noarch : Python IDE for beginners
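
      So, if you prefer the packaged version, installing it should be as simple as:

      [root@desk mythcat]# dnf install thonny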

      Using Rust Generics to Enforce DB Record State

      Posted by William Brown on April 12, 2019 02:00 PM

      Using Rust Generics to Enforce DB Record State

      In a database, entries go through a lifecycle which represents what attributes they have, their db record keys, and whether they have passed schema checking.

      I’m currently working on a (private in 2019, public in July 2019) project which is a NoSQL database written in Rust. To help us manage the correctness and lifecycle of database entries, I have been using advice from the Rust Embedded Group’s Book.

      As I have mentioned in the past, state machines are a great way to design code, so let’s plot out the state machine we have for Entries:

      Entry State Machine

      The lifecyle is:

      • A new entry is submitted by the user for creation
      • We schema check that entry
      • If it passes schema, we commit it and assign internal ID’s
      • When we search the entry, we retrieve it by internal ID’s
      • When we modify the entry, we need to recheck its schema before we commit it back
      • When we delete, we just remove the entry.

      This leads to a state machine of:

                          |
                   (create operation)
                          |
                          v
                  [ New + Invalid ] -(schema check)-> [ New + Valid ]
                                                            |
                                                     (send to backend)
                                                            |
                                                            v    v-------------\
      [Commited + Invalid] <-(modify operation)- [ Commited + Valid ]          |
                |                                          ^   \       (write to backend)
                \--------------(schema check)-------------/     ---------------/
      

      This is a bit rough - The version on my whiteboard was better :)

      The main observation is that we are focused only on the commitability and validity of entries - not on where they are or whether the commit was a success.

      Entry Structs

      So to make these states work we have the following structs:

      struct EntryNew;
      struct EntryCommited;
      
      struct EntryValid;
      struct EntryInvalid;
      
      struct Entry<STATE, VALID> {
          state: STATE,
          valid: VALID,
          // Other db junk goes here :)
      }
      

      We can then use these to establish the lifecycle with functions (similar) to this:

      impl Entry<EntryNew, EntryInvalid> {
          fn new() -> Self {
              Entry {
                  state: EntryNew,
                  valid: EntryInvalid,
                  ...
              }
          }
      
      }
      
      impl<STATE> Entry<STATE, EntryInvalid> {
          fn validate(self, schema: Schema) -> Result<Entry<STATE, EntryValid>, ()> {
              if schema.check(self) {
                  Ok(Entry {
                      state: self.state,
                      valid: EntryValid,
                      ...
                  })
              } else {
                  Err(())
              }
          }
      
          fn modify(&mut self, ...) {
              // Perform any modifications on the entry you like, only works
              // on invalidated entries.
          }
      }
      
      impl<STATE> Entry<STATE, EntryValid> {
          fn seal(self) -> Entry<EntryCommited, EntryValid> {
              // Assign internal id's etc
              Entry {
                  state: EntryCommited,
                  valid: EntryValid,
              }
          }
      
          fn compare(&self, other: Entry<STATE, EntryValid>) -> ... {
              // Only allow compares on schema validated/normalised
              // entries, so that checks don't have to be schema aware
              // as the entries are already in a comparable state.
          }
      }
      
      impl Entry<EntryCommited, EntryValid> {
          fn invalidate(self) -> Entry<EntryCommited, EntryInvalid> {
              // Invalidate an entry, to allow modifications to be performed
              // note that modifications can only be applied once an entry is created!
              Entry {
                  state: self.state,
                  valid: EntryInvalid,
              }
          }
      }
      

      Importantly, this allows us to control when we apply search terms, send entries to the backend for storage, and more. The benefit is that this is compile-time checked, so you can never send an entry to a backend that is not schema checked, or run comparisons or searches on entries that aren’t schema checked, and you can only modify or delete something once it’s created. For example, other parts of the code now have:

      impl BackendStorage {
          // Can only create if no db id's are assigned, IE it must be new.
          fn create(&self, ..., entry: Entry<EntryNew, EntryValid>) -> Result<...> {
          }
      
          // Can only modify IF it has been created, and is validated.
          fn modify(&self, ..., entry: Entry<EntryCommited, EntryValid>) -> Result<...> {
          }
      
          // Can only delete IF it has been created and committed.
          fn delete(&self, ..., entry: Entry<EntryCommited, EntryValid>) -> Result<...> {
          }
      }
      
      impl Filter<STATE> {
          // Can only apply filters (searches) if the entry is schema checked. This has an
          // important behaviour, where we can schema normalise. Consider a case-insensitive
          // type, we can schema-normalise this on the entry, then our compare can simply
          // be a string.compare, because we assert both entries *must* have been through
          // the normalisation routines!
          fn apply_filter(&self, ..., entry: &Entry<STATE, EntryValid>) -> Result<bool, ...> {
          }
      }
      

      Using this with Serde?

      I have noticed that when we serialise the entry, the valid/state fields are not compiled away - because they have to be serialised, the compiler can’t eliminate them despite their empty content.

      A future cleanup will be to have a serialised DBEntry form such as the following:

      struct DBEV1 {
          // entry data here
      }
      
      enum DBEntryVersion {
          V1(DBEV1)
      }
      
      struct DBEntry {
          data: DBEntryVersion
      }
      
      impl From<Entry<EntryNew, EntryValid>> for DBEntry {
          fn from(e: Entry<EntryNew, EntryValid>) -> Self {
              // assign db id's, and return a serialisable entry.
          }
      }
      
      impl From<Entry<EntryCommited, EntryValid>> for DBEntry {
          fn from(e: Entry<EntryCommited, EntryValid>) -> Self {
              // Just translate the entry to a serialisable form
          }
      }
      

      This way we still have the zero-cost state on Entry, but we are able to move to a versioned serialised structure, and we minimise the run time cost.
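
      As a sketch of what that could look like with serde (the derives and the attrs field are illustrative assumptions, not the project’s actual code):

      use serde::{Deserialize, Serialize};
      use std::collections::BTreeMap;

      #[derive(Serialize, Deserialize)]
      struct DBEV1 {
          // entry data here, e.g. a flat attribute map
          attrs: BTreeMap<String, Vec<String>>,
      }

      #[derive(Serialize, Deserialize)]
      enum DBEntryVersion {
          V1(DBEV1),
      }

      #[derive(Serialize, Deserialize)]
      struct DBEntry {
          data: DBEntryVersion,
      }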

      Testing the Entry

      To help with testing, I needed to be able to shortcut and move between any state of the entry so I could quickly make fake entries, so I added some unsafe methods:

      // Note: this lives in impl Entry<EntryNew, EntryInvalid>
      #[cfg(test)]
      unsafe fn to_new_valid(self) -> Entry<EntryNew, EntryValid> {
          Entry {
              state: EntryNew,
              valid: EntryValid,
          }
      }
      

      These allow me to setup and create small unit tests where I may not have a full backend or schema infrastructure, so I can test specific aspects of the entries and their lifecycle. It’s limited to test runs only, and marked unsafe. It’s not “technically” memory unsafe, but it’s unsafe from the view of “it could absolutely mess up your database consistency guarantees” so you have to really want it.
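
      A unit test can then conjure a valid entry directly. A sketch, reusing the Entry::new() constructor from earlier:

      #[test]
      fn test_compare_entries() {
          // unsafe only in the "may break db consistency" sense described above
          let e = unsafe { Entry::new().to_new_valid() };
          // ... exercise compare()/apply_filter() on e without a schema or backend
      }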

      Summary

      Using state machines like this really helped me to clean up my code and make stronger assertions about the correctness of entry lifecycles. It also means that when I and future contributors work on the code base, we’ll have compile-time checks to ensure we are doing the right thing - preventing data corruption and inconsistency.

      Bodhi 3.14.0 released

      Posted by Bodhi on April 12, 2019 10:21 AM

      This is a feature release.

      Features

      • Use flatpaks/ namespace for Flatpaks, and make it configurable.
        (#2924, #3052).
      • Add json response handler for server internal errors (#3035).

      Bug fixes

      • Fix HTTP 500 errors when viewing composes (#2826).
      • Set log level to ERROR in bodhi-approve-testing (#3021).

      Development improvements

      • Log why buildroot overrides are expired (#3060).

      QElectroTech on the road to 0.7

      Posted by Remi Collet on April 12, 2019 08:39 AM

      RPMs of QElectroTech version 0.70-rc1 (release candidate), an application to design electric diagrams, are available in remi-test for Fedora and Enterprise Linux 7.

      While version 0.6, available in the official repository, is already 1 year old, the project is working on a new major version of their electric diagram editor.

      Official web site : http://qelectrotech.org/.

      Installation by YUM :

      yum --enablerepo=remi-test install qelectrotech

      RPM (version 0.70~rc1-1) are available for Fedora ≥ 27 and Enterprise Linux 7 (RHEL, CentOS, ...)

      Follow this entry, which will be updated on each new version (beta, RC, ...) until the final version is released.

      Notice: a Copr / Qelectrotech repository also exists, which provides "development" versions (0.70-dev for now).

      Joe Doss: How Do You Fedora?

      Posted by Fedora Magazine on April 12, 2019 08:00 AM

      We recently interviewed Joe Doss on how he uses Fedora. This is part of a series on the Fedora Magazine. The series profiles Fedora users and how they use Fedora to get things done. Contact us on the feedback form to express your interest in becoming an interviewee.

      Who is Joe Doss?

      Joe Doss lives in Chicago, Illinois USA and his favorite food is pizza. He is the Director of Engineering Operations at Kenna Security, Inc. Doss describes his employer this way: “Kenna uses data science to help enterprises combine their infrastructure and application vulnerability data with exploit intelligence to measure risk, predict attacks and prioritize remediation.”

      His first Linux distribution was Red Hat Linux 5. A friend of his showed him a computer that wasn’t running Windows. Doss thought it was just a program to install on Windows when his friend gave him a Red Hat Linux 5 install disk. “I proceeded to install this Linux ‘program’ on my Father’s PC,” he says. Luckily for Doss, his father supported his interest in computers. “I ended up totally wiping out the Windows 95 install as a result and this was how I got my first computer.”

      At Kenna, Doss’ group makes use of Fedora and Ansible: “We run Fedora Cloud in multiple VPC deployments in AWS and Google Compute with over 200 virtual machines. We use Ansible to automate everything we do with Fedora.”

      Doss brews beer at home and contributes to open source in his free time. He also has a cat named Tibby. “I rescued Tibby off the street in the Hyde Park neighborhood of Chicago when she was 7 months old. She is not very smart, but she makes up for that with cuteness.” His favorite place to visit is his childhood home of Michigan, but Doss says, “anywhere with a warm beach, a cool drink, and the ocean is pretty nice too.”

      [Photo: Tibby the cute cat!]

      The Fedora community

      Doss became involved with Fedora and the Fedora community through his job at Kenna Security. When he first joined the company they were using Ubuntu and Chef in production. There was a desire to make the infrastructure more reproducible and reliable, and he says, “I was able to greenfield our deployments with Fedora Cloud and Ansible.” This project got him involved in the Fedora Cloud release.

      When asked about his first impression of the Fedora community, Doss said, “Overwhelming to be honest. There is so much going on and it is hard to figure out who are the stakeholders of each part of Fedora.” Once he figured out who he needed to talk to he found the community very welcoming and super supportive.

      One of the ideas he had to improve the community was to unite the various projects and teams under one bug tracking tool and community resource. “Pagure, Bugzilla, Github, Fedora Forums, Discourse Forums, Mailing lists… it is all over the place and hard to navigate at first.” Despite the initial complexity of becoming familiar with the Fedora Project, Doss feels it is amazingly rewarding to be involved. “It feels awesome to be a part of a Linux distro that impacts so many people in very positive ways. You can make a difference.”

      Doss called out Dusty Mabe at Red Hat for helping him become involved, saying Dusty “has been an amazing mentor and resource for enabling me to contribute back to Fedora.”

      Doss has an interesting way of explaining to non-technical friends what he does. “Imagine changing the tires on a very large bus while it is going down the highway at 70 MPH and sometimes you need to get involved with the tire manufacturer to help make this process work well.” This metaphor helps people understand what replacing 200-plus VMs across more than five production VPCs in AWS and Google Compute with every Fedora release is like.

      Doss drew my attention to one specific incident with Fedora 29 and Vagrant. “Recently we encountered an issue where Vagrant wouldn’t set the hostname on a fresh Fedora 29 Beta VM. This was due to Fedora 29 Cloud no longer shipping the network service stub in favor of NetworkManager. This led to me working with a colleague at Kenna Security to send a patch upstream to the Vagrant project to help their developers produce a fix for Fedora 29. Vagrant usage with Fedora is a very large part of our development cycle at Kenna, and having this broken before the Fedora 29 release would have impacted us a lot.” As Doss said, “Sometimes you need to help make the tires before they go on the bus.”

      Doss is the COPR Fedora, RHEL, and CentOS package maintainer for WireGuard VPN. “The CentOS repo just went over 60 thousand downloads last month which is pretty awesome.”

      What Hardware?

      Doss uses Fedora 29 Cloud in over five VPC deployments in AWS and Google Compute. At home he has a SuperMicro SYS-5019A-FTN4 1U Server that runs Fedora 29 Server with OpenShift OKD installed on it. His laptops are all Lenovo. “For Laptops I use a ThinkPad T460s for work and a ThinkPad 25 at home. Both have Fedora 29 installed. ThinkPads are the best with Fedora.”

      What Software?

      Doss uses GNOME 3 as his preferred desktop on Fedora Workstation. “I use Sublime Text 3 for my text editor on the desktop or vim on servers.” For development and testing he uses Vagrant. “Ansible is what I use for any kind of automation with Fedora. I maintain an Ansible playbook for setting up my workstation.”

      Ansible

      I asked Doss if he had advice for people trying to learn Ansible.

      “Start small. Automate the stuff that makes your life easier, but don’t over complicate it. Ansible Galaxy is a great resource to get things done quickly, but if you truly want to learn how to use Ansible, writing your own roles and playbooks is the path I would take.

      “I have helped a lot of my coworkers that have joined my Operations team at Kenna get up to speed on using Ansible by buying them a copy of Ansible for DevOps by Jeff Geerling. This book will give anyone new to Ansible the foundation they need to start using it every day. #ansible on Freenode is a great resource as well, along with the official Ansible docs.”

      Doss also said, “Knowing what to automate is most likely the most difficult thing to master without over complicating things. Debugging complex playbooks and roles is a close second.”

      Home lab

      He recommended setting up a home lab. “At Kenna and at home I use Vagrant with the Vagrant-libvirt plugin for developing Ansible roles and playbooks. You can iterate quickly to build your roles and playbooks on your laptop with your favorite editor and run vagrant provision to run your playbook. Quick feedback loop and the ability to burn down your Vagrant VM and start over quickly is an amazing workflow. Below is a sample Vagrant file that I keep handy to spin up a Fedora VM to test my playbooks.”

      # -*- mode: ruby -*-
      # vi: set ft=ruby :
      Vagrant.configure(2) do |config|
        config.vm.provision "shell", inline: "dnf install nfs-utils rpcbind @development-tools @ansible-node redhat-rpm-config gcc-c++ -y"
        config.ssh.forward_agent = true
        config.vm.define "f29", autostart: false do |f29|
          f29.vm.box = "fedora/29-cloud-base"
          f29.vm.hostname = "f29.example.com"
          f29.vm.provider "libvirt" do |vm|
            vm.memory = 2048
            vm.cpus = 2
            vm.driver = "kvm"
            vm.nic_model_type = "e1000"
          end
          config.vm.synced_folder '.', '/vagrant', disabled: true

          config.vm.provision "ansible" do |ansible|
            ansible.groups = {
            }
            ansible.playbook = "playbooks/main.yml"
            ansible.inventory_path = "inventory/development"
            ansible.extra_vars = {
              ansible_python_interpreter: "/usr/bin/python3"
            }
            # ansible.verbose = 'vvv'
          end
        end
      end
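
      Since the VM is defined with autostart: false, a typical iteration loop looks like this (a usage sketch, assuming the vagrant-libvirt plugin is installed):

      $ vagrant up f29 --provider=libvirt
      $ vagrant provision f29   # re-run after each playbook change
      $ vagrant destroy -f f29  # burn it down and start over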

      Internationalization test day report for Fedora 30

      Posted by Fedora Community Blog on April 11, 2019 11:47 AM

      In preparation for the Fedora 30 release, the Internationalization Team organized an Internationalization (i18n) Test Day on March 19. Like all the previous i18n test days, this one saw people from all over the world participate. From early morning, internationalization engineers were present in the #fedora-test-day channel to help people with testing.

      Internationalization changes for Fedora 30

      For Fedora 30, we had only one Change accepted this time.

          1. Replace Comps Language Group With Langpacks: language support groups in the comps file will be replaced by weak rich dependencies in the langpacks package.

      Test Day Results

      We kept testing open for a few more days so that anyone who missed the actual test day could still participate. We have picked up all the test results from the test day app and consolidated them here.

      This time we again had many testers: around nineteen users from all over the world, who tested thirteen languages on this i18n test day. Five bugs were filed and work on them is in progress. Most people were interested in testing the Langpacks Change.

      Thanks to all the participants for testing internationalization Changes and test cases. Let’s continue testing internationalization for all upcoming Fedora releases.

      The post Internationalization test day report for Fedora 30 appeared first on Fedora Community Blog.

      Red Hat Brno Open House

      Posted by Paul Mellors [MooDoo] on April 11, 2019 09:50 AM

      I don’t work for Red Hat; I’m just a fan and a Fedora Ambassador. So, with that in mind, if you’re in the area:


      Red Hat Brno is having an Open House! Come listen to great talks and meet our teams!

      https://openhouse.redhat.com/cz/

      All systems go

      Posted by Fedora Infrastructure Status on April 11, 2019 01:32 AM
      New status good: Everything seems to be working. for services: Ipsilon, Badges, Blockerbugs, Package Updates Manager, Fedora Infrastructure Cloud, COPR Build System, Documentation website, Fedora elections, Account System, Fedora Messaging Bus, Fedora Calendar, Fedora pastebin service, The Koji Buildsystem, Koschei Continuous Integration, Kerberos, Mailing Lists, Mirror List, Mirror Manager, Fedora Packages App, Pagure, Fedora People, Package Database, Package maintainers git repositories, Fedora Container Registry, Tagger, Fedora websites, Fedora Wiki, Zodbot IRC bot

      There are scheduled downtimes in progress

      Posted by Fedora Infrastructure Status on April 10, 2019 08:58 PM
      New status scheduled: All systems are being updated/rebooted for services: Ipsilon, Badges, Blockerbugs, Package Updates Manager, Fedora Infrastructure Cloud, COPR Build System, Documentation website, Fedora elections, Account System, Fedora Messaging Bus, Fedora Calendar, Fedora pastebin service, The Koji Buildsystem, Koschei Continuous Integration, Kerberos, Mailing Lists, Mirror List, Mirror Manager, Fedora Packages App, Pagure, Fedora People, Package Database, Package maintainers git repositories, Fedora Container Registry, Tagger, Fedora websites, Fedora Wiki, Zodbot IRC bot

      Using a client certificate to set the attestation checksum

      Posted by Richard Hughes on April 10, 2019 08:02 PM

      For a while, fwupd has been able to verify the PCR0 checksum for the system firmware. The attestation checksum can be used to verify that the installed firmware matches that supplied by the vendor and means the end user is confident the firmware has not been modified by a 3rd party. I think this is really an important and useful thing the LVFS can provide. The PCR0 value can easily be found using tpm2_pcrlist if the TPM is in v2.0 mode, or cat /sys/class/tpm/tpm0/pcrs if the TPM is still in v1.2 mode. It is also reported in the fwupdmgr get-devices output for versions of fwupd >= 1.2.2.

      The device checksum as a PCR0 is slightly different than a device checksum for a typical firmware. For instance, a DFU device checksum can be created using sha256sum firmware.bin (assuming the image is 100% filling the device) and you don’t actually have to flash the image to the hardware to get the device checksum out. For a UEFI UpdateCapsule you need to schedule the update, reboot, then read back the PCR0 from the hardware. There must be an easier way…

      Assuming you have a vendor account on the LVFS, first upload the client certificate for your user account to the LVFS.

      Then, assuming you’re using fwupd >= 1.2.6 you can now do this:

      fwupdmgr refresh
      fwupdmgr update
      …reboot…
      fwupdmgr report-history --sign
      

      Notice the --sign there? Looking back at the LVFS, there now exists a device checksum.

      This means the firmware gets the magic extra green tick that makes everyone feel a lot happier.

      nbdkit 1.12

      Posted by Richard W.M. Jones on April 10, 2019 05:40 PM

      The new stable release of nbdkit, our flexible Network Block Device server, is out. You can read the announcement and release notes here.

      The big new features are SSH support, the linuxdisk plugin, writing plugins in Rust, and extents. Extents allows NBD clients to work out which parts of a disk are sparse or zeroes and skip reading them. It was hellishly difficult to write because of the number of obscure corner cases.

      Also in this release, are a couple of interesting filters. The rate filter lets you add a bandwidth limit to connections. We will use this in virt-v2v to allow v2v instances to be rate limited (even dynamically). The readahead filter makes sequential copying and scanning of plugins more efficient by prefetching data ahead of time. It is self-configuring and in most cases simply adding the filter into your filter stack is sufficient to get a nice performance boost, assuming your client’s access patterns are mostly sequential.
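
      For example, serving a local disk image with a 1 Mbps cap might look like this (a sketch based on the filter documentation; the plugin and its parameters are illustrative):

      $ nbdkit --filter=rate file file=disk.img rate=1M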

      Fedora – Nvidia

      Posted by Paul Mellors [MooDoo] on April 10, 2019 01:45 PM

      ** My notes **: only use these if you know what you’re doing, otherwise you might hose your system and I’m not responsible for that.

      1, Go HERE and find the latest version of the installer package. chmod +x the .run file
      2, su -
      3, dnf update
      4, reboot if any updates were installed
      5, dnf install kernel-devel kernel-headers gcc make dkms acpid libglvnd-glx libglvnd-opengl libglvnd-devel pkgconfig
      6, echo "blacklist nouveau" >> /etc/modprobe.d/blacklist.conf
      7, edit /etc/sysconfig/grub, add the blacklist bit:
      GRUB_CMDLINE_LINUX="rd.lvm.lv=fedora/swap rd.lvm.lv=fedora/root rhgb quiet rd.driver.blacklist=nouveau"
      8,
      ## BIOS ##
      grub2-mkconfig -o /boot/grub2/grub.cfg
      ## UEFI ##
      grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg
      9, dnf remove xorg-x11-drv-nouveau
      10,
      ## Backup old initramfs nouveau image ##
      mv /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r)-nouveau.img

      ## Create new initramfs image ##
      dracut /boot/initramfs-$(uname -r).img $(uname -r)
      11, systemctl set-default multi-user.target
      12, reboot
      13, ./NVIDIA-Linux-*.run - follow the on-screen instructions
      14, systemctl set-default graphical.target
      15, reboot

      NVIDIA drivers installed.

      ** Update **
      There is another way, thanks HRW 🙂 - do it using RPM Fusion [which I’ll do a separate post about]

      https://rpmfusion.org/Howto/NVIDIA

      Fix dbus-broker failing to start with status=226/NAMESPACE after F30 upgrade

      Posted by Hans de Goede on April 10, 2019 01:08 PM

      After upgrading my main workstation to F30 a while ago (soon after it branched), dbus-broker failed to start, making my machine pretty much unusable. I tried putting selinux in permissive mode and that fixed it, so I made a note to revisit this later.

      Fast-forward to today: I applied all updates and did a full relabel for good measure, and things were still broken. Spinning up a fresh F30 vm does not exhibit this problem, so the problem had to be something specific to my machine. After lots of debugging I found bug 1663040, which is about the same thing happening on the live media, and only on the live media. The problem turns out to be the selinux attributes on the mount-points (/dev, /proc, /sys) in /, which cannot be updated by a relabel because at that time they already have a filesystem mounted on them.

      I created the problem of the wrong labels myself when I moved from an hdd to an ssd and did a cp -pr of the non-mount dirs and a straightforward mkdir to create the mount-points on the ssd. Zbigniew gives a neat trick to detect this problem from a running system in bug 1663040:

      mkdir /tmp/foo
      sudo mount --bind / /tmp/foo
      ls -lZd /tmp/foo/* | grep unlabeled

      If the output of the last command show any files/dirs with unlabeled_t as type then your system has the same issue as mine had. To fix this boot from a livecd, mount your / on /mnt, cd into /mnt and then run:

      chcon -t device_t dev
      chcon -t home_root_t home
      chcon -t root_t proc sys
      chcon -t var_run_t run

      Then umount /mnt and reboot. After this your system should be able to run in enforcing mode again without problems.

      All systems go

      Posted by Fedora Infrastructure Status on April 10, 2019 11:27 AM
      Service 'Package maintainers git repositories' now has status: good: Everything seems to be working.

      Major service disruption

      Posted by Fedora Infrastructure Status on April 10, 2019 11:03 AM
      Service 'Package maintainers git repositories' now has status: major: Service is down for needed upgrade

      Insider 2019-04: Tetris; Docker; Podman; python-fetcher

      Posted by Peter Czanik on April 10, 2019 10:57 AM

      Dear syslog-ng users,

      This is the 74th issue of syslog-ng Insider, a monthly newsletter that brings you news related to syslog-ng.

      NEWS

      Tetris destination

      In this blog post we show you a fun way of using the Python destination of syslog-ng. We will write a Tetris destination. We will use the built-in Tetris implementation of Emacs. The syslog-ng Python destination will connect to an Emacs server. The log messages will be turned into Tetris commands inside Emacs. Using an stdin source, users can interactively feed syslog-ng with messages that will control the Tetris in the end.

      https://www.syslog-ng.com/community/b/blog/posts/tetris-destination

      A simplified guide to logging Docker to Elasticsearch in 2019 using syslog-ng

      This simplified guide shows you how to send logs of containers into Elasticsearch. Although there are several tutorials on logging Docker to Elasticsearch, this one is entirely different, as it uses syslog-ng. You can also visualize your Docker logs on a nice dashboard in Kibana.

      https://balagetech.com/simplified-logging-docker-elasticsearch-syslog-ng/

      Replacing Docker with Podman in the syslog-ng build container

      The syslog-ng source code includes a container-based build system. You can use this build system to generate source tarballs (the official syslog-ng release tarball is also generated this way) and to build packages for RHEL 7 as well as different Debian and Ubuntu releases. Although it was originally built around Docker, with the general availability of RHEL 8 drawing near, I wanted to know how difficult it is to replace Docker with Podman in the syslog-ng build system. Originally I tested this replacement on Fedora Silverblue (Silverblue), then a week later on RHEL 8 Beta. While the syslog-ng build scripts do not support these distributions (yet), the point was to check Podman as a Docker replacement.

      https://www.syslog-ng.com/community/b/blog/posts/replacing-docker-with-podman-in-the-syslog-ng-build-container

      The syslog-ng python-fetcher(): collecting load average data

      Using python-fetcher() simplifies developing a source driver for syslog-ng even further. You do not have to implement your own event loop, since syslog-ng does it for you. You only need to focus on what information you need and how you (or your code) can fetch it.

      In this blog I will show you two examples. The first one is a dead end: it is a project that looked simple at first but turned out to be problematic later on. The second one is simple but still manages to illustrate most features of the python-fetcher.

      https://www.syslog-ng.com/community/b/blog/posts/the-syslog-ng-python-fetcher-collecting-load-average-data

      CONFERENCES

      syslog-ng featured in my sudo talk

      One Identity booth

      WEBINARS

      Upcoming:

      You can watch our past webinars:


      Your feedback and news, or tips about the next issue are welcome. To read this newsletter online, visit: https://syslog-ng.com/blog/

      Managing Partitions with sgdisk

      Posted by Fedora Magazine on April 10, 2019 08:00 AM

      Roderick W. Smith‘s sgdisk command can be used to manage the partitioning of your hard disk drive from the command line. The basics that you need to get started with it are demonstrated below.

      The following six parameters are all that you need to know to make use of sgdisk’s most basic features:

      1. -p
        Print the partition table:
        # sgdisk -p /dev/sda
      2. -d x
        Delete partition x:
        # sgdisk -d 1 /dev/sda
      3. -n x:y:z
        Create a new partition numbered x, starting at y and ending at z:
        # sgdisk -n 1:1MiB:2MiB /dev/sda
      4. -c x:y
        Change the name of partition x to y:
        # sgdisk -c 1:grub /dev/sda
      5. -t x:y
        Change the type of partition x to y:
        # sgdisk -t 1:ef02 /dev/sda
      6. –list-types
        List the partition type codes:
        # sgdisk --list-types

      [Figure: The SGDisk Command]

      As you can see in the above examples, most of the commands require that the device file name of the hard disk drive to operate on be specified as the last parameter.

      The parameters shown above can be combined so that you can completely define a partition with a single run of the sgdisk command:

      # sgdisk -n 1:1MiB:2MiB -t 1:ef02 -c 1:grub /dev/sda

      Relative values can be specified for some fields by prefixing the value with a + or - symbol. If you use a relative value, sgdisk will do the math for you. For example, the above example could be written as:

      # sgdisk -n 1:1MiB:+1MiB -t 1:ef02 -c 1:grub /dev/sda

      The value 0 has a special-case meaning for several of the fields:

      • In the partition number field, 0 indicates that the next available number should be used (numbering starts at 1).
      • In the starting address field, 0 indicates that the start of the largest available block of free space should be used. Some space at the start of the hard drive is always reserved for the partition table itself.
      • In the ending address field, 0 indicates that the end of the largest available block of free space should be used.

      By using 0 and relative values in the appropriate fields, you can create a series of partitions without having to pre-calculate any absolute values. For example, the following sequence of sgdisk commands would create all the basic partitions that are needed for a typical Linux installation if run in sequence against a blank hard drive:

      # sgdisk -n 0:0:+1MiB -t 0:ef02 -c 0:grub /dev/sda
      # sgdisk -n 0:0:+1GiB -t 0:ea00 -c 0:boot /dev/sda
      # sgdisk -n 0:0:+4GiB -t 0:8200 -c 0:swap /dev/sda
      # sgdisk -n 0:0:0 -t 0:8300 -c 0:root /dev/sda

      The above example shows how to partition a hard disk for a BIOS-based computer. The grub partition is not needed on a UEFI-based computer. Because sgdisk is calculating all the absolute values for you in the above example, you can just skip running the first command on a UEFI-based computer and the remaining commands can be run without modification. Likewise, you could skip creating the swap partition and the remaining commands would not need to be modified.
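
      If you do want to create an EFI System Partition in the same style, a sketch (ef00 is the EFI System type code; the size is a matter of taste):

      # sgdisk -n 0:0:+512MiB -t 0:ef00 -c 0:efi /dev/sda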

      There is also a short-cut for deleting all the partitions from a hard disk with a single command:

      # sgdisk --zap-all /dev/sda

      For the most up-to-date and detailed information, check the man page:

      $ man sgdisk

      The security of dependencies

      Posted by Josh Bressers on April 10, 2019 01:47 AM

      So you’ve written some software. It’s full of open source dependencies. These days all software is full of open source, there’s no way around it at this point. I explain the background in my previous post.

      Now that we have all this open source, how do we keep up with it? If you’re using a lot of open source in your code there could be one or more updated dependencies per day!

      Step one is knowing what you have. There are a ton of ways to do this, but I’m going to bucket things into 3 areas.

      1. Do nothing
      2. Track things on your own
      3. Use an existing tool to track things

      Do nothing

      First up is don’t track anything. Ignore the problem.

      At first glance you may think I’m joking, but this could be a potential solution. There are two ways to think about this one.

      One is you literally ignore the dependencies. You never ever update them. Ever. This is a bad idea: there will be bugs, there will be security problems. They will affect you and someday you’ll regret this decision. I wouldn’t suggest this to anyone ever. If you do this, make sure you keep your résumé up to date.

      The non bananas way you can do this is to let things auto update. I don’t mean ignore things altogether, I mean ignore knowing exactly what you have. If you’re building a container, make sure you update the container to the latest and greatest everything during build. For example if you have a Fedora container, you would run “dnf -y upgrade” on every build. That will pull in the latest and greatest packages from Fedora. If you pull in npm dependencies, you make sure the latest and greatest npm packages are installed every time you build. If you’re operating in a very devops style environment you’re rebuilding everything constantly (right …. RIGHT!) so why not take advantage of it.
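
      For the container case, that can be as simple as a sketch like this (the image tag and the cleanup step are illustrative):

      FROM fedora:29
      # Pull in the latest and greatest packages on every build
      RUN dnf -y upgrade && dnf clean all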

      Now, it should be noted that if you operate in this way, sometimes things will break. And by sometimes I mean quite often and by things I mean everything. Updated dependencies will eventually break existing functionality. The more dependencies you have, the more often things will break. It’s not a deal breaker, it’s just something you have to be prepared for.

      Track things on your own

      The next option is to track things on your own. This one is going to be pretty rough. I’ve been part of teams that have done this in the past. It’s a lot of heavy lifting. A lot. You have to keep track of everything you have, everything that gets added, how it’s used, what’s being updated. What has outstanding security problems. I would compare this to juggling 12 balls with one hand.

      Now, even though it’s extremely difficult to try to track all this on your own, you do have the opportunity to track exactly what you need and how you need it. It’s an effort that will require a generous number of people.

      I’m not going to spend any time explaining this because I think it’s a corner case now. It used to be fairly common mostly because options 1 and 3 either didn’t exist or weren’t practical. If this is something you have interest in, feel free to reach out, I’d be happy to convince you not to do it 🙂

      Use an existing tool to track things

      The last option is to use an existing tool. In the past few years there have been quite a few tools and companies to emerge with the purpose of tracking what open source you have in your products. Some have a focus on security vulnerabilities. Some focus on licensing. Some look for code that’s been copy and pasted. It’s really nice to see so many options available.

      There are two really important things you should keep in mind if this is the option you’re interested in. Firstly, understand what your goal is. If your primary concern is keeping your dependencies up to date in a node.js project, make sure you look for that. Some tools do a better job with certain languages. Some tools inspect containers and not source code for example. Some focus on git repositories. Know what you want then go find it.

      The second important thing to keep in mind is none of these tools are going to be 100% correct. You’ll probably see around 80% accuracy, maybe less depending what you’re doing. I often say “perfect and nothing are the same thing”. There is no perfect here so don’t expect it. There are going to be false positives, there will be false negatives. This isn’t a reason to write off tools. Account for this in your planning. Things will get missed, there will be some fire-drills. If you’re prepared to deal with it it won’t be a huge deal.

      The return on investment will be magnitudes greater than trying to build your own perfect tracking system. It’s best to look at a lot of these things from a return on investment perspective. Perfect isn’t realistic. Nothing isn’t realistic. Find your minimum viable security.

       

      So now that you know what you’re shipping, how does this all work, what do you do next? We’ll cover that in the near future. Stay tuned.

      All systems go

      Posted by Fedora Infrastructure Status on April 10, 2019 12:50 AM
      New status good: Everything seems to be working. for services: Ipsilon, Badges, Blockerbugs, Package Updates Manager, Fedora Infrastructure Cloud, COPR Build System, Documentation website, Fedora elections, Account System, Fedora Messaging Bus, Fedora Calendar, Fedora pastebin service, The Koji Buildsystem, Koschei Continuous Integration, Kerberos, Mailing Lists, Mirror List, Mirror Manager, Fedora Packages App, Pagure, Fedora People, Package Database, Package maintainers git repositories, Fedora Container Registry, Tagger, Fedora websites, Fedora Wiki, Zodbot IRC bot

      There are scheduled downtimes in progress

      Posted by Fedora Infrastructure Status on April 09, 2019 09:09 PM
      New status scheduled: scheduled outage: https://pagure.io/fedora-infrastructure/issue/7699 for services: Ipsilon, Badges, Blockerbugs, Package Updates Manager, Fedora Infrastructure Cloud, COPR Build System, Documentation website, Fedora elections, Account System, Fedora Messaging Bus, Fedora Calendar, Fedora pastebin service, The Koji Buildsystem, Koschei Continuous Integration, Kerberos, Mailing Lists, Mirror List, Mirror Manager, Fedora Packages App, Pagure, Fedora People, Package Database, Package maintainers git repositories, Fedora Container Registry, Tagger, Fedora websites, Fedora Wiki, Zodbot IRC bot