Fedora People

Fedora Classroom: RPM Packaging 101

Posted by Fedora Magazine on June 14, 2021 08:00 AM

Fedora Classroom sessions return with a session on RPM packaging targeted at beginners.

About the session

RPMs are the smallest building blocks of the Fedora Linux system. This session will walk through the basics of building an RPM from source code. You will learn how to set up your Fedora system to build RPMs, how to write a spec file that adheres to the Fedora Packaging Guidelines, and how to use it to generate RPMs for distribution. The session will also provide a brief overview of the complete Fedora packaging pipeline.
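
For reference, the end-to-end workflow the session will walk through looks roughly like this (a sketch using the rpmdevtools helpers; the "hello" package name is just a placeholder):

sudo dnf install fedora-packager rpmdevtools   # install the packaging tools
rpmdev-setuptree                               # create the ~/rpmbuild directory tree
rpmdev-newspec hello                           # generate a hello.spec skeleton to fill in
rpmbuild -ba hello.spec                        # build the source and binary RPMs (sources go in ~/rpmbuild/SOURCES)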

While no prior knowledge of building RPMs or building software from its source code is required, some software development experience will be useful. The hope is to help users learn the skills required to build and maintain RPM packages, and to encourage them to contribute to Fedora by joining the package collection maintainers.

When and where

The classroom session will be held on the BlueJeans video platform at 1200 UTC on June 17, 2021 and is expected to last an hour.

Topics covered in the session

  • The basics of a spec file.
  • Source and binary RPMs and how they are built from the spec using rpmbuild.
  • A brief introduction to mock and fedpkg.
  • The life cycle of a Fedora package.
  • How you can join the Fedora package collection maintainers.

Prerequisites

  • A Fedora installation (Workstation or any lab/spin)
  • The following software should be installed and configured:
    • git
      sudo dnf install git
    • fedora-packager
      sudo dnf install fedora-packager
    • mock (configured as per these instructions)

Useful reading

About the instructor

Ankur Sinha has been maintaining packages in Fedora for more than a decade and is currently both a sponsor to the package maintainers group, and a proven packager. Ankur primarily focuses on maintaining neuroscience related software for the NeuroFedora Special Interest Group and contributes to other parts of the community wherever possible.

Fedora Classroom is a project aimed at spreading knowledge on Fedora related topics. If you would like to propose a session, feel free to open a ticket here with the tag classroom. If you are interested in taking a proposed session, please let us know and once you take it, you will be awarded the Sensei Badge too as a token of appreciation. Recordings from the previous sessions can be found here.

Episode 275 – What in the @#$% is going on with ransomware?

Posted by Josh Bressers on June 14, 2021 12:01 AM

Josh and Kurt talk about why it seems like the world of ransomware has gotten out of control in the last few weeks. Every day there’s a new ransomware story, more bizarre than the one we had yesterday.

Listen to the episode: https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_275_What_in_the_is_going_on_with_ransomware.mp3

Show Notes

Geo replication with syncthing

Posted by Pablo Iranzo Gómez on June 12, 2021 07:40 PM

Some years ago I started using geo replication to keep a copy of all my pictures, docs, etc.

After using BitTorrent Sync and later Resilio Sync (even if I never fully liked the idea of it not being open source), I gave up. My NAS, even if a bit older (an HP N54L with 16 GB of RAM), did not seem to have enough memory to run it and was constantly swapping.

Checking the list of processes pointed to the rslsync process as the culprit, and apparently the cause is the way it handles the files it controls.

The problem is that even if a file was deleted long ago, rslsync keeps it in the database… and in memory. After checking with their support (as I had a family license), the workaround was to remove the folder and create a new one, which also meant having to configure it again on all the systems that kept a copy.

I finally decided to give syncthing another try, years after my last evaluation.

Syncthing now covers some of the features I was using with rslsync:

  • Multi-master replication
  • Remote encrypted peers
  • Read only peers
  • Multiple folder support

In addition, it includes systemd support and it’s packaged in the operating system, making it really easy to install and update (rslsync was without updates for almost a year).

The only caveat, if using Debian, is to use the repository they provide, as the package included in the distribution is really old and causes some issues with the remote encrypted peers.

Starting it as a user is very simple:

systemctl enable syncthing@user
systemctl start syncthing@user

Once the process is started, the browser can be pointed locally at http://127.0.0.1:8384 to start configuration:

  • It is recommended to define a GUI username and password to prevent other users with access to the system from altering the configuration. Once done, we’re ready to start adding folders and systems.

One difference is that in rslsync having the folder secret is enough; in syncthing you need to add the hosts on both sides to accept them and be able to share data.

One feature that helps here is that a host can be configured as an introducer, which allows other systems to inherit the known list of hosts from it, making the two-way initial introduction easier.

The best outcome is that RAM usage has been slashed compared to what rslsync was using.

Currently, the only issue is that for some computers on the local network the sync was a bit slow (some remote underpowered devices even synced faster than local ones), but some of the copies were already fully in sync.

The web interface is not bad, even if, compared to what I was used to, it doesn’t show as much detail about host status at a glance: you have to open each individual folder to see how it is going, as the general view only shows the percentage of completion and the amount of data still to be synced.

Hope you like it!

Friday’s Fedora Facts: 2021-23

Posted by Fedora Community Blog on June 11, 2021 08:29 PM

Here’s your weekly Fedora report. Read what happened this week and what’s coming up. Your contributions are welcome (see the end of the post)!

Don’t forget to take the Annual Fedora Survey and claim your badge!

I have weekly office hours on Wednesdays in the morning and afternoon (US/Eastern time) in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else. See the upcoming meetings for more information.

Announcements

CfPs

Conference | Location | Date | CfP
AnsibleFest | virtual | 29–30 Sep | closes 29 June
Nest With Fedora | virtual | 5-8 Aug | closes 16 July

Help wanted

Prioritized Bugs

Bug ID | Component | Status
1953675 | kf5-akonadi-server | NEW

Upcoming meetings

Releases

Fedora Linux 35

Schedule

  • 2021-06-23 — Change submission deadline (Changes requiring infrastructure changes)
  • 2021-06-29 — Change submission deadline (System-Wide Changes, Changes requiring mass rebuild)
  • 2021-07-20 — Change submission deadline (Self-Contained Changes)
  • 2021-07-21 — Mass rebuild begins
  • 2021-08-10 — F35 branches from Rawhide, F36 development begins

For the full schedule, see the schedule website.

Changes

Proposal | Type | Status
Make btrfs the default file system for Fedora Cloud | System-Wide | FESCo #2617
Sphinx 4 | Self-Contained | FESCo #2620
Build Fedora Cloud Images with Hybrid BIOS+UEFI Boot Support | System-Wide | FESCo #2621
Replace the Anaconda product configuration files with profiles | Self-Contained | Announced
Use yescrypt as default hashing method for shadow passwords | System-Wide | Announced

Changes approved, rejected, or withdrawn will be removed from this table the next week. See the ChangeSet page for a full list of approved changes.

Contributing

Have something you want included? You can file an issue or submit a pull request in the fedora-pgm/pgm_communication repo.

The post Friday’s Fedora Facts: 2021-23 appeared first on Fedora Community Blog.

PHP 8.1 as Software Collection

Posted by Remi Collet on June 11, 2021 04:34 AM

Version 8.1.0alpha1 has been released. It is still in development and will soon enter the stabilization phase for developers and the test phase for users.

RPMs of this upcoming version of PHP 8.1 are available in the remi repository for Fedora 33, 34 and Enterprise Linux 7, 8 (RHEL, CentOS, ...) in a fresh new Software Collection (php81), allowing its installation beside the system version.

As I strongly believe in the potential of SCLs to provide a simple way to install various versions simultaneously, and as I think it is useful to offer this feature so that developers can test their applications, sysadmins can prepare a migration, or this version can simply be used for some specific application, I decided to create this new SCL.

I also plan to propose this new version as a Fedora 36 change (as F35 should be released a few weeks before PHP 8.1.0).

Installation :

yum install php81

To be noticed:

  • the SCL is independent of the system, and doesn't alter it
  • this SCL is available in remi-safe repository (or remi for Fedora)
  • installation is under the /opt/remi/php81 tree, configuration under the /etc/opt/remi/php81 tree
  • the Apache module, php81-php, is available, but of course, only one mod_php can be used (so you have to disable or uninstall any other; the one provided by the default "php" package still has priority)
  • the FPM service (php81-php-fpm) is available; it listens on the default port 9000, so you have to change the configuration if you want to run several FPM services simultaneously.
  • the php81 command gives simple access to this new version, however the scl command is still the recommended way (or the module command); see the example after this list.
  • for now, the collection provides 8.1.0alpha1, and alpha/beta/RC versions should be released in the next weeks
  • some of the PECL extensions are already available, see the extensions status page
  • tracking issue #177 can be used to follow the work in progress on RPMs of PHP and extensions
  • the new php81-syspaths package allows using it as the system default version
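
For example, a quick check that the collection works, using the scl command mentioned above (the module example below is equivalent):

$ scl enable php81 bash
$ php --version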

Also read other entries about SCL, especially the description of my PHP workstation.

$ module load php81
$ php --version
PHP 8.1.0alpha1 (cli) (built: Jun  8 2021 16:24:50) (NTS gcc x86_64)
Copyright (c) The PHP Group
Zend Engine v4.1.0-dev, Copyright (c) Zend Technologies

As always, your feedback is welcome, a SCL dedicated forum is open.

Software Collections (php81)

Moving ActionCable over to Webpacker

Posted by Josef Strzibny on June 11, 2021 12:00 AM

This week, I upgraded a little demo application for my book Deployment from Scratch from Rails 6 to Rails 6.1. Since I showcase WebSockets with ActionCable and Redis, I needed to move the ActionCable CoffeeScript from Sprockets to Webpacker.

I started with dependencies. The original application could lose uglifier as Sprockets’ JavaScript compressor and coffee-rails in favour of plain JavaScript. I replaced them with the webpacker gem in the Gemfile:

gem 'webpacker', '~> 5.4'

Once I generated a new Gemfile.lock, I could run the webpacker:install task that creates many files (which I won’t get into here):

$ rails webpacker:install

In case you won’t see the new Webpacker tasks, make sure to delete the Rails cache:

$ rails tmp:cache:clear

It took me a while to realize why I didn’t see this Webpacker Rake task.

Once that’s done, let’s see how to move the JavaScript entry point file.

// app/assets/javascripts/application.js
//= require rails-ujs
//= require activestorage
//= require turbolinks
//= require_tree .

All these requirements should now happen in the new app/javascript directory:

// app/javascript/packs/application.js
import Rails from "@rails/ujs"
import Turbolinks from "turbolinks"
import * as ActiveStorage from "@rails/activestorage"
import "channels"

Rails.start()
ActiveStorage.start()

After I had my new application.js ready, I changed javascript_include_tag to javascript_pack_tag in views:

<%= javascript_pack_tag 'application', 'data-turbolinks-track': 'reload' %>

Then I updated the channels. I went from this:

// app/assets/javascripts/cable.js
// Action Cable provides the framework to deal with WebSockets in Rails.
// You can generate new channels where WebSocket features live using the `rails generate channel` command.
//
//= require action_cable
//= require_self
//= require_tree ./channels

(function() {
  this.App || (this.App = {});

  App.cable = ActionCable.createConsumer();

}).call(this);

To the new channels structure with channels/index.js and channels/consumer.js:

// app/javascript/channels/index.js
// Load all the channels within this directory and all subdirectories.
// Channel files must be named *_channel.js.

const channels = require.context('.', true, /_channel\.js$/)
channels.keys().forEach(channels)


// app/javascript/channels/consumer.js
// Action Cable provides the framework to deal with WebSockets in Rails.
// You can generate new channels where WebSocket features live using the `bin/rails generate channel` command.

import { createConsumer } from "@rails/actioncable"

export default createConsumer()

And then I rewrote my original subscription file that looked like this:

// app/assets/javascripts/cable/subscriptions/document.coffee
App.cable.subscriptions.create { channel: "DocumentChannel" },
  connected: () ->

  received: (data) ->
    console.log("Received data.")

    alert(data["title"])

To a JavaScript version using the previous consumer.js file:

// app/javascript/channels/documents_channel.js
import consumer from "./consumer"

consumer.subscriptions.create(
  { channel: "DocumentChannel" },
  {
    connected() {},
    received(data) {
      console.log("Received data.")
      alert(data["title"])
    }
  }
)

At this point all the new files are in place, so I just had to go and delete the old app/assets/javascripts directory:

$ rm -rf app/assets/javascripts

And remove it from the manifest (the second line):

// app/assets/config/manifest.js
//= link_tree ../images
//= link_directory ../javascripts .js
//= link_directory ../stylesheets .css

Although it’s a small app with only one channel, you might find this useful if you didn’t move to Webpacker yet.

The Wondrous World of Discoverable GPT Disk Images

Posted by Lennart Poettering on June 10, 2021 10:00 PM

TL;DR: Tag your GPT partitions with the right, descriptive partition types, and the world will become a better place.

A number of years ago we started the Discoverable Partitions Specification which defines GPT partition type UUIDs and partition flags for the various partitions Linux systems typically deal with. Before the specification all Linux partitions usually just used the same type, basically saying "Hey, I am a Linux partition" and not much else. With this specification the GPT partition type, flags and label system becomes a lot more expressive, as it can tell you:

  1. What kind of data a partition contains (i.e. is this swap data, a file system or Verity data?)
  2. What the purpose/mount point of a partition is (i.e. is this a /home/ partition or a root file system?)
  3. What CPU architecture a partition is intended for (i.e. is this a root partition for x86-64 or for aarch64?)
  4. Shall this partition be mounted automatically? (i.e. without being specifically configured via /etc/fstab)
  5. And if so, shall it be mounted read-only?
  6. And if so, shall the file system be grown to its enclosing partition size, if smaller?
  7. Which partition contains the newer version of the same data (i.e. multiple root file systems, with different versions)

By embedding all of this information inside the GPT partition table disk images become self-descriptive: without requiring any other source of information (such as /etc/fstab) if you look at a compliant GPT disk image it is clear how an image is put together and how it should be used and mounted. This self-descriptiveness in particular breaks one philosophical weirdness of traditional Linux installations: the original source of information which file system the root file system is, typically is embedded in the root file system itself, in /etc/fstab. Thus, in a way, in order to know what the root file system is you need to know what the root file system is. 🤯 🤯 🤯

(Of course, the way this recursion is traditionally broken up is by then copying the root file system information from /etc/fstab into the boot loader configuration, resulting in a situation where the primary source of information for this — i.e. /etc/fstab — is actually mostly irrelevant, and the secondary source — i.e. the copy in the boot loader — becomes the configuration that actually matters.)

Today, the GPT partition type UUIDs defined by the specification have been adopted quite widely, by distributions and their installers, as well as a variety of partitioning tools and other tools.

In this article I want to highlight how the various tools the systemd project provides make use of the concepts the specification introduces.

But before we start with that, let's underline why tagging partitions with these descriptive partition type UUIDs (and the associated partition flags) is a good thing, besides the philosophical points made above.

  1. Simplicity: in particular OS installers become simpler — adjusting /etc/fstab as part of the installation is not necessary anymore, as the partitioning step already put all information into place for assembling the system properly at boot. i.e. installing doesn't mean that you always have to get fdisk and /etc/fstab into place, the former suffices entirely.

  2. Robustness: since partition tables mostly remain static after installation the chance of corruption is much lower than if the data is stored in file systems (e.g. in /etc/fstab). Moreover by associating the metadata directly with the objects it describes the chance of things getting out of sync is reduced. (i.e. if you lose /etc/fstab, or forget to rerun your initrd builder you still know what a partition is supposed to be just by looking at it.)

  3. Programmability: if partitions are self-descriptive it's much easier to automatically process them with various tools. In fact, this blog story is mostly about that: various systemd tools can naturally process disk images prepared like this.

  4. Alternative entry points: on traditional disk images, the boot loader needs to be told which kernel command line option root= to use, which then provides access to the root file system, where /etc/fstab is then found which describes the rest of the file systems. Where precisely root= is configured for the boot loader highly depends on the boot loader and distribution used, and is typically encoded in a Turing complete programming language (Grub…). This makes it very hard to automatically determine the right root file system to use, to implement alternative entry points to the system. By alternative entry points I mean other ways to boot the disk image, specifically for running it as a systemd-nspawn container — but this extends to other mechanisms where the boot loader may be bypassed to boot up the system, for example qemu when configured without a boot loader.

  5. User friendliness: it's simply a lot nicer for the user looking at a partition table if the partition table explains what is what, instead of just saying "Hey, this is a Linux partition!" and nothing else.

Uses for the concept

Now that we cleared up the Why?, lets have a closer look how this is currently used and exposed in systemd's various components.

Use #1: Running a disk image in a container

If a disk image follows the Discoverable Partition Specification then systemd-nspawn has all it needs to just boot it up. Specifically, if you have a GPT disk image in a file foobar.raw and you want to boot it up in a container, just run systemd-nspawn -i foobar.raw -b, and that's it (you can specify a block device like /dev/sdb too if you like). It becomes easy and natural to prepare disk images that can be booted either on a physical machine, inside a virtual machine manager or inside such a container manager: the necessary meta-information is included in the image, easily accessible before actually looking into its file systems.

Use #2: Booting an OS image on bare-metal without /etc/fstab or kernel command line root=

If a disk image follows the specification in many cases you can remove /etc/fstab (or never even install it) — as the basic information needed is already included in the partition table. The systemd-gpt-auto-generator logic implements automatic discovery of the root file system as well as all auxiliary file systems. (Note that the former requires an initrd that uses systemd, some more conservative distributions do not support that yet, unfortunately). Effectively this means you can boot up a kernel/initrd with an entirely empty kernel command line, and the initrd will automatically find the root file system (by looking for a suitably marked partition on the same drive the EFI System Partition was found on).

(Note, if /etc/fstab or root= exist and contain relevant information they always take precedence over the automatic logic. This is in particular useful for tweaking things by specifying additional mount options and such.)

Use #3: Mounting a complex disk image for introspection or manipulation

The systemd-dissect tool may be used to introspect and manipulate OS disk images that implement the specification. If you pass the path to a disk image (or block device) it will extract various bits of useful information from the image (e.g. what OS is this? what partitions to mount?) and display it.

With the --mount switch a disk image (or block device) can be mounted to some location. This is useful for looking what is inside it, or changing its contents. This will dissect the image and then automatically mount all contained file systems matching their GPT partition description to the right places, so that you subsequently could chroot into it. (But why chroot if you can just use systemd-nspawn? 😎)
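
For example (a quick sketch, reusing the foobar.raw image from above and an arbitrary mount point):

$ systemd-dissect foobar.raw               # show OS release info and the detected partitions
$ systemd-dissect --mount foobar.raw /mnt  # mount all contained file systems beneath /mnt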

Use #4: Copying files in and out of a disk image

The systemd-dissect tool also has two switches --copy-from and --copy-to which allow copying files out of or into a compliant disk image, taking all included file systems and the resulting mount hierarchy into account.
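
A sketch of what that can look like (the file paths here are just placeholders):

$ systemd-dissect --copy-from foobar.raw /etc/os-release .        # copy a file out of the image
$ systemd-dissect --copy-to foobar.raw ./app.conf /etc/app.conf   # copy a local file into the image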

Use #5: Running services directly off a disk image

The RootImage= setting in service unit files accepts paths to compliant disk images (or block device nodes), and can mount them automatically, running service binaries directly off them (in chroot() style). In fact, this is the base for the Portable Service concept of systemd.
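
A minimal unit file sketch of what this looks like (the image path and the mydaemon binary are hypothetical):

[Service]
# Mount the compliant disk image and run the service binary from inside it, chroot()-style.
RootImage=/var/lib/machines/foobar.raw
ExecStart=/usr/bin/mydaemon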

Use #6: Provisioning disk images

systemd provides various tools that can run operations provisioning disk images in an "offline" mode. Specifically:

systemd-tmpfiles

With the --image= switch systemd-tmpfiles can directly operate on a disk image, and for example create all directories and other inodes defined in its declarative configuration files included in the image. This can be useful for example to set up the /var/ or /etc/ tree according to such configuration before first boot.

systemd-sysusers

Similar, the --image= switch of systemd-sysusers tells the tool to read the declarative system user specifications included in the image and synthesizes system users from it, writing them to the /etc/passwd (and related) files in the image. This is useful for provisioning these users before the first boot, for example to ensure UID/GID numbers are pre-allocated, and such allocations not delayed until first boot.

systemd-machine-id-setup

The --image= switch of systemd-machine-id-setup may be used to provision a fresh machine ID into /etc/machine-id of a disk image, before first boot.

systemd-firstboot

The --image= switch of systemd-firstboot may be used to set various basic system setting (such as root password, locale information, hostname, …) on the specified disk image, before booting it up.
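
Put together, offline provisioning of an image with these tools might look like this (a sketch; the locale and hostname values are just examples):

$ systemd-tmpfiles --image=foobar.raw --create    # create directories/inodes from tmpfiles.d configuration
$ systemd-sysusers --image=foobar.raw             # pre-allocate system users and groups
$ systemd-machine-id-setup --image=foobar.raw     # provision a fresh /etc/machine-id
$ systemd-firstboot --image=foobar.raw --locale=en_US.UTF-8 --hostname=appliance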

Use #7: Extracting log information

The journalctl switch --image= may be used to show the journal log data included in a disk image (or, as usual, the specified block device). This is very useful for analyzing failed systems offline, as it gives direct access to the logs without any further, manual analysis.
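
For example (again using the image from above):

$ journalctl --image=foobar.raw --list-boots   # which boots are recorded in the image?
$ journalctl --image=foobar.raw -p err         # show messages of priority "err" and worse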

Use #8: Automatic repartitioning/growing of file systems

The systemd-repart tool may be used to repartition a disk or image in an declarative and additive way. One primary use-case for it is to run during boot on physical or VM systems to grow the root file system to the disk size, or to add in, format, encrypt, populate additional partitions at boot.

With its --image= switch the tool may operate on compliant disk images in an offline mode of operation: it will then read the partition definitions that shall be grown or created off the image itself, and then apply them to the image. This is particularly useful in combination with the --size= switch, which allows growing disk images to the specified size.

Specifically, consider the following work-flow: you download a minimized disk image foobar.raw that contains only the minimized root file system (and maybe an ESP, if you want to boot it on bare-metal, too). You then run systemd-repart --image=foobar.raw --size=15G to enlarge the image to 15G, based on the declarative rules defined in the repart.d/ drop-in files included in the image (this means this can grow the root partition, and/or add in more partitions, for example for /srv or so, maybe encrypted with a locally generated key or so). Then, you proceed to boot it up with systemd-nspawn --image=foobar.raw -b, making use of the full 15G.

Versioning + Multi-Arch

Disk images implementing this specifications can carry OS executables in one of three ways:

  1. Only a root file system

  2. Only a /usr/ file system (in which case the root file system is automatically picked as tmpfs).

  3. Both a root and a /usr/ file system (in which case the two are combined, the /usr/ file system mounted into the root file system, and the former possibly in read-only fashion)

They may also contain OS executables for different architectures, permitting "multi-arch" disk images that can safely boot up on multiple CPU architectures. As the root and /usr/ partition type UUIDs are specific to architectures this is easily done by including one such partition for x86-64, and another for aarch64. If the image is now used on an x86-64 system automatically the former partition is used, on aarch64 the latter.

Moreover, these OS executables may be contained in different versions, to implement a simple versioning scheme: when tools such as systemd-nspawn or systemd-gpt-auto-generator dissect a disk image, and they find two or more root or /usr/ partitions of the same type UUID, they will automatically pick the one whose GPT partition label (a 36 character free-form string every GPT partition may have) is the newest according to strverscmp() (OK, truth be told, we don't use strverscmp() as-is, but a modified version with some more modern syntax and semantics, but conceptually identical).

This logic makes it easy to implement a very simple and natural A/B update scheme: an updater can drop multiple versions of the OS into separate root or /usr/ partitions, always updating the partition label to the version included there-in once the download is complete. All of the tools described here will then honour this, and always automatically pick the newest version of the OS.

Verity

When building modern OS appliances, security is highly relevant. Specifically, offline security matters: an attacker with physical access should have a difficult time modifying the OS in a way that isn't noticed. i.e. think of a car or a cell network base station: these appliances are usually parked/deployed in environments attackers can get physical access to: it's essential that in this case the OS itself is sufficiently protected, so that the attacker cannot just mount the OS file system image, make modifications (inserting a backdoor, spying software or similar) and have the system otherwise continue to run without this being immediately detected.

A great way to implement offline security is via Linux' dm-verity subsystem: it allows securely binding immutable disk IO to a single, short trusted hash value: if an attacker manages to modify the disk image offline, the modified disk image won't match the trusted hash anymore, and will not be trusted (depending on policy this then just results in IO errors being generated, or automatic reboot/power-off).

The Discoverable Partitions Specification declares how to include Verity validation data in disk images, and how to relate them to the file systems they protect, thus making it very easy to deploy and work with such protected images. For example systemd-nspawn supports a --root-hash= switch, which accepts the Verity root hash and then will automatically assemble dm-verity with this, automatically matching up the payload and verity partitions. (Alternatively, just place a .roothash file next to the image file).
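
A sketch of what that looks like (<roothash> is a placeholder for the image's actual Verity root hash):

$ systemd-nspawn -i foobar.raw --root-hash=<roothash> -b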

Future

The above already is a powerful tool set for working with disk images. However, there are some more areas I'd like to extend this logic to:

bootctl

Similar to the other tools mentioned above, bootctl (which is a tool to interface with the boot loader, and install/update systemd's own EFI boot loader sd-boot) should learn a --image= switch, to make installation of the boot loader on disk images easy and natural. It would automatically find the ESP and other relevant partitions in the image, and copy the boot loader binaries into them (or update them).

coredumpctl

Similar to the existing journalctl --image= logic the coredumpctl tool should also gain an --image= switch for extracting coredumps from compliant disk images. The combination of journalctl --image= and coredumpctl --image= would make it exceptionally easy to work with OS disk images of appliances and extracting logging and debugging information from them after failures.

And that's all for now. Please refer to the specification and the man pages for further details. If your distribution's installer does not yet tag the GPT partition it creates with the right GPT type UUIDs, consider asking them to do so.

Thank you for your time.

Introduce yourself Outreachy

Posted by Fedora Community Blog on June 10, 2021 05:19 PM

Hello everyone!

I’m Manisha Kanyal, a sophomore B.Tech in Computer Science & Engineering student from India. I’m passionate about open source and software development. The project for which I’ve been selected as an Outreachy intern is “Improve Fedora QA dashboard” and I’m enthusiastic and grateful for this opportunity. It’s going to be a great learning experience for me.

My core values

My core values are optimism and faith.

Optimism keeps me positive: whatever I do, whether it’s related to my field or not, I do it believing that the best possible thing will happen, and I hope for it even if it’s not likely. My optimistic attitude reflects a belief or hope that the outcome of some specific endeavor, or outcomes in general, will be positive, favorable, and desirable. That is what keeps me believing in good things and is very helpful to me.

I applied twice and didn’t make it the first time, but I never let that demotivate me. I kept learning after that, kept putting in the effort, and finally, this year came with what I was hoping for. Yes, I got selected, I made it. That made me believe that no matter what, even if you are a beginner, if you have faith in yourself you can reach great heights.

What motivated me to apply to Outreachy?

Outreachy is a great opportunity for me to prove myself worthy of being in this industry. When I heard about it for the first time, I got really excited, as one of my friends told me how helpful this opportunity is for people like me who are underrepresented in their fields.

I wanted my Outreachy application to succeed because I was excited to connect with other women in computing fields, to share my experience, and to be exposed to open source. This is a great opportunity for me to get to know other women in my field and to hear their stories, as well as to share my own during and after Outreachy. Meeting inspirational figures at Outreachy will be a great boost for me to try harder and have faith in myself as a woman in tech. Part of me regrets not knowing enough about Outreachy in my freshman year, but now I have this opportunity and I couldn’t be happier.


The post Introduce yourself Outreachy appeared first on Fedora Community Blog.

Using proper FreeIPA certificates on Cockpit

Posted by Maxim Burgerhout on June 10, 2021 04:39 PM

Cockpit and FreeIPA

A couple of years ago, I did a video on Youtube on using FreeIPA / IdM certificates in Cockpit. According to some comments (that I only saw way after the fact…), for some people, my way of doing that didn’t work.

Therefore, I redid the video for RHEL7 and RHEL8, connected to IdM from RHEL8. This should work with recent Fedora as well, since I’m using that at home :)

How it works

SELinux

Both on RHEL7 and RHEL8, the certmonger process that is actually “in charge” of getting the certificates, cannot write to /etc/cockpit/ws-certs.d due to SELinux. Therefore, before we tell it to go fetch certificates through ipa-getcert, we need to tweak SELinux a bit.

The following command works on RHEL7, RHEL8 and recent Fedora and relabels /etc/cockpit/ws-certs.d to cert_t instead of etc_t. This makes it possible for certmonger to write there.

semanage fcontext -a -t cert_t "/etc/cockpit/ws-certs.d(/.*)?"
restorecon -FvR /etc/cockpit/ws-certs.d

RHEL7

On RHEL7, cockpit expects a combined file for the certificate and key information, so we need to concatenate what we get from certmonger before we give it to cockpit.

We can pass ipa-getcert a post-save command, which is issued after storing the certificate, but it can only be a single command. Therefore we use a script:

#!/bin/bash

name=$1

cat /etc/pki/tls/certs/${name}.cert /etc/pki/tls/private/${name}.key > /etc/cockpit/ws-certs.d/50-${name}.cert
chown root:cockpit-ws /etc/cockpit/ws-certs.d/50-${name}.cert
chmod 0640 /etc/cockpit/ws-certs.d/50-${name}.cert

With that script in place, we can request the certificate:

ipa-getcert request -f /etc/pki/tls/certs/$(hostname -f).cert -k /etc/pki/tls/private/$(hostname -f).key -D $(hostname -f) -C "/usr/local/sbin/cockpit_certs.sh $(hostname -f)" -K host/$(hostname -f)

This should result in a certificate in /etc/cockpit/ws-certs.d that we’ll never have to touch again :)

RHEL8

On RHEL8 and recent Fedora, we don’t need a script to concatenate the key and the certificate, because recent cockpit can handle two separate files for them.

Therefore, we only have to issue the ipa-getcert command:

ipa-getcert request -f /etc/cockpit/ws-certs.d/$(hostname -f).cert -k /etc/cockpit/ws-certs.d/$(hostname -f).key -D $(hostname -f) -K host/$(hostname -f) -m 0640  -o root:cockpit-ws -O root:root -M 0644

This again should result in a certificate that we’ll never have to touch again until we decommission this machine!

Hope this helps!

Video: https://www.youtube.com/embed/W26rWtEqToc

The syslog-ng Insider 2021-06: Alerting; EoL technologies; Google Summer of Code;

Posted by Peter Czanik on June 10, 2021 10:09 AM

Dear syslog-ng users,

This is the 92nd issue of syslog-ng Insider, a monthly newsletter that brings you syslog-ng-related news.

NEWS

First steps of sending alerts to Discord and others from syslog-ng: http() and Apprise

A returning question I get is: “I see, that you can send alerts from syslog-ng to Slack and Telegram, but do you happen to support XYZ?” Replace XYZ with Discord and countless others. Up until recently, my regular answer has been: “Take a look at the Slack destination of syslog-ng, and based on that, you can add support for your favorite service”. Then I learned about Apprise, a notification library for Python, supporting dozens of different services. This blog is the first part of a series. It covers how to send log messages to Discord using the http() destination of syslog-ng and an initial try at using Apprise for alerting.

https://www.syslog-ng.com/community/b/blog/posts/first-steps-of-sending-alerts-to-discord-and-others-from-syslog-ng-http-and-apprise

Changes in technologies supported by syslog-ng: Python 2, CentOS 6 & Co.

Technology is continuously evolving. There are regular changes in platforms running syslog-ng: old technologies disappear, and new technologies are introduced. While we try to provide stability and continuity to our users, we also need to adapt. Python 2 reached its end of life a year ago, CentOS 6 in November 2020. Using Java-based drivers has been problematic for many, so they were mostly replaced with native implementations. From this blog you can learn about recent changes affecting syslog-ng development and packaging.

https://www.syslog-ng.com/community/b/blog/posts/changes-in-technologies-supported-by-syslog-ng-python-2-centos-6-co

Google Summer of Code 2021

This year, the syslog-ng team participates in Google Summer of Code (GSoC) again as a mentoring organization. Two students paid by GSoC work on syslog-ng under the mentoring of syslog-ng developers. One of the students works on MacOS support, including the new ARM-based systems, while the other one is working on a new regular expression parser:

https://summerofcode.withgoogle.com/organizations/5548293561516032/

WEBINARS


Your feedback and news, or tips about the next issue are welcome. To read this newsletter online, visit: https://syslog-ng.com/blog/

Use cpulimit to free up your CPU

Posted by Fedora Magazine on June 10, 2021 08:00 AM

The recommended tool for managing system resources on Linux systems is cgroups. While very powerful in terms of what sorts of limits can be tuned (CPU, memory, disk I/O, network, etc.), configuring cgroups is non-trivial. The nice command has been available since 1973. But it only adjusts the scheduling priority among processes that are competing for time on a processor. The nice command will not limit the percentage of CPU cycles that a process can consume per unit of time. The cpulimit command provides the best of both worlds. It limits the percentage of CPU cycles that a process can allocate per unit of time and it is relatively easy to invoke.

The cpulimit command is mainly useful for long-running and CPU-intensive processes. Compiling software and converting videos are common examples of long-running processes that can max out a computer’s CPU. Limiting the CPU usage of such processes will free up processor time for use by other tasks that may be running on the computer. Limiting CPU-intensive processes will also reduce the power consumption, heat output, and possibly the fan noise of the system. The trade-off for limiting a process’s CPU usage is that it will require more time to run to completion.

Install cpulimit

The cpulimit command is available in the default Fedora Linux repositories. Run the following command to install cpulimit on a Fedora Linux system.

$ sudo dnf install cpulimit

View the documentation for cpulimit

The cpulimit package does not come with a man page. Use the following command to view cpulimit’s built-in documentation. The output is provided below. But you may want to run the command on your own system in case the options have changed since this article was written.

$ cpulimit --help
Usage: cpulimit [OPTIONS…] TARGET
   OPTIONS
      -l, --limit=N percentage of cpu allowed from 0 to 800 (required)
      -v, --verbose show control statistics
      -z, --lazy exit if there is no target process, or if it dies
      -i, --include-children limit also the children processes
      -h, --help display this help and exit
   TARGET must be exactly one of these:
      -p, --pid=N pid of the process (implies -z)
      -e, --exe=FILE name of the executable program file or path name
      COMMAND [ARGS] run this command and limit it (implies -z)

A demonstration

To demonstrate using the cpulimit command, a contrived, computationally-intensive Python script is provided below. The script is run first with no limit and then with a limit of 50%. It computes the value of the 42nd Fibonacci number. The script is run as a child process of the time command in both cases to show the total time that was required to compute the answer.

$ /bin/time -f '(computed in %e seconds)' /bin/python -c 'f = lambda n: n if n<2 else f(n-1)+f(n-2); print(f(42), end=" ")'
267914296 (computed in 51.80 seconds)
$ /bin/cpulimit -i -l 50 /bin/time -f '(computed in %e seconds)' /bin/python -c 'f = lambda n: n if n<2 else f(n-1)+f(n-2); print(f(42), end=" ")'
267914296 (computed in 127.38 seconds)

You might hear the CPU fan on your PC rev up when running the first version of the command. But you should not when running the second version. The first version of the command is not CPU limited but it should not cause your PC to become bogged down. It is written in such a way that it can only use at most one CPU. Most modern PCs have multiple CPUs and can simultaneously run other tasks without difficulty when one of the CPUs is 100% busy. To verify that the first command is maxing out one of your processors, run the top command in a separate terminal window and press the 1 key. Press the Q key to quit the top command.

Setting a limit above 100% is only meaningful on a program that is capable of task parallelism. For such programs, each increment of 100% represents full utilization of a CPU (200% = 2 CPUs, 300% = 3 CPUs, etc.).

Notice that the -i option has been passed to the cpulimit command in the above example. This is necessary because the command to be limited is not a direct child process of the cpulimit command. Rather it is a child process of the time command which in turn is a child process of the cpulimit command. Without the -i option, cpulimit would only limit the time command.

Final notes

If you want to limit a graphical application that you start from a desktop icon, copy the application’s .desktop file (often located under the /usr/share/applications directory) to your ~/.local/share/applications directory and modify the Exec line accordingly. Then run the following command to apply the changes.

$ update-desktop-database ~/.local/share/applications
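
For reference, the modified Exec line in the copied .desktop file might look something like this (a sketch; the original "someapp %U" command is hypothetical):

# before: Exec=someapp %U
Exec=cpulimit -l 100 -i someapp %U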

Community Blog monthly update: May 2021

Posted by Fedora Community Blog on June 10, 2021 08:00 AM
Community Blog update

This is the latest in our monthly series summarizing the past month on the Community Blog. Please leave a comment below to let me know what you think.

Stats

In May, we published 27 posts. The site had 4,601 visits from 2,382 unique viewers. 744 visits came from search engines, while 468 came from Twitter and 134 from the WordPress Android app.

The most read post last month was IRC announcement.

Badges

Last month, contributors earned the following badges:

Your content here!

The Community Blog is the place to publish community-facing updates on what you’re working on in Fedora. The process is easy, so submit early and submit often.

The post Community Blog monthly update: May 2021 appeared first on Fedora Community Blog.

A lethal spurious interrupt

Posted by Javier Martinez Canillas on June 09, 2021 10:22 AM

A big part of my work on Fedora/RHEL is to troubleshoot and do root cause analysis across the software stack. Because many of these projects are decades old, this usually feels like being stuck somewhere between being an archaeologist and a detective.

Many bugs are boring but some are interesting, either because the investigation made me learn something new or due to the amount of effort that was sunk into figuring out the problem. So I thought that it would be a nice experiment to share a little about the ones that are worth mentioning. This is the first of such posts; I may write more in the future if I have time and remember to do it:

It was a dark and stormy night when Peter Robinson mentioned a crime to me: a Rockpro64 board was found to not boot when the CONFIG_PCIE_ROCKCHIP_HOST option was enabled. He had also already found the criminal: it was the CONFIG_DEBUG_SHIRQ option.

I have to admit that I only knew CONFIG_DEBUG_SHIRQ by name, as a debug option for shared interrupts, and didn’t really know what this option was about. So the first step was to read the help text of the Kconfig symbol to learn more about it.

Enable this to generate a spurious interrupt just before a shared interrupt handler is deregistered (generating one when registering is currently disabled). Drivers need to handle this correctly. Some don’t and need to be caught.

This was the smoking gun: a spurious interrupt!

We now knew what the weapon was, but we still had questions: why would triggering an interrupt leave the board hung? The next step then was to figure out where exactly this was happening; it certainly would have to be somewhere in the driver’s IRQ handler code path.

By looking at the pcie-rockchip-host driver code, we see two IRQ handlers registered: rockchip_pcie_subsys_irq_handler() for the "pcie-sys" IRQ and rockchip_pcie_client_irq_handler() for the "pcie-client" IRQ.

Adding some debug printouts to both would show us that the issue was happening in the latter, when calling to rockchip_pcie_read(). This function just hangs indefinitely and never returns.

Peter wrote in the filed bug that this issue had been reported before and there was even an RFC patch posted by a Rockchip engineer, who mentioned the assessment of the problem:

With CONFIG_DEBUG_SHIRQ enabled, the irq tear down routine
would still access the irq handler register as a shared irq.
Per the comment within the function of __free_irq, it says
"It’s a shared IRQ — the driver ought to be prepared for
an IRQ event to happen even now it’s being freed". However
when failing to probe the driver, it may disable the clock
for accessing the register and the following check for shared
irq state would call the irq handler which accesses the register
w/o the clk enabled. That will hang the system forever.

The proposed solution was to check in the rockchip_pcie_read() function if a rockchip->hclk_pcie clock was enabled before trying to access the PCIe registers’ address space. But that wasn’t accepted because it was solving the symptom and not the cause.

But it did confirm our findings, that the problem was an IRQ handler being called before it was expected and that the PCIe register access hangs due to a clock not being enabled.

With all of that information and reading once more the pcie-rockchip-host driver code, we could finally reconstruct the crime scene:

  1. "pcie-sys" IRQ is requested and its handler registered.
  2. "pcie-client" IRQ is requested and its handler registered.
  3. probe later fails due to readl_poll_timeout() returning a timeout.
  4. the "pcie-sys" IRQ handler is unregistered.
  5. CONFIG_DEBUG_SHIRQ triggers a spurious interrupt.
  6. "pcie-client" IRQ handler is called for this spurious interrupt.
  7. IRQ handler tries to read PCIE_CLIENT_INT_STATUS with clocks gated.
  8. the machine hangs because rockchip_pcie_read() call never returns.

The root cause of the problem then is that the IRQ handlers are registered too early, before all the required resources have been properly set up.

Our proposed solution then is to move all the IRQ initialization into a later stage of the probe function. That makes it safe for the IRQ handlers to be called as soon as they are registered.

Until the next mystery!

Removing assets dependencies from Rails applications for runtime

Posted by Josef Strzibny on June 09, 2021 12:00 AM

Rails provides a smooth assets:precompile task to prepare application assets but keeps all required gems for assets generation as a standard part of the generated Gemfile. Let’s see if we can avoid these dependencies for runtime.

A new Rails application comes with various gems concerning assets compilation and minification:

$ cat Gemfile
...
# Use SCSS for stylesheets
gem 'sass-rails', '>= 6'
# Transpile app-like JavaScript. Read more: https://github.com/rails/webpacker
gem 'webpacker', '~> 5.0'

We might see other gems in older versions of Rails, like uglifier or coffee-rails.

It makes sense since the Rails’s assets:precompile tasks is usually run within the PRODUCTION environment, where the CSS concerns are defined:

$ cat config/environment/production.rb
...
  # Compress CSS using a preprocessor.
  # config.assets.css_compressor = :sass

  # Do not fallback to assets pipeline if a precompiled asset is missed.
  config.assets.compile = false

Applications without Webpacker might configure a JavaScript processor for the Asset Pipeline:

  # example of older applications
  config.assets.css_compressor = :sass
  config.assets.js_compressor = :uglifier

All of this works but includes extra gems and might have implications about system dependencies. For example, both webpacker and uglifier would need a JavaScript runtime like Node.js for running assets:precompile and possibly for starting the Rails server (Webpacker shouldn’t complain, but uglifier/execjs would).

But what if we want to handle assets outside the Rails application’s deployment, thus removing these dependencies? What if we’re going to build an optimized Docker container using the multi-stage build and not provide Node.js in the final image?

Well, we can do something we already do with development and test dependencies – omit these gems for production. We can move them into a new assets group in the Gemfile:

group :assets do
  gem 'sass-rails', '>= 6'
  gem 'webpacker', '~> 5.0'

  # Or
  gem 'uglifier'
  ...
end

Then we can rely on Bundler configuration to set the right groups for the task at hand:

$ export RAILS_ENV=production
$ bundle config set --local without development:test
$ rails assets:precompile
$ bundle config set --local without development:test:assets
$ rails s

Setting the without option won’t load these assets gems but will fail whenever you try to use them in the configuration directly. This shouldn’t be an issue for a brand new Rails 6.1 application with Webpacker, but if your js_compressor is set to :uglifier, then omitting the gem ends up not starting the Rails server:

/home/rails-user/.rubies/ruby-2.6.5/lib/ruby/gems/2.6.0/gems/execjs-2.8.1/lib/execjs/runtimes.rb:58:in `autodetect': Could not find a JavaScript runtime. See https://github.com/rails/execjs for a list of available runtimes. (ExecJS::RuntimeUnavailable)

That happens because uglifier relies on execjs which tries to autodetect a JavaScript processor. With a RubyGems’ Gem.loaded_specs, we can check if we are loading a specific gem and not set these configuration options:

...
  if Gem.loaded_specs.has_key?('uglifier')
    config.assets.js_compressor = :uglifier
  end
...

Now the uglifier is used only if we are loading it – and we only load it while running assets:precompile.

With the assets group and possibly updating the production.rb configuration, we could omit certain gems while running the Rails application server and leave out Node.js from the production server or a final container image.

What I learned this month: Github Actions and pre-commit

Posted by Aurélien Bompard on June 08, 2021 08:17 PM

This is yet again another attempt to reboot the dev part of this blog. I’m not successful, but at least I’m persistent 😉

Introduction

I’m kicking off a new serie of posts. Every month, I plan on writing about the new stuff that I’ve discovered in the broader field of software development. It’s an attempt to share the knowledge that I may have gained during that time, and also to show the world that you can be a somewhat experienced software developer and still be discovering new stuff every month.

I have observed that I tend to discover the latest hype at about the time it stops being cool. If you’re like me, we can make up for this latency by sharing ideas. Also, it requires quite a bit of time to play with software that isn’t yet ready for anything besides its original intended use case. So, let’s not waste our time.

I don’t plan on writing well researched articles, because aiming too high is a sure way for me to fail at building a habit, and that’s one of the main goals here. I’ll probably be a bit terse, with only the main links and my impressions. I hope you’ll understand. (and if you don’t, I’m not making you read this 😉 )

The two tools that I’ve selected for this month’s article are Github Actions and pre-commit.

Github Actions

D’oh, you might say. It’s been around for a while but I didn’t have a chance to use it yet, because in Fedora we run our CI on a Jenkins instance that the good CentOS folks provide us with.

But for some projects, it does not make much sense to run our unit tests on specific Fedora versions, and it’s always good to have an alternative. I’ve set up Github Actions to run CI on a couple projects that I maintain, such as fedora-messaging and flask-healthz, to test-drive it. The Python SIG in Fedora also provides a Github Action to run tox on a Fedora container, which is nice.
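
For reference, this kind of CI workflow is only a few lines of YAML under .github/workflows/ (a minimal sketch with an assumed Python version and tox environment, not the exact workflow from those projects):

name: CI
on: [push, pull_request]

jobs:
  tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
        with:
          python-version: "3.9"
      - run: pip install tox
      - run: tox -e py39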

Another use: since Dependabot has been eaten by GitHub, they have removed the feature to auto-merge dependency updates according to their semver classification. And I did not enjoy manually approving patch updates to my dependencies, since what I basically did was to make sure CI passed. That’s a bot’s job. So I’ve set up a Github Action that either merges patch and minor updates (but not backwards-incompatible major versions), or just approves them, waiting for Mergify to do the actual merge.

It’s nice, it’s fast, I like it. Maybe I’ll end up keeping our CentOS CI Jenkins instance only for integration tests.

pre-commit

I discovered pre-commit recently and I think it has potential. It’s a well known fact of software development that the sooner you catch a bug, the less that bug costs you. Find a bug during development (and even while typing the code, thanks to linters embedded in your editor) and you’ll be much better off than if you found it after production deployment.

The point of pre-commit is to run checks before the code is committed to Git. That means fast checks such as linters and formatters, and probably not your entire unit test suite, but your mileage may vary. As for me, I usually run black, isort and flake8 as part of Visual Studio Code, so I catch issues even sooner. However, I’m not the only one to work on my projects, and for those not using advanced text editors it is a nice safety net to run the checks before committing.
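
For reference, a minimal .pre-commit-config.yaml for that kind of setup might look like this (a sketch; the rev values are placeholders to pin to current releases), and running pre-commit install wires it into the Git pre-commit hook:

repos:
  - repo: https://github.com/psf/black
    rev: 21.5b2
    hooks:
      - id: black
  - repo: https://github.com/pycqa/isort
    rev: 5.8.0
    hooks:
      - id: isort
  - repo: https://github.com/pycqa/flake8
    rev: 3.9.2
    hooks:
      - id: flake8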

I’ve converted a couple projects to pre-commit, we’ll see how it goes.

Conclusion

It’s all for this first edition. Hopefully I’ll see you next month with more findings. In the meantime, happy hacking! 🙂

Heroes of Fedora (HoF) – F34 Beta

Posted by Fedora Community Blog on June 08, 2021 08:00 AM

Hello everyone, welcome to the Fedora Linux 34 Beta installation of Heroes of Fedora! In this post, we’ll look at the stats concerning the testing of Fedora Linux 34 Beta. The purpose of Heroes of Fedora is to provide a summation of testing activity on each milestone release of Fedora. Without community support, Fedora would not exist, so thank you to all who contributed to this release! Without further ado, let’s get started!

Updates Testing

Test period: Fedora Linux 34 Beta (2021-02-09 – 2021-03-16)
Testers: 56
Comments [1]: 155

Name | Updates commented
Geraldo S. Simião Kutz (geraldosimiao) | 26
Chris Murphy (chrismurphy) | 15
Adam Williamson (adamwill) | 13
Charles-Antoine Couret (renault) | 11
Lukáš Růžička (lruzicka) | 7
František Zatloukal (frantisekz) | 7
brett hassall (bretth) | 4
Peter Robinson (pbrobinson) | 4
Jaap Bosman (jpbn) | 4
Ludovic Hirlimann (lhirlimann) | 3
Paul Whalen (pwhalen) | 3
Jorma Penala (jpenala) | 3
Seppo Yli-Olli (nanonyme) | 3
Nicolas Chauvet (kwizart) | 3
josevaldo da rocha santos (josevaldo) | 2
Patrick Vavrina (patux) | 2
Hans Müller (cairo) | 2
Onuralp SEZER (thunderbirdtr) | 2
Bob McBobbington (bobob) | 2
Pete Walter (pwalter) | 2
Vasco Rodrigues (vvro) | 2
…and also 35 other reporters who created fewer than 2 reports each, but 35 reports combined!

1 If a person provides multiple comments to a single update, it is considered as a single comment. Karma value is not taken into account.

Validation Testing

Test period: Fedora Linux 34 Beta (2021-02-09 – 2021-03-16)
Testers: 17
Reports: 580
Unique referenced bugs: 14

<figure class="wp-block-table">
Name | Reports submitted | Referenced bugs 1
geraldosimiao | 195 | 1928594 1930514 1932447 1932961 1933704 (5)
pwhalen | 156 | 1924908 1936147 (2)
lruzicka | 81 | 1936935 1937783 (2)
frantisekz | 72 |
jpbn | 15 |
cra | 13 |
robatino | 12 |
nielsenb | 11 | 1921924 1924908 (2)
cmurf | 8 |
alciregi | 5 |
adamwill | 4 | 1937550 (1)
lnie | 2 |
tflink | 2 |
copperi | 1 |
pablodav | 1 |
coremodule | 1 |
</figure>

1 This is a list of bug reports referenced in test results. The bug itself may not be created by the same person.

Bug Reports

Test period: Fedora Linux 34 Beta (2021-02-09 – 2021-03-16)
Reporters: 174
New reports: 480

<figure class="wp-block-table">
Name | Reports submitted 1 | Excess reports 2 | Accepted blockers 3
Miro Hrončok | 38 | 12 (31%) | 0
Chris Murphy | 31 | 8 (25%) | 1
Matt Fagnani | 22 | 0 (0%) | 0
customercare at resellerdesktop.de | 21 | 1 (4%) | 0
Adam Williamson | 17 | 2 (11%) | 3
Mike FABIAN | 15 | 0 (0%) | 0
lnie | 11 | 0 (0%) | 0
Lukas Ruzicka | 11 | 1 (9%) | 0
robert fairbrother | 9 | 1 (11%) | 0
Paul Whalen | 8 | 0 (0%) | 2
Ian Laurie | 8 | 1 (12%) | 0
James | 8 | 0 (0%) | 0
Zbigniew Jędrzejewski-Szmek | 7 | 0 (0%) | 0
Peter | 6 | 0 (0%) | 0
pmkellly at frontier.com | 6 | 0 (0%) | 0
Thiago Sueto | 6 | 0 (0%) | 0
Martin Pitt | 5 | 1 (20%) | 1
Geraldo Simião | 5 | 1 (20%) | 0
Michel Alexandre Salim | 5 | 0 (0%) | 0
Mikhail | 5 | 0 (0%) | 0
Neal Gompa | 5 | 0 (0%) | 0
Petr Viktorin | 5 | 0 (0%) | 0
Ludovic Hirlimann [:Paul-muadib] | 4 | 0 (0%) | 0
Marius Andreiana | 4 | 2 (50%) | 0
mouseyklfyt | 4 | 1 (25%) | 0
Máirín Duffy | 4 | 0 (0%) | 0
Alessio | 3 | 0 (0%) | 0
ByteEnable at outlook.com | 3 | 0 (0%) | 0
christian.kirbach at googlemail.com | 3 | 0 (0%) | 0
fednuc | 3 | 0 (0%) | 0
Jeroen | 3 | 1 (33%) | 0
jpbn | 3 | 0 (0%) | 0
Katerina Koukiou | 3 | 0 (0%) | 0
klfytmouse | 3 | 1 (33%) | 0
Langdon White | 3 | 0 (0%) | 0
Lars Pastewka | 3 | 0 (0%) | 0
Luya Tshimbalanga | 3 | 0 (0%) | 0
Martin | 3 | 1 (33%) | 0
Michael Catanzaro | 3 | 0 (0%) | 0
Miroslav Suchý | 3 | 1 (33%) | 0
Nicolas Chauvet (kwizart) | 3 | 0 (0%) | 0
Patrik Novotný | 3 | 0 (0%) | 0
Petr Pisar | 3 | 0 (0%) | 0
Tom London | 3 | 1 (33%) | 0
Vasco Rodrigues | 3 | 0 (0%) | 0
…and also 129 other reporters who created fewer than 3 reports each, but 153 reports combined!
</figure>

1 The total number of new reports (including “excess reports”). Reopened reports or reports with a changed version are not included, because it was not technically easy to retrieve those. This is one of the reasons why you shouldn’t take the numbers too seriously, but just as interesting and fun data.
2 Excess reports are those that were closed as NOTABUG, WONTFIX, WORKSFORME, CANTFIX or INSUFFICIENT_DATA. Excess reports are not necessarily a bad thing, but they make for interesting statistics. Close manual inspection is required to separate valuable excess reports from those which are less valuable.
3 This only includes reports that were created by that particular user and accepted as blockers afterwards. The user might have proposed other people’s reports as blockers, but this is not reflected in this number.

The post Heroes of Fedora (HoF) – F34 Beta appeared first on Fedora Community Blog.

Leaving Fedora Infrastructure

Posted by Stephen Smoogen on June 07, 2021 02:16 PM

 In June 2009, I was given the opportunity to work in Fedora Infrastructure as Mike McGrath's assistant so that he could take some vacation. At the time I was living in New Mexico and had worked at the University of New Mexico for several years. I started working remote for the first time in my life, and had to learn all the nuances of IRC meetings and typing clearly and quickly. With the assistance of Seth Vidal, Luke Macken, Ricky Zhou, and many others I got quickly into 'the swing of things' with only 2 or 3 times taking all of Fedora offline because of a missed ; in a dns config file. 


For the last 4300+ days, I have worked with multiple great and wonderful system administrators and programmers to keep the Fedora Infrastructure running and growing so that the number of systems using 'deliverables' has grown into the millions. I am highly indebted to everyone from volunteers to paid Red Hatters who have helped me grow. I want to especially thank Kevin Fenzi, Rick Elrod, and Pierre-Yves Chibon for the insights I have needed. 


Over the years, we have maintained a constantly spinning set of plates which allow for packagers to commit changes, build software, produce deliverables, and start all over again. We have moved our build systems physically at least 3 times, once across the North American continent. We have dealt with security breaches, mass password changes, and the undead project of replacing the 'Fedora Account System' which had been going on since before I started. [To the team which finished that monumental task in the last 3 months, we are all highly indebted. There may be pain-points but they did a herculean task.]


All in all, it has been a very good decade of working on a project that many have said would be 'gone' by next release. However, it is time for me to move on to other projects, and find new challenges that excite me. Starting next week, I will be moving to a group working with a strong focus on embedded hardware. I have been interested in embedded in some form or another since the 1970s. My first computer memories were of systems my dad showed me which would have been in an A-6 plane. From there I remember my dad taking me to see a friend who repaired PDP systems for textile mills and let me work on my first Unix running on a DEC Rainbow. Whenever I came home from those visits, I would have a smile and hum of excitement which would not leave me for days. I remember having that hum when in 1992, a student teacher showed me MCC Linux running on an i386 which we had been repairing from spare parts. I could do everything and anything on that box for a fraction of the price of the big Unix boxes I had to pay account time for. And recently I went to a set of talks on embedded projects and found myself with the same hum. It was a surprise for me but I found myself more and more interested in it as the weeks have gone by.


I was offered a chance to move over, and I decided to take it. I will still be in the Fedora community but will not be able to work much on Infrastructure issues. If I have tasks that you are waiting for, please let me know, and I will finish them either by myself or by doing a full handoff to someone else in Infrastructure. Thank you all for your help and patience over these last 11+ years. 

Next Open NeuroFedora meeting: 07 June 1300 UTC

Posted by The NeuroFedora Blog on June 07, 2021 09:25 AM
Photo by William White on Unsplash

Photo by William White on Unsplash.


Please join us at the next regular Open NeuroFedora team meeting on Monday 07 June at 1300UTC in #fedora-neuro on IRC (Libera.chat). The meeting is a public meeting, and open for everyone to attend. You can join us over:

You can use this link to convert the meeting time to your local time. Or, you can also use this command in the terminal:

$ date --date='TZ="UTC" 1300 today'

The meeting will be chaired by @ankursinha. The agenda for the meeting is:

We hope to see you there!

HTTP/2 in libsoup3, WebKitGTK, and Epiphany

Posted by Patrick Griffis on June 07, 2021 04:00 AM

The latest development release of libsoup 3, 2.99.8, now enables HTTP/2 by default. So let's look into what that means and how you can try it out.

Performance

In simple terms, what HTTP/2 provides for improved performance is more efficient network usage when requesting multiple files from a single host. It does this by avoiding making new connections whenever possible and, over that single connection, allowing multiple requests to happen at the same time.

It is easy to imagine many workloads this would improve, such as flatpak downloading a lot of objects from a single server.
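As a side illustration (not specific to libsoup), you can check from the command line whether a server negotiates HTTP/2, assuming a curl build with HTTP/2 support:

curl -sI --http2 https://www.gnome.org/ | head -n 1
# prints an "HTTP/2 200" status line if HTTP/2 was negotiated, an "HTTP/1.1 ..." line otherwise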

Here are some examples in Epiphany:

gophertiles

This is a benchmark made to directly test the best case for HTTP/2. In the inspector (which has been improved to show more network information) you can see that HTTP/2 creates a single connection and completes in 229ms. HTTP/1, on the other hand, creates 6 connections and takes 1.5 seconds. This all happens on a network which is a best case for HTTP/1, a low-latency wired gigabit connection; as network latency increases, HTTP/2's lead grows dramatically.

browser screenshot using http2

browser screenshot using http1

Youtube

For a more real-world example, YouTube is a great demo. It hosts a lot of files for a webpage, but it isn't a perfect benchmark as it still involves multiple hosts that don't share connections. HTTP/2 still has a slight lead, again versus HTTP/1's best case.

inspector screenshot using http2

inspector screenshot using http1

Testing

This work is all new and we would really like some testing and feedback. The easiest way to run this yourself is with this custom Epiphany Flatpak (sorry for the slow download server, and it will not work with NVidia drivers).

You can get useful debug output both through the WebKit inspector (ctrl+shift+i) and by running with --env='G_MESSAGES_DEBUG=libsoup-http2;nghttp2'.

Please report any bugs you find to https://gitlab.gnome.org/gnome/libsoup/issues.

Episode 274 – Mr. Amazon’s Neighborhood

Posted by Josh Bressers on June 07, 2021 12:01 AM

Josh and Kurt talk about Amazon sidewalk. There is a lot of attention, but how is this any different than the surveillance networks Apple and Google have built?

<audio class="wp-audio-shortcode" controls="controls" id="audio-2459-2" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_274_Mr_Amazons_Neighborhood.mp3?_=2" type="audio/mpeg">https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_274_Mr_Amazons_Neighborhood.mp3</audio>

Show Notes

Experiment with SSHFS

Posted by Farhaan Bukhsh on June 06, 2021 02:46 PM

Motivation

I have been using a PKM called Zim Wiki for a long time now. The thing I have struggled with in Zim Wiki is when I have to work on different devices and don't have my copy of it with me.

To be honest, Zim Wiki is not all that fancy, with no cloud sync or multi-device support, but following the Unix philosophy of doing just one thing and doing it well, it helps me organize and retrieve information that is valuable to me.

There was a time when I used to back up my copy of Zim on Dropbox. In order to maintain a single source of truth, I would sync the copy of my Zim notes to Dropbox every minute using a cronjob.
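For reference, the cronjob was nothing fancy; a sketch with made-up paths would look like this:

# sync the Zim notebooks to the Dropbox folder every minute (paths are examples)
* * * * * rsync -a --delete ~/Notebooks/ ~/Dropbox/zim-notes/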

The thing that goes wrong: if the cronjob fails for whatever reason, or if I shut down the device before the cronjob runs, I end up in a conflicting situation.

These problems pushed me to come up with a slightly better management system.

What is sshfs

I didn't know I could mount the filesystem of a VPS on my local system. Previously, I used to use either scp or rsync to transfer files to and from the remote machine. SSHFS is a filesystem client that can mount files and directories from a remote server onto your local machine.

This is a revelation to me because now I can use my remote server as a centralized place to serve my zim-wiki.

How to achieve sshfs backup

This is very simple. First, we need to install sshfs on the local machine.

sudo apt install sshfs

Then we need to mount the remote files and directories in a pre-existing directory.

sudo sshfs -o nonempty,reconnect,allow_other,default_permissions,IdentityFile=<ssh-private-key> <user>@<hostname>:/path-to-remote /local-path

And now we can just rsync zim-wiki files by:

rsync -avzh ~/zim-notes/notebooks ~/aws-vps/zim-wiki/

Make sure that directories are synced. Now just open zim-wiki and add the notebook. :)

And voila!

Gotchas!

When the internet connection goes down, the application will start hanging. In that case, first we need to kill the sshfs process and then unmount the directory.

sudo pkill -kill -f "sshfs"

And then unmount the directory

sudo fusermount -u /home/aws-vps

I have tried to keep the erratic connection in mind and introduced the reconnect option in sshfs. This arrangement has worked miracles for me in terms of management.

Video: AlmaLinux 8.4 installable Live XFCE Media

Posted by Scott Dowdle on June 04, 2021 08:05 PM
Video: AlmaLinux 8.4 installable Live XFCE Media Scott Dowdle Fri, 06/04/2021 - 14:05

I've been wanting and trying to create live media for EL8 since the initial 8.0 release of CentOS.  The main problem I ran into is that RHEL has decided that their customers aren't interested in live media and they didn't produce any... and CentOS hasn't either.  I've been using livecd-creator from the livecd-tools package for years for making personal remixes of Fedora and CentOS 7.  In EL8, livecd-creator comes from EPEL and it has had various issues since the initial 8.0 release... and I've only been able to produce broken .iso media if I could get it to build at all.  Luckily one or more Fedora developers have taken pity on me and been updating / fixing livecd-creator in EPEL recently.

Another problem is that RHEL also decided that since they don't have live media anymore, the RHEL Anaconda installer no longer needs to support live media installs, and they have removed the anaconda-live package from their stock repositories... although I did learn today that it is built by CentOS but just not placed in the public repositories... but if you look for it hard enough, it can be found in their newly opened up build servers.

I've been working with AlmaLinux a bit lately and they provide the anaconda-live package in their off-by-default (and shouldn't really be used for production systems) devel repository.

With the updated livecd-creator and the newly found source(s) for anaconda-live... I've renewed my efforts and finally was able to produce an AlmaLinux 8.4 installable XFCE live media.  I did run into some quirks that are explained in the screencast below that shows me booting the media in a KVM virtual machine, doing an install, and then showing a little bit of the post-install desktop system.  The .iso includes a /root/livecd-creator directory that has all of the files I used to build the media with and the system has all of the needed packages pre-installed for building.  Anyone who might want to make their own remix can do some minor editing (updating the repository URLs as they currently point to a local mirror I'm using... as well as customizing the package list as desired) of the included files and build their own.  Enjoy!

<video controls="yes" height="576" poster="https://www.montanalinux.org/files/videos/ML-Alma8-livemedia-20210604.png" width="768"><source src="/files/videos/ML-Alma8-livemedia-20210604.webm" type="video/webm"></source><source src="https://www.montanalinux.org/files/videos/ML-Alma8-livemedia-20210604.mp4" type="video/mp4"></source></video>

If anyone wants a copy of the .iso, just email me (see web page footer for contact info) asking for a download link and I'll reply back... as I do not publicly promote MontanaLinux as it is primarily a personal remix.

UPDATE: Anyone who wants to build their own media needs to be aware that there are currently two bugs in livecd-creator... one already fixed in the livecd-tools package currently in epel-updates-testing, and one that needs to be manually patched.  The manual patch is easy though: just edit /usr/lib/python3.6/site-packages/imgcreate/live.py and add a new line after line 239 (Line 239: # XXX-BCL; does this need --label?).  Put in the following:
            makedirs(isodir + "/images")
So make sure you are using livecd-tools-28.1-1 from epel-updates-testing with the given one-line patch and you should be able to build working media.

UPDATE 2:  I fixed the XFCE media so now it uses SDDM rather than GDM and the live media automatically logs into XFCE.  I have also added media for KDE Plasma and GNOME (aka WORK).  They all seem to be working well but I haven't tested them on UEFI.

UPDATE 3: I believe I'm missing one or more packages needed to install the bootloader on UEFI systems.  All of my (working) testing has been on Legacy BIOS-based VMs.  I'll get it fixed ASAP.


Friday’s Fedora Facts: 2021-22

Posted by Fedora Community Blog on June 04, 2021 08:01 PM

Here’s your weekly Fedora report. Read what happened this week and what’s coming up. Your contributions are welcome (see the end of the post)!

Congratulations to the winners in the Fedora elections!

Don’t forget to take the Annual Fedora Survey and claim your badge!

I have weekly office hours on Wednesdays in the morning and afternoon (US/Eastern time) in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else. See the upcoming meetings for more information.

Announcements

CfPs

<figure class="wp-block-table">
Conference | Location | Date | CfP
Nest With Fedora | virtual | 5-8 Aug | closes 16 July
</figure>

Help wanted

Prioritized Bugs

<figure class="wp-block-table">
Bug ID | Component | Status
1953675 | kf5-akonadi-server | NEW
</figure>

Upcoming meetings

Releases

Fedora Linux 35

Schedule

  • 2021-06-23 — Change submission deadline (Changes requiring infrastructure changes)
  • 2021-06-29 — Change submission deadline (System-Wide Changes, Changes requiring mass rebuild)
  • 2021-07-20 — Change submission deadline (Self-Contained Changes)
  • 2021-07-21 — Mass rebuild begins
  • 2021-08-10 — F35 branches from Rawhide, F36 development begins

For the full schedule, see the schedule website.

Changes

<figure class="wp-block-table">
Proposal | Type | Status
Replace SDL 1.2 with sdl12-compat using SDL 2.0 | Self-Contained | Approved
Make btrfs the default file system for Fedora Cloud | System-Wide | FESCo #2617
Support using a GPT partition table in Kickstart | System-Wide | Withdrawn
Sphinx 4 | Self-Contained | Announced
Build Fedora Cloud Images with Hybrid BIOS+UEFI Boot Support | System-Wide | Announced
Replace the Anaconda product configuration files with profiles | Self-Contained | Announced
</figure>

Changes approved, rejected, or withdrawn will be removed from this table the next week. See the ChangeSet page for a full list of approved changes.

Contributing

Have something you want included? You can file an issue or submit a pull request in the fedora-pgm/pgm_communication repo.

The post Friday’s Fedora Facts: 2021-22 appeared first on Fedora Community Blog.

s390x builders offline

Posted by Fedora Infrastructure Status on June 04, 2021 06:00 PM

The facility hosting the fedora s390x builders has a site wide power outage scheduled for June 4th to June 6th. We will be powering off these builders at 22UTC on 2021-06-04. It's likely that they will be back sometime late in the day of 2021-06-05, but it could be they …

LibreOffice GTK4 Port: More MenuButtons

Posted by Caolán McNamara on June 04, 2021 04:02 PM

 

More MenuButton features working under GTK4. Now with working radio entries in GtkPopoverMenu dropdowns.

Fedora Linux 34 elections results

Posted by Fedora Community Blog on June 04, 2021 02:50 PM

The Fedora Linux 34 election cycle has concluded. Here are the results for each election. Congratulations to the winning candidates, and thank you to all candidates for running in this election!

Results

Council

One Council seat was open this election. A total of 226 ballots were cast, meaning a candidate could accumulate up to 678 votes (226 * 3).

<figure class="wp-block-table">
# votes | Candidate
496 | Aleksandra Fedorova
333 | Eduard Lucena
234 | Damian Tometzki
</figure>

FESCo

Four FESCo seats were open this election. A total of 241 ballots were cast, meaning a candidate could accumulate up to 1687 votes (241 * 7).

<figure class="wp-block-table">
# votes | Candidate
1156 | Neal Gompa
931 | Stephen Gallagher
804 | Dan Čermák
771 | Mohan Boddu
716 | František Zatloukal
602 | Robbie Harwood
564 | Frank Ch. Eigler
</figure>

Mindshare

One Mindshare seat was open this election. A total of 190 ballots were cast, meaning a candidate could accumulate up to 190 votes (190 * 1).

<figure class="wp-block-table">
# votes | Candidate
168 | Onuralp Sezer
</figure>

Stats

The Fedora Linux 34 election cycle showed a slight uptick in engagement for Council and FESCo.

<figure class="wp-block-image size-large"><figcaption>Voter counts in the Fedora elections</figcaption></figure>

The FESCo and Council elections saw an increase in candidates over the previous cycle while Mindshare had only one candidate.

<figure class="wp-block-image size-large"><figcaption>Candidate counts in Fedora elections</figcaption></figure>

Edited 2021-06-07 at 1730 UTC to correct a misstatement about the number of candidates compared to the previous election.

The post Fedora Linux 34 elections results appeared first on Fedora Community Blog.

GNOME LATAM 2021 was a real blast!

Posted by Felipe Borges on June 04, 2021 11:31 AM

This year, motivated by the success of virtual events like GNOME Asia and GNOME Onboard Africa, we decided to organize a GNOME LATAM (virtual) conference. The event was a success, with a nice mix of Spanish and Portuguese-speaking presenters. The recordings are now available and (if you understand Spanish or Portuguese) I highly encourage you to check what the Latin American GNOMies are up to. 🙂

  • Juan Pablo Ugarte, from Argentina, that most of you GNOME people know from his work on Glade, had an interesting talk showing his new project: “Cambalache UI Maker”: A modern Glade replacement for GTK4. Juan hasn’t open sourced it yet, but you’ll see it when he pops up in Planet GNOME.
  • Claudio Wunder, from Germany, that you may know from the GNOME engagement team, did a presentation about the engagement team’s work in GNOME and discussed the challenges of managing online communities with its cultural differences and all. Claudio studied in Brazil and speaks Portuguese fluently.
  • Daniel Garcia Moreno, from Spain, that you may know from Endless and Fractal, had a talk sharing his experiences mentoring in GSoC and Outreachy. This was also a good opportunity to introduce the programs to the Latin American community, which is underrepresented in FOSS.
  • me, from Brazil :D, presented a "Developing native apps with GTK" talk where I write up a simple web browser in Python, with GTK and WebKitGtk, while I comment on the app development practices we use in GNOME and present our tooling such as DevHelp, GtkInspector, Icon Browser, GNOME Builder, Flatpak, etc…
  • Martín Abente Lahaye, from Paraguay, that you may know from GNOME, Sugar Labs, Endless, and Flatseal, had a presentation about GNOME on phones. He commented on the UX of GNOME applications and Phosh in phones, and highlighted areas where things can be improved.
  • Cesar Fabian Orccon Chipana, from Perú, former GSoC intern for GNOME, GStreamer, did an extensive demo of GStreamer pipelines, explaining GStreamer concepts and all. He had super cool live demos!
  • Rafael Fontenelle, from Brazil, is a coordinator of the pt_BR translation team for many years and translates a huge portion of GNOME himself. He did a walk-through of the GNOME translation processes, sharing tips and tricks.
  • Daniel Galleguillos + Fernanda Morales, from Chile, from the GNOME Engagement team, presented design work for the GNOME engagement team. Showing tools and patterns they use for doing event banners, swag, social media posts, and all. Daniel was also responsible for editing the event recordings. Thanks a lot, Daniel!
  • Fabio Duran Verdugo and Matías Rojas-Tapia, from Chile, a long-time GNOME member, presented Handibox, an accessibility tool they are working on at their university to help users with motor impairment use desktop computers. Inspiring!
  • Georges Basile Stavracas Neto, from Brazil, you may know from Endless and GNOME Shell, presented a very nice summary about the GNOME design philosophy and the changes in GNOME Shell 40 and their plans for the future.
  • The event was opened and closed by Julita Inca Chiroque, from Peru, a long-time GNOME Foundation member. Thanks a lot, Julita!

I hope we can make this a tradition and have a GNOME LATAM edition yearly! Thanks a lot to all attendees!

PHP version 7.4.20 and 8.0.7

Posted by Remi Collet on June 04, 2021 05:26 AM

RPMs of PHP version 8.0.7 are available in remi-php80 repository for Fedora 32-34 and Enterprise Linux (RHEL, CentOS).

RPMs of PHP version 7.4.20 are available in the remi repository for Fedora 32-34 and in the remi-php74 repository for Enterprise Linux (RHEL, CentOS).

No security fix this month, so no update for version 7.3.28.

PHP version 7.2 has reached its end of life and is no longer maintained by the PHP project.

These versions are also available as Software Collections in the remi-safe repository and as module for Fedora 32-34 and EL-8.

Version announcements:

Installation: use the Configuration Wizard and choose your version and installation mode.

Replacement of default PHP by version 8.0 installation (simplest):

yum-config-manager --enable remi-php80
yum update

or, the modular way (Fedora and EL 8):

dnf module reset php
dnf module enable php:remi-8.0
dnf update php\*

Parallel installation of version 8.0 as Software Collection

yum install php80

Replacement of default PHP by version 7.4 installation (simplest):

yum-config-manager --enable remi-php74
yum update

or, the modular way (Fedora and EL 8):

dnf module reset php
dnf module enable php:remi-7.4
dnf update php\*

Parallel installation of version 7.4 as Software Collection

yum install php74

Replacement of default PHP by version 7.3 installation (simplest):

yum-config-manager --enable remi-php73
yum update php\*

or, the modular way (Fedora and EL 8):

dnf module reset php
dnf module enable php:remi-7.3
dnf update php\*

Parallel installation of version 7.3 as Software Collection

yum install php73

And soon in the official updates:

To be noted:

  • EL-8 RPMs are built using RHEL-8.3 (next will use 8.4)
  • EL-7 RPMs are built using RHEL-7.9
  • EL-7 builds now use libicu65 (version 65.1)
  • EL builds now use oniguruma5php (version 6.9.5, instead of the outdated system library)
  • oci8 extension now uses Oracle Client version 21.1
  • a lot of extensions are also available, see the PHP extensions RPM status (from PECL and other sources) page

Information:

Base packages (php)

Software Collections (php73 / php74 / php80)

Mike Lindell's Cyber "Evidence"

Posted by Matthew Garrett on June 04, 2021 05:14 AM
Mike Lindell, notable for absolutely nothing relevant in this field, today filed a lawsuit against a couple of voting machine manufacturers in response to them suing him for defamation after he claimed that they were covering up hacks that had altered the course of the US election. Paragraph 104 of his suit asserts that he has evidence of at least 20 documented hacks, including the number of votes that were changed. The citation is just a link to a video called Absolute 9-0, which claims to present sufficient evidence that the US supreme court will come to a 9-0 decision that the election was tampered with.

The claim is that Lindell was provided with a set of files on the 9th of January, and gave these to some cyber experts to verify. These experts identified them as packet captures. The video contains scrolling hex, and we are told that this is the raw encrypted data from the files. In reality, the hex values correspond very clearly to printable ASCII, and appear to just be the Pennsylvania voter roll. They're not encrypted, and they're not packet captures (they contain no packet headers).

20 of these packet captures were then selected and analysed, giving us the tables contained within Exhibit 12. The alleged source IPs appear to correspond to the networks the tables claim, and the latitude and longitude presumably just come from a geoip lookup of some sort (although clearly those values are far too precise to be accurate). But if we look at the target IPs, we find something interesting. Most of them resolve to the website for the county that was the nominal target (eg, 198.108.253.104 is www.deltacountymi.org). So, we're supposed to believe that in many cases, the county voting infrastructure was hosted on the county website.

Unfortunately we're not given the destination port, but 198.108.253.104 isn't listening on anything other than 80 and 443. We're told that the packet data is encrypted, so presumably it's over HTTPS. So, uh, how did they decrypt this to figure out how many votes were switched? If Mike's hackers have broken TLS, they really don't need to be dealing with this.

We're also given some background information on how it's impossible to reconstruct packet captures after the fact (untrue), or that modifying them would change their hashes (true, but in the absence of known good hash values that tells us nothing), but it's pretty clear that nothing we're shown actually demonstrates what we're told it does.

In summary: yes, any supreme court decision on this would be 9-0, just not the way he's hoping for.

Update: It was pointed out that this data appears to be part of a larger dataset. This one is even more dubious - it somehow has MAC addresses for both the source and destination (which is impossible), and almost none of these addresses are in actual issued ranges.


LibreOffice GTK4 Port: MenuButtons

Posted by Caolán McNamara on June 03, 2021 04:18 PM

 

Initial MenuButton and Popover support under GTK4 now working

Summer Intern Restart Up

Posted by Madeline Peck on June 03, 2021 01:58 PM

Red Hat Internship Round 2 for Summer 2021 is officially under way! Last summer was a new and fairly wild time, especially with COVID, being fully remote, and starting at Red Hat. But this first week actually included a four-day weekend, as well as a lot of new hire intern orientation meetings. Some open source, Kubernetes, and Red Hat products bootcamps were also sprinkled in there.

In between meetings and bootcamps I've been going over how much of the coloring book to get done so I can finish as much as possible by the end of June, and I've attached below one of the pages I completed this week.

<figure class=" sqs-block-image-figure intrinsic "> 7th page.png </figure>

This summer I'm working on some illustrations for the Red Hat Research Team Podcast, as well as a comic strip to explain open source licensing to creatives (to help make the case for them to open license their work). I'm also working on the Element Matrix wallpaper, which I've been trying to reformat so the login bubble that will go in the middle won't cover too much of the design.

<figure class=" sqs-block-image-figure intrinsic "> Wallpaper.png </figure>

I’m attending design team meetings as well as Open Studio team meetings and finally getting in the feel of being back full time! Happy new month and Happy Pride!

-Madeline

<figure class=" sqs-block-image-figure intrinsic "> Logo image.png </figure>

Showcase at CNS*2021

Posted by The NeuroFedora Blog on June 03, 2021 07:40 AM
Photo by Greg Rosenke on Unsplash

Photo by Greg Rosenke on Unsplash.


Join us for a NeuroFedora showcase at the 30th Annual meeting of the Organization for Computational Neuroscience (OCNS) on July 3, 2021. You can register for CNS*2021 here. The showcase will also be recorded for later viewing. The description of the showcase is below:


Open Neuroscience is heavily dependent on the availability of Free/Open Source Software (FOSS) tools that support the modern scientific process. While more and more tools are now being developed using FOSS driven methods to ensure free (free to use, study, modify, and share---and so also free of cost) access to all, the complexity of these domain specific tools makes their uptake by the multi-disciplinary neuroscience target audience non-trivial.

The NeuroFedora community initiative aims to make it easier for all to use neuroscience software tools. Using the resources of the FOSS Fedora community, NeuroFedora volunteers identify, package, test, document, and disseminate neuroscience software for easy usage on the general purpose FOSS Fedora Linux Operating System (OS). As a result, users can easily install a myriad of software tools in only two steps: install any flavour of the Fedora OS; install the required tools using the in-built package manager.

To make common computational neuroscience tools even more accessible, NeuroFedora now provides an OS image that is ready to download and use. Users can obtain the CompNeuroFedora OS image from the community website at https://labs.fedoraproject.org/ . They can either install it, or run it “live” from the installation image. The software showcase will introduce the audience to the NeuroFedora community initiative. It will demonstrate the CompNeuroFedora installation image and the plethora of software tools for computational neuroscience that it includes. It will also give the audience a quick overview of how the NeuroFedora community functions and how they may contribute.

User documentation for NeuroFedora can be found at https://neuro.fedoraproject.org

The reMarkable 2 needs reFinement: Writing, workflow and usability

Posted by Joe Brockmeier on June 02, 2021 10:45 PM

I’ve been putting the reMarkable 2 through its paces since I got it a few days ago. In this post I’m going to jot down...

The post The reMarkable 2 needs reFinement: Writing, workflow and usability appeared first on Dissociated Press.

GTK4 LibreOffice Port: Print Dialog

Posted by Caolán McNamara on June 02, 2021 07:04 PM

 

LibreOffice's Print Dialog in GTK4 Port with fancy "suggested-action" blue "Print" button.

Help make Fedora awesome by taking the first Annual Contributor Survey!

Posted by Fedora Community Blog on June 02, 2021 01:03 PM
Contributor Survey Link

The Fedora Council is running the first Annual Fedora Contributor Survey and we want to hear from you! The survey will be open to take for the month of June, and there is a shiny Fedora Badge to earn. Our goal is to gather authentic and valuable feedback to better support the Fedora contributor community. We plan to analyze the results and share findings at Nest with Fedora, 2021. Take the Annual Fedora Contributor Survey today!

The survey was proposed and developed by Council member, Aleksandra Fedorova, with support from Marie Nordin (promotion & feedback coordination) and Vipul Siddharth (LimeSurvey wrangler). The Council as well as the Mindshare Committee gave input and feedback on the survey several times as it was being developed. The Community Outreach Revamp Objective team also pitched in on the Community Engagement section of the survey. The development of a yearly survey falls under the Revamp’s activities and we want to keep survey fatigue at a minimum so it made sense to tie these two initiatives together.

The post Help make Fedora awesome by taking the first Annual Contributor Survey! appeared first on Fedora Community Blog.

Producing a trustworthy x86-based Linux appliance

Posted by Matthew Garrett on June 02, 2021 04:21 AM
Let's say you're building some form of appliance on top of general purpose x86 hardware. You want to be able to verify the software it's running hasn't been tampered with. What's the best approach with existing technology?

Let's split this into two separate problems. The first is to do as much as we can to ensure that the software can't be modified without our consent[1]. This requires that each component in the boot chain verify that the next component is legitimate. We call the first component in this chain the root of trust, and in the x86 world this is the system firmware[2]. This firmware is responsible for verifying the bootloader, and the easiest way to do this on x86 is to use UEFI Secure Boot. In this setup the firmware contains a set of trusted signing certificates and will only boot executables with a chain of trust to one of these certificates. Switching the system into setup mode from the firmware menu will allow you to remove the existing keys and install new ones.

(Note: You shouldn't use the trusted certificate directly for signing bootloaders - instead, the trusted certificate should be used to sign another certificate and the key for that certificate used to sign your bootloader. This way, if you ever need to revoke the signing certificate, you can simply sign a new one with the trusted parent and push out a revocation update instead of having to provision new keys)
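As a rough sketch of that hierarchy with openssl (names and lifetimes are purely illustrative): a long-lived root certificate that gets enrolled as the trusted certificate, and a shorter-lived signing certificate issued by it that actually signs boot images.

# self-signed root that will be enrolled as the trusted certificate
openssl req -new -x509 -newkey rsa:2048 -nodes -days 3650 \
    -subj "/CN=Appliance Secure Boot Root/" -keyout root.key -out root.crt
# intermediate signing certificate, issued by the root
openssl req -new -newkey rsa:2048 -nodes \
    -subj "/CN=Appliance Image Signing 2021/" -keyout signing.key -out signing.csr
openssl x509 -req -days 365 -in signing.csr \
    -CA root.crt -CAkey root.key -CAcreateserial -out signing.crt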

But what do you want to sign? In the general purpose Linux world, we use an intermediate bootloader called Shim to bridge from the Microsoft signing authority to a distribution one. Shim then verifies the signature on grub, and grub in turn verifies the signature on the kernel. This is a large body of code that exists because of the use cases that general purpose distributions need to support - primarily, booting on arbitrary off the shelf hardware, and allowing arbitrary and complicated boot setups. This is unnecessary in the appliance case, where the hardware target can be well defined, where there's no need for interoperability with the Microsoft signing authority, and where the boot configuration can be extremely static.

We can skip all of this complexity using systemd-boot's unified Linux image support. This has the format described here, but the short version is that it's simply a kernel and initramfs linked into a small EFI executable that will run them. Instructions for generating such an image are here, and if you follow them you'll end up with a single static image that can be directly executed by the firmware. Signing this avoids dealing with a whole host of problems associated with relying on shim and grub, but note that you'll be embedding the initramfs as well. Again, this should be fine for appliance use-cases, but you'll need your build system to support building the initramfs at image creation time rather than relying on it being generated on the host.
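A rough sketch of what the linked instructions boil down to, using objcopy from binutils and sbsign from sbsigntools; the file names and section offsets here are illustrative, not canonical:

objcopy \
    --add-section .osrel=/etc/os-release      --change-section-vma .osrel=0x20000 \
    --add-section .cmdline=kernel-cmdline.txt --change-section-vma .cmdline=0x30000 \
    --add-section .linux=vmlinuz              --change-section-vma .linux=0x2000000 \
    --add-section .initrd=initramfs.img       --change-section-vma .initrd=0x3000000 \
    /usr/lib/systemd/boot/efi/linuxx64.efi.stub appliance.efi
# sign the resulting image with your signing key and certificate
sbsign --key signing.key --cert signing.crt --output appliance-signed.efi appliance.efi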

At this point we have a single image that can be verified by the firmware and will get us to the point of a running kernel and initramfs. Unless you've got enough RAM that you can put your entire workload in the initramfs, you're going to want a filesystem as well, and you're going to want to verify that that filesystem hasn't been tampered with. The easiest approach to this is to use dm-verity, a device-mapper layer that uses a hash tree to verify that the filesystem contents haven't been modified. The kernel needs to know what the root hash is, so this can either be embedded into your initramfs image or into the kernel command line. Either way, it'll end up in the signed boot image, so nobody will be able to tamper with it.
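A minimal sketch of doing this by hand with veritysetup (part of cryptsetup); the device names are examples, and in the appliance the root hash would come from the signed boot image rather than being pasted in manually:

veritysetup format /dev/sda2 /dev/sda3
# the command above prints the root hash; record it for the kernel command line or initramfs
veritysetup open /dev/sda2 verified-root /dev/sda3 <root-hash>
mount -o ro /dev/mapper/verified-root /mnt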

It's important to note that a dm-verity partition is read-only - the kernel doesn't have the cryptographic secret that would be required to generate a new hash tree if the partition is modified. So if you require the ability to write data or logs anywhere, you'll need to add a new partition for that. If this partition is unencrypted, an attacker with access to the device will be able to put whatever they want on there. You should treat any data you read from there as untrusted, and ensure that it's validated before use (ie, don't just feed it to a random parser written in C and expect that everything's going to be ok). On the other hand, if it's encrypted, remember that you can't just put the encryption key in the boot image - an attacker with access to the device is going to be able to dump that and extract it. You'll probably want to use a TPM-sealed encryption secret, which will be discussed later on.

At this point everything in the boot process is cryptographically verified, and so should be difficult to tamper with. Unfortunately this isn't really sufficient - on x86 systems there's typically no verification of the integrity of the secure boot database. An attacker with physical access to the system could attach a programmer directly to the firmware flash and rewrite the secure boot database to include keys they control. They could then replace the boot image with one that they've signed, and the machine would happily boot code that the attacker controlled. We need to be able to demonstrate that the system booted using the correct secure boot keys, and the only way we can do that is to use the TPM.

I wrote an introduction to TPMs a while back. The important thing to know here is that the TPM contains a set of Platform Configuration Registers that are large enough to contain a cryptographic hash. During boot, each component of the boot process will generate a "measurement" of other security critical components, including the next component to be booted. These measurements are a representation of the data in question - they may simply be a hash of the object being measured, or the hash of a structure containing various pieces of metadata. Each measurement is passed to the TPM, along with the PCR it should be measured into. The TPM takes the new measurement, appends it to the existing value, and then stores the hash of this concatenated data in the PCR. This means that the final PCR value depends not only on the measurement, but also on every previous measurement. Without breaking the hash algorithm, there's no way to set the PCR to an arbitrary value. The hash values and some associated data are stored in a log that's kept in system RAM, which we'll come back to later.
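Conceptually the extend operation is new_pcr = hash(old_pcr || measurement). With tpm2-tools (version 4 or later) you can inspect the current values, for example:

tpm2_pcrread sha256:0,7    # dump PCRs 0 and 7 from the SHA-256 bank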

Different PCRs store different pieces of information, but the one that's most interesting to us is PCR 7. Its use is documented in the TCG PC Client Platform Firmware Profile (section 3.3.4.8), but the short version is that the firmware will measure the secure boot keys that are used to boot the system. If the secure boot keys are altered (such as by an attacker flashing new ones), the PCR 7 value will change.

What can we do with this? There's a couple of choices. For devices that are online, we can perform remote attestation, a process where the device can provide a signed copy of the PCR values to another system. If the system also provides a copy of the TPM event log, the individual events in the log can be replayed in the same way that the TPM would use to calculate the PCR values, and then compared to the actual PCR values. If they match, that implies that the log values are correct, and we can then analyse individual log entries to make assumptions about system state. If a device has been tampered with, the PCR 7 values and associated log entries won't match the expected values, and we can detect the tampering.

If a device is offline, or if there's a need to permit local verification of the device state, we still have options. First, we can perform remote attestation to a local device. I demonstrated doing this over Bluetooth at LCA back in 2020. Alternatively, we can take advantage of other TPM features. TPMs can be configured to store secrets or keys in a way that renders them inaccessible unless a chosen set of PCRs have specific values. This is used in tpm2-totp, which uses a secret stored in the TPM to generate a TOTP value. If the same secret is enrolled in any standard TOTP app, the value generated by the machine can be compared to the value in the app. If they match, the PCR values the secret was sealed to are unmodified. If they don't, or if no numbers are generated at all, that demonstrates that PCR 7 is no longer the same value, and that the system has been tampered with.

Unfortunately, TOTP requires that both sides have possession of the same secret. This is fine when a user is making that association themselves, but works less well if you need some way to ship the secret on a machine and then separately ship the secret to a user. If the user can simply download the secret via some API, so can an attacker. If an attacker has the secret, they can modify the secure boot database and re-seal the secret to the new PCR 7 value. That means having to add some form of authentication, along with a strong binding of machine serial number to a user (in order to avoid someone with valid credentials simply downloading all the secrets).

Instead, we probably want some mechanism that uses asymmetric cryptography. A keypair can be generated on the TPM, which will refuse to release an unencrypted copy of the private key. The public key, however, can be exported and stored. If it's acceptable for a verification app to connect to the internet then the public key can simply be obtained that way - if not, a certificate can be issued to the key, and this exposed to the verifier via a QR code. The app then verifies that the certificate is signed by the vendor, and if so extracts the public key from that. The private key can have an associated policy that only permits its use when PCR 7 has an appropriate value, so the app then generates a nonce and asks the user to type that into the device. The device generates a signature over that nonce and displays that as a QR code. The app verifies the signature matches, and can then assert that PCR 7 has the expected value.
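A sketch of how such a key could be created with tpm2-tools, binding its use to the current PCR 7 value; actually using the key later additionally requires satisfying the policy in a policy session, which is omitted here:

tpm2_createpolicy --policy-pcr -l sha256:7 -L pcr7.policy
tpm2_createprimary -C o -c primary.ctx
tpm2_create -C primary.ctx -G ecc -L pcr7.policy -u key.pub -r key.priv
tpm2_load -C primary.ctx -u key.pub -r key.priv -c key.ctx
tpm2_readpublic -c key.ctx -f pem -o key-public.pem   # export the public half for the verifier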

Once we can assert that PCR 7 has the expected value, we can assert that the system booted something signed by us and thus infer that the rest of the boot chain is also secure. But this is still dependent on the TPM obtaining trustworthy information, and unfortunately the bus that the TPM sits on isn't really terribly secure (TPM Genie is an example of an interposer for i2c-connected TPMs, but there's no reason an LPC one can't be constructed to attack the sort usually used on PCs). TPMs do support encrypted communication channels, but bootstrapping those isn't straightforward without firmware support. The easiest way around this is to make use of a firmware-based TPM, where the TPM is implemented in software running on an ancillary controller. Intel's solution is part of their Platform Trust Technology and runs on the Management Engine; AMD runs it on the Platform Security Processor. In both cases it's not terribly feasible to intercept the communications, so we avoid this attack. The downside is that we're then placing more trust in components that are running much more code than a TPM would and which have a correspondingly larger attack surface. Which is preferable is going to depend on your threat model.

Most of this should be achievable using Yocto, which now has support for dm-verity built in. It's almost certainly going to be easier using this than trying to base on top of a general purpose distribution. I'd love to see this become a largely push button receive secure image process, so might take a go at that if I have some free time in the near future.

[1] Obviously technologies that can be used to ensure nobody other than me is able to modify the software on devices I own can also be used to ensure that nobody other than the manufacturer is able to modify the software on devices that they sell to third parties. There's no real technological solution to this problem, but we shouldn't allow the fact that a technology can be used in ways that are hostile to user freedom to cause us to reject that technology outright.
[2] This is slightly complicated due to the interactions with the Management Engine (on Intel) or the Platform Security Processor (on AMD). Here's a good writeup on the Intel side of things.


F34-20210601 updated lives isos released

Posted by Ben Williams on June 01, 2021 07:16 PM


The Fedora Respins SIG is pleased to announce the latest release of Updated F34-20210601-Live ISOs, carrying the 5.12.8-300 kernel.

This set of updated ISOs will save a considerable amount of updates after install for new installs. (New installs of Workstation save about 1GB of updates.)

A huge thank you goes out to IRC nicks andi89g, barryjgriffin, dbristow, geraldosimiao, Southern-Gentleman, thunderbirdtr, and vdamewood for testing these ISOs.

And as always our isos can be found at http://tinyurl.com/Live-respins2

Connecting to Libera.Chat through Matrix

Posted by Kamil Páral on June 01, 2021 12:15 PM

After the last IRC changes, some of the Matrix->IRC bridges got disconnected, some rerouted (to Libera.Chat), and everything is work in progress. I’ve been a Matrix user for the past few months, and I definitely don’t want to go back to IRC. But in order to stay connected to the Fedora community, some steps were needed. Here’s a blog post to help me remember the necessary steps, in case I need it again in the future.

Note: Ideally, I wouldn’t need to interact with Libera.Chat in any way, and all important Fedora Matrix rooms would be bridged to IRC. However, that’s not the case at the moment (they are working on it). Also, some IRC rooms require registration, otherwise you can’t talk to them. It is unclear whether some solution is implemented to allow Matrix users to speak in such a room without Libera.Chat registration. So I had to give up and create a Libera.Chat account and set up services to identify me on that network. This guide includes the necessary steps. Hopefully it can be avoided in the future.

This guide will make all necessary steps from your Matrix account. No IRC client is needed.

First, join some bridged room in your Matrix client; #fedora-devel:matrix.org is a popular choice. This should create a connection to Libera.Chat as well, because of the bridge.

Second, create a discussion with a bot named @appservice:libera.chat. That’s your IRC admin room. Type !help for a list of commands.

Type !nick to see your current Libera.Chat nick. Mine was kparal[m]. I changed it to kparal using the same command:

> !nick

Format: '!nick DesiredNick' or '!nick irc.server.name DesiredNick'
Currently connected to IRC networks:
irc.libera.chat as kparal[m]

> !nick kparal

Nick changed from 'kparal[m]' to 'kparal'.

> !nick

Format: '!nick DesiredNick' or '!nick irc.server.name DesiredNick'
Currently connected to IRC networks:
irc.libera.chat as kparal

Now, type !listrooms to list all IRC rooms you’re currently connected to, including where the bridge points to. You should at least see the room you joined originally. If you are connected to a room which is not listed here, it means it is not bridged to Libera.Chat. My example:

> !listrooms

You are joined to 4 rooms:

#fedora-admin which is bridged to Fedora Infrastructure Team, !jaUhEeJGegYfphMOke:libera.chat
#fedora-workstation which is bridged to Fedora Workstation
#fedora-devel which is bridged to Fedora Devel, !OiUqPxkucYgjgQVNoR:libera.chat
#fedora-qa which is bridged to #fedora-qa

Now you have to register your username on Libera.Chat. In your Matrix client, create a discussion with a bot named @NickServ:libera.chat. That’s an account service bot. Type help to receive some basic help.

If you type info, you’ll probably receive a message that you’re not registered:

> info

kparal is not registered.

Now pick a password and your email address and register:

> register your-password your@email

An email containing nickname activation instructions has been sent to your@email.

Check your email for a verification code, then type it in (and wait, this took a few minutes in my case):

> verify register your-nick verification-code

your-nick has now been verified.

Type info, this time you should receive lots of information about your account. You can also use status or acc (the right return value should be 3):

> info

Information on kparal (account kparal):
...

> status

You are logged in as kparal.

> acc kparal

kparal ACC 3

OK, it’s now time to return back to @appservice:libera.chat and set up automatic identification (“logging in”) for Libera.Chat, any time you re-join the IRC network. Store your username and password with the appservice:

> !username your-nick

Successfully stored username for irc.libera.chat. Use !reconnect to use this username now.

> !storepass your-password

Successfully stored password for irc.libera.chat. Use !reconnect to use this password now.

Now test it by reconnecting to Libera.Chat and checking your nick and rooms:

> !reconnect

Reconnecting to network...

> !nick

Format: '!nick DesiredNick' or '!nick irc.server.name DesiredNick'
Currently connected to IRC networks:
irc.libera.chat as kparal

> !listrooms

You are joined to 4 rooms:

#fedora-admin which is bridged to Fedora Infrastructure Team, !jaUhEeJGegYfphMOke:libera.chat
#fedora-workstation which is bridged to Fedora Workstation
#fedora-devel which is bridged to Fedora Devel, !OiUqPxkucYgjgQVNoR:libera.chat
#fedora-qa which is bridged to #fedora-qa

Everything seems to be working now, hopefully.

Remember, you can join Matrix-native rooms by searching for them in your client, and check whether they’re bridged using !listrooms. If you need to join a non-bridged IRC room, you can join it by entering #room-name:libera.chat room.

I hope this helped somebody (or future me). The user experience is likely to improve in the future.

Syslog-ng updated in OpenBSD ports

Posted by Peter Czanik on June 01, 2021 09:54 AM

Recently I have found that the number of syslog-ng users on OpenBSD is growing, even with an ancient syslog-ng version in OpenBSD ports that is unable to collect local log messages. Then I remembered that Todd Miller – maintainer of sudo, and my colleague at One Identity – is also an OpenBSD user and developer. I asked him for a little help, which turned out to be quite a lot in the end, but syslog-ng is now updated to the latest version in OpenBSD ports!

Before you begin

Note: the OpenBSD project recommends the use of ready to use packages built from ports instead of using ports directly. Version 6.9 of OpenBSD comes with syslog-ng version 3.12. Version 3.32 of syslog-ng is now in the -CURRENT branch of OpenBSD ports. The next OpenBSD release will already feature an up-to-date syslog-ng package.

In the previous paragraph I tried to discourage you from compiling the latest syslog-ng from ports. If you want just a basic syslog server collecting remote logs, you can safely stay with the old version; you will get the latest version once a new OpenBSD release is available. Instructions for installing the syslog-ng package are available at: https://www.syslog-ng.com/community/b/blog/posts/syslog-ng-on-bsds

If you would like to know what changed in syslog-ng and in the syslog-ng OpenBSD port and what is still expected to come, read on.

What changed?

This was not simply a version update from 3.12 to 3.32. Many things have changed in syslog-ng and in the port as well:

  • The new openbsd() source can collect local OpenBSD log messages through the new sendsyslog() function of OpenBSD. It means that syslog-ng can replace syslogd from the base system.

  • Syslog-ng needed many patches to compile on OpenBSD. They are now merged upstream and a few additional problems are fixed.

  • Many bugs were fixed in syslog-ng, performance was improved, and many new parsers, source and destination drivers were added to syslog-ng.

  • The port now includes the syslog-ng configuration library (SCL), a growing collection of configuration snippets enabling finding and overwriting credit-card numbers in logs, sending logs to Elasticsearch, Splunk and various cloud services and parsers for logs coming from sudo and various networking devices.

  • Many syslog-ng destinations require syslog-ng to be linked with various client libraries. Trying to keep a careful balance, the most popular ones are now enabled in ports, including http() support through curl.

What is expected to come?

The current state has been a huge step forward already, but there is even more to come:

  • SCL is now available, but not enabled in syslog-ng.conf (you can do it yourself: copy scl.conf from /usr/local/share/examples/syslog-ng/ to /etc/syslog-ng/ and add @include “scl.conf” close to the beginning of syslog-ng.conf; see the commands after this list)

  • once SCL is enabled, replacing the openbsd() source with system() turns on automatic message parsing for sudo and other log messages

  • at least semi-regular syslog-ng updates in ports
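For reference, the SCL step from the list above boils down to something like this (paths as shipped by the port):

cp /usr/local/share/examples/syslog-ng/scl.conf /etc/syslog-ng/
# then add this line near the beginning of /etc/syslog-ng/syslog-ng.conf:
#   @include "scl.conf"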

Compiling syslog-ng from ports

According to the OpenBSD documentation, it is not recommended unless you know what you are doing. I was able to do this after booting OpenBSD for the second time; however, I only had syslog-ng and dependencies on the test machine, so your mileage may vary.

First of all, you have to update ports to the latest snapshot of the -CURRENT branch. It is described on the OpenBSD AnonCVS page at https://www.openbsd.org/anoncvs.html

The syslog-ng port itself resides in the /usr/ports/sysutils/syslog-ng directory. In an ideal case, a single make command is enough to compile syslog-ng and its dependencies. In practice, the build complained about version problems. I had to delete all pre-built packages and recompile all dependencies. Along the way the build failed a couple of times, missing various build-time dependencies. Once I had built and installed them, I could successfully build syslog-ng as well.
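
For reference, the basic steps look roughly like this, assuming an up-to-date -CURRENT ports tree (as noted above, your build may still stop on missing dependencies):

cd /usr/ports/sysutils/syslog-ng
make            # build syslog-ng and its dependencies
make install    # install the result (run as root, or via doas)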

Using an OpenBSD snapshot

Another option is to use an OpenBSD snapshot, built between official releases. They are not for production use, but for development and testing. However, they also include packages built from the latest ports, so you can skip building syslog-ng on your own and use prebuilt packages. You can learn more about this at https://www.openbsd.org/faq/current.html
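
On a snapshot system, installing the prebuilt package should be enough. A hypothetical example, assuming your package mirror is already configured:

pkg_add syslog-ng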

What is next?

Congratulations! You are now ready to use the latest syslog-ng on OpenBSD. You can help us by reporting any problems you encounter while using the syslog-ng port at https://github.com/syslog-ng/syslog-ng/issues

-

If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or even chat with us. For a list of possibilities, check our GitHub page under the “Community” section at https://github.com/syslog-ng/syslog-ng. On Twitter, I am available as @Pczanik.

I am selling the pre-release of my book

Posted by Josef Strzibny on June 01, 2021 12:00 AM

This was a long time coming. 2.5+ years in the making, still unfinished, but it had to go out!

I started my book Deployment from Scratch around October 2018 with a validation phase and five months of full-time work before going part-time so I'd have something to eat ;). Many months later, I am now again working more on the book than for clients and doubling down on finishing it.

I kept telling myself there was no point in selling an unpolished version of the product, but feedback from an early mailing list poll showed that enough people were interested in getting a beta sooner.

Still, I was waiting for a more solid version and at least 500 people on the mailing list.

Starting in April 2021, I began selling to a mailing list of 600+, made 20 sales over the first 24 hours, and with two more updates to the mailing list, sold 80 copies in the first month, which is slightly over $2,000 before Gumroad fees.

This wouldn't be possible without the mailing list and my blog, which is basically my only place to advertise it. There was no other announcement of any sort. I'll save that for the final polished version.

I also collected the first four 5-star ratings on Gumroad and a first lovely testimonial, which probably feels even better than money (since it’s not all that much).

I am not a big discount guy – I want people who bought it sooner to get the best price without waiting for a sale day. I will slowly increase the price towards the final version. Right now, it's $29 on Gumroad (a single price including everything).

Because a lot of developers probably think about writing a technical book one day, I will continue sharing my journey which you can follow here, on @strzibnyj and Indie Hackers.

Benchmarking programs with /usr/bin/time

Posted by Josef Strzibny on June 01, 2021 12:00 AM

If you have ever used time to measure a program's execution, you might want to know how to improve your results by running it with a higher process priority.

time vs /usr/bin/time

First of all, it's important to realize that running time and /usr/bin/time might not be the same thing. While you might expect time to run the first matching program on $PATH, it doesn't necessarily end up running /usr/bin/time:

$ time

real  0m0.000s
user  0m0.000s
sys 0m0.000s

$ /usr/bin/time
/usr/bin/time: missing program to run
Try '/usr/bin/time --help' for more information.

That is because time is also a Bash keyword (a shell reserved word), which can also time your program run, but lacks the detailed -v ("verbose") option, among other things.
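
You can check which time your shell will actually run; type -a lists every match in lookup order (output shown is illustrative, from Bash on a Fedora system):

$ type -a time
time is a shell keyword
time is /usr/bin/time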

So ideally, you want to make sure to benchmark programs as:

$ /usr/bin/time -v $PROGRAM

More accurate benchmarks

Because Linux tries to give all running processes access to the CPU, there will be a lot of context switching involved during your benchmark. chrt is a small program for manipulating the real-time scheduling attributes of a process, and it can help us combat this. With its -f option, we can switch to the FIFO real-time policy and give the process under benchmark the priority it needs. We prepend the whole time command with chrt -f 99:

$ sudo chrt -f 99 /usr/bin/time -v $PROGRAM

Now we get more accurate results when benchmarking with time.

Alternatives

The time program is not the only one we can use. A good alternative to /usr/bin/time is perf stat, for which we need the perf package (on Fedora):

$ sudo dnf install -y perf

Again, we should run it with chrt -f 99 and -ddd to get the most accurate and detailed report:

$ sudo chrt -f 99 perf stat -ddd $PROGRAM

perf stat can also show cache misses or instructions-per-cycle.
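
If you only care about a handful of counters, perf stat can also be limited to specific events. A sketch (the event names are common ones, but availability varies between CPUs):

$ sudo chrt -f 99 perf stat -e cycles,instructions,cache-references,cache-misses $PROGRAM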

Test-driving the reMarkable 2 on Linux: paper-like or paper-weight?

Posted by Joe Brockmeier on May 31, 2021 02:43 PM

Is the reMarkable 2 a suitable replacement for pen and paper? Does it work well with Linux? I hope to find out! I’ll be writing...

The post Test-driving the reMarkable 2 on Linux: paper-like or paper-weight? appeared first on Dissociated Press.

Checking SSL certificate expiration dates with OpenSSL

Posted by Fedora fans on May 31, 2021 01:41 PM
There are various tools and methods for checking the expiration date of SSL certificates. In this post, we will describe a few ways to check SSL certificate expiration dates using the OpenSSL tool.

First, make sure the openssl package is installed on your system. If it is not, you can install it by running the following command:

# dnf install openssl

Method 1. .crt files

To check .crt files, you can use this command:

$ cat mycert.crt | openssl x509 -noout -enddate

Note that you should replace mycert.crt with the path and name of your own file.

 

Method 2. .p12 files

To check .p12 files, you can use one of the following approaches:

$ openssl pkcs12 -in mycert.p12 -out mycert.pem -nodes
$ cat mycert.pem | openssl x509 -noout -enddate

Another way:

$ openssl pkcs12 -in mycert.p12 -nodes | openssl x509 -noout -enddate

 

Method 3. A website's certificate

If you want to check the SSL certificate that is currently installed on a website, simply run the following:

$ export SITE_URL="fedorafans.com"
$ export SITE_SSL_PORT="443"
$ openssl s_client -connect ${SITE_URL}:${SITE_SSL_PORT} -servername ${SITE_URL} 2> /dev/null | openssl x509 -noout -dates

Note that you should put the website address and the port you want in the first and second lines.
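
Beyond printing the dates, OpenSSL can also answer a simple yes/no question about upcoming expiry. This is an extra sketch, not one of the methods above, using the -checkend option (its argument is in seconds; 2592000 is roughly 30 days):

$ openssl x509 -checkend 2592000 -noout -in mycert.crt \
    && echo "Certificate is valid for at least 30 more days" \
    || echo "Certificate expires within 30 days"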

 

We hope you have found this post useful.

Always Fedora, always security.

 

The post Checking SSL certificate expiration dates with OpenSSL first appeared on Fedora Fans.

Outreachy Interns introduction – 2021 Summer

Posted by Fedora Community Blog on May 31, 2021 04:40 AM
outreachy logo

Recently, Outreachy announced the selected interns for the May 2021 to August 2021 round, and we have 4 interns with us. This blog introduces them to the community. If you see them around, please welcome them and share some virtual cookies.

Outreachy is a paid, remote internship program that helps traditionally underrepresented people in tech make their first contributions to Free and Open Source Software (FOSS) communities. Fedora is participating in this round of Outreachy as a mentoring organization. We asked our Outreachy interns to tell us some things about themselves!
Here they are, in their own words.

Dhairya Chaudhary (FAS Id: dhairya15)

Hey there! I’m Dhairya. I’m an undergraduate student majoring in Computer Science at IIIT Delhi. If we met in person you would be greeted by a lanky, short-haired girl exuding a ton of nervous energy. In my spare time I enjoy reading books, painting (frequently photoshopping myself into those paintings), and letting my imagination run wild.
Getting selected as an Outreachy intern is probably the most exciting thing that has ever happened to me. I enjoyed working so much that I didn’t even feel like I was working during the contribution period. Looking back, it’s wonderful how far my journey has taken me so soon. I used to hear a lot of conversations about open source and programs like Outreachy and GSoC in college, but I always thought I lacked knowledge and skills and found all of it rather intimidating. After the contribution period, I’ve come to love a lot of things I was scared of.
I’ve found the Fedora community to be a genuinely welcoming and friendly place. Needless to say, I’m glad I applied and I’m really happy to be here!

Josseline Perdomo (FAS Id: josseline)

I am Josseline Perdomo, a junior Software Engineer from Caracas, Venezuela 😊. Last year, during the quarantine, I had the chance to get started as a contributor in open-source communities. My journey became stronger during Hacktoberfest 2020, when I considered applying to an internship at Outreachy. Although I have previous experience as a user, I never expected to be part of Fedora. After 2 months as a contributor, I found in Fedora a community open to newcomers, open to answering questions, and ready to give any help to start as a contributor. I found my mentor, as well as other folks, very committed to their roles, very welcoming, and very involved in diversity in Open Source. Besides coding, I love listening to music, watching movies and series, doing some gardening 🌱, and writing.

Rafael Garcia Ruiz (FAS Id: razaloc)

I’m  a computer science student from Seville, Spain. Some years ago I graduated with a degree in art history and then I decided to study computer science. After looking around all projects from this Outreachy season I chose to contribute to rpm-ostree. I found an issue marked as friendly and I asked for it. What I didn’t expect was that the issue wasn’t as easy as it seemed at the beginning. But the project’s mentors divided the task into smaller ones and by the end of the contribution period I found myself capable of solving it all. It is a very rewarding experience to find out that, with some support, we are capable of overcoming those fears and accomplishing more than we thought. I’m thrilled to participate in this experience and so very grateful of how welcoming everyone has been at Fedora and Outreachy.

Manisha Kanyal (FAS Id: manishakanyal)

I'm Manisha Kanyal, a sophomore B.Tech Computer Science & Engineering student from India. The project for which I've been selected as an Outreachy intern is "Improve Fedora QA dashboard", and I'm enthusiastic and grateful for this opportunity. It's going to be a great learning experience for me under the guidance of Lukas and Jose. The contribution period was a good learning experience: I got to work on a real-life project which will be used by Fedora community veterans and newcomers.
The mentors of this project are amazing people; they always respond to the queries asked by contributors and help in any way they can. Contributing to the Fedora QA Landing Page specifically is a great opportunity for me, because it gives me exposure to a field that I'm learning and hope to make a career in.
Before applying to this project, I read many articles by previous Outreachy interns, and from those I knew I wanted to work on something that many people will use in real life. Looking forward to having a great summer.

Best wishes for their internship period and beyond!

We wish them all a successful journey as Outreachy interns and look forward to hearing about their experiences and project updates as we go.

The post Outreachy Interns introduction – 2021 Summer appeared first on Fedora Community Blog.

Episode 273 – Can we stop the coming artificial unintelligence deluge?

Posted by Josh Bressers on May 31, 2021 12:01 AM

Josh and Kurt talk about AI driven comments. We live in a world of massive confusion and disruption where what is true and false, real and fake, are often widely debated. As AI grows and evolves what does it mean for this future? We don’t really have any answers, but we ask a lot of questions. This isn’t easy, nor will it be solved quickly, but solving it is not optional.

https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_273_Can_we_stop_the_coming_artificial_unintelligence_deluge.mp3

Show Notes

Puma graceful restarts

Posted by Josef Strzibny on May 31, 2021 12:00 AM

How to configure the Puma 5 application server for graceful restarts, and what is the difference between regular, hot, and phased restarts?

Application restarts are necessary when things go wrong or whenever we need to push a new application version. But a regular restart isn’t usually anything more than stopping and starting the server again. To keep clients connected or even keep serving requests, we need a better strategy.

Graceful restarts

Because Puma 5 is a multi-process application server, it allows for a few strategies we can choose from when we run Puma in a multi-process mode (which we almost always should):

  • regular restarts: connections are lost, and new ones are established after booting the new application process
  • hot restarts: connections are not lost but remain to wait for the new application server workers to boot
  • phased restarts: current connections finish with old workers and new workers handle new ones

With hot restarts, Puma will try to finish current requests and restart itself with new workers, whereas with phased restarts, Puma will keep processing requests with the old workers until it can hand them over to the newly updated workers. Hot restarts will incur some extra latency for new requests while the process restarts. Phased restarts will take longer to finish. We call both graceful restarts because they try to finish requests gracefully.

While the hot restart works in a single mode (one process), to understand phased restarts, we also have to know how Puma forks worker processes in the cluster mode.

Forking is a Linux mechanism for creating new processes out of existing processes. When we start Puma (and, therefore, its main process), Puma is able to create separate workers that handle web requests from clients. It does this by calling the fork() system call and keeping track of these new workers. Whenever Puma receives a new request, it passes it to one of its workers for processing.

This is important to realize when we start talking about preloading the application with the preload_app! option. Preloading the application can keep memory usage down because all new workers can share virtual memory in the beginning, thanks to Linux copy-on-write, which avoids copying the parent process memory. This is not what we want when we are upgrading the application underneath, though.

Hot

To initiate a hot restart in single or cluster mode, send the SIGUSR2 signal to Puma's master process. Alternatively, run pumactl restart or request /restart if you have the Puma control server running.

$ kill -SIGUSR2 25197

You should take advantage of the on_restart hook to clean up everything before the restart takes place.

Phased

To initiate a phased restart in cluster mode, send the SIGUSR1 signal to Puma's master process. Alternatively, run pumactl phased-restart or request /phased-restart if you have the Puma control server running.

$ kill -SIGUSR1 25197

Phased restarts work with the preload_app! option but won't upgrade the application. To be able to upgrade the application, you cannot use preload_app! and have to run with the prune_bundler option instead. Either way, phased restarts won't upgrade Puma itself (or its dependencies).

The on_restart hook won’t run, but you can take advantage of directory settings to point to a new application directory (and keep the old path around).

Puma 5 introduces an experimental cluster-mode option that keeps copy-on-write functionality working with phased restarts and application upgrades. The fork_worker option (--fork-worker on the command line) forks additional workers from worker 0 instead of the Puma master process. The preload_app! option cannot be used with it (but it's not necessary anyway).
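
To make the options above concrete, here is a minimal, hypothetical config/puma.rb for cluster mode; which of preload_app!, prune_bundler, or fork_worker you keep depends on the restart strategy you choose:

# config/puma.rb -- illustrative sketch only
workers 2            # cluster mode with two worker processes
threads 1, 5         # 1 to 5 threads per worker

# pick the option that matches your restart strategy:
# preload_app!       # best copy-on-write sharing, but phased restarts cannot upgrade the app
prune_bundler        # needed if phased restarts should pick up new application code
# fork_worker        # experimental: fork workers from worker 0 instead of the master

on_restart do
  # close sockets, connections, and temp files before a hot restart
end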

Testing

We know what the different restarts are, their implications, and even how to initiate them. But what if we want to test that things really work as they should? To confirm that Puma behaves as we expect, we should hit the application with requests and request a restart in the middle. Keep Puma busy:

$ while true; do wget localhost:3000; done

And request a restart:

$ kill -SIGUSR2 25197

While you could spot that wget suddenly stops for a second and waits for the restart, it's hard to be sure that requests were not lost. To that end, it's better to use a load benchmarking tool like siege.

Run it without a restart:

$ siege -t5s -i http://0.0.0.0:3000
Lifting the server siege...
Transactions:           1792 hits
Availability:         100.00 %
Elapsed time:           4.53 secs
Data transferred:         0.07 MB
Response time:            0.05 secs
Transaction rate:       395.58 trans/sec
Throughput:           0.02 MB/sec
Concurrency:           18.83
Successful transactions:        1792
Failed transactions:             0
Longest transaction:          0.10
Shortest transaction:         0.03

And with a hot restart in the middle:

$ siege -t5s -i http://0.0.0.0:3000
Lifting the server siege...
Transactions:            418 hits
Availability:         100.00 %
Elapsed time:           4.90 secs
Data transferred:         0.02 MB
Response time:            0.05 secs
Transaction rate:        85.31 trans/sec
Throughput:           0.00 MB/sec
Concurrency:            4.13
Successful transactions:         418
Failed transactions:             0
Longest transaction:          0.28
Shortest transaction:         0.02

With siege, we can see that no requests were lost. They all finished, although we were slowed down and served only 418 requests.

Conclusion

There are various strategies to choose from when it comes to Puma restarts, each with its own implications. It's also worth noting that we can solve application upgrades at a higher level. We can keep connections open while Puma restarts with systemd socket activation, serve old and new applications side by side with Docker, or solve the entire upgrade at the load balancer level. Most applications will likely do fine with standard hot restarts.

GDPR - 3 years later

Posted by Fabio Alessandro Locati on May 31, 2021 12:00 AM
Three years have passed since the GDPR became binding law in the European Union. On the one hand, I'm happy that it has already been three years, but on the other hand, I'm impatient to see the GDPR fully applied.

Cookies

Cookies are always a hot topic when we talk about the GDPR. I still see websites handing out cookies (first- and third-party ones) without a cookie banner, or to users who have not pressed the "accept" button on the cookie banner.

New badge: Community Survey Taker I !

Posted by Fedora Badges on May 29, 2021 12:22 PM
Community Survey Taker I: You took a Fedora Community Survey!

Friday’s Fedora Facts: 2021-21

Posted by Fedora Community Blog on May 28, 2021 02:00 PM

Here’s your weekly Fedora report. Read what happened this week and what’s coming up. Your contributions are welcome (see the end of the post)! Elections voting is open through 3 June.

I have weekly office hours on Wednesdays in the morning and afternoon (US/Eastern time) in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else. See the upcoming meetings for more information.

Announcements

CfPs

Conference          Location   Date       CfP
DevConf.US          virtual    2-3 Sep    closes 31 May
Nest With Fedora    virtual    5-8 Aug    closes 16 July

Help wanted

Prioritized Bugs

Bug ID     Component            Status
1953675    kf5-akonadi-server   NEW

Upcoming meetings

Releases

Fedora Linux 35

Changes

Proposal                                                                       Type            Status
Broken RPATH will fail rpmbuild                                                System-Wide     Approved
Drop the "Allow SSH root login with password" option from the installer GUI   Self-Contained  Rejected
Replace SDL 1.2 with sdl12-compat using SDL 2.0                                Self-Contained  FESCo #2614
Make btrfs the default file system for Fedora Cloud                            System-Wide     Announced
Support using a GPT partition table in Kickstart                               System-Wide     Announced

Changes approved, rejected, or withdrawn will be removed from this table the next week. See the ChangeSet page for a full list of approved changes.

Contributing

Have something you want included? You can file an issue or submit a pull request in the fedora-pgm/pgm_communication repo.

The post Friday’s Fedora Facts: 2021-21 appeared first on Fedora Community Blog.

Tailwind is an interesting concept, but I am not convinced yet

Posted by Josef Strzibny on May 28, 2021 12:00 AM

Tailwind 2 is all the rage now. With a beautiful landing page, promising productivity, and thousands of people swearing by it, could Tailwind be the future of front-end design? I am still not convinced.

What is Tailwind? Tailwind belongs to the Tachyons school of thinking that preaches a utility-first approach to CSS. Whereas frameworks like Bootstrap and Bulma give you basic styling, pre-designed components, and utility classes, Tailwind gives you only the utility classes, which you can combine into components yourself with just HTML extraction.

There is a lot of praise published about Tailwind – and some criticism as well. I don't feel like repeating it. Rather, I will make this post about my personal experience. I will tell you why I avoided Tailwind, why I gave it a try, what my first experience was like, and my final thoughts.

Why didn’t I try out Tailwind sooner?

I am not a CSS guru, but I can write stylesheets for my use cases. I depended on frameworks like Bootstrap and Bulma for application development, or plain old vanilla CSS for prototyping and small sites. But above all, I am a developer who doesn't depend on a build system for his styles and JavaScript, and I haven't worked with fully separated components in my own work.

This brings me to the reason why I avoided Tailwind. I didn't want to depend on a build system to ship a few styles for a landing page. You can try Tailwind without one, but you cannot ship Tailwind in the same sense as shipping Bulma, due to its size. On top of that, I thought having a lot of classes was ugly and polluted your templates.

I completely understood that adding a small configuration to Webpack for Tailwind and keeping the long class names inside components is not a big deal if people use it together with React components. But building a plain Jekyll site without Webpack? Not as cool.

What brought me to try Tailwind in the end?

I decided to rework the Deployment from Scratch landing page. It's a small, scoped redesign of one page and thus ideal for trying new things. I was also thinking about trying Bridgetown, a Jekyll fork with Webpack integration out of the box. If anything didn't work as I wanted, I could be back to vanilla CSS in no time.

But why did I even want to try it? I was quite impressed with the design of the Tailwind v2 landing page and with how fast the design process can be, especially when it comes to responsiveness. Classes like sm:hidden looked like a nice productivity boost. I wanted to know whether what people say about Tailwind could be true, and whether the result could end up a little less messy despite the long class lists.
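
For readers who have not seen Tailwind markup, here is a hypothetical snippet of what a responsive utility like sm:hidden looks like in practice (the class names are Tailwind's, the element and content are made up):

<!-- visible by default, hidden from the "sm" breakpoint (640px) and up -->
<nav class="p-4 text-sm sm:hidden">Mobile-only menu</nav>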

How did it go?

I think that the start was pretty smooth, since Bridgetown has Webpack support built in. The main issue I had was reusing the parts of the landing page I had already written in CSS. It was a pain. I couldn't easily combine my existing CSS, which completely fails the "Tailwind for prototyping" use case. Yes, I know about @layer utilities {}, and I pushed through.

Then I had to make a security checklist for the book, and I thought hard about whether I should use Tailwind. I didn't. I skipped Tailwind because it was just one page again, and with Bulma, it could stay that way. After I finished the security checklist, I looked again at the mess of the new landing page I had made. And I didn't like where it was going. So I have a half-finished landing page and think I'll just start again with custom CSS or Bulma.

I started to write this post intending to publish it after the homepage redesign was done, but there is no point in waiting any longer. My experiment failed. As with every shiny new thing, it's good to sit down and evaluate it for yourself. Tailwind did not impress me, but I do not completely oppose it either. Maybe I should try again as part of the PETAL stack. If you are evaluating Tailwind, also read the existing criticism of Tailwind.