Fedora People

Virtualization with KVM on Fedora 32

Posted by Bernardo C. Hermitaño Atencio on May 27, 2020 02:18 AM

A great alternative to many virtualization programs is KVM, a Linux kernel module that is very easy to use and lets you run many virtualized operating systems.

KVM: Open source technology that turns the Linux kernel into a hypervisor that can be used for virtualization.
QEMU: A generic open source machine emulator and virtualizer.
Bridge: This package contains the bridge networking utilities, which let you connect two or more computers to the Internet when only one of them has the connection.
Libvirt: An open source API and tool for managing platform virtualization.

The following two videos show the process of installing and configuring KVM with qemu and libvirt, using a bridge-type adapter; a virtual machine is also imported.

Video: Installation and configuration of KVM, qemu, and libvirt with a bridge-type adapter: https://www.youtube.com/embed/9Rnt_zclGVU
Video: Importing a virtual machine to run on KVM: https://www.youtube.com/embed/D1H8itv2Uu0
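
For reference, the package installation shown in the videos boils down to something like the following on Fedora 32 (a sketch; the group and package names assume Fedora's standard virtualization group, and the videos cover the full bridge configuration):

# Install KVM/qemu/libvirt via the virtualization group, plus bridge utilities
sudo dnf install @virtualization bridge-utils
# Enable and start the libvirt daemon
sudo systemctl enable --now libvirtd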

GNOME Foundation Board of Directors: a Year in Review

Posted by Allan Day on May 26, 2020 02:49 PM

The 2020 elections for the GNOME Foundation Board of Directors are underway, so it’s a good time to look back over the past 12 months and see what the current board has been up to. This is intended as a general update for members of the GNOME project, as well as a potential motivator for those who might be interested in running in the election!

Who’s on the board?

This year, the board has been made up of Britt Yazel, Tristan Van Berkom, Philip Chimento, Rob McQueen, Carlos Soriano, Federico Mena Quintero, and myself.

Rob’s been president, I’ve been the vice-president and chair, Carlos has been treasurer, Philip has been secretary, and Federico has been vice-secretary.

In addition to these formal roles, each of our board members has brought their existing expertise and areas of interest: Britt has brought a focus on marketing and engagement, Federico has been our code of conduct expert, Rob has brought his knowledge of all things Flatpak and Flathub, Carlos knows all things GitLab, and Philip and Tristan have both been able to articulate the needs and interests of the GNOME developer community.

Meetings and general organisation

The board has been meeting for 1 hour a week, according to a schedule which we’ve been using for a little while: we have a monthly Executive Director’s report, a monthly working session, and standard meetings in the gaps in-between.

This year we made greater use of our Gitlab issue tracker for planning meeting agendas. A good portion of the issues there are private, but anyone can interact with the public ones.

Making the board into a board

Historically, the GNOME Foundation Board has performed a mix of different roles, some operational and some strategic. We’ve done everything from planning and approving events, to running fundraisers, to negotiating contracts.

Much of this work has been important and valuable, but it’s not really the kind of thing that a Board of Directors is supposed to do. In addition to basic legal responsibilities such as compliance, annual filings, etc., a Board of Directors is really supposed to focus on governance, oversight and long-term planning, and we have been making a concerted effort to shift to this type of role over the past couple of years.

This professionalising trend has continued over the past year, and we even had a special training session about it in January 2020, when we all met in Brussels. Concrete steps that we have taken in this direction include developing high-level goals for the organisation, and passing more operational duties over to our fantastic staff.

This work is already having benefits, and we are now performing a more effective scrutiny role. Over the next year, the goal is to bring this work to its logical conclusion, with a schedule for board meetings which better reflects the board’s high-level governance and oversight role. As part of this, the hope is that, when the new board is confirmed, we’ll switch from weekly to monthly meetings.

This is also the reason behind our change to the bylaws last year, which is taking effect for the first time in this election. As a result of this, directors will have a term of two years. This will provide more consistency from one year to the next, and will better enable the Foundation and staff to make long-term plans. There has been a concern that people would be unwilling to sit as a Director for a two-year period, but we have significantly reduced the time commitment required of board members, and hope that this will mitigate any concerns prospective candidates might have.

Notable events

The GNOME Foundation has had a lot going on over the last 12 months! Much of this has been “operational”, in the sense that the board has been consulted and has provided oversight, but hasn’t actually been doing the work. These things include hiring new staff, the coding education challenge that was recently launched, and the Rothschild patent case which was settled only last week.

In each case the board has been kept informed, has given its view and has had to give formal approval when necessary. However, the areas where we’ve been most actively working have, in some respects, been more prosaic. This includes things like:

Code of conduct. The board was involved with the review and drafting of the new GNOME code of conduct, which we subsequently unanimously approved in September 2019. We also set up the new Code of Conduct Committee, which is responsible for administering the code of conduct.

Linux App Summit 2019, which happened in Barcelona. This event took place thanks to the joint support of the GNOME Foundation and KDE e.V., and the board was active in drafting the agreement that allowed this joint support to take place.

Guidelines for committees. As the board takes a more strategic oversight role, we want our committees to be run and report more consistently (and to operate according to the bylaws), so we’ve created new guidelines.

2020 budget. The foundation has had a lot going on (the coding challenge, patent case, etc) and all of this impacted the budget, and made financial scrutiny particularly important.

GNOME software definition and “Circle” proposal. This is a board-led initiative which addresses long-standing confusion around which projects should be included within GNOME and make use of our infrastructure and branding, and whether the teams involved are eligible for Foundation membership. The initiative was announced on Discourse last week for initial community feedback.

Updated conference policy. This primarily involved passing responsibility for conference approvals to our staff, but we have also clarified the rules for conference bidding processes (see the policy page).

In addition to this, the board has been involved with its usual events and workload, including meeting with our advisory board, the AGM, and voting on any issues which require an OK from the board.

Phew.

2020 Elections

As I mentioned at the beginning of this post, the 2020 board elections are currently happening. Candidates have until Friday to announce their interest. As someone who has served on the board for a while, it’s definitely something that I’d recommend! If you’re interested and want more information, don’t hesitate to reach out. Or, if you’re feeling confident, just throw your hat in the ring.

Using Rust to access Internet over Tor via SOCKS proxy 🦀

Posted by Kushal Das on May 26, 2020 09:46 AM

Tor provides a SOCKS proxy so that any application can use it to connect to the onion network. The default port is 9050. The Tor Browser also provides the same service, on port 9150. In this post, we will see how we can use the same SOCKS proxy to access the Internet using Rust.

You can read my previous post to do the same using Python.
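
For comparison, here is a minimal Python sketch of the same idea (it assumes the requests library with its SOCKS extra, i.e. requests[socks], is installed; see the linked post for the full details):

import requests

# socks5h:// makes Tor resolve the hostname, so .onion addresses work too
proxies = {
    "http": "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",
}

res = requests.get("https://httpbin.org/get", proxies=proxies)
print(res.status_code)
print(res.json())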

Using reqwest and tokio-socks crates

I am using reqwest and tokio-socks crates in this example.

The Cargo.toml file.

[package]
name = "usetor"
version = "0.1.0"
authors = ["Kushal Das <mail@kushaldas.in>"]
edition = "2018"

# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html

[dependencies]
tokio = { version = "0.2", features = ["macros"] }
reqwest = { version = "0.10.4", features = ["socks", "json"] }
serde_json = "1.0.53"

The source code:

use reqwest;
use tokio;
use serde_json;

#[tokio::main]
async fn main() -> Result<(), reqwest::Error> {
    let proxy = reqwest::Proxy::all("socks5://127.0.0.1:9050").unwrap();
    let client = reqwest::Client::builder()
        .proxy(proxy)
        .build().unwrap();

    let res = client.get("https://httpbin.org/get").send().await?;
    println!("Status: {}", res.status());

    let text: serde_json::Value = res.json().await?;
    println!("{:#?}", text);

    Ok(())
}

Here we are converting the response data into JSON using serde_json. The output looks like this.

✦ ❯ cargo run
    Finished dev [unoptimized + debuginfo] target(s) in 0.06s
     Running `target/debug/usetor`
Status: 200 OK
Object({
    "args": Object({}),
    "headers": Object({
        "Accept": String(
            "*/*",
        ),
        "Host": String(
            "httpbin.org",
        ),
        "X-Amzn-Trace-Id": String(
            "Root=1-5ecc9e03-09dc572e6db5357f28eecf47",
        ),
    }),
    "origin": String(
        "51.255.45.144",
    ),
    "url": String(
        "https://httpbin.org/get",
    ),
})

Instead of a normal domain, you can also connect to any .onion address via the same proxy.
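
For example, pointing the same client at an onion service only means changing the URL (the address below is a placeholder):

// The SOCKS proxy also handles name resolution, so .onion
// addresses resolve inside the Tor network.
let res = client.get("http://youronionaddress.onion/").send().await?;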

Kiwi TCMS is partnering up with Pionir

Posted by Kiwi TCMS on May 26, 2020 09:40 AM

We are happy to announce that Kiwi TCMS is going to partner with Pionir on the development of open source hardware for testers! Pionir is a free school focused on creating a new generation of digital leaders, an exponential culture and solving challenges using technology. They are located in Kikinda, Serbia.

Pionir students

This is not our first collaboration - the students are already aware of the Kiwi TCMS project and last year they participated in a presentation & workshop hosted by Alex. Zamphyr, the organization behind Pionir, is also one of the first open source projects on our OSS program!

Black boxes for black-box testing

Black-box testing is a method of software testing that examines the functionality of the subject under test without peering into its internal structures or workings. It is often performed by manipulating the possible inputs and examining the resulting output. Experienced black-box testers often develop a hunch for where bugs may be, and it is not uncommon for them to discover obscure problems nobody else thought about. More often than not, the basis for this is an understanding of (and expectations about) how the SUT works, developed through careful exploration over many iterations. Thus being able to explore a SUT, observe its behavior, keep mental notes about possible relations between input, behavior and output, and analyze what is happening under the hood becomes an important skill for testers.

The idea of having something unknown to explore and train your skills on first comes from James Lyndsay’s Black Box Puzzles and was more recently implemented by Claudiu Draghia. Now it's our turn!

Project description

Pionir will be developing hardware black boxes for teaching exploratory testing in cooperation with Kiwi TCMS. We have dedicated €2000 from our bounty program for students of the free school towards completing this project.

The goal of the project is to produce at least 3 boxes and reference designs that will serve as a didactic tool for teaching, but also be free and open hardware, and as such, available to everyone to build from source.

This project will be entrusted to the students of the free school, who will get the opportunity to take part in the challenging process of building a digital appliance, from designing the machine logic to developing and prototyping the hardware.

The project includes designing, assembling, programming, documenting and delivering this hardware to us! Everything is expected to be open source: list of components, assembly instructions, 3D design files, source code, documentation and instructions! Our goal is that this will be relatively cheap and easy to build so everyone else can build their own boxes. During the next several months there will be new repositories created under https://github.com/kiwitcms to host the various boxes.

The black boxes are expected to be available in October 2020 - just in time for the upcoming conference season where members of the larger testing and open source communities will be able to practice with them!

Call for sponsors

We are also calling upon teams and organizations who use Kiwi TCMS in their testing workflows. Please consider making a one-time donation or becoming a regular sponsor via our Collective. You can contribute as little as €1! The entire budget will be distributed to the community!

Vote for Kiwi TCMS

Our website has been nominated in the 2020 .eu Web Awards and we've promised to do everything in our power to greet future FOSDEM visitors with open source billboard advertising at BRU airport. We need your help to do that!

Happy testing!

End of life for Fedora 30

Posted by Charles-Antoine Couret on May 26, 2020 06:00 AM

On Tuesday, 26 May 2020, Fedora 30 was declared end of life.

What does that mean?

One month after the release of Fedora version n, here Fedora 32, version n-2 (thus Fedora 30) is declared end of life.

This month gives users time to upgrade, which means a release is officially maintained for about 13 months on average.

End of life means that the release will receive no more updates and no more bug fixes. For security reasons, given the unpatched vulnerabilities, users of Fedora 30 and earlier are strongly advised to upgrade to Fedora 32 or 31.

What should you do?

If you are affected, you need to upgrade your systems. You can download more recent CD or USB images.

It is also possible to upgrade without reinstalling, via DNF or GNOME Software.
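
For reference, the DNF system-upgrade path looks like this (a sketch, with 32 as the target release):

sudo dnf upgrade --refresh
sudo dnf install dnf-plugin-system-upgrade
sudo dnf system-upgrade download --releasever=32
sudo dnf system-upgrade reboot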

GNOME Software should also have notified you via a pop-up that Fedora 31 or 32 is available. Feel free to launch the upgrade that way.

Working Remotely with FOSS tools

Posted by Harish Pillay 9v1hp on May 26, 2020 02:17 AM

These last few months have been really wonderful in enabling me to catch up and make sure that as much as possible of the technology I use to work online is indeed free and open source.

Here’s a table that lists all of the technology that I am using for the various tasks:

| Task | Laptop | Mobile |
|------|--------|--------|
| Operating System | Fedora 32 with Gnome | Android |
| Browser | Firefox, Chromium, Tor Browser, Chrome | Firefox Focus and Tor |
| Office tools | LibreOffice | |
| Email client | mutt | |
| VPN | OpenVPN, Tor VPN | Tor VPN |
| Video Conferencing | Jitsi (and on self managed server as well) | Jitsi |
| SIP Phone | Linphone | Linphone |
| Video Recorder | Open Broadcast Studio and vokoScreenNG | |
| Video Player | vlc | vlc |
| Drawing tool | Draw.io, drawpile, gimp, inkscape | |
| Layout tool | Scribus | |
| Image viewer | eog, shotwell | |
| Document viewer | evince and pdfmod | |
| Data analysis | Jupyter Notebook | |
| Messaging | IRC, Signal, Telegram | IRC, Signal, Telegram |
| Phone Screen Casting | | https://github.com/dkrivoruchko/ScreenStream |

FOSS Tools For Productivity

I mention above that I run my own Jitsi server. Here’s the network diagram of how it is deployed.

[Diagram: jitsi-network]

Thanks to Andrew, I’ve also added Audacity as the go-to for editing audio files. I’ve been using it for years and it has been stellar!

Episode 198 – Good advice or bad advice? Hang up, look up, and call back

Posted by Josh Bressers on May 25, 2020 12:54 AM

Josh and Kurt talk about the Krebs blog post titled “When in Doubt: Hang Up, Look Up, & Call Back”. In the world of security there isn’t a lot of actionable advice; it’s worth discussing whether something like this will work, or even if it’s the right way to handle these situations.

Audio: https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_198_-_Good_advice_or_bad_advice_Hang_up_look_up_and_call_back.mp3

Show Notes

Comment on Twitter with the #osspodcast hashtag



      What’s new in TeleIRC v2.0.0

      Posted by Justin W. Flory on May 24, 2020 04:13 PM
      RITlug/TeleIRC development update


      TeleIRC v2.0.0 is the latest major release of our open source Telegram <=> IRC bridge. Download the latest release and read the release announcement for the full story.

      There are several new and noteworthy changes in TeleIRC v2.0.0. This post walks you through the major changes and differences for TeleIRC v2.0.0. Read on for the highlight reel of this release.

      Full rewrite to Go

      TeleIRC v2.0.0 is a complete and total rewrite. With the lessons learned and best practices of the NodeJS v1.x.x releases under our belt, the team set out in September 2019 to rewrite TeleIRC in Go. The rewrite was motivated by fun and personal interest, but it was also intended to make the future of TeleIRC more sustainable.

      The rewrite makes TeleIRC simple, fast, and lightweight. TeleIRC differs from other chat bridge software, which usually focuses on extensive configuration and supporting many chat platforms.

      Additionally, the success criterion for release was feature parity with the v1.x.x releases. The team accomplished this almost completely, with one exception: TeleIRC v2.0.0 does not include Imgur image upload for IRC; however, a v2.1.0 feature release will include Imgur support.

      To summarize, TeleIRC v2.0.0 is written to be a simple and excellent Telegram <=> IRC bridge. No more, no less.

      Run TeleIRC v2.0.0 as a compiled binary

      The new release is available as a standalone 8 MB binary. The only deployment assets needed are the binary and a config file. Other pathways, including build from source and Ansible Roles, are also available.

      This is a departure from TeleIRC v1.x.x releases, which required a NodeJS run-time and installing project dependencies. TeleIRC v2.0.0 does not require a Go run-time on the host.

      Improved TeleIRC v2.0.0 documentation

      End user feedback shaped and improved documentation during development.

      Thanks to feedback collected during the pre-release process, the documentation is simplified and written to be easy to understand. We hope you find the TeleIRC Quick Start page a helpful introduction to getting TeleIRC running in little time.

      Future roadmap for containers

      Because of v2.0.0 design decisions, there is a planned future for container and container orchestration use cases. At release time, a Dockerfile is available, but it is not yet tested or documented.

      In future releases, the TeleIRC Team will continue to test the container image and iron out bugs. Future deployment assets and documentation will offer pathways to run TeleIRC in Kubernetes or OpenShift v4.x.x.


      Article format inspired by Ryan Lerch’s format for “What’s new in Fedora Workstation“.

      TeleIRC v2.0.0 is officially here!

      Posted by Justin W. Flory on May 24, 2020 03:29 PM
      RITlug/TeleIRC development update


      After almost eight months of work, the TeleIRC Team is happy to announce General Availability of TeleIRC v2.0.0 today. Thanks to the hard work of our volunteer community, we are celebrating an on-time release of a major undertaking to make a more sustainable future for TeleIRC.

      Download TeleIRC v2.0.0 now!

      If you want to skip the text and get to the software, head to the GitHub v2.0.0 release for more info. If you want the story behind this release, read on!

      Eight months later…

      The conversation started in a university hallway after the first RIT Linux Users Group meeting of the Fall 2019 semester. Together, Tim Zabel, Nicholas Jones, and Justin W. Flory set out to rewrite TeleIRC from NodeJS to Go. This was done to address a growing backlog of challenging feature requests on TeleIRC, but it was also a way for us to gain more experience working with Go. Along the way, we also ended up facilitating an agile-inspired software release process adapted for open source.

      So, what happened in the eight months after that first conversation? The team met for weekly meetings each Saturday afternoon (at first in person, later virtually), two new core contributors joined the team, and some drive-by contributors provided feedback and added code to the new release. There were charts, whiteboards and dry-erase markers, and lots of Blue Jeans video calls. But after all this time, we made it to release day!

      Thank you amazing volunteer contributors!

      This endeavor was a shared commitment by our volunteer committer team. All five of the volunteer core maintainers contributed patience and sustained effort over time, and in the end we made something really cool to show for this work.

      A huge thanks to our core maintainers and all current and past contributors to TeleIRC. You have all contributed to the success (and motivation!) for this project. It is fun to work on cool projects with friends!

      A proper shout-out goes to the core maintainers who joined the team over the last eight months working on this release:

      “I found a bug in TeleIRC v2.0.0!”

      If you run into a problem, check out the TeleIRC documentation and open an issue if it does not answer your questions.

      Get in touch!

      If you have questions, get in touch with the developer community. You can find us on Telegram and on IRC (#rit-lug-teleirc on chat.freenode.net).

      A few new generation command line tools

      Posted by Kushal Das on May 24, 2020 02:29 AM

      Many of us live on the terminal. We use tools which allow us to do things faster and help us stay productive. Most of these tools are old. Sometimes we do pick up a few new generation command line tools. Here is a small list of tools I am using daily. All of these are written in Rust.

      ripgrep

      ripgrep screenshot

      ripgrep was the first Rust tool I started using daily as a replacement for grep. It is easy to use, the output looks nice, and it also works with my vim.

      exa

      exa is the replacement for ls. It includes many useful flags.

      exa demo

      bat

      bat is the one-stop replacement for cat and less. It also provides syntax highlighting with nice colours. I do have an alias: cat='/usr/bin/bat -p'.

      bat demo

      zoxide

      zoxide lets you move around directories super fast.

      zoxide demo

      starship

      starship is the shell prompt you can see in all of the GIFs above. It allows a lot of customization.

      All of these tools are packaged in Fedora 32 by the amazing fedora-rust SIG.
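
      Since they are all packaged, installation on Fedora 32 is one dnf command away (assuming the package names match the upstream names):

      sudo dnf install ripgrep exa bat zoxide starship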

      Monitoring workstation with Prometheus

      Posted by Bhavin Gandhi on May 23, 2020 02:32 PM
      Prometheus is a monitoring system and a time series database. It can collect metrics from different places and store them as series of values over time. It uses a pull-based mechanism to collect the metrics. Applications can expose their metrics in a plain text format over an HTTP server, which is then fetched by Prometheus. Fetching of metrics is called scraping. For other systems which don’t expose metrics in the Prometheus exposition format, we can use exporters.
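
      As a minimal sketch, a scrape configuration in prometheus.yml might look like this (assuming a node_exporter running on its default port 9100; the job name is illustrative):

      # prometheus.yml (fragment)
      scrape_configs:
        - job_name: "node"                    # label for this group of targets
          scrape_interval: 15s                # how often Prometheus pulls metrics
          static_configs:
            - targets: ["localhost:9100"]     # node_exporter metrics endpoint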

      Foreman - A way to monitor and manage your servers

      Posted by Alvaro Castillo on May 23, 2020 10:22 AM

      Foreman is an open source platform for administering and managing both virtual and physical servers, which also helps you carry out automated tasks in a very interactive and easy-to-use way. In this installment we will see how to deploy and install Foreman on CentOS 8.

      Installing Foreman on CentOS 8

      Before proceeding, it is advisable to take a snapshot of the machine if it is virtual, make a backup if it is physical, or do both.

      Preliminary step

      Make a copy of the /etc/hosts file:

      # cp -va /etc/hosts /etc/hosts.instalando_foreman

      Edit the /etc/hosts file and add a line at the end referencing our server's hostname and IP, then save the file. NOTE: If you do not know the hostname, you can use the hostnamectl command; it returns a lot of information, but the only part you need is the line labelled "Static hostname". For the IP, use the ip addr command to identify which interface is connected and reachable on your internal network.

      NOTE 2: Use a lowercase hostname or you will get an error like this:

      The hostname contains a capital letter.
      
      This is not supported. Please modify the hostname to be all lowercase. If needed, change the hostname permanently via the
      'hostname' or 'hostnamectl set-hostname' command
      and editing the appropriate configuration file.
      (e.g. on Red Hat systems /etc/sysconfig/network,
      on Debian based systems /etc/hostname).
      
      If 'hostname -f' still returns an unexpected result, check /etc/hosts and put
      the hostname entry in the correct order, for example:
      
        1.2.3.4 hostname.example.com hostname
      
      The fully qualified hostname must be the first entry on the line
      Your system does not meet configuration criteria

      If you hit this problem, just apply these changes:

      # cp -va /etc/hostname /etc/hostname.cambio_hostname

      Set the hostname to the one you want:

      # hostnamectl set-hostname $(cat /etc/hostname)

      Update the host name you put in /etc/hosts so that it refers to your current host, log out of your server, log back in, and continue with the installation.

      Example:

      127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
      ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
      
      192.168.1.122 myhostname.lan

      This step is important; otherwise, when you get to the platform installation step you will see an error like this:

      Unable to resolve forward DNS for your system blabla
      Does not meet configuration criteria

      Installing the required software

      Import the Foreman GPG key:

      # rpm --import https://yum.theforeman.org/releases/2.1/RPM-GPG-KEY-foreman

      Install the required repositories:

      # dnf install -y https://yum.puppet.com/puppet-release-el-8.noarch.rpm 
      # dnf install -y https://yum.theforeman.org/releases/2.1/el8/x86_64/foreman-release.rpm
      # dnf install -y epel-release 

      Edit the repository configuration:

      # cp -va /etc/yum.repos.d/foreman.repo /etc/yum.repos.d/foreman.repo.cambiar_version
      # cp -va /etc/yum.repos.d/foreman-plugins.repo /etc/yum.repos.d/foreman-plugins.repo.cambiar_version
      # sed -ie 's/el7/el8/g' /etc/yum.repos.d/foreman.repo /etc/yum.repos.d/foreman-plugins.repo
      # sed -ie 's/2.0/2.1/g' /etc/yum.repos.d/foreman.repo /etc/yum.repos.d/foreman-plugins.repo

      NOTE: This has to be done because the Foreman community did not automate this process with variables in their repository...

      Install the Foreman software:

      # dnf check-update && dnf install -y foreman-installer

      Launch the platform installation:

      NOTE: This process will take more or less time depending on your physical hardware.

      # foreman-installer...

      Fedora program update: 2020-21

      Posted by Fedora Community Blog on May 22, 2020 06:51 PM
      Fedora Program Manager weekly report on Fedora Project development and progress

      Here’s your report of what has happened in Fedora this week. Fedora 30 will reach end-of-life on 26 May. I have weekly office hours in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else. Announcements Help wanted Upcoming meetings Releases CPE update Announcements Orphaned packages seeking […]


      Silverblue: pretty good family OS

      Posted by Jiri Eischmann on May 22, 2020 01:43 PM

      I’m the go-to IT guy in the family, so my relatives rely on me when it comes to computers and software on them. In the past I also helped them with computers with Windows and macOS, but at some point I just gave up. I don’t know those systems well enough to effectively administer them and I don’t even have much interest in them. So I asked them to decide: you either use Linux which I know and can effectively help you with or ask someone else for help.

      Long story short: I (mostly remotely) support quite a few Fedora (Linux of my choice) users in my family now. It’s a fairly easy task. Usually after I set up the machine I don’t hear from the user very often. Just once every 6 months to a year, typically when I visit them, I upgrade the machine to the new release and check whether everything works. But Fedora upgrades have become so easy and reliable that recently I usually just found out that they had already done it by themselves.

      But there was still one recurring problem: even though they performed upgrades (probably a big enough event to catch their attention), they didn’t act on normal updates, and I often found them with outdated applications such as Firefox.

      I could set up automated DNF updates running in the background, but it’s really not the safest option. And that’s where Fedora Silverblue comes to the rescue. Applications run as flatpaks, which can be (and by default are) updated automatically in the background. And it’s pretty safe, because the updates are atomic and the app is not affected until you restart it.

      The same goes for system updates. rpm-ostree can prepare the update in the background and the user switches to it once the computer is restarted.

      So I thought: the user base of Silverblue typically consists of developers and power users used to the container-based workflow, but hey, it could actually be a pretty good system for the users I support in my family.

      I got an opportunity to try it out some time ago. I visited my mom and decided to upgrade her laptop to Fedora 32. Everything would have gone well if my son hadn’t pulled the power cord out during the process. The laptop is old and has a dead battery, so it resulted in an immediate shutdown. And that’s never good during a system upgrade. Instead of manually fixing broken packages which is a lengthy process I decided to install Silverblue.

      The fruits of it came a week later. My mom called me to say she was experiencing graphical glitches and hangs with Fedora 32. Probably some regression in the drivers/mesa. It’s a T400s from 2009, and I suppose neither Intel nor anyone else does thorough regression testing on such old models. On the standard Fedora Workstation my mom would have been screwed, because there is no easy way back.

      But it’s a different story on Silverblue. I just sent her one command over Telegram:

      rpm-ostree rebase fedora/31/x86_64/silverblue

      She copy-pasted it to Terminal, pressed Enter and 5 minutes later she was booting into Fedora 31.

      And the best thing about it is not that it’s so easy to upgrade and rollback in Silverblue, but that the apps are not affected by that. I know that if I let my mom rollback to Fedora 31 she will find her applications there just like she left them in Fedora 32. The same versions, same settings…

      P.S. my mom’s laptop is from 2009, but Fedora Workstation/Silverblue flies on it. Who would have thought that GNOME Shell animations could be smooth on an 11-year-old laptop. Kudos to everyone who helped to land all the performance optimizations in the last several releases!

      15 years

      Posted by Remi Collet on May 22, 2020 07:52 AM

      This repository has been open for exactly 15 years today.

      Here are some reference dates

      2005

      The repository opens to share my first RPMs on my FAI personal pages, mostly for Fedora Core 3

      2006

      In order to have more people benefit from my work, I join the Fedora project as a contributor

      2007

      The RHEL / CentOS 4 repository opens, hosted on a dedicated server on the famillecollet.com domain

      2008

      With the Fedora Core and Extras merge, I start to co-maintain PHP packages for the new Fedora 7

      2011

      As a PECL contributor I can now participate in the maintenance of lots of extensions

      2012

      As a PHP contributor I can now participate more actively in the project's maintenance

      I join Red Hat where I will also work on PHP

      2015

      Start using the remirepo.net domain

      2017

      I become, with Sara Golemon, Release Manager of PHP 7.2

      2020

      With more than 500 million downloads and more than 30 mirrors around the world, my "little" repository, created 15 years ago, has become (I think) one of the references for PHP and RPM users, providing

      • 7 versions of PHP
        • from 5.6 to 7.1 with security backports
        • from 7.2 to 7.4
        • 8.0.0-dev
      • 150 extensions
      • 6 distributions
        • RHEL / CentOS 6, 7 and 8
        • Fedora 30 to 32
      • 3 distribution modes
        • Base packages, 1 repository per version
        • Software Collections for parallel installation
        • Modules

      Regular donations, which at least cover the hosting costs, are a good sign of encouragement for me, proving the usefulness of my work.

      Last Part Of The WiFi-AP Bridge Setup (DHCP Relay/Forwarder) in Py

      Posted by Jon Chiappetta on May 21, 2020 04:36 PM

      I was having trouble getting dnsmasq to act as a simple DHCP relay/forwarder/proxy, and I didn’t want to add this to the ARP relay C code (to keep that as simple as possible), so I wrote this little Python script that basically bridges two interfaces together: one that has DHCP clients on it, and one that is connected to the DHCP server (one main server running for the whole network).

      I stored this in the ARP relay git repo I created earlier, since it is related to the same bridged setup:
      https://github.com/stoops/arprb/blob/master/dhcprb.py?ts=4

      An example usage & output from running it so far:

      root@OpenWRT:~# python dhcprb.py br-wan wlan0
      ('00:be:ee:ca:fe:00', '192.168.17.51', '<-->', '00:be:ee:ca:fe:ff', '192.168.16.51')
      ...
      request-> ('0.0.0.0', 68) [304] {wlan0}
      <-reply ('192.168.16.1', 67) [300] {br-wan:192.168.16.131}
      ...
      request-> ('0.0.0.0', 68) [300] {wlan0}
      <-reply ('192.168.16.1', 67) [300] {br-wan:192.168.16.153}
      ...
      
      

      Securing your Elastic services using authenticated onion services

      Posted by Kushal Das on May 21, 2020 01:40 PM

      Last year I set up an ElasticSearch box to monitor a few of my servers. The goal was to learn the basics of the Elastic ecosystem. I know how powerful it is, but I had never played with it enough before.

      While doing the setup, I was wondering how to secure communication between nodes. I can not send data over plain HTTP to the nodes, and I also have to make sure there is some amount of authentication. I was a bit confused about the subscription options.

      Authenticated onion services to the rescue

      I use authenticated onion services in many of my regular services. It provides an easy way to connect to services (over TCP) along with encryption and authentication.

      Using the same in the logstash server is an even better option for me, as I do not have to open up any port in the firewall. As logstash was listening on 5044 on localhost, I added the following configuration to /etc/tor/torrc on the logstash server. You should use v3 addresses, and this blog post explains how to configure that.

      HiddenServiceDir /var/lib/tor/logstash
      HiddenServiceVersion 2
      HiddenServicePort 5044 127.0.0.1:5044
      HiddenServiceAuthorizeClient stealth logstash
      

      On the client nodes, I first had to configure Tor to reach my onion service (details are in the blog post above). Next, I added the server address and the local proxy (from Tor) details to /etc/filebeat/filebeat.yml.

      output.logstash:
        # The Logstash hosts
        hosts: ["youronionaddress.onion:5044"]
        proxy_url: socks5://localhost:9050
        proxy_use_local_resolver: false
        index: "filebeat-kushaldas"
      

      And done :) Just start the logstash server, and also the filebeat service on every node. The data will start flowing in.

      If you have any query about the Tor Project, you can visit our new https://community.torproject.org/ site.

      From a diary of AArch64 porter — firefighting

      Posted by Marcin 'hrw' Juszkiewicz on May 21, 2020 11:53 AM

      When I was a kid there was a children’s book about Wojtek, who wanted to be a firefighter. It is part of the culture for my generation.

      I never wanted to follow Wojtek’s dreams. But during the last few years I became a firefighter. And this is not a good thing in the long term.

      CI failures

      During the last months we (Linaro) took care of AArch64 support in the OpenStack infrastructure. There are nodes with CentOS 7 and 8, Debian ‘stretch’ and ‘buster’, Ubuntu ‘xenial’, ‘bionic’ and ‘focal’. And several CI jobs in some projects (Disk Image Builder, Kolla, Nova and some others).

      And those CI jobs tend to fail. As usual, right? Not quite…

      Missing Python packages

      One day, when I joined IRC in the morning, I was greeted with “aarch64 ci fails — can you take a look?” from one of the project developers.

      Quick look into logs:

      ERROR: Could not find a version that satisfies the requirement ntplib (from versions: none)
      ERROR: No matching distribution found for ntplib
      

      Then the usual path: PyPI, check release history, go to the homepage, file an issue. Curse. And wait for upstream to fix the problem. They fixed it, and CI was working again.

      Last Monday I started work ready to do something interesting. And then I was greeted with the same story: “aarch64 ci fails, can you take a look?”.

      ERROR: Could not find a version that satisfies the requirement protobuf==3.12.0 
      (from versions: 2.0.0b0, 2.0.3, 2.3.0, 2.4.1, 2.5.0, 2.6.0, 2.6.1, 3.0.0a2,
      3.0.0a3, 3.0.0b1, 3.0.0b1.post1, 3.0.0b1.post2, 3.0.0b2, 3.0.0b2.post1,
      3.0.0b2.post2, 3.0.0b3, 3.0.0b4, 3.0.0, 3.1.0, 3.1.0.post1, 3.2.0rc1,
      3.2.0rc1.post1, 3.2.0rc2, 3.2.0, 3.3.0, 3.4.0, 3.5.0.post1, 3.5.1, 3.5.2,
      3.5.2.post1, 3.6.0, 3.6.1, 3.7.0rc2, 3.7.0rc3, 3.7.0, 3.7.1, 3.8.0rc1, 3.8.0,
      3.9.0rc1, 3.9.0, 3.9.1, 3.9.2, 3.10.0rc1, 3.10.0, 3.11.0rc1, 3.11.0rc2, 3.11.0,
      3.11.1, 3.11.2, 3.11.3)
      

      PyPI, check, homepage: there was an issue already filed. So far no upstream response.

      The problem got solved by moving all OpenStack projects to the previous (3.11.3) release.

      Missing/deprecated versions in distributions

      So I started work on adding another CI job, this time for the ‘requirements’ OpenStack project, to make sure that whatever Python package gets upgraded will be available on AArch64 as well.

      As usual, I had to add a pile of distro dependencies to get ‘numpy’ and ‘scipy’ built correctly. And bump the timeout to 2 hours. The build was going nicely.

      And then ‘confluent_kafka’ hit hard:

      /tmp/pip-install-ld4fzu94/confluent-kafka/confluent_kafka/src/confluent_kafka.h:65:2: 
      error: #error "confluent-kafka-python requires librdkafka v1.4.0 or later.
      Install the latest version of librdkafka from the Confluent repositories, 
      see http://docs.confluent.io/current/installation.html"                                                            
      
      

      librdkafka v1.4.0 or later” is not available in any distribution used on OpenStack infra nodes. Fantastic!

      And the repositories mentioned in the error message are x86-64 only. Fun!

      Sure, I can do the usual: PyPI, release, homepage, issue. I even went that way. A chain of GitHub projects building components into repositories. x86-64 only, all the way.

      External CI services

      Which gets us to another thing — external CI services. You know: Azure pipelines, GitHub actions or Travis CI (alphabetical order). Used in misc ways by most FOSS projects nowadays.

      Each of them has some kind of AArch64 support nowadays. But it looks like only Travis provides you with hardware. Azure and GitHub can only connect your own external machines to their CI service.

      Speed or lack of it

      So you have a project where you need AArch64 support and upstream is already using Travis for their CI needs. Lucky you!

      You work with the project developers, you get the test suite running, and then it times out. It does not matter that the CI machine you got has a few cores, because ‘pytest’ does not know how to run tests in parallel.

      So you cut tests completely or partially. Or just abandon the idea.

      No hardware, no binaries

      If you are less lucky, then you may get such an answer from upstream. I have had a few of those in the past. And I fully understand them: why support something when you cannot even test whether it works?

      Firefighting or yak shaving?

      When I discussed it with friends, one of them mentioned that this reminds him more of yak shaving than firefighting. To be honest, it is both, most of the time.

      CPE achievements during Q1 2020

      Posted by Fedora Community Blog on May 21, 2020 07:00 AM

      2020 has seen a lot of changes for everyone—understatement of the year right? One of these changes has been how the Community Platform Engineering (CPE) Team has decided to adjust how they work. We are on an agile workflow journey. We began this year with quarterly planning, for the first time ever! We kicked off […]


      Python Course - Flow control, conditionals and loops

      Posted by Alvaro Castillo on May 21, 2020 06:54 AM

      Flow control

      Flow control structures are used to define how a script or application behaves, and what gets executed immediately after a condition is evaluated.

      if

      This control structure lets you evaluate a condition and execute a block of code if the condition is met.

      >>> if (condition):
      ...     code block

      if-else

      if-else is a control structure that lets you do one thing if the condition is met; if it is not met, a different block of code is executed instead, without considering any other possibilities.

      if (condition 1):
        code block
      else:
        code block

      Let's look at an example: if we have an Opel-brand car, print a message saying "Tienes un Opel" ("You have an Opel"); otherwise, show a message saying "No tienes un coche Opel" ("You don't have an Opel car").

      >>> marca = "Citröen"
      >>> if (marca == "Opel"):
      ...     print("Tienes un Opel")
      ... else:
      ...     print("No tienes un coche Opel")
      No tienes un coche Opel

      if-elif-else

      But what happens when we want to check multiple conditions? We cannot keep nesting if-else blocks inside each other as if there were no tomorrow. For that we have the if-elif-else structure. It lets us do one thing or another based on a condition, which can be composed of one or more operators (arithmetic, logical, and so on).

      if (condition 1):
        code block
      elif (condition 2):
        code block
      elif (condition 3):
        code block
      else:
        code block

      Let's extend the example: depending on whether the brand is Opel, Citröen or Audi we print the corresponding message, and otherwise we report that the brand is not registered.

      >>> marca = "Citröen"
      >>> if (marca == "Opel"):
      ...     print("Tienes un Opel")
      ... elif (marca == "Citröen"):
      ...     print("Tienes un coche Citröen")
      ... elif (marca == "Audi"):
      ...     print("Tienes un Audi")
      ... else:
      ...     print("Tu marca de coche no está registrada")
      Tienes un coche Citröen

      All of this can get even more complicated by using other operators and nesting if-elif-else blocks. For example, combining comparison and logical operators like this:

      >>> marca_coche = "Toyota"
      >>> modelo_coche = "AE87"
      >>> motor_coche = 1600
      >>> if (marca_coche == "Toyota" and modelo_coche == "AE92"):
      ...     if (motor_coche == 1600):
      ...         print("Perfecto")
      ...     elif (motor_coche == 1400):
      ...         print("Bien")
      ...     elif (motor_coche == 1200):
      ...         print("Cuidado con las cuestas")
      ...     else:
      ...         print("Esto huele a chasis")
      ... elif (marca_coche == "Citröen" and modelo_coche == "Saxo"):
      ...     print("Enhorabuena, tienes un coche que pesa poco y corre mucho.")
      ... else:
      ...     print("Error 404, Tu coche no encontrado.")
      Error 404, Tu coche no encontrado.

      The for loop

      What happens if we want to walk through a list or run a block of code multiple times? An if is clearly not enough, since it only evaluates a condition once and, after evaluating it, the block stops executing.

      for iteration_variable in sequence:
        code block

      How does it work? The sequence is what gets iterated over; for example, we can have the loop walk through all the values in a list and print each one via the iteration variable.

      >>> frutas = [...
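
      For illustration, a completed loop of this kind looks like the following (the list values here are made up for the example):

      >>> frutas = ["manzana", "pera", "uva"]
      >>> for fruta in frutas:
      ...     print(fruta)
      manzana
      pera
      uva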

      Figuring out where a message arrived, and other syslog-ng 3.27 tricks

      Posted by Peter Czanik on May 20, 2020 11:55 AM

      Version 3.27 of syslog-ng has brought many smaller, but useful features to us. The new Sumo Logic destination was already covered in an earlier blog. You can now also check exactly where a message arrived on a network source (IP address, port and protocol). Rewriting the facility of a syslog message was also made easy. For a complete list of new features and changes, check the release notes at https://github.com/syslog-ng/syslog-ng/releases/tag/syslog-ng-3.27.1

      Before you begin

      To test these features, you need to have syslog-ng 3.27 or later. There is a good chance that this version is not yet available as an official package for your Linux distribution. Luckily, there are third party repositories available for the most popular distros that might already carry 3.27. Check https://syslog-ng.com/3rd-party-binaries for further details. I did my tests on FreeBSD as a syslog-ng server and Linux as a client, but the examples below should work everywhere else with minimal modifications.

      Where did the message arrive?

      Many people stick to the KISS principle (https://en.wikipedia.org/wiki/KISS_principle) when it comes to configuring syslog-ng. In essence: have a network source combining all incoming messages from TCP and UDP connections into a single source, and process them together. While it works perfectly well in most situations, sometimes you might need to know exactly where a message arrived and process it accordingly.

      Using the following source in your configuration, when log messages arrive from the same host using both TCP and UDP, they will look exactly the same:

      source src { system();
                   udp(); tcp(port(514)); internal();
      };

      Here are the commands I used to generate test messages:

      logger -n 172.16.167.151 -P 514 -d --rfc3164 this is a UDP test
      logger -n 172.16.167.151 -P 514 -T --rfc3164 this is a TCP test

      When you check the logs, you can see that the only difference is the text I entered, but otherwise the logs would look identical:

      May 19 12:32:21 172.16.167.141 root: this is a UDP test
      May 19 12:32:40 172.16.167.141 root: this is a TCP test

      This is where the new DESTIP/DESTPORT/PROTO macros can come in handy. These can show you where the log messages actually arrived. Here is a configuration snippet that uses the above defined source, does some minimal filtering to lessen the noise, and stores messages in a file using a template that utilizes the new macros:

      destination d_bla {
          file("/var/log/bla" template("destip=$DESTIP destport=$DESTPORT proto=$PROTO message=$MSG\n"));
      };
      log { source(src); filter(f_notice); destination(d_bla); };

      As you can see from the logs below, even if the IP address and the port are the same, the protocol is different:

      destip=172.16.167.151 destport=514 proto=17 message=this is a UDP test
      destip=172.16.167.151 destport=514 proto=6 message=this is a TCP test

      Using the new if/else syntax of syslog-ng, you can keep the convenience of a single source and still easily treat part of the logs differently when necessary. You can find a number of examples in my blog about analyzing Suricata log messages, and also a simple example below.

      Rewriting the syslog facility

      If you filter based on the syslog facility associated with a log message, sometimes you might need to change the facility of a log message. This can now be done easily, using the new set-facility() rewrite function of syslog-ng 3.27. The example below does not make much sense, but at least it is easy to re-create in your own environment and use as a starting point. In this log statement, logs from “sudo” are set to facility “mail” and stored together with the rest of your mail logs.

      log {
          source(src);
          if (program("sudo")) {
            rewrite { set-facility("mail"); };
          };
          filter { facility("mail"); };
          destination { file("/var/log/myemail"); };
      };

      In the configuration snippet above, we use the default local log source, called “src” in case of the default syslog-ng configuration on FreeBSD. Next, we filter on the program name “sudo”, and rewrite the facility to “mail” when there is a match. Before writing the logs to disk, we filter on the “mail” facility. Let’s take a look at the logs of an unsuccessful sudo attempt:

      # tail /var/log/myemail
      May 19 14:02:01 fb121 sudo[1649]:   czanik : user NOT in sudoers ; TTY=pts/0 ; PWD=/usr/home/czanik ; USER=root ; COMMAND=/bin/sh
      May 19 14:02:01 fb121 sendmail[1652]: 04JC21JJ001652: from=root, size=223, class=0, nrcpts=1, msgid=<202005191202.04JC21JJ001652@fb121>, relay=root@localhost
      May 19 14:02:01 fb121 sendmail[1652]: STARTTLS=client, relay=[127.0.0.1], version=TLSv1.3, verify=FAIL, cipher=TLS_AES_256_GCM_SHA384, bits=256/256
      May 19 14:02:01 fb121 sm-mta[1653]: STARTTLS=server, relay=localhost [127.0.0.1], version=TLSv1.3, verify=NO, cipher=TLS_AES_256_GCM_SHA384, bits=256/256
      May 19 14:02:01 fb121 sm-mta[1653]: 04JC213F001653: from=<root@fb121>, size=474, class=0, nrcpts=1, msgid=<202005191202.04JC21JJ001652@fb121>, proto=ESMTPS, daemon=Daemon0, relay=localhost [127.0.0.1]
      May 19 14:02:01 fb121 sendmail[1652]: 04JC21JJ001652: to=root, ctladdr=root (0/0), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=30223, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (04JC213F001653 Message accepted for delivery)
      May 19 14:02:01 fb121 sm-mta[1654]: 04JC213F001653: to=<root@fb121>, ctladdr=<root@fb121> (0/0), delay=00:00:00, xdelay=00:00:00, mailer=local, pri=30762, relay=local, dsn=2.0.0, stat=Sent

      As you can see, right after the unsuccessful sudo attempt, there is also an e-mail alert about the event. Log messages from sudo are now stored together with e-mail logs.

      If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or even chat with us. For a list of possibilities, check our GitHub page under the “Community” section at https://github.com/syslog-ng/syslog-ng. On Twitter, I am available as @PCzanik.

      Fedora Silverblue, an introduction for developers

      Posted by Fedora Magazine on May 20, 2020 08:00 AM

      The Fedora Silverblue project takes Fedora workstation, libostree and podman, puts them in a blender, and creates a new Immutable Fedora Workstation. Fedora Silverblue is an OS that stops you from changing the core system files arbitrarily, and readily allows you to change the environment system files. The article What is Silverblue describes the big picture, and this article drills down into details for the developer.

      Fedora Silverblue ties together a few different projects to make a system that is a git-like object, capable of layering packages, and has a container-focused workflow. Silverblue is not the only distribution going down this road. It is the desktop equivalent of CoreOS, the server OS used by Red Hat OpenShift.

      Silverblue’s idea of ‘immutable’ has nothing to do with immutable layers in a container. Silverblue keeps system files immutable by making them read-only.

      Why immutable?

      Has an upgrade left your system in an unusable state? Have you wondered why one server in a pool of identical machines is being weird? These problems can happen when one system library – one tiny little file out of hundreds – is corrupted, badly configured or the wrong version. Or maybe your upgrade works fine but it’s not what you’d hoped for, and you want to roll back to the previous state.

      An immutable OS is intended to stop problems like these biting you. This is not an easy thing to achieve – simple changes, like flipping the file system between read-write and read-only, may only change a fault-finding headache to a maintenance headache.

      Freezing the system is good news for sysadmins, but what about developers? Setting up a development environment means heavily customizing the system, and filling it with living code that changes over time. The answer is partly a case of combining components, and partly the ability to swap between OS versions.

      How it works

      So how do you get the benefits of immutability without losing the ability to do your work? If you’re thinking ‘containers’, good guess – part of the solution uses podman. But much of the work happens underneath the container layer, at the OS level.

      Fedora Silverblue ties together a few different projects to turn an immutable OS into a usable workstation. Silverblue uses libostree to provide the base system, lets you edit config files in /etc/, and provides three different ways to install packages.

• rpm-ostree installs RPM packages, similar to DNF in the traditional Fedora Workstation. Use this for things that shouldn’t go in containers, like KVM/libvirt.
• flatpak installs packages from the central Flathub repo. This is the one-stop shop for graphical desktop apps like LibreOffice.
• The traditional dnf install still works, but only inside a toolbox (a Fedora container). A developer’s workbench goes in a toolbox. (Examples of all three follow below.)
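Roughly, those three methods look like this on the command line (the package and app names here are just examples):

$ rpm-ostree install virt-manager                      # layer an RPM; takes effect after a reboot
$ flatpak install flathub org.libreoffice.LibreOffice  # a desktop app from Flathub
$ toolbox create && toolbox enter                      # then run dnf inside the container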

      If you want to know more about these components, check out Pieces of Silverblue.

      Rolling back and pinning upgrades

      All operating systems need upgrades. Features are added, security holes are plugged and bugs are squashed. But sometimes an upgrade is not a developer’s friend.

      A developer depends on many things to get the job done. A good development environment is stuffed with libraries, editors, toolchains and apps that are controlled by the OS, not the developer. An upgrade may cause trouble. Have any of these situations happened to you?

      • A new encryption library is too strict, and an upgrade stopped an API working.
      • Code works well, but has deprecated syntax. An upgrade brought error-throwing misery.
      • The development environment is lovingly hand-crafted. An upgrade broke dependencies and added conflicts.

      In a traditional environment, unpicking a troublesome upgrade is hard. In Silverblue, it’s easy. Silverblue keeps two copies of the OS – your current upgrade and your previous version. Point the OS at the previous version, reboot, and you’ve got your old system files back.

      You aren’t limited to two copies of your file system – you can keep more by pinning your favorite versions. Dusty Mabe, one of the engineers who has been working on the system since the Project Atomic days, describes how to pin extra copies of the OS in his article Pinning Deployments in OSTree Based Systems.

      Your home directory is not affected by rolling back. Rpm-ostree does not touch /etc/ and /var/.
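In practice, rolling back or pinning looks something like this (a sketch; the deployment numbers depend on your system):

$ rpm-ostree status         # list the deployments you can boot into
$ rpm-ostree rollback       # make the previous deployment the boot default
$ systemctl reboot
$ sudo ostree admin pin 0   # optionally pin the current deployment so it is never pruned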

      System updates and package installs

      Silverblue’s rpm-ostree treats all the files as one object, stored in a repository. The working file system is a checked-out copy of this object. After a system update, you get two objects in that repository – one current object and one updated object. The updated object is checked out and becomes the new file system.

      You install your workhorse applications in toolboxes, which provide container isolation. And you install your desktop applications using Flatpak.

      This new OS requires a shift in approach. For instance, you don’t have to keep only one copy of your system files – you can store a few and select which one you use. That means you can swap back and forth between an old Fedora release and the rawhide (development) version in a matter of minutes.
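For example, swapping the base OS to rawhide and back is a rebase plus a reboot (a sketch; the exact ref name may vary by release):

$ rpm-ostree rebase fedora:fedora/rawhide/x86_64/silverblue
$ systemctl reboot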

      Build your own Silverblue VM

      You can safely install Fedora Silverblue in a VM on your workstation. If you’ve got a hypervisor and half an hour to spare (10 minutes for ISO download, and 20 minutes for the build), you can see for yourself.

1. Download the Fedora Silverblue ISO from https://silverblue.fedoraproject.org/download (not Fedora Workstation from https://getfedora.org/).
2. Boot a VM with the Fedora Silverblue ISO. You can squeeze Fedora into compute resources of 1 CPU, 1024 MiB of memory and 12 GiB of storage, but bigger is better.
3. Answer Anaconda’s questions.
4. Wait for the GNOME desktop to appear.
5. Answer Initial Setup’s questions.

      Then you’re ready to set up your developer’s tools. If you’re looking for an IDE, check these out. Use flatpak on the desktop to install them.

      Finally, use the CLI to create your first toolbox. Load it with modules using npm, gem, pip, git or your other favorite tools.
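A first session might look like this (the container name is an arbitrary example; toolbox picks a default name if you omit it):

$ toolbox create --container dev
$ toolbox enter --container dev
$ sudo dnf install python3-pip   # inside the toolbox
$ pip3 install --user requests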

      Help!

      If you get stuck, ask questions at the forum.

      If you’re looking for ideas about how to use Silverblue, read articles in the magazine.

      Is Silverblue for you?

      Silverblue is full of shiny new tech. That in itself is enough to attract the cool kids, like moths to a flame. But this OS is not for everyone. It’s a young system, so some bugs will still be lurking in there. And pioneering tech requires a change of habit – that’s extra cognitive load that the new user may not want to take on.

      The OS brings immutable benefits, like keeping your system files safe. It also brings some drawbacks, like the need to reboot after adding system packages. Silverblue also enables new ways of working. If you want to explore new directions in the OS, find out if Silverblue brings benefits to your work.

Installing SonarQube on CentOS 8

      Posted by Alvaro Castillo on May 20, 2020 06:40 AM

Introduction

SonarQube is open-source software developed by a company founded in 2008 and headquartered in Switzerland. It evaluates the quality of the code you develop and helps development teams grow by building quality software. I am currently discovering it and would like to share the installation I did on CentOS 8. There is currently a community edition as well as paid editions with more features; we will focus on the community edition.

Recommended requirements

The minimum requirements are the following:

• At least 3 GB of RAM: 2 GB used by SonarQube and 1 GB that must remain free for the system. That said, I have managed to run it on a system with 2 GB of RAM.
• An installed database management system, such as:

  • PostgreSQL (the one we will work with)
  • Microsoft SQL Server
  • Oracle
• Fairly fast storage for the I/O operations that Elasticsearch performs. For personal use, however, this is not necessary unless you plan to compute the dark matter of the universe.

Recommendation: if you are working with virtual machines, it is essential to take a snapshot before starting, or to create a template machine, note the changes, and then apply them to the machine that will go to production.

Preliminary tasks and steps

Install the SELinux troubleshooting service

Remember that CentOS uses a set of security policies called SELinux that work together with the Linux kernel and, among other things, prevent 0-day exploits from running. That is why it is important to work with the policies and NOT disable them under any circumstances.

Verify that SELinux is active:

$ sestatus

It will return output like this:

      SELinux status:                 enabled
      SELinuxfs mount:                /sys/fs/selinux
      SELinux root directory:         /etc/selinux
      Loaded policy name:             targeted
      Current mode:                   enforcing
      Mode from config file:          enforcing
      Policy MLS status:              enabled
      Policy deny_unknown status:     allowed
      Memory protection checking:     actual (secure)
      Max kernel policy version:      31

If SELinux status is enabled and Current mode is enforcing, all is well. If not, we will have to:

1. Check that the selinux-policy-targeted and selinux-policy packages are installed.
2. If SELinux is disabled, take a snapshot, then enable it by editing /etc/selinux/config:
  SELINUX=enforcing
3. Enable it now:
  # setenforce 1

  With the policies installed and active, we install the service that will help us troubleshoot policy problems:

        # dnf install setroubleshoot-server

  Check whether any executable has tried to bypass the policies:

  # sealert -a /var/log/audit/audit.log

  The configuration file for this service is at /etc/setroubleshoot/setroubleshoot.conf.

Create the user/group and home directory

To create it:

# useradd sonarqube -m -d /opt/sonarqube
# passwd sonarqube 

This creates the user with a specific home directory in /opt/sonarqube, and we assign it a password.

Set the required kernel values

We have to take these values into account (a sketch for applying them follows this list):

• vm.max_map_count must be >= 262144
• fs.file-max must be >= 65536
• The user running SonarQube must be able to open up to a total of 65536 file descriptors and also open...
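For reference, a sketch of applying the two documented values (placing them in /etc/sysctl.d/ and /etc/security/limits.d/ is my own choice of layout):

# sysctl -w vm.max_map_count=262144
# sysctl -w fs.file-max=65536
# printf 'vm.max_map_count=262144\nfs.file-max=65536\n' > /etc/sysctl.d/99-sonarqube.conf
# printf 'sonarqube - nofile 65536\n' > /etc/security/limits.d/99-sonarqube.conf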

      DirectX on Linux - what it is/isn't

      Posted by Dave Airlie on May 20, 2020 12:01 AM
      This morning I saw two things that were Microsoft and Linux graphics related.

      https://devblogs.microsoft.com/commandline/the-windows-subsystem-for-linux-build-2020-summary/

      a) DirectX on Linux for compute workloads
      b) Linux GUI apps on Windows

      At first I thought these were related, but it appears at least presently these are quite orthogonal projects.

First up, a clarification for the people who jump to insane conclusions:

The DX on Linux is a WSL2-only thing. Microsoft are not in any way bringing DX12 to Linux outside of the Windows environment. They are also in no way open sourcing any of the DX12 driver code. They are recompiling the DX12 userspace drivers (from GPU vendors) into Linux shared libraries, and running them on a kernel driver shim that transfers the kernel interface up to the closed-source Windows kernel driver. This is in no way useful for having DX12 on Linux baremetal or anywhere other than in a WSL2 environment. It is not useful for Linux gaming.

      Microsoft have submitted to the upstream kernel the shim driver to support this. This driver exposes their D3DKMT kernel interface from Windows over virtual channels into a Linux driver that provides an ioctl interface. The kernel drivers are still all running on the Windows side.

When I read the Linux GUI apps bit I assumed these two things were related, but it turns out the DX12 stuff doesn't address presentation at all. It's currently only for compute/ML workloads using CUDA/DirectML. There isn't a way to put the results of DX12 rendering from the Linux guest applications onto the screen at all. The other project is a wayland/RDP integration server that connects Linux apps via wayland to an RDP client on the Windows display. Integrating that with DX12 will be a tricky project, and then integrating that upstream with the Linux stack another step completely.

      Now I'm sure this will be resolved, but it has certain implications on how the driver architecture works and how much of the rest of the Linux graphics ecosystem you have to interact with, and that means that the current driver might not be a great fit in the long run and upstreaming it prematurely might be a bad idea.

      From my point of view the kernel shim driver doesn't really bring anything to Linux, it's just a tunnel for some binary data between a host windows kernel binary and a guest linux userspace binary. It doesn't enhance the Linux graphics ecosystem in any useful direction, and as such I'm questioning why we'd want this upstream at all.

      xisxwayland checks for Xwayland ... or not

      Posted by Peter Hutterer on May 19, 2020 10:30 AM

      One of the more common issues we encounter debugging things is that users don't always know whether they're running on a Wayland or X11 session. Which I guess is a good advertisement for how far some of the compositors have come. The question "are you running on Xorg or Wayland" thus comes up a lot and suggestions previously included things like "run xeyes", "grep xinput list", "check xrandr" and so on and so forth. None of those are particularly scriptable, so there's a new tool around now: xisxwayland.

Run without arguments it simply exits with exit code 0 if the X server is Xwayland, or 1 otherwise. Which means you can use it like this:


      $ cat my-xorg-only-script.sh
      #!/bin/bash

if xisxwayland; then
    echo "This is an Xwayland server!";
    exit 1
fi

      ...
      Or, in the case where you have a human user (gasp!), you can ask them to run:

      $ xisxwayland --verbose
      Xwayland: YES
      And even non-technical users should be able to interpret that.

      Note that the script checks for Xwayland (hence the name) via the $DISPLAY environment variable, just like any X application. It does not check whether there's a Wayland compositor running but for most use-cases this doesn't matter anyway. For those where it matters you get to write your own script. Congratulations, I guess.

      Complex text shaping fixed in Konsole 20.08

      Posted by Rajeesh K Nambiar on May 19, 2020 09:44 AM

      Konsole was one of the few terminal emulators with proper complex text shaping support. Unfortunately, complex text (including Malayalam) shaping was broken around KDE Applications release 18.08 (see upstream bug 401094 for details).

[Image: Broken Malayalam text shaping in Konsole 20.04]

Mariusz Glebocki fixed the code in January this year, and I tested it to work correctly. There’s a minor issue of glyphs with deep vertical components being cut off (notice the rendering of “സ്കൂ”), but otherwise the shaping and rendering are good. The patches are also merged upstream and will be part of the KDE Applications Bundle 20.08.

[Image: Proper Malayalam text shaping in Konsole 20.04 with shaping fixes]

      If you don’t want to wait that long, I have made a 20.04 release with the fixes on top available for Fedora 31 & 32 in this COPR.

Fedora's contributions to the Free Software ecosystem, part 3

      Posted by Charles-Antoine Couret on May 19, 2020 06:00 AM

Within the Free Software community, it is common to present a GNU/Linux distribution as a simple integration, or assembly, of all the software it ships: a sort of glue between them.

While that is probably true of some distributions, we cannot conclude that it is always the case. Fedora, in particular, goes beyond this view. Its goals and its community allow it to accomplish other things. Since its creation, Fedora has been a technology showcase, and as such has tried to highlight or develop innovative solutions for Free Software. Since Fedora 21, released at the end of 2014, Fedora has been split into three distinct products. Even though Fedora Workstation and Server ultimately have access to the same packages, the project wanted to provide a user experience tailored to each use case right from the end of installation. Consequently, Fedora Workstation has its own work list for integrating and developing new solutions that improve the desktop user experience.

And although the Fedora distribution is often considered a testing ground for Red Hat's Red Hat Enterprise Linux (RHEL) distribution, we will see that in the end the whole community benefits from its work.

This article is an adaptation of blog posts here and there by Christian Schaller, who gave me permission to use them. It follows a first article on this subject and then a second. The first article led to a talk at JM2L 2017 and at RMLL 2018, the video of which is available here.

      Wayland

The transition to this new display protocol still requires adjustments and long-term work.

GNOME Shell is currently completing this transition by becoming able to start a GNOME session without using XWayland, which would then only be started when an application that still needs X11 is launched. Until recently, a few internal components, such as the GNOME Settings daemon, still relied on X11 to work.

It is possible, experimentally, to force this behavior with the following command:

      $ gsettings set org.gnome.mutter experimental-features "['autostart-xwayland']"
      

This was made possible by Carlos Garnacho for the work on GNOME Shell, Olivier Fourdan for cleaning up the GNOME Control Center, Iain Lane (from Canonical) for improving systemd user sessions, and Benjamin Berg overall.

In the same vein, Martin Stransky and Jan Horak worked on fixing the last bugs so that Firefox can use Wayland by default in Fedora 31. Martin Stransky also worked on providing hardware acceleration support for WebGL.

[Image: hoverclick.png]

One of the big regressions was that the X.org accessibility tools no longer worked under Wayland. Thanks to Olivier Fourdan this support has been improved, and it is now possible to click by hovering over a graphical element for a certain time.

Hans de Goede is also working on allowing XWayland applications to be launched with superuser rights. This is certainly not a recommended practice, but it may be necessary for compatibility reasons.

He is also working on improving the SDL display library so that it works correctly under Wayland, in particular for video games with low resolutions.

Finally, Adam Jackson is working on the possibility of hardware acceleration for XWayland applications when the proprietary NVIDIA driver is used. Other drivers and hardware do not have this kind of problem, thanks to their inclusion in the official Linux kernel and their use of the corresponding modern display API.

If you want to help with this work, you can edit the /usr/lib/udev/rules.d/61-gdm.rules file and comment out the following line:

DRIVER=="nvidia", RUN+="/usr/libexec/gdm-disable-wayland"

so that it becomes:

# DRIVER=="nvidia", RUN+="/usr/libexec/gdm-disable-wayland"

That way, even with the proprietary NVIDIA driver, the default GNOME session will be Wayland rather than X.org, as is already the case in every other configuration. X.org will still be used if major display problems occur. And if you run into a bug in this configuration, do not forget to report it to the developers.

The moment when X.org becomes a maintenance-only project is not far off.

      Pipewire

Wim Taymans continues working toward being able to replace both JACK and PulseAudio for sound management. The JACK replacement is considered operational today. For PulseAudio, the replacement currently works for simple audio stream playback, but not beyond that.

With Jonas Adahl and Benjamin Berg, he added Miracast support to export the screen and sound over the network to a device such as a television.

[Image: GNOME Network Display.png]

A test client for GNOME, Network Displays, was designed for this purpose, ahead of a likely integration into GNOME's core in the future.

A configuration option was added to Google Chrome to support screen sharing under Wayland via WebRTC.

Pipewire's progress makes it conceivable to enable it by default in Fedora Workstation 33.

      Flatpak

Work continues on providing automated infrastructure to generate Flatpaks from RPMs. The steps are still too manual today.

A future integration of Flathub and Quay as alternative repositories available by default is also making its way.

      Fedora Toolbox

Debarshi Ray improved the integration with GNOME Terminal. When an instance is open in a tab, opening a new tab from it will default to that container's environment rather than to the host system's current directory.

The tool was rewritten to improve its long-term maintainability, moving from one huge shell script to a program written in Go. Moreover, since the buildah and podman tools are themselves written in Go, this will simplify synergies and collaboration between these projects.

Fleet Commander

Version 0.14.1 of this application adds support for enterprise networks based on Active Directory, in addition to FreeIPA. This will help its adoption in Windows-centric enterprise networks, which are still common.

Since this version it is also possible to deploy a GNOME extension across a fleet of machines.

Work is in progress by Oliver Gutierrez Suarez to improve support for configuring Firefox.

Game mode

The famous gamemode developed by Christian Kellner keeps improving, with better integration with Flatpak applications.

Hardware support

Fingerprint readers

The fprint stack, the free reference implementation for supporting these devices, was in a lethargic state for a long time. Bastien Nocera undertook a modernization of this component, notably improving the project's documentation, adding sample code and updating some drivers.

A new driver supporting some Synaptics readers is nearing completion.

Benjamin Berg, for his part, is trying to make it possible to store fingerprints inside the reader itself, when supported, rather than on the hard disk as is done today.

      Dell Totem

[Image: dell-totem.jpg]

The very particular Dell Totem pointing device, used notably in computer-aided design, is now supported by the libinput library. Benjamin Tissoires and Peter Hutterer made this possible.

      Firmware

[Image: GNOME Firmware.png]

Richard Hughes continues his work on LVFS to provide firmware updates under Linux. He wrote a GNOME Firmware application that displays your system's firmware with some information about it and, where supported, searches for updates.

Sysprof and performance

[Image: Sysprof.png]

Improving performance matters to users and developers alike. In the race to improve GNOME Shell's performance, one observation stood out: easy-to-use tools for precisely measuring the performance of a desktop or an application were missing, tools needed to identify and carefully fix the behaviors that degrade system performance.

Christian Hergert designed the GNOME Sysprof tool to collect and display data about a process.

While the software under analysis runs, the application measures various parameters over time, such as memory usage, CPU usage, and disk or network access. It also shows which functions allocate memory and the processor time spent in each function.

Along the way, Christian Hergert discovered and fixed blocking API calls in the GNOME Shell main loop that reduced smoothness under heavy I/O. The system should feel more responsive in such situations.

      GNOME

The new lock screen

[Image: GNOME écran de verrouillage.png]

It was deeply reworked by Allan Day and the GNOME design team. The integration between the lock screen, where notifications and the time are displayed, and the password entry screen is better. The latter can also reveal the password on request, to make sure it was typed correctly.

Using a blurred version of the user's wallpaper also brings more consistency and elegance to the interface.

GNOME Extensions

[Image: GNOME Extensions.png]

This new application was designed to manage GNOME extensions, which add to or modify GNOME Shell's behavior without burdening the maintenance of the core code.

Previously, extensions were managed either through GNOME Tweaks, which is a bit of a catch-all, or through https://extensions.gnome.org/, the site dedicated to this purpose. It was not especially clear to users how to proceed. The new application handles this task exclusively, and its name will help guide users to this feature if they want it.

GNOME Classic

GNOME Classic is a GNOME Shell theme that tries to reproduce the GNOME 2 interface while keeping the technical foundations of GNOME 3.

Allan Day removed the application overview, which is specific to the GNOME 3 interface, from this mode to bring it closer to the GNOME 2 user experience. That view was automatically shown when pointing the mouse cursor at the top-left corner of the screen.

      QtGNOME

This compatibility layer between Qt applications and GNOME ensures that applications integrate into GNOME as well as possible: using the same theme, choosing the dark or light appearance, and so on.

Jan Grulich updated this component to reflect the changes in GNOME's default theme, Adwaita, and to improve integration with Flatpak applications.

Internationalization

Internationalization is rarely handled completely well. It often requires manual steps from the user to get all of the system's content in their native language, with fonts that can display it and matching data formats.

Despite constant improvements in this area in recent years, changing language still required actions in several places, such as configuring the desktop environment and installing missing packages. Sundeep Anand fixed this in GNOME: choosing the language in the Control Center now triggers the installation of the necessary langpacks.

Multimedia codecs

Cisco, Endless, Red Hat and Centricular worked to deliver an update to the OpenH264 codec, version 2.0. This update adds support for additional profiles of this codec, namely High and Advanced, so that more files using it can be played.

Wim Taymans fixed some audio quality bugs for files encoded with AAC.

The MPEG2 codec is also on the list of upcoming work, to improve its native support with Free Software at good quality. More exotic codecs such as Windows Media or DivX are on the radar, but are not current priorities.

What the future holds

GNOME

Some optimizations in GNOME are still to come, notably around hardware management: starting services and configuration interfaces only when the underlying hardware makes them relevant. For instance, the Bluetooth daemon will not be started if the hardware does not support it.

Pipewire

Pipewire's progress makes it conceivable to enable it by default in Fedora Workstation 33, although the PulseAudio replacement still needs to be finished.

In addition, work is underway to allow video stream recording with zero memory copies, an optimization needed to guarantee fast processing without overloading the processor.

Hardware support

Atomic KMS

Jonas Ådahl is working on using atomic KMS in the kernel and in the system's graphics stack, so that display and display configuration happen atomically. This can also make it possible to use more advanced hardware features.

For example, using the hardware to store the contents of each client window independently: if only one window's contents change, the composition step can be skipped in software, which speeds up rendering. It could also allow framebuffers on even larger screens with good performance.

Furthermore, KMS rendering could be performed in a separate thread, which would reduce the latency between a mouse movement and its appearance on screen.

Miscellaneous

In collaboration with Lenovo, several features are expected in the future.

First, support for far-field microphones, which is useful in videoconferencing when the speaker has no headset and may be far from the microphone.

Next, detecting when a laptop is being used on someone's lap, to avoid burning the user, which is a common situation when traveling.

Finally, support for the hardware feature that limits the viewing angle of the screen. The display would only be readable head-on, preventing a neighbor or someone passing behind you from reading your entire screen. This would improve the confidentiality of what is displayed.

      Conclusion

As we can see from this list of examples, a major distribution like Fedora, but also Ubuntu, Debian or others, can bring much more than a list of software to install. They offer new tools, take part in developing or stabilizing the software they ship, and can collaborate with other companies or communities to improve support for their products.

Here we have only discussed the significant work of recent years; Fedora has also worked on PulseAudio, systemd, PackageKit, NetworkManager, the free nouveau driver and so many other components in the past!

Despite the strong ties between Red Hat and Fedora, we can see that much of Fedora's work in recent years has benefited most distributions today. And that is not about to end.

Moreover, the crowning achievement of these efforts was the recent signing of the partnership between Lenovo and Fedora to ship laptops with Fedora Workstation preinstalled. This was only possible because the system has reached a certain maturity. This partnership will also likely be the starting point for further improving the system's native hardware support, as briefly explained above.

      rpminspect-0.13 released

      Posted by David Cantrell on May 18, 2020 08:27 PM

      I released rpminspect-0.13 today. This release took a little longer to finish up than I was anticipating, but I am pleased with the bug fixes and new features present. Here’s a summary of the major changes in this release:

      New inspections

      • Add LTO inspection to librpminspect (#129)
      • Add the symlinks inspection to librpminspect (#133)
      • Add a new faux-result to the results output for rpminspect. Not really an inspection, but this is useful for debugging and bug reporting because it is easy to see how rpminspect was invoked.

      Bug fixes:

      • Fix some errors when running with libiniparser 3.1
      • Only set CURLOPT_TCP_FASTOPEN if we have it available
      • Make sure the changelog inspection runs with before/after pairs (#130)
      • Ignore debuginfo and debugsource packages in the kmod inspection
      • Skip the kmod inspection if there is no peer_file (#131)
      • Handle kernel modules that move paths between builds (#131)
      • First part of reworking the add_result() API
      • Add init_result_params() to reset the struct result_params structures
      • Remove MPARSE_MAN to let libmandoc autodetect the type (#132)
      • Revise list_to_string() to support optional delimiter
      • Add get_elf_section_names() to librpminspect
      • Do not strdup() header and remedy in add_result_entry()
      • Store package extract root in rpmpeer_entry_t for each package
      • Add strtype() to librpminspect to return string indicating file type
      • Simplify the license inspection routine (#138)
      • Add get_elf_machine() to readelf.c (#139)
      • Elf64_Half -> GElf_Half in dt_needed_driver()
      • Skip eBPF ELF objects in the ‘elf’ inspection (#139)
      • Stop appending a newline to string in strappend()
      • Collect all results from getLatestBuild Koji XML-RPC call (#137)
      • Return EM_NONE in get_elf_machine()
      • In download_build(), fix how srcfmt is set
      • Fix some memory errors associated with the results and parameters
      • Use params.msg for reporting in check_bin_rpm_changelog()
      • Make sure only RPM files are passed to get_rpm_info()
      • get_rpm_info() and add_peer() have void returns
      • When public headers change in ‘changedfiles’, do not free param.details
      • Check if eptr->data is NULL in find_one_peer (#142)
      • Define EM_BPF if elf.h lacks it (impacts EPEL-7 builds)
      • Skip ‘upstream’ inspection if no source packages are provided
      • Simplify how the versions are collected in inspect_upstream()

      Test cases:

      • Expand the template rpminspect.conf file for the test suite
• Handle ‘localhost.localdomain’ FQDN in the test suite base classes
      • Rework the test_manpage.py tests to work with rpm >= 4.11.x
      • Test cases for kernel modules changing paths between builds (#131)
      • Add ‘LTO’ inspection test cases (#129)
      • Add tests for the ‘symlinks’ inspection to the test suite
      • Add test cases for the ‘ownership’ inspection
      • Add test cases for the ‘upstream’ inspection

      Misc:

      • Remove the GitHub Release page stuff from utils/release.sh
      • Drop meson_version from meson.build
      • Change meson.build to require xmlrpc-c >= 1.32.5
      • BuildRequires xmlrpc-c >= 1.32.5 and iniparser >= 3.1
      • Modify the Makefile so it works with ‘ninja’ or ‘ninja-build’
      • Rename the tests/ subdirectory to test/
      • Split meson.build out in to different meson.build files
      • Move builds.c to lib/, remove builds.h from src/
      • Move rpminspect.conf to data/, expand data/meson.build
• Fix the --version output to remove ‘@’ wrapping the version number
      • Remove diff.3, the code is gone from lib/
      • Begin doc/Doxyfile for API documentation
      • Add Doxygen documentation for badwords.c, builds.c, and checksums.c
      • Add Doxygen documentation to four C files, update others
      • Support [lto] section with lto_symbol_name_prefixes in rpminspect.conf
      • Add explicit librpminspect Requires to the main package
      • Update translation template

Some new inspections, bug fixes for existing inspections, and general improvements. Test cases continue to expand. Many thanks to those who have contributed test cases and the bug reports that help create them.

The LTO inspection is tied to the LTOByDefault change in Fedora. The inspection reports any ELF relocatable objects that sneak through with LTO bytecode. LTO bytecode is not portable across gcc releases, so we should not ship ELF .o files with it attached.

      The symlinks inspection checks for dangling symlinks and warns of circular links and other conditions that present problems in RPM packages.

      This release took a little longer to get out because the Koji API had some changes. The module_build_service_id key in the Koji XML-RPC response changed from a string to an integer. This also uncovered some problems in rpminspect with how download URLs are constructed. All of that has been fixed now in rpminspect.

      Lastly, the result reporting structure in librpminspect has changed in an effort to support more varied output types in the future. Miro Hrončok requested a single line style output and I have been working to get to that as well as a few other formats people have requested. Soon!

      See https://github.com/rpminspect/rpminspect/releases/tag/v0.13 for more information. Builds are available in my Copr repository and in rawhide, Fedora 31, Fedora 32, and EPEL-8.

      In addition to the new rpminspect release, there is also a new rpminspect-data-fedora release. This data file package contains updates that match the changes in this new release of rpminspect. The new rpminspect-data-fedora release is available in my Copr repo and in rawhide, Fedora 31, Fedora 32, and EPEL-8.

      New badge: Fedora 36 Change Accepted !

      Posted by Fedora Badges on May 18, 2020 07:51 PM
Fedora 36 Change Accepted: You got a "Change" accepted into the Fedora 36 Change list

      New badge: Fedora 35 Change Accepted !

      Posted by Fedora Badges on May 18, 2020 07:51 PM
Fedora 35 Change Accepted: You got a "Change" accepted into the Fedora 35 Change list

      New badge: Fedora 34 Change Accepted !

      Posted by Fedora Badges on May 18, 2020 07:50 PM
Fedora 34 Change Accepted: You got a "Change" accepted into the Fedora 34 Change list

F32-20200518 updated Live ISOs released

      Posted by Ben Williams on May 18, 2020 06:39 PM

      The Fedora Respins SIG is pleased to announce the latest release of Updated F32-20200518-Live ISOs, carrying the 5.6.12-200 kernel.

      Welcome to Fedora 32.

This set of updated ISOs will save a considerable amount of updates after a new install. (New installs of Workstation currently have about 776 MB of updates.)

A huge thank you goes out to IRC nicks dowdle, ledini, linuxmodder, and Southern-Gentleman for testing these ISOs.

We would also like to thank Fedora QA for running the following tests on our ISOs:

      https://openqa.fedoraproject.org/tests/overview?distri=fedora&version=32&build=FedoraRespin-32-updates-20200518.0&groupid=1

       

As always, our ISOs can be found at http://tinyurl.com/Live-respins.

      New badge: Fedora 33 Change Accepted !

      Posted by Fedora Badges on May 18, 2020 06:19 PM
Fedora 33 Change Accepted: You got a "Change" accepted into the Fedora 33 Change list

      Patching Vendored Rust Dependencies

      Posted by Michael Catanzaro on May 18, 2020 03:56 PM

      Recently I had a difficult time trying to patch a CVE in librsvg. The issue itself was simple to patch because Federico kindly backported the series of commits required to fix it to the branch we are using downstream. Problem was, one of the vendored deps in the old librsvg tarball did not build with our modern rustc, because the code contained a borrow error that was not caught by older versions of rustc. After finding the appropriate upstream fix, I tried naively patching the vendored dep, but that failed because cargo tries very hard to prevent you from patching its dependencies, and complains if the dependency does not match its checksum in Cargo.lock. I tried modifying the checksum in Cargo.lock, but then it complains that you modified the Cargo.lock. It seems cargo is designed to make patching dependencies as difficult as possible, and that not much thought was put into how cargo would be used from rpmbuild with no network access.

      Anyway, it seems the kosher way to patch Rust dependencies is to add a [patch] section to librsvg’s Cargo.toml, but I could not figure out how to make that work. Eventually, I got some help: you can edit the .cargo-checksum.json of the vendored dependency and change “files” to an empty array, like so:

      diff --git a/vendor/cssparser/.cargo-checksum.json b/vendor/cssparser/.cargo-checksum.json
      index 246bb70..713372d 100644
      --- a/vendor/cssparser/.cargo-checksum.json
      +++ b/vendor/cssparser/.cargo-checksum.json
      @@ -1 +1 @@
      -{"files":{".cargo-ok":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",".travis.yml":"f1fb4b65964c81bc1240544267ea334f554ca38ae7a74d57066f4d47d2b5d568","Cargo.toml":"7807f16d417eb1a6ede56cd4ba2da6c5c63e4530289b3f0848f4b154e18eba02","LICENSE":"fab3dd6bdab226f1c08630b1dd917e11fcb4ec5e1e020e2c16f83a0a13863e85","README.md":"c5781e673335f37ed3d7acb119f8ed33efdf6eb75a7094b7da2abe0c3230adb8","build.rs":"b29fc57747f79914d1c2fb541e2bb15a003028bb62751dcb901081ccc174b119","build/match_byte.rs":"2c84b8ca5884347d2007f49aecbd85b4c7582085526e2704399817249996e19b","docs/.nojekyll":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855","docs/404.html":"025861f76f8d1f6d67c20ab624c6e418f4f824385e2dd8ad8732c4ea563c6a2e","docs/index.html":"025861f76f8d1f6d67c20ab624c6e418f4f824385e2dd8ad8732c4ea563c6a2e","src/color.rs":"c60f1b0ab7a2a6213e434604ee33f78e7ef74347f325d86d0b9192d8225ae1cc","src/cow_rc_str.rs":"541216f8ef74ee3cc5cbbc1347e5f32ed66588c401851c9a7d68b867aede1de0","src/from_bytes.rs":"331fe63af2123ae3675b61928a69461b5ac77799fff3ce9978c55cf2c558f4ff","src/lib.rs":"46c377e0c9a75780d5cb0bcf4dfb960f0fb2a996a13e7349bb111b9082252233","src/macros.rs":"adb9773c157890381556ea83d7942dcc676f99eea71abbb6afeffee1e3f28960","src/nth.rs":"5c70fb542d1376cddab69922eeb4c05e4fcf8f413f27563a2af50f72a47c8f8c","src/parser.rs":"9ed4aec998221eb2d2ba99db2f9f82a02399fb0c3b8500627f68f5aab872adde","src/rules_and_declarations.rs":"be2c4f3f3bb673d866575b6cb6084f1879dff07356d583ca9a3595f63b7f916f","src/serializer.rs":"4ccfc9b4fe994aab3803662bbf31cc25052a6a39531073a867b14b224afe42dd","src/size_of_tests.rs":"e5f63c8c18721cc3ff7a5407e84f9889ffa10e66da96e8510a696c3e00ad72d5","src/tests.rs":"80b02c80ab0fd580dad9206615c918e0db7dff63dfed0feeedb66f317d24b24b","src/tokenizer.rs":"429b2cba419cf8b923fbcc32d3bd34c0b39284ebfcb9fc29b8eb8643d8d5f312","src/unicode_range.rs":"c1c4ed2493e09d248c526ce1ef8575a5f8258da3962b64ffc814ef3bdf9780d0"},"package":"8a807ac3ab7a217829c2a3b65732b926b2befe6a35f33b4bf8b503692430f223"}
      \ No newline at end of file
      +{"files":{},"package":"8a807ac3ab7a217829c2a3b65732b926b2befe6a35f33b4bf8b503692430f223"}

      Then cargo will stop complaining and you can patch the dependency. Success!
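For reference, the [patch] section mentioned above is supposed to look roughly like this in the top-level Cargo.toml, assuming a local copy of the fixed crate (treat this as a sketch; as noted above, I could not get this route to work in this setup):

[patch.crates-io]
cssparser = { path = "patched/cssparser" }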

      Installation of Oracle extensions for PHP

      Posted by Remi Collet on May 18, 2020 01:38 PM

      As the question is raised quite often, here is my installation notes.

       

      1. Context

Two extensions exist that allow access to Oracle databases from PHP: oci8 and pdo_oci.

      To use these extensions, you need to have the client library available.

There are various ways to install this library, including using RPM, but it is not possible to properly handle the dependencies of the packages providing the extensions, so installation often results in an unusable configuration.

      $ php -v
      PHP Warning:  PHP Startup: Unable to load dynamic library 'oci8' (tried: /usr/lib64/php/modules/oci8 (/usr/lib64/php/modules/oci8: cannot open shared object file: No such file or directory), /usr/lib64/php/modules/oci8.so (libclntsh.so.19.1: cannot open shared object file: No such file or directory)) in Unknown on line 0

      2. PHP and extension installation

      For proper installation of PHP, simply follow the configuration wizard instructions.

      Then you can install the Oracle extension

      # yum install php-oci8

      3. Client version

      To know which version of the library is required, see the package description

      $ yum info php-oci8
      ...
                   : The extension is linked with Oracle client libraries 19.6
                   : (Oracle Instant Client).  For details, see Oracle's note
                   : "Oracle Client / Server Interoperability Support" (ID 207303.1).
                   :
                   : You must install libclntsh.so.19.1 to use this package, provided
                   : in the database installation, or in the free Oracle Instant Client
                   : available from Oracle.
      ...

For now, you need version 19.6, which allows connecting to database versions 11.2 and later.

      4. RPM usage

      The simple way is to install the library RPM provided by Oracle

      Download it from Oracle Instant Client Downloads

      Currently, you need the oracle-instantclient19.6-basic-19.6.0.0.0-1.x86_64.rpm package

      If not already present, you also need to install the libnsl package (dependency is not handled by the package)

      This is enough and you can verify using:

      $ php --ri oci8
      
      oci8
      
      OCI8 Support => enabled
      OCI8 DTrace Support => enabled
      OCI8 Version => 2.2.0
      Oracle Run-time Client Library Version => 19.6.0.0.0
      Oracle Compile-time Instant Client Version => 19.6

      5. Manual installation

      In a more complex case, e.g. if another version of the server is already installed on the same computer, or if you prefer to use an already installed library, you have to configure the library search path.

      Example, installation of instantclient-basic-linux.x64-19.6.0.0.0dbru.zip in /opt

      # mkdir /opt/oracle; cd /opt/oracle
      # unzip /tmp/instantclient-basic-linux.x64-19.6.0.0.0dbru.zip

5.1 Setting the default path

      If you have a single version of the library in the system, the simplest way is to add the directory to the linker default search path, which will be used by all users and all services

      # echo "/opt/oracle/instantclient_19_6" >/etc/ld.so.conf.d/oracle.conf
      # ldconfig

      5.2 User specific path

      If you prefer to set the path for each user (the more complex case)

      In command line

      $ export LD_LIBRARY_PATH=/opt/oracle/instantclient_19_6

      For web servers, httpd (if you are still using mod_php) or php-fpm, you have to change the environment of the service by overriding the unit file

      # systemctl edit php-fpm

      by adding the lines

      [Service]
      Environment=LD_LIBRARY_PATH=/opt/oracle/instantclient_19_6

      6. Other

      6.1 tnsnames.ora

If you still use this file for SID configuration (optional when using EasyConnect), you have to add its path to the httpd or php-fpm environment

      # systemctl edit php-fpm

      by adding the lines

      [Service]
      Environment=TNS_ADMIN=/path/to/network/admin

      6.2 SELinux

When database access uses the network, you have to explicitly allow it

      # setsebool -P httpd_can_network_connect on
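To check the whole chain end to end, a minimal test script may help (the user, password, and EasyConnect string below are placeholders, not values from these notes):

<?php
// Hypothetical credentials and EasyConnect string; replace with your own
$conn = oci_connect('myuser', 'mypassword', '//dbserver:1521/MYPDB');
if ($conn) {
    echo "Connected, the client library works\n";
    oci_close($conn);
} else {
    $e = oci_error();
    echo "Connection failed: " . $e['message'] . "\n";
}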

      7. Conclusion

For the record, I started building my PHP packages more than 15 years ago, especially to have these extensions.

Their installation has never been easy, particularly because of the impossibility of properly managing the dependencies, and because Oracle's license prevents me from providing my own packages of the client library.

With these installation notes, I hope this will be simpler and clearer.

When updating PHP, remember to check whether the required library version has changed.

      Using Fedora to implement REST API in JavaScript: part 2

      Posted by Fedora Magazine on May 18, 2020 08:00 AM

In part 1, you saw how to quickly create a simple API service using Fedora Workstation, Express, and JavaScript, and how simple creating a new API can be. This part shows you how to:

      • Install a DB server
      • Build a new route
      • Connect a new datasource
      • Use Fedora terminal to send and receive data

      Generating an app

      Please refer to the previous article for more details. But to make things simple, change to your work directory and generate an app skeleton.

      $ cd our-work-directory
$ npx express-generator --no-view --git myApp
      $ cd myApp
      $ npm i

      Installing a database server

In this part, we’ll install the MariaDB database server. MariaDB is Fedora’s default database.

      $ dnf module list mariadb | sort -u ## lists the streams available
      $ sudo dnf module install mariadb:10.3 ##10.4 is the latest

      Note: the default profile is mariadb/server.

For those who prefer to spin up a Docker container, a ready-made container with Fedora 31 is available.

      $ docker pull registry.fedoraproject.org/f31/mariadb
      $ docker run -d --name mariadb_database -e MYSQL_USER=user -e MYSQL_PASSWORD=pass -e MYSQL_DATABASE=db -p 3306:3306 registry.fedoraproject.org/f31/mariadb

      Now start the MariaDB service.

      $ sudo systemctl start mariadb

      If you’d like the service to start at boot, you can also enable it in systemd:

      $ sudo systemctl enable mariadb ## start at boot

      Next, setup the database as needed:

$ mysql -u root -p ## root password is blank
MariaDB> CREATE DATABASE users;
MariaDB> create user dbuser identified by '123456';
MariaDB> grant select, insert, update, create, drop on users.* to dbuser;
MariaDB> show grants for dbuser;
MariaDB> \q

      A database connector is needed to use the database with Node.js.

      $ npm install mariadb ## installs MariaDB Node.js connector

      We’ll leverage Sequelize in this sample API. Sequelize is a promise-based Node.js ORM (Object Relational Mapper) for Postgres, MySQL, MariaDB, SQLite and Microsoft SQL Server.

      $ npm install sequelize ## installs Sequelize

      Connecting a new datasource

      Now, create a new db folder and create a new file sequelize.js there:

      const Sequelize = require('sequelize'),
        sequelize = new Sequelize(process.env.db_name || 'users', process.env.db_user || 'dbuser', process.env.db_pass || '123456', {
          host: 'localhost',
          dialect: 'mariadb',
          ssl: true
      })
      
      module.exports = sequelize

Note: For the sake of completeness, I’m including a link to the related GitHub repo: https://github.com/vaclav18/express-api-mariadb

Let’s create a new file models/user.js. A nice feature of a Sequelize model is that it helps us create the necessary tables and columns automatically. The code snippet responsible for doing this is seen below:

      sequelize.sync({
      force: false
      })

      Note: never switch to true with a production database – it would drop your tables at app start!

We will refer to the previously created sequelize.js this way:

      const sequelize = require('../db/sequelize')
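Putting it together, a minimal models/user.js might look like this (a sketch; the single name column is an assumption based on the curl example later in this article):

const Sequelize = require('sequelize')
const sequelize = require('../db/sequelize')

// Define the User model; sync() below creates the matching table if needed
const User = sequelize.define('user', {
  name: { type: Sequelize.STRING, allowNull: false }
})

// force: false keeps existing tables; force: true would drop them at app start
sequelize.sync({
  force: false
})

module.exports = User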

      Building new routes

Next, you’ll create a new file routes/user.js. You already have routes/users.js from the previous article; you can copy that code in and proceed with editing it.

      You’ll also need a reference to the previously created model.

      const User = require('../models/user')

      Change the route path to /users and also create a new post method route.

Mind the async and await keywords there. Interacting with a database takes some time, and these do the trick: an async function returns a promise, and await makes promises easy to use.

      Note: This code is not production ready, since it would also need to include an authentication feature.
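A sketch of what routes/user.js could look like under these constraints (no authentication, as noted above):

const express = require('express')
const router = express.Router()
const User = require('../models/user')

// GET /users -- return all records as JSON
router.get('/users', async (req, res) => {
  const users = await User.findAll()
  res.json(users)
})

// POST /users -- create a record from the submitted form data
router.post('/users', async (req, res) => {
  const user = await User.create({ name: req.body.name })
  res.json(user)
})

module.exports = router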

We’ll wire in the new route this way:

      const userRouter = require('./routes/user')
      app.use(userRouter)

Let’s also remove the existing usersRouter; the routes/users.js file can be deleted too.

      $ npm start

      With the above command, you can launch your new app.

      Using the terminal to send and retrieve data

      Let’s create a new database record through the post method:

      $ curl -d 'name=Adam' http://localhost:3000/users

      To retrieve the data created through the API, do an HTTP GET request:

      $ curl http://localhost:3000/users

      The console output of the curl command is a JSON array containing data of all the records in the Users table.

Note: This is not the usual end result; in practice an application consumes the API. The API will usually also have endpoints to update and remove data.

      More automation

Let’s assume we want to create an API serving many tables. It’s possible, and very handy, to automatically generate Sequelize models from our database. Sequelize-auto will do the heavy lifting for us. The resulting files (models.js) would be placed in, and imported from, the /models directory.

      $ npm install sequelize-auto

A Node.js connector is needed to use it, and we already have the MariaDB one installed.
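Invoking it could look like this (a sketch; the flags follow sequelize-auto’s README, and the credentials are the ones created earlier):

$ npx sequelize-auto -h localhost -d users -u dbuser -x 123456 -p 3306 -e mariadb -o ./models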

      Conclusion

It’s possible to develop and run an API using Fedora, Fedora’s default MariaDB, and JavaScript, nearly as efficiently as with a NoSQL database. For those used to working with MongoDB or a similar NoSQL database, Fedora and MariaDB are important open-source enablers.


      Photo by Mazhar Zandsalimi on Unsplash.

      Modularity survey results

      Posted by Fedora Community Blog on May 18, 2020 07:00 AM

The purpose of this survey was to get feedback on Modularity. The survey was published on the public Fedora devel and an internal Red Hat mailing list on April 3, 2020, and also shared on Fedora’s devel-announce and epel-devel mailing lists. We received 193 responses in 3 weeks. Read more below or download the PDF of the […]

      The post Modularity survey results appeared first on Fedora Community Blog.

      Episode 197 – Beer, security, and consistency; the newer, better, triad

      Posted by Josh Bressers on May 17, 2020 11:22 PM
      Josh and Kurt talk about what beer and reproducible builds have in common. It’s a lot more than you think, and it mostly comes down to quality control. If you can’t reproduce what you do, you’re not a mature organization and you need maturity to have quality.

Audio: https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_197_Beer_security_and_consistency_the_newer_better_triad.mp3

      Show Notes

      Episode 197 - Beer, security, and consistency; the newer, better, triad

      Posted by Open Source Security Podcast on May 17, 2020 11:22 PM
      Josh and Kurt talk about what beer and reproducible builds have in common. It's a lot more than you think, and it mostly comes down to quality control. If you can't reproduce what you do, you're not a mature organization and you need maturity to have quality.


      Show Notes


          Open positions: NeuroFedora is looking to take on trainees

          Posted by The NeuroFedora Blog on May 17, 2020 10:38 PM
          Teamwork by Perry Grone on Unsplash

          Photo by Perry Grone on Unsplash.


          After the recent release of the Computational Neuroscience installable OS image, the NeuroFedora team is looking to work on to the next set of deliverables. For this, we need to expand the team.

          I want to note that we are not only looking for people that may already have the necessary skills. We are looking for anyone interested in working in these areas that would perhaps like to acquire the required skills. We will teach the skills that we can, and where we cannot, we will involve experienced members of the Free/Open Source software community to help us. All one really needs is a few hours a week of free time.

          We are looking for people interested in:

          • Scientific communication, marketing, outreach, and community engagement:
            • To spread information on the staggering amount of Free/Open Source Software that is available for Neuroscience to researchers and the community in general,
            • To disseminate the progress that the NeuroFedora team makes regularly to the community.
            • To just generally monitor our various communication channels to answer queries and participate in discussions with the team and users.
          • Software development:
• There are still about 200 tools to include in NeuroFedora, so there is still lots of work to do here. Some tools related to computational neuroscience remain, so we are working on those. However, we want to start making some headway on the next deliverable, which will be focused on neuroimaging and data analysis. Not only do we need to build these tools from source, we also need to test them regularly, and push new versions to our users when developers make new releases.
            • We also want to provide easy to use containers for all the tools that we are including in NeuroFedora.
          • Neuro-imaging and data analysis in Neuroscience:
• A lot of tools on our list are related to Neuro-imaging and data analysis. To effectively integrate these with the rest of NeuroFedora, we need more people with domain knowledge. If you work in these areas, or want to work in them and would like to learn more about these tools, NeuroFedora is a great informal environment to start in.

          It is common knowledge that joining Free/Open source communities is an excellent way to pick up skills and experience. So, I especially encourage students to join one, even if it is not NeuroFedora.

I also have first-hand experience of how busy a PhD candidate can get, but I have also found it possible to free up a few hours a week to work on developing general skills that one may not necessarily be able to learn from daily research work. So, I strongly encourage undergraduate/postgraduate research students and PhD candidates to do the same.

          Get in touch with us today!


          Update on GNOME documentation project and infra

          Posted by Petr Kovar on May 15, 2020 10:16 PM

          As you may have noticed, GNOME was recently accepted as a participating organization in the Season of Docs 2020 program (thanks Kristi Progri, Shaun McCance, and Emmanuele Bassi for your help with this).

          While we are eagerly awaiting potential participants to apply for the program and start contributing as documentation writers to GNOME user and developer documentation projects, I also wanted to summarize recent updates from the GNOME documentation infrastructure area.

          Back in January this year when conferences were not solely virtual, Shaun McCance, David King and yours truly managed to meet right before FOSDEM 2020 in Brussels, Belgium for two days of working on a next-gen site generator for GNOME user documentation.

As the largely unmaintained library-web running behind help.gnome.org remains one of the biggest pain points in the GNOME project, we have a long-term plan to replace it with Pintail. This generator, written by Shaun, builds Ducktype or Mallard documentation directly from Git repos, removing the need to handle Autotools, Meson, or any other build system in a tarball. library-web, by contrast, depends for historical reasons on released tarballs generated with Autotools only, with no support for Meson.

          With the help from the awesome GNOME Infrastructure team, we managed to get a test instance up and running at help-web.openshift.gnome.org for everybody to review. The sources are hosted at gitlab.gnome.org/Infrastructure/help.gnome.org. Please keep in mind that this all is very much a work in progress.

We summarized some of the top-priority issues to resolve as follows:

• Finalize the site design to be on par with what users can find on help.gnome.org currently.
          • Add translation support for the site, so users can access localized content.
          • Figure out the right GNOME stable branching scheme support to be used by the site.
Initially, we plan to make the latest stable and unstable (master) versions of each GNOME module available. We want branching and linking configuration to be automated, without the need to manually reconfigure documentation modules for every new branch or release, as is currently required with library-web. However, adding that support to Pintail requires some non-trivial code to be written, and we had David looking into that particular area.

With the limited amount of time we had during the pre-FOSDEM days, we still managed to make considerable progress towards having a documentation site replacement ready for deployment. But as is common with volunteer-based projects, the pace of work often slows down once the intense hacking session is over.

          I think we all realize that this area really needs more help from others in the community, be it Infrastructure, web design, localization, or simply community members submitting feedback. Please check out the help-web project on GNOME GitLab and leave comments, or, even better, submit merge requests.

          Thank you Shaun and David for your time and help!

          Fedora program update: 2020-20

          Posted by Fedora Community Blog on May 15, 2020 08:28 PM
          Fedora Program Manager weekly report on Fedora Project development and progress

          Here’s your report of what has happened in Fedora this week. Fedora 30 will reach end-of-life on 26 May. I have weekly office hours in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else. Announcements Help wanted Upcoming meetings Releases CPE update Announcements Orphaned packages seeking […]

          The post Fedora program update: 2020-20 appeared first on Fedora Community Blog.

          Reflections on “Empire City”

          Posted by Adam Young on May 15, 2020 08:02 PM

          “Empire City” by Matt Gallagher.



This is not a book review, except that I am writing it after I read the book and am ruminating on the messages therein. I do not write this to judge the writing.

          I do give a lot of spoilers.

I’ve been reading Matt’s work since he blogged from Iraq back in the Aughts. Unlike many of the milbloggers, Matt was an officer, a Lieutenant. Since my own abbreviated Army career ended with me leaving the Army as a Lieutenant a decade earlier, I kind of find myself in this perpetual Lieutenant-mentality when I think about the Army.

          The differences are worth noting: I was a Tabless bastard in the Infantry. Matt was a Cav scout, where a Ranger Tab was not a requirement for acceptance.
          I patrolled in Haiti for 80 days. That was my only “Real World” mission. I was not afraid of being shot at. Matt’s overseas experiences were a little more kinetic.
I got out of the Army and became a software engineer. I spent my free time rock climbing.
          Matt got out of the Army to become a writer. Not sure how he spends his free time.
I have a tech blog that I use as a programmer’s notebook. I am the only person that reads it regularly.

Matt wrote a weblog that was read by many people. It was taken down by his chain of command (abbreviated form of the story).

Matt writes books. I have never written a book.


Matt’s first book is called “Kaboom.” I took my time getting around to reading it; I needed some space from all things Army for a while. I finally grabbed it a couple of months back. The first half was a review of his blog; refreshing, but I had read it before. The second half was all new to me. Another parallel: the unit Matt served with for the remainder of his tour was one of the ones I served with in Hawaii. Nec Aspera Terrent.

          When I heard he wrote a fiction book, I was intrigued. I have to admit that, as a recovering fantasy/sci-fi bookworm, I have actually shied away from fiction lately, and I don’t find violence to be escapism anymore. I wanted more of his realism, but, hey, he gets to write what he writes. There was to be no sequel to Kaboom, which makes sense as he left the theater.

          I was wrong. Empire City is in every way a sequel to Kaboom.

In an alternative history, America “won” the Vietnam war. I suspect this, more than anything, leads to the parallels with “Watchmen.” Well, that and the superheroes.

In Kaboom, Matt talks about an operation alongside the Rangers, and his impression of them. The superheroes in Empire City are Rangers…except for the two that are not. One is a pilot, and one is the author’s avatar.

Sebastian Rios. Was the last name a nod to Rico, the main character from “Starship Troopers” by Heinlein? Even if it is subconscious, we all read and absorbed that book. San Sebastian…a non-white boy when he wants to be, able to hide in plain sight, whether visible or not. But he can use his name to reject the white supremacists. The coming of age story of a young neoliberal becoming a conservative warfighter? The drinking, the shades, the story told primarily from his perspective? A Catholic, not WASP, background?

Haiti. The word Haitian is one of the most pleasing-to-rhyme words I’ve come across. My own blog post used the term we called our deployment: “The Haitian Vacation.” In his blog, Matt used the Haitian Sensation as the code name for one of his soldiers. We see the influence of the Sensation on “Dash.” Dash is, of course, a bad name for a superhero with superspeed, as that was the name used in “The Incredibles,” but we can assume that to be an homage, too; maybe that movie didn’t exist in this alternate timeline.

The Chaplain. What a wonderful twist, to make him a non-noncombatant. The uniform, with the magical aura a Colonel-Chaplain would carry, would be a wonderful hide-in-plain-sight disguise, just like the homeless disguise he used on an everyday street. He shoots, he scores, he dies.

Is Mia an amalgam of some pilot friends? Is she City Girl? That was Matt’s code name for his girlfriend/fiancée from the Kaboom book, who is now his wife. They have been together over a decade now; write what you know. Matt’s got a couple of kids (as do I!) and there is no closer a man can get to a pregnant woman than living through the process at her side. The prosthetic as a banal reality of life, not stopping her from running 10 miles, is a nod to how we’ve changed.

          Would we even have Tupac without the changes that happened in the 80s? Would he really have lived to the present day?

What the fuck happened to all the “good guys” (and girls) at the end? I half expected that General Jackie Collins (a nod to the writer? Of course yes, and of course no) would prove to be the “Chessmaster” of the whole thing, organizing both the Superhero project and the assassination of the Governor, leading to her own presidency. That Gallagher does not come right out and tell us this is actually a wonderful bit of storytelling. Instead of a Hollywood ending, we get “Brazil.” All of the people that could have figured out what is going on get subsumed by this alternate history…because that is the rule in that world. Dash does not become a member of the outlaw group, he kills the leader of it. Mia stays in the campaign, even after she realizes that there are assassins associated with it. Sebastian “volunteers.” Flowers continues to get laid. Justice dies. In both the modern and the literal meaning of Literally.

          Is this Parody? Is this world a Dystopia? If so, is it any worse than the real world we are living in right now? Is this the way history “should have” gone according to the many right wing friends Matt and I both have from our time in the Army? Is Matt actually a right wing nut job hiding behind a liberal veneer to survive in modern day Manhattan? OK, I kinda doubt this last one.

Maybe this is all a setup for a series. Over time, our heroes see the wool has been pulled over their eyes, they fight the establishment, and we get pulled out of the dystopia. I kinda hope not. I like this as a standalone story; it is an amorality play. Reductio ad absurdum.

J.R.R. Tolkien wrote “The Fall of Gondolin” while recovering in the hospital after his wartime experiences in WWI. As such, it reflects a merging of his lifelong fascination with tales of the Fae and his horrific experiences of modern warfare. It was not analogy, it was therapy. I think the same is true of Empire City. It would not have been written if Matt had not gone to Iraq, and it reflects his experiences over there layered on top of the American culture he absorbed as a kid.

There is more, but I don’t need to write down every last detail of the book; this is long enough as it is.

          I’d like to end with my favorite quotes from the book.

          She took off her shoes for security and set her jawline to “Stoic.”

          And

          Jesus, Mary and Allah

          Curso de Python - Tuplas, listas, diccionarios y sets

          Posted by Alvaro Castillo on May 15, 2020 05:09 PM

Next we'll look at some of the key data types in Python, along with a few of their methods, which will be useful for getting started.

Tuples (tuple)

A set of values that does not change during the flow of program execution. Tuples can hold values of any data type, including another tuple.

More info in the official Python docs.

          >>> nacionalidad = ( 'Español', 'Turco', 'Italiano')
          >>> paises = ( 'España', nacionalidad, 'Turquía', 'Italia' )
          >>> print(paises)
          ('España', ('Español', 'Turco', 'Italiano'), 'Turquía', 'Italia')

Adding elements to the tuple:

          >>> nacionalidad = ( 'Español', 'Turco', 'Italiano')
          >>> paises = ( 'España', nacionalidad, 'Turquía', 'Italia' )
          >>> paises += ('Francia', 'Munich')
          >>> print(paises)
          ('España', ('Español', 'Turco', 'Italiano'), 'Turquía', 'Italia', 'Francia', 'Munich')

Repeating the tuple's contents a given number of times:

          >>> nacionalidad = ( 'Español', 'Turco', 'Italiano')
          >>> nacionalidad * 2
          ('Español', 'Turco', 'Italiano', 'Español', 'Turco', 'Italiano')

Showing the element located at a given position:

          >>> nacionalidad = ( 'Español', 'Turco', 'Italiano')
          >>> print(nacionalidad[2])
Italiano

Showing a specific set of the tuple's values by slicing on position:

          >>> paises = ( 'España', nacionalidad, 'Turquía', 'Italia','Francia', 'Munich')
          >>> print(paises[3:5])
          ('Italia', 'Francia')

Lists

A group of values enclosed in [], which can be changed simply and easily. More info in the official Python docs.

Declaring a list:

          >>> animales = [ 'gato', 'perro', 'búho' ]

Adding values at the end of the list (similar to the tuple):

          >>> animales = [ 'gato', 'perro', 'búho' ]
          >>> animales += [ 'lagartija', 'pez' ]
          >>> print(animales)
          ['gato', 'perro', 'búho', 'lagartija', 'pez']

We can also use .extend():

          >>> animales = [ 'gato', 'perro', 'búho' ]
          >>> animales.extend([ 'lagartija', 'pez' ])
          >>> print(animales)
          ['gato', 'perro', 'búho', 'lagartija', 'pez']

We can also insert a value at a specific position within the list with .insert():

          >>> animales = [ 'gato', 'perro', 'búho' ]
          >>> animales.insert(0, 'lagartija')
          >>> print(animales)
          ['lagartija', 'gato', 'perro', 'búho']

Removing a value from the list:

          >>> animales = [ 'gato', 'perro', 'búho' ]
          >>> animales.remove('gato')
          >>> print(animales)
['perro', 'búho']

Repeating the list's values a number of times:

          >>> animales = [ 'gato', 'perro', 'búho' ]
          >>> animales * 2
          ['gato', 'perro', 'búho', 'gato', 'perro', 'búho']

Showing specific values using their position in the list:

          >>> animales = [ 'gato', 'perro', 'búho' ]
          >>> print(animales[0:2])
          ['gato', 'perro']

Dictionaries

A set of values stored as key:value pairs, with the pairs separated by commas and everything enclosed in {}, much like JSON. Keys must be immutable (hashable) types such as str, int, or tuples; mutable types like lists, dictionaries, and sets cannot be used as keys. More info in the official Python docs.

          >>> ciudades = { 'Andalucía': 'Sevilla', 'País Vasco': 'Bilbao', 'Baleares':'Palma' }

Accessing a dictionary value:

          >>> ciudades = { 'Andalucía': 'Sevilla', 'País Vasco': 'Bilbao', 'Baleares':'Palma' }
          >>> print(ciudades['Andalucía'])
          Sevilla

Getting the number of entries in the dictionary:

          >>> ciudades = { 'Andalucía':...

          802.11n AP <-> Client Kick-Off Script (Py)

          Posted by Jon Chiappetta on May 15, 2020 04:00 PM

So I have been running multiple APs with the same SSID on separate channels and frequencies, and I noticed that clients are really good at switching from a weak 802.11ac signal to the stronger but lower-speed 802.11n AP. This is good; however, they don't seem to be as aggressive about switching back to 802.11ac once they get closer again (unless they power off, shut down, or restart their network stack), since the 802.11n signal just gets stronger the closer you get. I found an OpenWRT-compatible shell script which kicks clients off a given radio depending on their signal strength to the router. I adjusted it to disconnect a client if it starts to get too close to the N router, as it is then likely to get a good signal from the AC AP instead. You can set the AP deauth/ban time (e.g. 19 seconds), the time between kicking the same client again (e.g. 31 minutes), and the signal-to-noise ratio that must be exceeded (e.g. 45 SNR) before acting on a client!

          python /root/apc.py n 19 31
          
# usage: python apc.py <mode: ac|n> <deauth/ban seconds> <re-kick window minutes>
import os,sys,time

radios = []
mode = sys.argv[1]
secs = (int(sys.argv[2]) * 1000)	# hostapd ban_time is passed in milliseconds
macs = {}				# last kick timestamp per client MAC
ban = (int(sys.argv[3]) * 60)		# minimum seconds between kicks of the same client

if (mode == "ac"):
	radios = ["wlan0"]

if (mode == "n"):
	radios = ["wlan1"]

while True:
	sec = int(time.time())

	for intf in radios:
		# dump this interface's association list (signal, SNR) into a log file
		os.system("iwinfo '%s' assoclist | grep 'SNR' | tr '(/)' ' ' | tr -s ' ' > /tmp/apc.log" % (intf))
		f = open("/tmp/apc.log", "r")
		lines = f.readlines()
		f.close()

		for line in lines:
			info = line.strip().split(" ")
			mac = info[0] ; sig = int(info[1]) ; snr = abs(int(info[6]))
			print("> %d %d [%s][%s]" % (snr, sig, intf, mac))

			if (secs >= 1000):
				delc = 0
				opts = ("'addr':'%s', 'reason':5, 'deauth':false, 'ban_time':%s" % (mac, secs))

				# on the AC radio, kick clients whose signal has become weak
				if ((mode == "ac") and (sig <= -83)):
					delc = 1

				#if ((mode == "n") and (sig >= -51)):
				#	delc = 1

				# on the N radio, kick clients whose SNR is high enough that
				# they should get a good AC signal instead
				if ((mode == "n") and (snr >= 45)):
					delc = 1

				if (delc == 1):
					if (not mac in macs.keys()):
						macs[mac] = 0
					# only kick the same client again once the ban window expires
					if (sec >= (macs[mac] + ban)):
						macs[mac] = sec
						print("* %d %d [%s][%s][%d][%d]" % (macs[mac], sec, mac, intf, sig, snr))
						os.system('ubus call "hostapd.%s" del_client "{%s}"' % (intf, opts))

	# expire stale entries; list() copies the keys so deleting
	# during iteration is safe under Python 3
	for mac in list(macs.keys()):
		if (sec >= (macs[mac] + ban)):
			print("x", mac, macs[mac])
			del macs[mac]

	print("")
	time.sleep(9)
          
          

          Single file implementation of PEP582

          Posted by Kushal Das on May 15, 2020 02:11 PM

During the 2018 CPython core developer sprint, I worked on PEP 582. The goal was to help newbie learners during their first day writing Python by skipping the whole complexity of virtual environments. The PEP contains a reference implementation. During the sprint itself, a few core developers did not like the idea of yet another feature focusing only on newbies. Instead, there was another discussion about creating a single tool to solve all the problems in the packaging world.

Now, in 2020, we, the Python trainers, are still facing the same problem: how do we explain the whole idea of virtual environments to a newbie? Should we teach the concepts of operating systems, shells, and environments, or teach Python?

          A few nights ago, during a chat with Brett Cannon, he suggested having a single tool to do the same and see how people react.

          Introducing project PEP582

          PEP582 is a single file implementation of the above-mentioned idea. You can call it a stupid hack, but it works.

          Installing the project and using it

First, get the latest copy of the source, and then you can install it (without any root/administrator access) using Python itself. If you are using an Ubuntu or Debian system, it assumes that you already have python3-venv and python3-pip installed.

          curl https://raw.githubusercontent.com/kushaldas/pep582/master/pep582.py -o pep582.py
          python3 pep582.py --install
          Successfully installed in /home/kdas/.local/lib/python3.7/site-packages/pep582.py
          

After this, in any directory, if you create a __pypackages__ directory, the python executable will start using it. If you install any package via pip, it will also be installed in the __pypackages__ directory.
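As a concrete illustration, here is a minimal session sketch (the project directory and the requests package are arbitrary examples, not from the post; it assumes pep582.py was installed as shown above):

$ mkdir myproject && cd myproject
$ mkdir __pypackages__
$ python3 -m pip install requests    # per the post, this lands under __pypackages__ rather than site-packages
$ python3 -c "import requests; print(requests.__file__)"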

          pep582 demo

It does not modify the PATH variable, so installed executables are not picked up automatically. This is not a replacement for virtual environments. The tool is here to help newbies start programming fast. For more advanced work, they will have to learn about virtual environments.

Oh, and this works on Windows too. I have not tested it on Mac yet.

          pep582 demo

          Please play around, and let me know any improvement you want to see. You are always welcome to open issues in the project repository.

          The pieces of Fedora Silverblue

          Posted by Fedora Magazine on May 15, 2020 07:00 AM

Fedora Silverblue provides a useful workstation built on an immutable operating system. In “What is Silverblue?“, you learned about the benefits that an immutable OS provides. But what pieces go into making it? This article examines some of the technology that powers Silverblue.

          The filesystem

          Fedora Workstation users may find the idea of an immutable OS to be the most brain-melting part of Silverblue. What does that mean? Find some answers by taking a look at the filesystem.

          At first glance, the layout looks pretty much the same as a regular Fedora file system. It has some differences, like making /home a symbolic link to /var/home. And you can get more answers by looking at how libostree works. libostree treats the whole tree like it’s an object, checks it into a code repository, and checks out a copy for your machine to use.

          libostree

          The libostree project supplies the goods for managing Silverblue’s file system. It is an upgrade system that the user can control using rpm-ostree commands.

          libostree knows nothing about packages—an upgrade means replacing one complete file system with another complete file system. libostree treats the file system tree as one atomic object (an unbreakable unit). In fact, the forerunner to Silverblue was named Project Atomic.

          The libostree project provides a library and set of tools. It’s an upgrade system that carries out these tasks.

          1. Pull in a new file system
          2. Store the new file system
          3. Deploy the new file system

          Pull in a new file system

          Pulling in a new file system means copying an object (the entire file system) from a remote source to its own store. If you’ve worked with virtual machine image files, you already understand the concept of a file system object that you can copy.

          Store the new file system

          The libostree store has some source code control qualities—it stores many file system objects, and checks one out to be used as the root file system. libostree’s store has two parts:

          • a repository database at /sysroot/ostree/repo/
          • file systems in /sysroot/ostree/deploy/fedora/deploy/

libostree keeps track of what’s been checked in using commit IDs. Each commit ID can be found in a directory name, nested deep inside /sysroot. A libostree commit ID is a long checksum, and looks similar to a git commit ID.

          $ ls -d /sysroot/ostree/deploy/fedora/deploy/*/
          /sysroot/ostree/deploy/fedora/deploy/c4bf7a6339e6be97d0ca48a117a1a35c9c5e3256ae2db9e706b0147c5845fac4.0/

rpm-ostree status gives a little more information about that commit ID. The output is a little confusing; it can take a while to see that this file system is Fedora 31.

          $ rpm-ostree status
          State: idle
          AutomaticUpdates: disabled
          Deployments:
          ● ostree://fedora:fedora/31/x86_64/silverblue
                             Version: 31.1.9 (2019-10-23T21:44:48Z)
                              Commit: c4bf7a6339e6be97d0ca48a117a1a35c9c5e3256ae2db9e706b0147c5845fac4
                        GPGSignature: Valid signature by 7D22D5867F2A4236474BF7B850CB390B3C3359C4

          Deploy the new filesystem

          libostree deploys a new file system by checking out the new object from its store. libostree doesn’t check out a file system by copying all the files—it uses hard links instead. If you look inside the commit ID directory, you see something that looks suspiciously like the root directory. That’s because it is the root directory. You can see these two directories are pointing to the same place by checking their inodes.

          $ ls -di1 / /sysroot/ostree/deploy/fedora/deploy/*/
          260102 /
          260102 /sysroot/ostree/deploy/fedora/deploy/c4bf7a6339e6be97d0ca48a117a1a35c9c5e3256ae2db9e706b0147c5845fac4.0/

          This is a fresh install, so there’s only one commit ID. After a system update, there will be two. If more copies of the file system are checked into libostree’s repo, more commit IDs appear here.

          Upgrade process

          Putting the pieces together, the update process looks like this:

          1. libostree checks out a copy of the file system object from the repository
          2. DNF installs packages into the copy
          3. libostree checks in the copy as a new object
          4. libostree checks out the copy to become the new file system
          5. You reboot to pick up the new system files
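From the user's point of view, the whole sequence above is typically driven by a single command. A minimal sketch (output omitted):

$ rpm-ostree upgrade    # pull, compose, and stage the new file system object
$ systemctl reboot      # boot into the newly deployed tree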

          In addition to more safety, there is more flexibility. You can do new things with libostree’s repo, like store a few different file systems and check out whichever one you feel like using.
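If you want to poke around the store yourself, the plain ostree command can list what has been checked in. A sketch (run as root; the ref shown is the one from the rpm-ostree status output above and will differ per installation):

# ostree refs --repo=/sysroot/ostree/repo
# ostree log --repo=/sysroot/ostree/repo fedora:fedora/31/x86_64/silverblue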

          Silverblue’s root file system

          Fedora keeps its system files in all the usual Linux places, such as /boot for boot files, /etc for configuration files, and /home for user home directories. The root directory in Silverblue looks much like the root directory in traditional Fedora, but there are some differences.

          • The filesystem has been checked out by libostree
          • Some directories are now symbolic links to new locations. For example, /home is a symbolic link to /var/home
          • /usr is a read-only directory
          • There’s a new directory named /sysroot. This is libostree’s new home

          Juggling file systems

          You can store many file systems and switch between them. This is called rebasing, and it’s similar to git rebasing. In fact, upgrading Silverblue to the next Fedora version is not a big package install—it’s a pull from a remote repository and a rebase.

          You could store three copies with three different desktops: one KDE, one GNOME, and one XFCE. Or three different OS versions: how about keeping the current version, the nightly build, and an old classic? Switching between them is a matter of rebasing to the appropriate file system object.

          Rebasing is also how you upgrade from one Fedora release to the next. See “How to rebase to Fedora 32 on Silverblue” for more information.
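The rebase itself boils down to pointing rpm-ostree at a different ref. A sketch (the ref follows the pattern shown in the status output earlier; rpm-ostree rollback returns you to the previous deployment if anything goes wrong):

$ rpm-ostree rebase fedora:fedora/32/x86_64/silverblue
$ systemctl reboot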

          Flatpak

The Flatpak project provides a way of installing applications like LibreOffice. Applications are pulled from remote repositories like Flathub. It’s a kind of package manager, although you won’t find the word package in the docs. Traditional Fedora variants like Fedora Workstation can also use Flatpak, but the sandboxed nature of flatpaks makes it particularly good for Silverblue. This way you do not have to go through the entire ostree update process every time you wish to install an application.

          Flatpak is well-suited to desktop applications, but also works for command line applications. You can install the vim editor with the command flatpak install flathub org.vim.Vim and run it with flatpak run org.vim.Vim.
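Put together, a typical session looks like this (the remote-add line assumes the Flathub remote is not yet configured on your system):

$ flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
$ flatpak install flathub org.vim.Vim
$ flatpak run org.vim.Vim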

          toolbox

The toolbox project provides a traditional operating system inside a container. The idea is that you can mess with the mutable OS inside your toolbox (the Fedora container) as much as you like, and leave the immutable OS outside your toolbox untouched. You can pack as many toolboxes as you want onto your system, so you can keep work separated. Behind the scenes, the executable /usr/bin/toolbox is a shell script that uses podman.

          A fresh install does not include a default toolbox. The toolbox create command checks the OS version (by reading /usr/lib/os-release), looks for a matching version at the Fedora container registry, and downloads the container.

          $ toolbox create
          Image required to create toolbox container.
          Download registry.fedoraproject.org/f31/fedora-toolbox:31 (500MB)? [y/N]: y
          Created container: fedora-toolbox-31
          Enter with: toolbox enter

          Hundreds of packages are installed inside the toolbox. The dnf command and the usual Fedora repos are set up, ready to install more. The ostree and rpm-ostree commands are not included – no immutable OS here.

Each user’s home directory is mounted inside their toolbox, for storing content files outside the container.
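A short session sketch (gcc is an arbitrary example package; the dnf install mutates only the container, never the host):

$ toolbox enter
⬢ $ sudo dnf install gcc
⬢ $ exit
$ toolbox run gcc --version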

          Put the pieces together

Spend some time exploring Fedora Silverblue and it will become clear how these components fit together. Like other Fedora variants, all of these tools come from open source projects. You can get as up close and personal as you want, from reading their docs to contributing code. Or you can contribute to Silverblue itself.

          Join the Fedora Silverblue conversations on discussion.fedoraproject.org or in #silverblue on Freenode IRC.

Installing MongoDB on CentOS 8

          Posted by Alvaro Castillo on May 14, 2020 06:40 PM

Before getting into the subject, I want to mention that the instructions described here were taken from the official MongoDB documentation, and I have tested them personally to make sure the procedure works.

Image obtained from MongoDB.com

To begin with, MongoDB is a document-oriented (NoSQL) database that uses a data interchange format called BSON, a binary representation of JSON-like data structures and maps. Without further ado, let's get to it.

Prerequisites

Important: take a snapshot or backup before starting:

• Update the OS before starting:
  # dnf upgrade -y
• NOTE: Reboot before proceeding if the update installed a new kernel or systemd.

Installing MongoDB

• Create the file /etc/yum.repos.d/mongodb-org-4.2.repo and add the following lines:
            [mongodb-org-4.2]
            name=MongoDB Repository
            baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/4.2/x86_64/
            gpgcheck=1
            enabled=1
            gpgkey=https://www.mongodb.org/static/pgp/server-4.2.asc
• Refresh the repositories:
  # dnf check-update
• Install the server and the client shell:
  # dnf install -y mongodb-org
• By default, MongoDB stores all of its content in these directories:

• /var/lib/mongo ----> data directory
• /var/log/mongodb --> log directory
• If you want to make any changes, the configuration is in /etc/mongod.conf. Keep in mind that MongoDB runs as the user:group mongod:mongod, so if you change any of the paths you will need to change that user's $HOME, grant the corresponding user:group permissions on the new directories, and update the paths in the configuration file.

NOTE: It is better to keep this on a filesystem separate from /, so that a large amount of stored data cannot fill the root filesystem and stop the system from working properly, forcing a reboot.

• Check that the service is stopped:
            $ systemctl status mongod.service
            ● mongod.service - MongoDB Database Server
            Loaded: loaded (/usr/lib/systemd/system/mongod.service; enabled; vendor pres>
            Active: inactive (dead)
             Docs: https://docs.mongodb.org/manual

Managing SELinux permissions

As I have mentioned in a couple of previous articles, SELinux is a set of security policies that work together with the Linux kernel to keep exploits or malware from compromising the security of the machine. The change we are going to make allows MongoDB to access /sys/fs/cgroup so that it can determine how much memory is available on the system.

• Make sure the checkpolicy package is installed on the system:
  # dnf install checkpolicy

NOTE: If it is already installed, the output will look something like this:

            Last metadata expiration check: 0:11:28 ago on Wed 13 May 2020 11:06:20 PM CEST.
            Package checkpolicy-2.9-1.el8.x86_64 is already installed.
            Dependencies resolved.
            Nothing to do.
            Complete!
• Create a policy file named mongodb_cgroup_memory.te:
          # cat > mongodb_cgroup_memory.te <<EOF
          module mongodb_cgroup_memory 1.0;
          
          require {
            type cgroup_t;
            type mongod_t;
            class dir search;
            class file { getattr open read };
          }
          
          #============= mongod_t ==============
          allow mongod_t cgroup_t:dir search;
          allow mongod_t cgroup_t:file { getattr open read };
          EOF
• Compile and load the policy module:
            # checkmodule -M -m -o mongodb_cgroup_memory.mod mongodb_cgroup_memory.te
            # semodule_package -o mongodb_cgroup_memory.pp -m mongodb_cgroup_memory.mod
            # semodule -i mongodb_cgroup_memory.pp
• Create a service that disables THP (Transparent Huge Pages)...
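Once the remaining setup is done, starting and verifying the service follows the usual systemd pattern. A minimal sketch (the mongo --eval check is one common way to confirm the server answers; adjust to taste):

# systemctl enable --now mongod.service
# systemctl status mongod.service
# mongo --eval 'db.runCommand({ connectionStatus: 1 })'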