Fedora People

20 years of blog!

Posted by Kevin Fenzi on November 30, 2023 07:03 PM

Just a short post to note that 20 years ago (!) I posted the first entry in this blog. I’ve been rather busy of late and haven’t posted too much, but it’s still up and a going concern. 🙂 Then, I posted about a long thanksgiving trip. This year I stayed very close to home for thanksgiving. Changes all around.

I plan some posts soon reviewing the amd/ryzen frame.work upgrade I got and the lovely asus hyper M.2 card (4 nvme drives). Also, some thoughts on open source and the like.

Here’s to 20 more.

ANNOUNCE: libvirt-glib release 5.0.0

Posted by Daniel Berrange on November 30, 2023 02:59 PM

I am pleased to announce that a new release of the libvirt-glib package, version 5.0.0, is now available from


The packages are GPG signed with

Key fingerprint: DAF3 A6FD B26B 6291 2D0E 8E3F BE86 EBB4 1510 4FDF (4096R)

Changes in this release:

  • Fix compatibility with libxml2 >= 2.12.0
  • Bump min libvirt version to 2.3.0
  • Bump min meson to 0.56.0
  • Require use of GCC >= 4.8 / CLang > 3.4 / XCode CLang > 5.1
  • Mark USB disks as removable by default
  • Add support for audio device backend config
  • Add support for DBus graphics backend config
  • Add support for controlling firmware feature flags
  • Improve compiler flag handling in meson
  • Extend library version script handling to FreeBSD
  • Fix pointer sign issue in capabilities config API
  • Fix compat with gnome.mkenums() in Meson 0.60.0
  • Avoid compiler warnings from gi-ir-scanner generated code by not setting glib version constraints
  • Be more robust about NULL GError parameters
  • Disable unimportant cast alignment compiler warnings
  • Use ‘pragma once’ in all header files
  • Updated translations

Thanks to everyone who contributed to this new release.

AI will completely change the game for spammers, and sophisticated scams will become far more cunning

Posted by Truong Anh Tuan on November 30, 2023 10:25 AM
This article was published in Fortune magazine and is loosely translated here for Vietnam Information Security Day 2023, taking place under the theme "Data safety in the era of cloud computing & artificial intelligence". About the author: John Licato is an assistant professor of Computer Science… Continue reading "AI will completely change the game for spammers, and sophisticated scams will become far more cunning"

Join a December 2023 Fedora Docs workshop!

Posted by Fedora Community Blog on November 30, 2023 08:00 AM

Use Linux on the desktop or server? Want to benefit from writing user documentation? There’s no better way to wrap up 2023 than with the Fedora Docs workshop.

Join us on December 7, 2023 from UTC 08:00-09:00 (see local times below). The Fedora Docs team will guide you through contributing user documentation, building your technical writing skills, and learning the Docs tool set.

Jitsi Meeting Link for December 2023


View local times

  • 08:00-09:00 in London
  • 09:00-10:00 in Berlin
  • 10:00-11:00 in Athens
  • 11:00-12:00 in Nairobi
  • 12:00-13:00 in Abu Dhabi
  • 13:30-14:30 in Bengaluru
  • 16:00-17:00 in Singapore and Kuala Lumpur
  • 17:00-18:00 in Seoul and Tokyo
  • 19:00-20:00 in Sydney
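As a sanity check, the local times above can be derived from the UTC slot with Python's standard zoneinfo module; a small sketch using the workshop date and three of the listed cities:

```python
# Derive local workshop times from the 08:00-09:00 UTC slot on 2023-12-07.
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

start_utc = datetime(2023, 12, 7, 8, 0, tzinfo=timezone.utc)
local_starts = {}
for city, tz in [("Berlin", "Europe/Berlin"),
                 ("Bengaluru", "Asia/Kolkata"),
                 ("Tokyo", "Asia/Tokyo")]:
    local_starts[city] = start_utc.astimezone(ZoneInfo(tz)).strftime("%H:%M")
print(local_starts)
```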

We’ll rotate meeting time in 2024 to cater for a diverse set of time zones. The workshop will be recorded and shared on PeerTube, so you can watch it later.


  • Best practices in technical documentation from the Fedora Project
  • Editorial process and Docs repositories
  • Ask the experts

Who will benefit from this interactive workshop?

  • You use software running on a Linux desktop and/or server to get the job done
  • You love to document tools, features, and how-to guides, which require care and maintenance to ensure technical accuracy and conformance to the documentation style guide
  • You have expertise in web design, UI, or UX development that can help optimize the graphical and functional appearance of Fedora Quick Docs and its curation
  • You are interested in the Docs tool chain
  • You want to gain experience in technical writing

Other awesome documentation tools you’ll learn and use in 2024

GitLab Web IDE, Podman, Git, vale linter

The post Join a December 2023 Fedora Docs workshop! appeared first on Fedora Community Blog.

Video: Distrobox Demo on Fedora 39 Host

Posted by Scott Dowdle on November 29, 2023 07:03 PM
Video: Distrobox Demo on Fedora 39 Host Scott Dowdle Wed, 11/29/2023 - 12:03

Version 4.5.0 of syslog-ng is now available with OpenObserve JSON API support

Posted by Peter Czanik on November 29, 2023 09:12 AM

Recently, syslog-ng 4.5.0 was released with many new features. These include sending logs to OpenObserve using its JSON API, support for Google Pub/Sub, a new macro describing message transport mechanisms like RFC 3164 + TCP, an SSL option to ignore validity periods, and many more. You can find a full list of new features and bug fixes in the release notes at: https://github.com/syslog-ng/syslog-ng/releases/tag/syslog-ng-4.5.0

In this blog, you can find some pointers on how to install the very latest syslog-ng version and learn how you can configure syslog-ng to use the OpenObserve JSON API: https://www.syslog-ng.com/community/b/blog/posts/version-4-5-0-of-syslog-ng-is-now-available-with-openobserve-json-api-support


syslog-ng logo


Searching for packages with ‘rpm-ostree search’

Posted by Fedora Magazine on November 29, 2023 08:00 AM

This article describes how to find an application to add to your ostree based system (such as Silverblue and Kinoite) using rpm-ostree.

One of the main benefits of the Fedora ostree-based systems is the immutability of the system. Not only is the image read-only, it’s also pre-built on the Fedora servers. Thus, updating a running system downloads the update deltas (that is, only the differences) and patches the system. This makes the many installations identical by default.

A pre-built image will be good enough for most people, since users are normally encouraged to use both flatpaks for applications and toolbox for development tasks. But what if the specific application doesn’t fit this and the user requires installing applications on the host system?

Well, in this case the option is to overlay the packages on the system, creating a new image locally with the added package on top of the standard image.

But, how do I know what package I want to install? What about search functionality?

The old way (toolbox + dnf search)

While it has always been possible to search for packages via the PackageKit-enabled Software Center (such as GNOME Software or KDE Discover), it has been a bit harder to do so via CLI.

Since rpm-ostree didn’t use to offer a search command, the common way to search has been to enter a toolbox with toolbox enter and search using dnf search <search term>. This has the downside of requiring the same repositories be enabled in the toolbox in order to get proper search results.

Example for searching for neofetch:

$ toolbox enter
<Note that at this point the toolbox command might request creating a toolbox, which might involve downloading a container image>
⬢[fedora@toolbox ~]$ dnf search neofetch
=== Name Exactly Matched: neofetch ===
neofetch.noarch : CLI system information tool written in Bash
=== Summary Matched: neofetch ===
fastfetch.x86_64 : Like neofetch, but much faster because written in c

The new way (rpm-ostree search)

Since version 2023.6, rpm-ostree supports the “search” verb allowing users to use rpm-ostree to search for available packages. An example command for this is:

 rpm-ostree search *kernel

To use the search verb, first make certain you are using rpm-ostree version 2023.6 or later:

$ rpm-ostree --version
 Version: '2023.8'
 Git: 9a99d0af32640b234318815a256a2d11e35fa64c
  - rust
  - compose
  - container
  - fedora-integration

If the version requirement is satisfied, you should be able to run rpm-ostree search <search terms>.

Here is an example, searching for neofetch with rpm-ostree search:

$ rpm-ostree search neofetch

===== Name Matched =====
neofetch : CLI system information tool written in Bash

===== Summary Matched =====
fastfetch : Like neofetch, but much faster because written in c
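If you want to gate a script on the 2023.6 requirement, the version comparison can be automated. A minimal sketch, assuming the numeric "YYYY.N" version format shown in the sample output above:

```python
# Sketch: compare rpm-ostree-style version strings numerically,
# so that e.g. "2023.10" sorts after "2023.6".
def supports_search(version: str, minimum: str = "2023.6") -> bool:
    parse = lambda v: tuple(int(part) for part in v.split("."))
    return parse(version) >= parse(minimum)

print(supports_search("2023.8"))  # the version from the example output
print(supports_search("2023.5"))
```

In practice you would feed it the `Version:` field parsed from `rpm-ostree --version` output.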

Cockpit 306

Posted by Cockpit Project on November 29, 2023 12:00 AM

Cockpit is the modern Linux admin interface. We release regularly.

Here are the release notes from Cockpit 306, cockpit-machines 303, cockpit-podman 82, and cockpit-ostree 198.1:

Kdump: Add Ansible/shell automation

The Kdump page can generate an Ansible role or a shell script for replicating the current kdump configuration on other machines.

screenshot of the kdump page with an Ansible button

screenshot of shell automation dialog with the "Ansible" tab selected and the "shell script" tab unselected

OSTree: Redesign, with new features

The OSTree page for software updates has been redesigned and includes several new features.

Update status is now listed in the “Status” card, and details about the current OSTree repository and branch are visible in the “OSTree source” card.

new design of the OSTree page, showing off the status card

Unused deployments and the package cache can be removed using the “clean up” action.

screenshot of the clean up modal dialog, with cache selected and deployments unselected by default

A new “Reset” feature has been added, which can remove layered and overridden packages.

reset modal dialog

Deployments can be pinned so they persist even when new deployments trigger an auto-cleanup, and they can be unpinned later.

screenshot showing a pinned deployment

Machines: Change “Add disk” default behavior

The “Always attach” Persistent option will now be set by default.

Podman: Delete intermediate images

Intermediate images have no tags or other identifiers. They are usually intermediate layers from building container images or leftovers from moved tags. Intermediate images can now be deleted within Cockpit Podman.

screenshot of the "delete intermediate images" modal dialog

Try it out

Cockpit 306, cockpit-machines 303, cockpit-podman 82, and cockpit-ostree 198.1 are available now:

Call for participation: Testing and Continuous Delivery devroom, FOSDEM'24

Posted by Kiwi TCMS on November 28, 2023 09:00 AM

CFP banner

Attention testers! On behalf of the Testing and Continuous Delivery devroom we'd like to announce that call for participation is still open. This room is about building better software through a focus on testing and continuous delivery practices across all layers of the stack. The purpose of this devroom is to share good and bad examples around the question “how to improve quality of our software by automating tests, deliveries or deployments” and to showcase new open source tools and practices.

Note: for FOSDEM 2024 this devroom is a merger of the former Testing and Automation and Continuous Integration and Continuous Deployment devrooms, and it is co-organized by devroom managers from previous FOSDEM editions! Kiwi TCMS is proud to be part of the team hosting this devroom!

Important: devroom will take place on Sunday, February 4th 2024 at ULB Solbosch Campus, Brussels, Belgium! Presentations will be streamed online but all accepted speakers are required to deliver their talks in person!

Here are some ideas for topics that are a good fit for this devroom:

Testing in the real, open source world

  • War stories/strategies for testing large scale or complex projects
  • Tools that extend the ability to test low-level code, e.g. bootloaders, init systems, etc.
  • Projects that are introducing new/interesting ways of testing "systems"
  • Address the automated testing frameworks fragmentation
  • Stories from end-users (e.g. success/failure)
  • Continuous Integration
  • Continuous Delivery
  • Continuous Deployment
  • Security in Software Supply Chain
  • Pipeline standardization
  • Interoperability in CI/CD
  • Lessons learned

Cool Tools (good candidates for lightning talks)

  • Project showcases, modern tooling
  • How your open source tool made developing quality software better
  • What tools do you use to setup your CI/CD
  • Combining projects/plugins/tools to build amazing things "Not enough people in the open source community know how to use $X, but here's a tutorial on how to use $X to make your project better."

In the past, the devroom's audience has included newbies/students as well as experienced professionals and past speakers, with talks ranging from beginner/practical to advanced/abstract topics. If you have doubts, submit your proposal anyway and leave a comment for the devroom managers, and we'll get back to you.

To submit a talk proposal (you can submit multiple proposals if you'd like) use Pretalx, the FOSDEM paper submission system. Be sure to select Testing and Continuous delivery!

Check out https://fosdem-testingautomation.github.io/ for more information!

Happy Testing!

If you like what we're doing and how Kiwi TCMS supports various communities please help us!

CFP now open for the Distributions dev room at FOSDEM 2024!

Posted by Fedora Community Blog on November 28, 2023 08:00 AM

Co-authored by benny Vasquez and Justin W. Flory.

In just over two months, the Fedora Project returns to FOSDEM in Brussels, Belgium from 3-4 February 2024 again with a stand, a bunch of friends, and our usual role as organizers of the (Linux) Distributions Developer Room (dev room).

CFP is open until 5 December 2023!

The Distributions dev room opened its CFP a couple of weeks ago. We have seen great ideas come in! If you would like to join us, submit your talk for inclusion no later than December 5th. If you’re looking for ideas, we’ve got a list of potential topics on our official CFP announcement, and you can always look at previous years for more ideas.

Important dates for FOSDEM 2024

  • CFP closes: Tuesday, 5 December 2023
  • Deadline for accepted speakers to confirm: Tuesday, 12 December 2023
  • Final schedule announcement: Friday, 15 December 2023

What is a FOSDEM Developer Room?

Developer rooms are assigned to self-organizing groups to work together on open source projects, to discuss topics relevant to a broader subset of the community, etc.

From the FOSDEM website

The general idea is that a group of people get together to organize a mini-event under the FOSDEM umbrella. The group submits a proposal to the FOSDEM organizers, then recruits content and builds a schedule for the room. Everyone involved is likely to be a volunteer, but the content is always extremely beneficial, and presented by leaders and experts on their topics.

How to attend

FOSDEM is a free conference that requires no registration of any kind. You just show up on the days of the event, and then attend the talks that you want to attend. It is an extremely popular event, so the talks are also recorded and posted later on the FOSDEM website.

When the schedule is live, you’ll be able to find it on the FOSDEM website, so keep an eye out for updates!

Help out as a volunteer

Will you be at FOSDEM next year from 3-4 February 2024? The Distributions dev room welcomes volunteers to help with various day-of logistics for running the dev room. If you are interested in helping out with the Distributions dev room, send an introduction email to the FOSDEM distributions-devroom mailing list.

The post CFP now open for the Distributions dev room at FOSDEM 2024! appeared first on Fedora Community Blog.

On the nature of the right to privacy

Posted by Fabio Alessandro Locati on November 28, 2023 12:00 AM
In the last month, Meta has started to give their European users a choice between an account for their services paid in data or one paid in Euros. Today, noyb has filed a GDPR complaint against Meta over this behavior. Noyb has very good points to sustain their filing, but I don’t want to delve too much into those since those are very well explained in their press release. I think there is a deeper problem that they quickly touch but do not address directly, which is the interpretation of the kind of right that privacy is.

recvfrom system call in python

Posted by Adam Young on November 27, 2023 07:21 PM

With a socket created, and a message sent, the next step in our networking journey is to receive data.

Note that I used this post to get the parameter right. Without using the ctypes pointer function I got an address error.

from ctypes import create_string_buffer, c_int, sizeof, byref, pointer
# sock, libc, get_errno_loc and struct_sockaddr_mctp are set up in the
# previous articles (the socket creation and the ctypesgen-generated bindings).

print("calling recvfrom")
recv_sockaddr_mctp = struct_sockaddr_mctp(AF_MCTP, 0, mctp_addr, 0, 0)
buf_len = 2000  # renamed from "len" to avoid shadowing the built-in
rxbuf = create_string_buffer(buf_len)
sock_addr_sz = c_int(sizeof(recv_sockaddr_mctp))
MSG_TRUNC = 0x20
rc = libc.recvfrom(sock, rxbuf, buf_len, MSG_TRUNC,
                   byref(recv_sockaddr_mctp), pointer(sock_addr_sz))
errno = get_errno_loc()[0]

if rc < 0:
    print("rc = %d errno = %d" % (rc, errno))


From the previous articles, you might be wondering about these values:

AF_MCTP = 45
mctp_addr = struct_mctp_addr(0x32)

However, the real magic is the structures defined using ctypesgen. I recommend reading the previous article for the details.

While the previous two articles are based on the connect example code that I adjusted and posted here, the recvfrom code is an additional call. The C code it is based on looks like this:

    addrlen = sizeof(addr);

    rc = recvfrom(sd, rxbuf, len, MSG_TRUNC,
                (struct sockaddr *)&addr, &addrlen);
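If you want to experiment with the recvfrom() flow without MCTP hardware, here is a self-contained analogue using Python's socket module over a UDP loopback pair. This only mirrors the (buffer size, flags, address-out) shape of the call above; it does not use AF_MCTP:

```python
import socket

# Receiver: bind to loopback; the kernel picks a free port.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))
addr = recv_sock.getsockname()

# Sender: fire one datagram at the receiver.
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b"hello", addr)

# recvfrom returns both the data and the sender's address,
# just like the C call fills in the sockaddr out-parameter.
data, peer = recv_sock.recvfrom(2000)
print(data)  # prints b'hello'
recv_sock.close()
send_sock.close()
```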

PipeWire 1.0 – An interview with PipeWire creator Wim Taymans

Posted by Fedora Magazine on November 27, 2023 08:00 AM

Wim Taymans is a Fedora contributor and the creator of PipeWire, the system service that takes audio and video handling under Linux to the next level. Combining the power of PulseAudio and JACK, and adding a video equivalent of those audio services, allows Linux to become a premier content creation platform for audio engineers and video creators alike.

Q: So it's been 2 years since we talked about PipeWire here as part of the Fedora Workstation 34 launch. What are your development highlights since we last spoke?

So many! We’ve done more than 40 releases since then.

A big part of the work was to close the gap between pulseaudio and PipeWire. We did the transition in Fedora 34 with quite a few missing features that were not enabled by default but that people often used, such as the various network sinks and sources. We also added echo-cancellation and many of the other missing modules and finally we added S/PDIF passthrough, Airplay support and multiple sample rates. Most of these new modules now have more features than the pulseaudio equivalents.

We’ve also added something that I wanted to do for a long time: a filter-chain. This allows you to write a graph of filters to do various things. We’ve been building filters for 3D sound, reverbs, delays and equalizers with this.

PipeWire now also has support for some of the more advanced network audio protocols. We’ve added experimental support for AVB. We have working AES67 support and we support low-latency synchronization of AES67 streams with a PTP clock. We’ve also added support for Apple MIDI and extended the protocol a little to also handle raw audio and OPUS compressed audio.

We switched to WirePlumber 0.4 in Fedora 35 and deprecated our old session manager. A lot of work has been done for the next version of WirePlumber 0.5. That should be released, hopefully, soon and that includes a rework of the event system.

Other than that the code has matured a lot. Performance has increased and there are fewer bugs. The wire protocol is fully documented now. The scheduling has improved a lot and has more features.

Q: So there are 3 main parts to PipeWire: the PulseAudio support, which was mostly done already back in Fedora Workstation 34; the JACK support, which was there but with gaps; and finally the video support, which was there but with no real-world users yet. So to start with JACK, where are we with JACK support now?

We mostly implemented the missing features in JACK. We have support for freewheeling, which allows you to export an Ardour project, and we also have latency reporting, which is necessary to correctly align tracks on a timeline.

Compatibility with applications was improved a lot over time and required quite a few iterations to get it right. We’re now able to run the JACK stress test successfully, which required some fundamental changes in core memory management and scheduling. 

We’ve also recently started implementing the last missing pieces of JACK: NETJACK2 support and a FireWire audio driver called FFADO. The FFADO support likely needs some more work, because we have not been able to actually test it since we don’t have the hardware.

We support JACK as a driver (with jackdbus), which gives the PipeWire graph the same low-latency as JACK. This should also make it possible for people that used PulseAudio and jackdbus to migrate to PipeWire.

We now also have an IRQ based ALSA driver that works the same way as the JACK driver and achieves the same low latency as JACK. Initial benchmarks also show that PipeWire uses slightly fewer instructions per cycle compared to JACK, which is a bit surprising; more tests need to be done.

Q: And likewise what is the state of the video support?

On the video front we added support for DMABUF and DRM modifier negotiation. This makes it possible for a video provider such as the compositor to provide a stream in the most efficient format possible, when the client supports it.

We’ve also improved the scheduler to make it possible to handle headless compositors and throttling of the frame rate.

We’ve also spent a lot of time improving the libcamera plugins. We’re at a point where PipeWire camera support is added to browsers and we should be able to handle that now. We also have initial support for vulkan video filters.

There is a GSOC project to implement video conversion filters using Vulkan, which would make it possible to link and process video streams in more cases.

Since the launch in Fedora 34, there are now also a couple of native PipeWire patchbays, such as Helvum and qpwgraph that can reroute video streams as well. The most recent PipeWire version has improved support for relinking of video streams.

Q: So when PipeWire got merged into Fedora the message was that the official audio APIs for PipeWire would be the PulseAudio and JACK APIs. We have seen some projects come along since then using the PipeWire stream API instead. Is the message still to use the PulseAudio and JACK APIs for new development, or has your thinking on that changed over the last couple of years?

The message is still to use the PulseAudio and JACK APIs. They are proven and they work and they are fully supported.

I know some projects now use the pw-stream API directly. There are some advantages for using this API such as being lower latency than the PulseAudio API and having more features than the JACK API. The problem is that I came to realize that the stream API (and filter API) are not the ultimate APIs. I want to move to a combination of the stream and filter API for the future.

Q: So one of the goals for PipeWire was to help us move to a future of containerized applications using Flatpaks. How are we doing on that goal today? Does PipeWire allow us to Sandbox JACK applications for instance?

Yes, sandboxed JACK applications are available right now. You can run a flatpak Ardour that uses the PipeWire JACK client libraries in the flatpak.

What we don’t have yet is fine grained access control for those applications through the portal. We currently give flatpak applications restricted permissions. They don’t have, for example, write access on objects and so can’t change nodes or hardware volumes.

For video, we currently have more fine grained control because it is managed by the portal and permission store. We’re working on getting that kind of control for audio applications as well. 

Q: So one vision we had for PipeWire, when we started out, was that it would unify consumer and pro-audio and make usecase distributions like Fedora JAM or Ubuntu Studio redundant, instead allowing people, regardless of usecase, to just use Fedora Workstation and install both consumer and pro-audio applications through Flatpak. With what you are saying, are we basically there with that, once PipeWire 1.0 releases?

In a sense yes. It’s possible to run those applications from flatpak out of the box on Fedora Workstation without having to deal with a custom JACK/Pulseaudio setup like most of those pro-audio distributions do. 

There is, however, the aspect of integration of the various applications and the configuration of the Pro-audio cards like samplerates and latency that is not yet covered as nicely in Fedora Workstation compared to specific usecase distros.

Q: One of the things people have praised PipeWire for is improving support, under Linux, for Bluetooth headsets etc. What are the most recent developments there and what is next for it?

Bluetooth has improved a lot. We’ve added support for new codecs. PipeWire now has a vendor specific OPUS codec that allows PipeWire clients to use OPUS over bluetooth. This was developed in parallel with the Google OPUS vendor codec and so is not compatible yet.

We’ve also tracked kernel and bluez5 development and added support for the upcoming bluetooth LE standard with the LC3+ codec. We added infrastructure support for identifying, grouping, and handling the latency of separate bluetooth earpieces.

We did quite some tweaks to improve compatibility with devices, we changed the way we handle the rate control and the data transfer. There is now also support for bluetooth MIDI so (with some changes to the bluez5 config files)  you can pair and play on your bluetooth MIDI keyboard in PipeWire.

For mobile phones, we now also support offloading bluetooth handling to hardware.

We’re still tracking the changes in the kernel and bluez5 for the latest bluetooth LE changes.

Q: Another change since we last spoke is the switch to WirePlumber as the session manager. Is that transition completed in your mind now, and has it yielded the benefits hoped for?

The transition is certainly completed and is in many ways successful.

I think the hope was also that people would use the lua scripting to customize their experience. We have definitely seen people do this for specific use cases but it has not been widespread. 

I think also some of the barriers for seeing that adoption have been removed in the upcoming 0.5 version, which is shaping up quite nicely now. 

We’ve also not seen applications use the wireplumber library yet. I think this is partly because the pulseaudio compatibility is so good that there is no need for native applications yet. I heard someone is working on a pavucontrol rewrite in Rust and was looking to use the wireplumber API.

Q: I know you worked on a set of patches to add pipewire Camera handling to OBS Studio. What is the status of that work?

There are 2 pending features for OBS. One is support for cameras using the Portal and PipeWire and another is to export the OBS scenes as a PipeWire stream.

Most of that code works and is ready to merge. It just needs some cleanup and Georges Stavracas is working on that now.

Q: PipeWire camera support for Firefox and Chrome is another major milestone. What can you tell us about that effort?

PipeWire Camera support is now merged in firefox 116 and you can enable it with an about:config flag. I’m not sure about Chrome.

When the browser and OBS patches are all merged, it would, for example, be possible to create a scene in OBS studio and then route the exported video stream as a camera source into the web browser. You would be able to route video just like you can with audio!

Q: One thing that you and I have talked a lot about over the last few years is how to improve the handling of things like allowing the user interface to be smarter about how to deal with sources that supply both audio and video. Examples here are things like HDMI interfaces or webcams with microphones. Currently, since the Linux kernel treats these as completely independent devices, the UI also tends to. But for a user, it will often make sense if they are seen as a pair, or at a minimum labeled in a way that makes it 100% clear that these devices are part of the same hardware. I think a lot of people would think of such things as ‘PipeWire’ features. But in reality, being policy based, is it actually WirePlumber where we would need to implement such things?

It would be a wireplumber feature, yes. One idea is to use the udev device tree to link those devices together. It requires some heuristics to get it right.

Q: There was work on adding AVB support to PipeWire, how is that work going?

Not great. The code is there but it doesn’t work or we can’t test it due to lack of hardware and infrastructure. AVB is very specialized and we need somebody with the hardware and skills to help us out with that.

We’re focusing on getting AES67 working instead, which works on more readily available hardware.  We’ve added support for PTP clocks in PipeWire and the RTP sender and receivers. We’ve had success integrating Dante devices with AES67 with PipeWire.

Q: Apart from maturing and optimizing PipeWire what are the features you are looking into going forward?

PipeWire is an IPC mechanism for multimedia. The most interesting stuff will happen in the session manager, the modules, the applications and the tools around all this. I hope to see more cool tools to route video and set up video filters etc.

I think we would like to do an API/ABI break and clean up the native API to remove some of the cruft that has been accumulating over the years, at some point.  This should not impact a lot of applications, though, as they are using the PulseAudio and JACK APIs that will of course remain unchanged. 

There are some ideas to unify the stream and filter API and remove some of the features of the stream API that turned out to be a bad idea. It probably also needs some work in the session manager.

Q: Thank you so much for talking with us, Wim. Any final words for the community?

Just a big thank you to the community for embracing PipeWire and helping us get to this point. The number of people who have contributed to PipeWire over the last couple of years is astounding, both in terms of people contributing patches and people testing PipeWire with various kinds of software, helping us close out JACK and PulseAudio compatibility bugs, or writing articles and YouTube videos about PipeWire. I hope the community will keep working with us as we focus on providing new features through WirePlumber now, and get applications out there ported to use PipeWire for video camera handling too.

PHP 8.0 is retired

Posted by Remi Collet on November 27, 2023 06:50 AM

One year after PHP 7.4, and as announced, PHP version 8.0.30 was the last official release of PHP 8.0.

To keep a secure installation, the upgrade to a maintained version is strongly recommended:

  • PHP 8.1 has security only support and will be maintained until November 2024.
  • PHP 8.2 has active support and will be maintained until December 2024 (2025 for security).
  • PHP 8.3 has active support and will be maintained until November 2025 (2026 for security).

Read:

However, given the very large number of downloads by users of my repository, this version is still available in the remi repository for Enterprise Linux (RHEL, CentOS, Alma, Rocky...) and Fedora, and will include the latest security fixes.

Warning: this is a best-effort action, depending on my spare time, without any warranty, only to give users more time to migrate. This can only be temporary, and upgrading must be the priority.

You can also watch the sources repository on GitHub.

Week 47 in Packit

Posted by Weekly status of Packit Team on November 27, 2023 12:00 AM

Week 47 (November 21st – November 27th)

  • Packit now correctly sets the specfile content (e.g. changelog entry) even if it syncs the specfile from upstream for the first time. (packit#2170)

Episode 403 – Does the government banning apps work?

Posted by Josh Bressers on November 27, 2023 12:00 AM

Josh and Kurt talk about the Canadian Government banning WeChat and Kaspersky. There are a lot of weird little details in this conversation. It fundamentally comes down to a conversation about risk. It’s easy to spout nonsense about risk, but having an honest discussion about it is REALLY complicated. But the government plays by a very different set of rules.

<audio class="wp-audio-shortcode" controls="controls" id="audio-3258-1" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_403_Does_the_government_banning_apps_work.mp3?_=1" type="audio/mpeg">https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_403_Does_the_government_banning_apps_work.mp3</audio>

Show Notes

Fedora Linux 39 press review

Posted by Charles-Antoine Couret on November 26, 2023 11:42 PM

Since Fedora 19, I have been publishing a press review of each new release on the Fedora-fr mailing list, recapping which sites talk about it and how. I always do this two weeks after the release (so that everyone has time to cover it). Now, on to Fedora Linux 39!

Of course, I leave out my own blog and the fedora-fr forum.

News websites

That makes 7 sites.

Videos

That makes 3 videos.


The number of sites covering Fedora Linux 39 is stable, while the number of videos is up.

In the week of the release, we saw an overall increase in visits compared to the previous week, on this order:

  • Forums: up 27.1% (more than 800 additional visits)
  • Documentation: up about 41.6% (around 800 additional visits)
  • The Fedora-fr site: up 82.1% (around 150 additional visits)
  • Borsalinux-fr: up 82% (around 23 additional visits)

If you know of another link, feel free to share it!

See you for Fedora Linux 40.

Kiwi TCMS 12.7

Posted by Kiwi TCMS on November 26, 2023 11:30 AM

We're happy to announce Kiwi TCMS version 12.7!

IMPORTANT: This is our first release after reaching 2 million downloads on Docker Hub earlier this month! It is a small release which contains security-related updates, several improvements, bug fixes, and internal refactoring.

You can explore everything at https://public.tenant.kiwitcms.org!


Upstream container images (x86_64):

kiwitcms/kiwi   latest  973df48a2f82    613MB

IMPORTANT: version tagged and multi-arch container images are available only to subscribers!

Changes since Kiwi TCMS 12.6.1


  • Update django from 4.2.4 to 4.2.7. Fixes CVE-2023-46695, CVE-2023-43665 and CVE-2023-41164
  • Update django-simple-captcha from 0.5.18 to 0.5.20
  • We believe that none of these issues affects Kiwi TCMS directly; however, we recommend that you upgrade your installation as soon as possible


  • Update bleach from 6.0.0 to 6.1.0
  • Update django-colorfield from 0.9.0 to 0.10.1
  • Update django-grappelli from 3.0.6 to 3.0.8
  • Update django-simple-history from 3.3.0 to 3.4.0
  • Update markdown from 3.4.4 to 3.5.1
  • Update psycopg2 from 2.9.7 to 2.9.9
  • Update pygments from 2.16.1 to 2.17.2
  • Update python-gitlab from 3.15.0 to 4.1.1
  • Update uwsgi from 2.0.22 to 2.0.23
  • Update node_modules/crypto-js from 4.1.1 to 4.2.0
  • Update node_modules/datatables.net-buttons from 2.4.1 to 2.4.2
  • Update node_modules/pdfmake from 0.2.7 to 0.2.8
  • Update bug-tracker integration documentation with specifics about matches for product name
  • When searching for JIRA projects, also try matching by project key
  • Fall-back to credentials from database if settings.EXTERNAL_ISSUE_RPC_CREDENTIALS override returns None


  • New migrations after upgrading django-colorfield. Increases field max_length from 18 to 25

Bug fixes

  • Fix error in filtering by TestRun ID on TestCase Search page (@somenewacc)
  • Fix TestRun page to not automatically update its stop_date when marking statuses for test executions if there are more neutral executions left on the page outside of the currently filtered selection (@somenewacc)
  • Fix bug with JIRA integration not being able to find project via name

Refactoring and testing

  • Refactor calls to delete expandedExecutionIds to satisfy https://rules.sonarsource.com/typescript/RSPEC-2870/ (@somenewacc)
  • Refactor calls to delete expandedTestCaseIds to satisfy https://rules.sonarsource.com/typescript/RSPEC-2870/
  • Use tuple as the cache-key for IssueTrackerType.rpc_cache internally
  • Add test for collectstatic because of an upstream issue with django-grappelli
  • Improve tests for JIRA integration
  • Test against Bugzilla on Fedora 39
  • Update actions/checkout from 3 to 4
  • Update node_modules/eslint from 8.48.0 to 8.54.0
  • Update node_modules/eslint-plugin-import from 2.28.1 to 2.29.0
  • Update node_modules/eslint-plugin-n from 16.0.2 to 16.3.1
  • Update node_modules/webpack from 5.88.2 to 5.89.0
  • Update pylint-django from 2.5.3 to 2.5.5 and all of our custom linter rules

Kiwi TCMS Enterprise v12.7-mt

  • Based on Kiwi TCMS v12.7

  • Update kiwitcms-tenants from 2.5.1 to 2.5.2

  • Update kiwitcms-trackers-integration from 0.5.0 to 0.6.0

    Provides functionality for personal API tokens. Accessible via PLUGINS -> Personal API tokens menu!

    WARNING: in order for users to be able to define personal API tokens for 3rd party bug-trackers, they will need to be assigned permissions.

    Kiwi TCMS administrators should consider granting the following permissions:

    tracker_integrations | api token | Can add api token
    tracker_integrations | api token | Can change api token
    tracker_integrations | api token | Can delete api token
    tracker_integrations | api token | Can view api token

    either individually on a per-user basis or via groups!

  • Update python3-saml from 1.15.0 to 1.16.0

  • Update social-auth-app-django from 5.2.0 to 5.4.0

    Private container images:

    quay.io/kiwitcms/version            12.7 (aarch64)          aa6a4c5434c9    25 Nov 2023     624MB
    quay.io/kiwitcms/version            12.7 (x86_64)           973df48a2f82    25 Nov 2023     613MB
    quay.io/kiwitcms/enterprise         12.7-mt (aarch64)       e19c493e7291    25 Nov 2023     814MB
    quay.io/kiwitcms/enterprise         12.7-mt (x86_64)        f38a49d661ad    25 Nov 2023     801MB

IMPORTANT: version tagged, multi-arch and Enterprise container images are available only to subscribers!
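The four API-token permissions listed above can also be granted via groups. A minimal Django-shell sketch of that option follows; note that the group name is hypothetical and the codenames assume Django's default add_/change_/delete_/view_ naming for the ApiToken model:

```python
def grant_api_token_perms(group_name="api-token-users"):
    """Grant the four tracker_integrations ApiToken permissions to a group.

    Sketch for ./manage.py shell; imports are deferred so the function can
    be defined without a configured Django environment.
    """
    from django.contrib.auth.models import Group, Permission

    # Assumption: default Django permission codenames for the ApiToken model.
    perms = Permission.objects.filter(
        content_type__app_label="tracker_integrations",
        codename__in=["add_apitoken", "change_apitoken",
                      "delete_apitoken", "view_apitoken"],
    )
    group, _ = Group.objects.get_or_create(name=group_name)
    group.permissions.add(*perms)  # members of the group inherit all four
    return group
```

Users added to such a group would then get all four permissions at once, instead of assigning them on a per-user basis.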

How to upgrade

Backup first! Then follow the Upgrading instructions from our documentation.

Happy testing!


If you like what we're doing and how Kiwi TCMS supports various communities please help us grow!

Untitled Post

Posted by Zach Oglesby on November 26, 2023 01:45 AM

Finished reading: A Trail Through Time by Jodi Taylor 📚

Tips for using Telegram in education

Posted by Pablo Iranzo Gómez on November 25, 2023 11:00 PM
Telegram has been used quite frequently in schools to enable simple communication through 'channels', where teachers can send information to families in a general broadcast without being directly contactable, making it easy to get this information out without the risk of being pestered by families with questions, etc., outside the official communication channels. This convenience has come, possibly through lack of awareness, with a series of problems and dangers that have not been taken into account.

PipeWire Camera Support in Firefox #2

Posted by Jan Grulich on November 24, 2023 02:45 PM

I wrote the first blog post about PipeWire cameras in Firefox in May and a lot has happened since then. The first PipeWire support arrived shortly after the blog post was published and was released as part of Firefox 116 (August). We didn’t enable it by default, of course, since it’s still a “work in progress”, but many of you tried it (thank you for that) and we were able to fix some issues that I, as the only tester at the time, hadn’t found. However, aside from all the crashes and minor issues we were able to fix relatively quickly, there was one major problem (or drawback) with the PipeWire camera that made it unusable with most popular video conferencing sites, such as Google Meet. Kind of a deal breaker, right? This has kept me busy ever since, but we are finally close to fixing it upstream. I’m going to explain why this was a problem and how we fixed it; forgive me in advance if I write anything wrong, as I’m still learning and discovering things as they unfold.

There are JavaScript APIs that are implemented by all the major browsers; the API documentation is here. They define APIs that sites can use to query information about media devices. I will now describe a simplified workflow used with V4L2 on the aforementioned Google Meet once you start a meeting:

  • GMeet makes enumerateDevices() call to get information about available devices
  • Firefox can respond with the list of available cameras (+ audio devices obviously) on the system because the information about cameras is available and no permission is needed
  • GMeet makes getUserMedia() call to get access to the camera since it knows there is a camera available
  • Firefox will prompt the user to get access to the selected devices (including camera) and start streaming

Now the same situation, but with PipeWire:

  • GMeet makes enumerateDevices() call to get information about available devices
  • Firefox cannot respond with the list of available cameras because this enumeration request cannot ask for user permission and we cannot access PipeWire without it. Firefox will return an empty list of camera devices and there will be only audio devices announced
  • GMeet makes getUserMedia() call, but only to get access to the devices that were previously announced, so only audio devices
  • Firefox will prompt the user to get access only to the selected audio devices and no camera

How did we solve this?

The documentation here also covers this situation. The enumerateDevices() request is allowed to return a placeholder device for each type. This means that we can return a placeholder camera device, which tells Google Meet there is actually a camera device to ask for. With this device placeholder, the subsequent getUserMedia() request will also ask for access to camera devices. How do we know that a camera device is present without having access to PipeWire, you ask? The camera portal from xdg-desktop-portal has an IsCameraPresent property for exactly this purpose, and we use it to decide whether to insert the camera device placeholder or not.
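As an illustration of that property, the camera portal can be queried directly over D-Bus from outside a browser too. A minimal sketch shelling out to busctl (the helper name and approach are mine; it assumes a user session with xdg-desktop-portal running):

```python
import subprocess

# Well-known portal D-Bus names (from the xdg-desktop-portal specification).
PORTAL_BUS = "org.freedesktop.portal.Desktop"
PORTAL_PATH = "/org/freedesktop/portal/desktop"
CAMERA_IFACE = "org.freedesktop.portal.Camera"

def camera_present() -> bool:
    """Return True if the camera portal reports an available camera."""
    # busctl prints the boolean property as e.g. "b true"
    out = subprocess.run(
        ["busctl", "--user", "get-property", PORTAL_BUS, PORTAL_PATH,
         CAMERA_IFACE, "IsCameraPresent"],
        capture_output=True, text=True, check=True,
    ).stdout
    return out.split()[-1] == "true"
```

This is the same signal Firefox uses to decide whether the placeholder camera device should be announced.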

While such a solution sounds simple on paper, it required a significant amount of changes to the entire media handling stack. There is no small amount of PipeWire-specific code, so this fix also involved some restructuring so that all the backend-specific logic is in one place. And while I’m getting more and more familiar with the Firefox code, which is helping me progress faster, there’s still a lot to learn.

Anyway, the reason I’m writing this blog post now is that all the related changes have been approved and will hopefully be landing soon, making Firefox fully usable with PipeWire. Although the changes are not yet merged, Fedora users can use a COPR repository I created. The repository has Fedora Firefox builds with all the necessary changes backported and the PipeWire camera enabled by default. Just note that while I’ve been testing and using it for the past few months and it’s worked perfectly for me, you use it at your own risk. It is better to use it only to test PipeWire camera support, as the official Fedora Firefox package is the one we keep fully updated, and my repo may lag behind. There will be a new PipeWire 1.0 release soon, which will be a big milestone for PipeWire, and I hope that PipeWire camera support in Firefox and browsers in general will be part of the PipeWire success story.

Infra & RelEng Update – Week 47 2023

Posted by Fedora Community Blog on November 24, 2023 10:00 AM

This is a weekly report from the I&R (Infrastructure & Release Engineering) Team. It also contains updates for the CPE (Community Platform Engineering) Team, as the CPE initiatives are in most cases tied to I&R work.

We provide both an infographic and a text version of the weekly report. If you just want a quick overview of what we did, look at the infographic. If you are interested in more in-depth details, look below the infographic.

Week: 20 – 24 Nov 2023

<figure class="wp-block-image size-full"></figure>

Infrastructure & Release Engineering

The purpose of this team is to take care of the day-to-day business of the CentOS and Fedora infrastructure and of Fedora release engineering work.
It is responsible for services running in the Fedora and CentOS infrastructure and for preparing things for the new Fedora release (mirrors, mass branching, new namespaces etc.).
Planning board

Fedora Infra

CentOS Infra including CentOS CI

Release Engineering

CPE Initiatives


Extra Packages for Enterprise Linux (or EPEL) is a Fedora Special Interest Group that creates, maintains, and manages a high-quality set of additional packages for Enterprise Linux, including, but not limited to, Red Hat Enterprise Linux (RHEL), CentOS, Scientific Linux (SL), and Oracle Linux (OL).


Community Design

CPE has a few members working as part of the Community Design Team. This team works on anything related to design in the Fedora community.


  • The deadline for submitting proposals for the virtual Creative Freedom Summit 2024 has been extended to December 1st. Don’t miss out on this opportunity! Learn more and submit your proposal here: https://cfp.fedoraproject.org 📣

ARC Investigations

The ARC (which is a subset of the CPE team) investigates possible initiatives that CPE might take on.


The post Infra & RelEng Update – Week 47 2023 appeared first on Fedora Community Blog.

Building RHEL and RHEL UBI images with mkosi

Posted by Fedora Magazine on November 24, 2023 08:00 AM

Mkosi is a lightweight tool to build images from distribution packages. This article describes how to use mkosi to build images with packages from RHEL (Red Hat Enterprise Linux) and RHEL UBI. RHEL Universal Base Image is a subset of RHEL that is freely available without a subscription.

Mkosi features

Mkosi supports a few output formats, but the most important one is Discoverable Disk Images (DDIs). The same DDI can be used to boot a container or a virtual machine, be copied to a USB stick and used to boot a real machine, and finally be copied from the USB stick to the disk to boot from it. The image has a standardized layout and metadata that describes its purpose.

Mkosi relies on other tools to do most of the work: systemd-repart to create partitions on a disk image, mkfs.btrfs/mkfs.ext4/mkfs.xfs/… to write the file systems, and dnf/apt/pacman/zypper to download and unpack packages.

Mkosi supports a range of distributions: Debian and Ubuntu, Arch Linux, OpenSUSE, and of course Fedora, CentOS Stream and derivatives, and now RHEL UBI and RHEL since the last release. Because the actual “heavy lifting” is done by other tools, mkosi can do cross builds. This is where one distro is used to build images for various other distros. The only requirement is that the appropriate tools are installed on the host. Fedora has native packages for apt, pacman, and zypper, so it provides a good base to use mkosi to build any other distribution.

There are some nifty features: images can be created by an unprivileged user, or in a container without device files, in particular without access to loopback devices. Mkosi can also launch those images as VMs (using qemu) without privileges.

The configuration is declarative and very easy to create. systemd-repart is used to create disk partitions, and repart.d configuration files are used to define how this should be done.
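As an illustrative sketch (the file name and values here are my own, not from the article), a repart.d drop-in for a single root partition copied from the image tree could look like this:

```ini
# mkosi.repart/00-root.conf -- hypothetical example
[Partition]
Type=root
Format=xfs
CopyFiles=/
```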

For more details see two talks by Daan DeMeyer at the All Systems Go conference: systemd-repart: Building Discoverable Disk Images and mkosi: Building Bespoke Operating System Images.

Project goal

One goal for Mkosi is to allow the testing of a software project against various distributions. It will create an image for a distribution (using packages from that distribution) and then compile and install the software project into that image, inserting additional files that are not part of any package. But the first stage, the creation of an image from packages, is useful on its own. This is what we will show first.

We¹ recently added support for RHEL and RHEL UBI. Let’s start with RHEL UBI, just building an image out of distro packages.

Please note that the examples below require mkosi 19, and will not work with earlier versions.

A basic RHEL UBI image with a shell

$ mkdir -p mkosi.cache
$ mkosi \
  -d rhel-ubi \
  -t directory \
  -p bash,coreutils,util-linux,systemd,rpm \
  --autologin

The commands above specify the distribution ‘rhel-ubi’, the output format ‘directory’, and request that packages bash, coreutils, …, rpm be installed. rpm isn’t normally needed inside of the image, but here it will be useful for introspection. We also enable automatic login as the root user.

Before the build is started, we create the cache directory mkosi.cache. When a cache directory is present, mkosi uses it automatically to persist downloaded rpms. This will make subsequent invocations on the same package set much faster.

We can then boot this image as a container using systemd-nspawn:

$ sudo mkosi \
  -d rhel-ubi \
  -t directory \
  boot
Detected virtualization systemd-nspawn.
Detected architecture x86-64.
Detected first boot.

Red Hat Enterprise Linux 9.2 (Plow)
[ OK ] Created slice Slice /system/getty.
[ OK ] Created slice Slice /system/modprobe.
[ OK ] Created slice User and Session Slice.
[ OK ] Started User Login Management.
[ OK ] Reached target Multi-User System.

Red Hat Enterprise Linux 9.2 (Plow)
Kernel 6.5.6-300.fc39.x86_64 on an x86_64

image login: root (automatic login)

[root@image ~]# rpm -q rpm systemd

As mentioned earlier, the image can be used to boot a VM. In this setup, it is not possible — our image doesn’t have a kernel. In fact, RHEL UBI doesn’t provide a kernel at all, so we can’t use it to boot (in a VM or on bare metal).

Creating an image

I also promised an image, but so far we only have a directory. Let’s actually create an image:

$ mkosi \
  -d rhel-ubi \
  -t disk \
  -p bash,coreutils,util-linux,systemd,rpm

This produces image.raw, a disk image with a GPT partition table and a single root partition (for the native architecture).

$ sudo systemd-dissect image.raw
Name: image.raw
Size: 301.0M
Sec. Size: 512
Arch.: x86-64

Image UUID: dcbd6499-409e-4b62-b251-e0dd15e446d5
OS Release: NAME=Red Hat Enterprise Linux
VERSION=9.2 (Plow)
PRETTY_NAME=Red Hat Enterprise Linux 9.2 (Plow)
REDHAT_BUGZILLA_PRODUCT=Red Hat Enterprise Linux 9

Use As: ✗ bootable system for UEFI
        ✓ bootable system for container
        ✗ portable service
        ✗ initrd
        ✗ sysext extension for system
        ✗ sysext extension for initrd
        ✗ sysext extension for portable service

rw root 1236e211-4729-4561-a6fc-9ef8f18b828f root-x86-64 xfs x86-64 no yes /dev/loop0p1 1

OK, we have an image, the image has some content from RHEL UBI packages. How do we add our own stuff on top?

Extending an image with your own files

There are a few ways to extend the image, including compiling something from scratch. But first let’s do something easier and inject some provided files into the image:

$ mkdir -p mkosi.extra/srv/www/content
$ cat >mkosi.extra/srv/www/content/index.html <<'EOF'
<h1>Hello, World!</h1>
EOF

The image will now have /srv/www/content/index.html.

This method is used to inject additional configuration or simple programs.

Building from source

Now let’s do the full monty and build something from sources. For example, a trivial meson project with a single C file:

$ cat >hello.c <<'EOF'
#include <stdio.h>

int main(int argc, char **argv) {
    char buf[1024];

    FILE *f = fopen("/srv/www/content/index.html", "re");
    size_t n = fread(buf, 1, sizeof buf, f);

    fwrite(buf, 1, n, stdout);
    return 0;
}
EOF

$ cat >meson.build <<'EOF'
project('hello', 'c')
executable('hello', 'hello.c',
           install: true)
EOF
$ cat >mkosi.build <<'EOF'
set -ex

mkosi-as-caller rm -rf "$BUILDDIR/build"
mkosi-as-caller meson setup "$BUILDDIR/build" "$SRCDIR"
mkosi-as-caller meson compile -C "$BUILDDIR/build"
meson install -C "$BUILDDIR/build" --no-rebuild
EOF
$ chmod +x mkosi.build

To summarize: we have some source code (hello.c), a build system configuration (meson.build), and a glue script (mkosi.build) that is to be invoked by mkosi. For a “real” project, we would have the same elements, just more complex.

The script requires some explanation. Mkosi uses user namespaces when creating the image. This allows the package managers (e.g. dnf) to install files owned by different users even though it is invoked by a normal unprivileged user. We are using mkosi-as-caller to switch back to the calling user to do the compilation. This way, the files created during compilation under $BUILDDIR will be owned by the calling user.

Now let’s build the image with our program. Compared to the previous invocation, we need additional packages: meson and gcc. Since we now have a build script, mkosi will execute two stages: first a build image is built and the build script is invoked in it, with the installation artifacts stashed in a temporary directory; then a final image is built and the installation artifacts are injected into it. (Mkosi sets $DESTDIR, and meson install uses $DESTDIR automatically, so the right things happen without us having to specify anything explicitly.)

$ mkosi \
  -d rhel-ubi \
  -t disk \
  -p bash,coreutils,util-linux,systemd,rpm \
  --autologin \
  --build-package=meson,gcc \
  --build-dir=mkosi.builddir \
  --build-script=mkosi.build

At this point we have the image image.raw with a custom payload. We can start our freshly minted executable as a shell command:

$ sudo mkosi -d rhel-ubi -t directory shell hello
<h1>Hello, World!</h1>

Obtaining a developer subscription for RHEL

RHEL UBI is intended for use primarily as a base layer for container builds. It has a limited set of packages available (about 1500). Let’s now switch to a full RHEL installation.

The easiest way to get access to RHEL is with a developer license. It provides an entitlement to register 16 physical or virtual nodes running RHEL, with self-service support.

First, create an account. After that, head over to management and make sure “Simple content access for Red Hat Subscription Management” is enabled. Then, create a new activation key with “Red Hat Developer Subscription for Individuals” selected. Make note of the Organization ID that is shown. We’ll refer to the key name and organization ID as $KEY_NAME and $ORGANIZATION_ID below.

Now we are ready to consume RHEL content:

$ sudo dnf install subscription-manager
$ sudo subscription-manager register \
  --org $ORGANIZATION_ID --activationkey $KEY_NAME

Building an image using RHEL

In previous examples, we specified all configuration through parameter switches. This is nice for quick development, but can become unwieldy. RHEL is a serious distribution, so let’s use a configuration file instead:

$ cat >mkosi.conf <<'EOF'
...
EOF

Let’s first check if everything is kosher:

$ mkosi summary

And now let’s build the image (err, directory):

$ mkosi build
$ mkosi qemu
Welcome to Red Hat Enterprise Linux 9.2 (Plow)!

[ OK ] Created slice Slice /system/modprobe.
[ OK ] Reached target Initrd Root Device.
[ OK ] Reached target Initrd /usr File System.
[ OK ] Reached target Local Integrity Protected Volumes.
[ OK ] Reached target Local File Systems.
[ OK ] Reached target Path Units.
[ OK ] Reached target Remote Encrypted Volumes.
[ OK ] Reached target Remote Verity Protected Volumes.
[ OK ] Reached target Slice Units.
[ OK ] Reached target Swaps.
[ OK ] Listening on Journal Socket.
[ OK ] Listening on udev Control Socket.
[ OK ] Listening on udev Kernel Socket.
Red Hat Enterprise Linux 9.2 (Plow)
Kernel 5.14.0-284.30.1.el9_2.x86_64 on an x86_64

localhost login: root (automatic login)
[root@localhost ~]#

Yes, we built the “image” as a directory with a file system tree, and booted it as a virtual machine.

In the booted virtual machine, findmnt / shows that the root file system is virtiofs. This is a virtual file system that exposes a directory from the host to the guest. We could build a more traditional image with a partition table and file systems inside a file, but a directory plus virtiofs is quick and nicer for development.

The image that we just booted is not registered. To allow updates to be downloaded from inside of the image, we would have to add yum, subscription-manager, and NetworkManager to the package list, and before we download any updates, call subscription-manager in the same way as above. Once we do that, we have about 4500 packages at our disposal in the basic repositories, and a few dozen additional repositories with more specialized packages.


And that’s all I have for today. If you have questions, find us on Matrix at #mkosi:matrix.org or on the systemd mailing list.

  1. Daan DeMeyer, Lukáš Nykrýn, Michal Sekletár, Zbigniew Jędrzejewski-Szmek ↩

PHP version 8.1.26 and 8.2.13

Posted by Remi Collet on November 24, 2023 06:05 AM

RPMs of PHP version 8.2.13 are available in the remi-modular repository for Fedora ≥ 37 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...) and in the remi-php82 repository for EL 7.

RPMs of PHP version 8.1.26 are available in the remi-modular repository for Fedora ≥ 37 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...) and in the remi-php81 repository for EL 7.

The Fedora 39, EL-8 and EL-9 packages (modules and SCL) are available for x86_64 and aarch64.

There is no security fix this month, so there is no update for version 8.0.30.

PHP version 7.4 has reached its end of life and is no longer maintained by the PHP project.

These versions are also available as Software Collections in the remi-safe repository.


Installation: use the Configuration Wizard and choose your version and installation mode.

Replacement of default PHP by version 8.2 installation (simplest):

dnf module reset php
dnf module enable php:remi-8.2
dnf update php\*

or, the old EL-7 way:

yum-config-manager --enable remi-php82
yum update

Parallel installation of version 8.2 as Software Collection

yum install php82

Replacement of default PHP by version 8.1 installation (simplest):

dnf module reset php
dnf module enable php:remi-8.1
dnf update php\*

or, the old EL-7 way:

yum-config-manager --enable remi-php81
yum update php\*

Parallel installation of version 8.1 as Software Collection

yum install php81

These versions will also arrive soon in the official updates.

To be noted:

  • EL-9 RPMs are built using RHEL-9.2 (next builds will use 9.3)
  • EL-8 RPMs are built using RHEL-8.8 (next builds will use 8.9)
  • EL-7 RPMs are built using RHEL-7.9
  • intl extension now uses libicu73 (version 73.2)
  • mbstring extension (EL builds) now uses oniguruma5php (version 6.9.9, instead of the outdated system library)
  • oci8 extension now uses the RPM of Oracle Instant Client version 21.12 on x86_64, 19.19 on aarch64
  • a lot of extensions are also available, see the PHP extensions RPM status (from PECL and other sources) page


Base packages (php)

Software Collections (php80 / php81 / php82)

Zimbra 10 builds

Posted by Truong Anh Tuan on November 24, 2023 04:45 AM
According to information from Zimbra, version 8.8.15 will reach end of support on 31 Dec 2023, and version 9.0 will also reach end of support on 31 Dec 2024 (just extended); development resources will be focused on version 10 and subsequent releases. See the image below: Also, since…Continue reading "Zimbra 10 builds"

PHP version 8.3.0 is released!

Posted by Remi Collet on November 23, 2023 04:51 PM

RC6 was GOLD, so version 8.3.0 GA has just been released, on the planned date.

A great thanks to Eric Mann, Jakub Zelenka, and Pierrick Charron, our Release Managers, to all developers who have contributed to this new long-awaited version of PHP, and to all testers of the RC versions who have allowed us to deliver a good quality version.

RPMs are available in the php:remi-8.3 module for Fedora and Enterprise Linux 8 and as a Software Collection in the remi-safe repository.

RPMs are also available in the remi-php83 repository for Enterprise Linux 7 (RHEL, CentOS, Alma, Rocky...).

Read the PHP 8.3.0 Release Announcement and its Addendum for new features detailed description.

For the record, these packages are the result of 6 months of work for me, starting in June with Software Collections of the alpha versions, then in August with module streams of the RC versions, plus a lot of work on extensions to provide a mostly full PHP 8.3 stack.

Installation: read the Repository configuration and choose installation mode, or follow the Configuration Wizard instructions.

Replacement of default PHP by version 8.3 installation (simplest):

Fedora modular or Enterprise Linux 8:

dnf module reset php
dnf module install php:remi-8.3

Enterprise Linux 7:

yum-config-manager --enable remi-php83
yum update php\*

Parallel installation of version 8.3 as Software Collection (recommended for tests):

yum install php83

To be noted:

  • EL9 RPMs are built using RHEL-9.2
  • EL8 RPMs are built using RHEL-8.8
  • EL7 RPMs are built using RHEL-7.9
  • this version will also be the default version in Fedora 40
  • a lot of extensions are already available, see the PECL extension RPM status page.

For more information, read:

Base packages (php)

Software Collections (php83)

Five minute hacks: Swapping left and right headphone audio in wireplumber

Posted by James Just James on November 22, 2023 10:25 PM
I’m on Fedora 37 at the moment. My headphone audio is backwards. How do we fix it? I’ve apparently never found the right magic until now. The Setup: I have a crappy “couch computer”. It’s used for casual internet browsing and video watching. It has an analog audio out which goes to my speakers. I’d like to use headphones for when I don’t want to annoy my neighbours. I have a pair of wireless headphones that I got for free, but the battery doesn’t last longer than an hour.

sendto system call from python

Posted by Adam Young on November 22, 2023 08:18 PM

Once we open a socket, we probably want to send and receive data across it. Here is the system call we need to make in order to send data, as I wrote about in my last post:

c = sendto(sd, buf, sizeof(buf), 0,
                (struct sockaddr *)&addr, sizeof(addr));

Note the cast of the addr parameter. This is a structure with another structure embedded in it. I tried hand-jamming the structure, but the data sent across the system call interface was not what I expected. In order to make it work, I used a utility called ctypesgen with the header I separated out in my last post. Running this:

ctypesgen ./spdmcli.h

generates a lot of code, much of which I don’t need. The part I do need looks like this:

__uint8_t = c_ubyte# /usr/include/bits/types.h: 38
__uint16_t = c_ushort# /usr/include/bits/types.h: 40
__uint32_t = c_uint# /usr/include/bits/types.h: 42
uint8_t = __uint8_t# /usr/include/bits/stdint-uintn.h: 24
uint16_t = __uint16_t# /usr/include/bits/stdint-uintn.h: 25
uint32_t = __uint32_t# /usr/include/bits/stdint-uintn.h: 26

# /root/spdmcli/spdmcli.h: 5
class struct_mctp_addr(Structure):
    pass

struct_mctp_addr.__slots__ = [
    's_addr',
]
struct_mctp_addr._fields_ = [
    ('s_addr', uint8_t),
]

# /root/spdmcli/spdmcli.h: 9
class struct_sockaddr_mctp(Structure):
    pass

struct_sockaddr_mctp.__slots__ = [
    'smctp_family',
    'smctp_network',
    'smctp_addr',
    'smctp_type',
    'smctp_tag',
]
struct_sockaddr_mctp._fields_ = [
    ('smctp_family', uint16_t),
    ('smctp_network', uint32_t),
    ('smctp_addr', struct_mctp_addr),
    ('smctp_type', uint8_t),
    ('smctp_tag', uint8_t),
]

# /root/spdmcli/spdmcli.h: 3
MCTP_TAG_OWNER = 0x08

mctp_addr = struct_mctp_addr# /root/spdmcli/spdmcli.h: 5

sockaddr_mctp = struct_sockaddr_mctp# /root/spdmcli/spdmcli.h: 9
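One way to see why the hand-jammed structure sent the wrong bytes is to inspect the layout ctypes actually produces. This check is my addition, not part of the original post; it rebuilds the two structures and prints the size and field offsets, showing the two bytes of padding the compiler inserts after smctp_family:

```python
from ctypes import Structure, c_ubyte, c_ushort, c_uint, sizeof

class struct_mctp_addr(Structure):
    _fields_ = [("s_addr", c_ubyte)]

class struct_sockaddr_mctp(Structure):
    _fields_ = [
        ("smctp_family", c_ushort),        # offset 0
        ("smctp_network", c_uint),         # offset 4: 2 bytes of padding before it
        ("smctp_addr", struct_mctp_addr),  # offset 8
        ("smctp_type", c_ubyte),           # offset 9
        ("smctp_tag", c_ubyte),            # offset 10
    ]

print(sizeof(struct_sockaddr_mctp))               # 12
print(struct_sockaddr_mctp.smctp_network.offset)  # 4
```

A naive byte-for-byte packing of the fields would be 9 bytes; the padded C layout is 12, which is why the kernel saw garbage.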

With those definitions in place, the following code makes the system call with the parameter data in the right order:

mctp_addr = struct_mctp_addr(0x32)
sockaddr_mctp = struct_sockaddr_mctp(AF_MCTP, 1, mctp_addr, 5, MCTP_TAG_OWNER)

p = create_string_buffer(b"Held")
p[0] = 0x10
p[1] = 0x82

sz = sizeof(sockaddr_mctp)
print("calling sendto")

rc = libc.sendto(sock, p, sizeof(p), 0, byref(sockaddr_mctp), sz)
errno = get_errno_loc()[0]
print("rc = %d errno =  %d" % (rc, errno))

if rc < 0:
    raise OSError(errno, "sendto() failed")

With this working, the next step is to read the response.

Updated MCTP send code

Posted by Adam Young on November 22, 2023 04:26 PM

While the existing documentation is good, a couple of things have changed since it was originally written, and I had to make a few minor adjustments to get it to work. Here's the code to send a message. The receive part should work as originally published; what is important is the set of headers. I built and ran this on an AARCH64 platform running Fedora 38.

I split the code into a header and a .c file for reasons I will address in another article.


#include <stdint.h>

#define MCTP_TAG_OWNER 0x08

struct mctp_addr {
    uint8_t             s_addr;
};

struct sockaddr_mctp {
    uint16_t            smctp_family;
    uint32_t            smctp_network;
    struct mctp_addr    smctp_addr;
    uint8_t             smctp_type;
    uint8_t             smctp_tag;
};

I also coded this for SPDM, not MCTP control messages. The difference is the value of addr.smctp_type.

EDIT: I also added the recvfrom call to make this complete.


#include <linux/mctp.h>
#include <linux/if_link.h>
#include <linux/rtnetlink.h>

#include <sys/socket.h>
#include <stdlib.h>
#include <stdio.h>
#include <err.h>

#include "spdmcli.h"

int main(void)
{
    struct sockaddr_mctp addr = { 0 };
    char buf[] = "hello, world!";
    char rxbuf[1024];
    int sd, rc;

    /* create the MCTP socket */
    sd = socket(AF_MCTP, SOCK_DGRAM, 0);
    if (sd < 0)
        err(EXIT_FAILURE, "socket() failed");

    /* populate the remote address information */
    addr.smctp_family = AF_MCTP;     /* we're using the MCTP family */
    addr.smctp_addr.s_addr = 0x32;   /* send to remote endpoint ID 0x32 */
    addr.smctp_type = 5;             /* encapsulated protocol type SPDM = 5 */
    addr.smctp_tag = MCTP_TAG_OWNER; /* we own the tag, and so the kernel
                                        will allocate one for us */
    addr.smctp_network = 1;

    /* send the MCTP message */
    rc = sendto(sd, buf, sizeof(buf), 0,
                (struct sockaddr *)&addr, sizeof(addr));

    if (rc != sizeof(buf))
        err(EXIT_FAILURE, "sendto() failed");

    /* EDIT: the following code receives a message from the network and
       prints it to the console. */
    rc = recvfrom(sd, rxbuf, sizeof(rxbuf), 0, NULL, NULL);
    if (rc < 0)
        err(EXIT_FAILURE, "recvfrom() failed");

    printf("recv rc = %d\n", rc);

    for (int i = 0; i < rc; i++)
        printf("%x", rxbuf[i]);
    printf("\n");

    return EXIT_SUCCESS;
}

git fixup -- your workflow

Posted by Farhaan Bukhsh on November 22, 2023 02:34 PM

"Woah!" this was my reaction when I discovered the --fixup flag for git commit command. It reduces the time spent on working on features/fixes by:

  1. Avoiding temporary commit messages.

  2. Eliminating the time taken to remove the temporary commit messages.

  3. Helping with organized rebases.

  4. Helping with quicker squashing of commits.

Let me walk you through the old workflow that I used to employ when I worked on any feature/fixes. I generally start with a fresh branch from the main branch.

git checkout -b farhaan/great-looking-feature

Then I will keep working on the branch and keep adding the commits. The commits are a result of a review or some additional fixes. Once the review is completed I need to squash the commits. The difficulties I face here are:

  1. If I have done multiple fixes by any chance I will not be able to segregate them.

  2. I need to count the number of commits to pass it to HEAD~<number> for interactive rebase.

Hence, when my work on the feature branch is done, I first have to squash my commits and then rebase the branch over main. If I have 5 commits to squash, I have to run:

git rebase -i HEAD~5

There will be an editor window that will open and we need to select the commit to squash or to pick. Then there will be an editor window to modify the commit message if you want to.

All the above problems and steps are easily fixed and simplified by the --fixup workflow. Here I put all the effort into writing the first commit message; later changes are committed as fixups of it. I can even segregate my changes into multiple fixup commits. Let me show an example:


To add a fixup commit, I need to give the hash of the commit that I want to fix up.

git commit --fixup=632aa92e0d27332b4d7caf70eb268a23d3544610

This adds a commit with the message fixup! First commit [feat]. Now, if we want to squash the fixup commits, we use an interactive rebase with --autosquash.

git rebase -i --autosquash e05a9e0

We just need to be sure to pass the hash of the commit up to which we want to squash.


And the result is *Drum Rolls*


I hope this workflow will help you save a lot of time and improve the way you use git. Happy Hacking!

Linux Desktop Migration Tool 1.3

Posted by Jiri Eischmann on November 22, 2023 10:51 AM

I made another release of Linux Desktop Migration Tool. This release includes migration of various secrets and certificates.

It can now migrate PKI certificates and the shared NSS database. It also exports, copies over, and imports existing GPG keys, SSH keys and settings, and migrates GNOME Online Accounts and the GNOME keyring.

For security reasons libsecret has no API to change a keyring password. So I thought that I would have to instruct users to do it manually in Seahorse after the migration, but I was happy to learn that after you open the Login keyring (and most users only have this one) with the old password, it automatically changes it to the current user’s password. So after logging in you’re prompted to type the old password and that’s it.

After the migration, I opened Evolution and everything was ready. All messages were there, all GOA accounts set up, it didn’t prompt me for passwords, my electronic signature certificate and GPG keys were also there to sign my messages.

What’s new for Silverblue, Kinoite, Sericea and Onyx in Fedora 39

Posted by Timothée Ravier on November 21, 2023 11:00 PM

Fedora 39 has been released! 🎉 So let’s see what comes in this new release for the Fedora Silverblue, Kinoite, Sericea and Onyx variants. This post is a summary of the “What’s new in Fedora Silverblue, Kinoite, Sericea and Onyx?” talk I did with Joshua Strobl for the Fedora 39 Release Party (see the full slides).

What’s new?

Welcome to Fedora Onyx!

Fedora Onyx is a new variant using the Budgie desktop, with a (nearly) stock experience. It follows up on the Fedora Budgie Spin, which was introduced in Fedora 38.

The experience is similar to the other Fedora Atomic Desktops (what’s that? see below 🙂): it ships toolbx out of the box and provides access to Flatpaks.

We will hopefully re-brand it from “Onyx” to “Fedora Budgie Atomic”, and later we aspire to have the Atomic variant become “Fedora Budgie”, with the “mutable” spin re-branded in turn.

Fedora Atomic Desktops

We have created a new Special Interest Group (SIG) focused on (rpm-)ostree based desktop variants of Fedora (Silverblue, Kinoite, Sericea and Onyx). The “Fedora Atomic Desktops” name will also serve as an umbrella to regroup all those variants under a common name.

Note that the new name is still pending approval by the Fedora Council. A Fedora Change Request has been opened to track that for Fedora 40.

We will progressively centralize the work for this SIG in the fedora/ostree GitLab namespace. We already have an issue tracker.

What’s new in Silverblue?

Silverblue comes with the latest GNOME 45 release. Loupe replaces Eye of GNOME (EOG). For now, the new Flatpaks are not automatically installed on updates, so you will have to replace EOG with Loupe manually.

Fedora Flatpaks are now available on ppc64le and included in the installer.

For more details about the changes that come with GNOME 45, see What’s new in Fedora Workstation 39 on the Fedora Magazine.

What’s new in Kinoite?

Kinoite stays on Plasma 5.27. Plasma 6 is coming for Fedora 40.

A subset of KDE Apps is now available as Flatpaks from Fedora. They are built from the Fedora RPM sources with the same build options, and thanks to the nature of Flatpaks they are available for all releases (not just the latest) and even on other distributions.

Thanks a lot to Yaakov Selkowitz and the Flatpak SIG for making this happen!

With the Flatpaks being available in the Fedora remote, we have removed some apps from the base image: Okular, Gwenview, Kcalc. The Flatpaks are not installed on updates but you can install them from the Fedora Flatpak remote or from Flathub.

Fedora Flatpaks will be installed by Anaconda by default for new installations in Fedora 40.

What’s new in Sericea?

No major changes this release.

rpm-ostree unified core

Ostree commits are now built via rpm-ostree unified core mode. The main benefits are cleanups and stricter build constraints (that sometimes surface bugs in RPMs). This is also how Fedora CoreOS is being built right now.

This change should be completely transparent to users.

This is needed to get bootupd support and a step towards moving to ostree native container images (discussed below).

What’s next?

bootupd support

Adding bootupd support to Atomic Desktops will finally let users easily update their bootloader on their systems (issue#120). We needed the commits to be built using rpm-ostree unified core mode, which is a change that landed in Fedora 39.

We are now waiting on Anaconda installer fixes that are in progress. This should hopefully land in Fedora 40.

Ostree Native Containers

The idea behind Ostree Native Containers is to package ostree commits as OCI containers. The main benefits are:

  • OCI containers are easier to manage, deploy and mirror than ostree repos
  • It makes it possible to create derived images via a Containerfile/Dockerfile
  • As it is a regular container, you can inspect its content, scan it for vulnerabilities or run it like a container
  • Signing is made easier via support for cosign/sigstore
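As an illustration of the derived-image bullet above, here is a minimal Containerfile sketch. It is my example, not from the post: the base image reference and the package name are assumptions, so check the official Fedora ostree container examples for the actual image location.

```dockerfile
# Hypothetical derived image; the base image reference is an assumption
FROM quay.io/fedora-ostree-desktops/silverblue:39

# Layer an extra RPM on top of the base image, then commit the layer
RUN rpm-ostree install htop && \
    ostree container commit
```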

You can take a look at the following examples that take advantage of this functionality:

Work is currently in progress to add support to build those images via Pungi. Initially, they will be built alongside the current ostree commits. This is currently planned for Fedora 40 (the change page needs to be updated / rewritten).

We will be looking at fully transitioning to containers in a future release.

Universal Blue, Bluefin and Bazzite

Those projects build on the in-progress support for the Ostree Native Containers format and the Fedora Atomic Desktops images. All the changes that are included are made via Containerfiles/Dockerfiles.

They include lots of options, offer a wide choice of images, include additional fixes, enable more platform support, UX fixes, etc.

Universal Blue is the general project, Project Bluefin is the developer focused one and Bazzite is focused on gaming, including on the Steam Deck and other similar devices.

Check them out!

Support for Asahi Linux?

Help us make that happen! One notable missing part is support in Kiwi (issue#38) to build the images. See Fedora Asahi Remix for more details.

Where to reach us?

We are looking for contributors to help us make the Fedora Atomic Desktops the best experience for Fedora users.

socket system call from python

Posted by Adam Young on November 21, 2023 08:37 PM

While the Python socket API is mature, it does not yet support MCTP. I thought I would thus try to make a call from python into native code. The first step is to create a socket. Here is my code to do that.

Note that this is not the entire C code needed to make a network call, just the very first step. I did include the code to read errno if the call fails.

from ctypes import *
libc = CDLL("/lib64/libc.so.6")
AF_MCTP = 45
SOCK_DGRAM = 2  # Linux value; ctypes does not provide socket constants
rc = libc.socket(AF_MCTP, SOCK_DGRAM, 0)
#print("rc = %d " % rc)
get_errno_loc = libc.__errno_location
get_errno_loc.restype = POINTER(c_int)
errno = get_errno_loc()[0]
print("rc = %d errno =  %d" % (rc, errno))

Running this code on my machine shows success

# ./spdm.py 
rc = 3 errno =  0

The Birth of the Internet

Posted by ! Avi Alkalay ¡ on November 21, 2023 12:15 PM

On this day in 1969, the Internet was born.

Created out of the desire and need of American researchers to exchange messages, data and computing resources for their research, it became viable when a $1 million budget from the American armed forces was reallocated from ballistic missiles to the ARPANET project.

On November 21, 1969, a permanent connection was established between the University of California, Los Angeles (UCLA) and the Stanford Research Institute (SRI), near San Francisco. Protocols such as TCP/IP, Telnet and FTP were also created to make this novelty easier to use. By the end of that year, several other universities had been connected as well. The diagram shows the Internet access points in 1970.

In Brazil, the Internet arrived at the universities around the turn of the 1990s. We wiped Windows 3 from some IBM PCs that had just arrived and installed FreeBSD so we could use the dial-up Internet that had been made available on my campus, at UNESP in Rio Claro. A researcher from EmBraPA gave me my first tips about X11, Netscape, the terminal and a few commands, and I never stopped.

Today we cannot even imagine any aspect of our lives without the Internet, which taught us that everything can be converted to, or expressed as, information, serialized and transmitted over the cables, radio signals and protocols that connect our enormous fauna of devices.

On November 21, 2023, the Internet turns 54.

PyPI Successfully Announces Its First External Security Audit

Posted by Arnulfo Reyes on November 20, 2023 03:58 PM

A Summary of the Original Report

We are pleased to announce that PyPI has completed its first external security audit. This work was funded in collaboration with the Open Technology Fund (OTF).


PyPI has completed its first security audit - The Python Package Index

The Open Technology Fund selected Trail of Bits, a leading cybersecurity firm with extensive open source and Python experience, to perform the audit. Trail of Bits spent a total of 10 engineer-weeks identifying issues, presenting those findings to the PyPI team, and helping us remediate them.

The audit focused on “Warehouse”, the open source codebase that powers https://pypi.org, and on “cabotage”, the custom open source container orchestration framework we use to deploy Warehouse. It included a code review of both codebases, prioritizing areas that accept user input or provide APIs and other public surfaces. The audit also covered the continuous integration / continuous deployment (CI/CD) configurations for both codebases.

The auditors determined that the Warehouse code “was adequately tested and complied with widely accepted best practices for secure Python and web development”. Although the cabotage code lacked the same level of testing, no high-severity issues were identified in either codebase.

As a result of the audit, Trail of Bits detailed 29 different advisories across the two codebases. Assessed by severity, 14 were categorized as “informational”, 6 as “low”, 8 as “medium” and none as “high”. The PyPI team has remediated all significant advisories in both codebases, working with external teams where necessary.

In the interest of transparency, the full audit results prepared by Trail of Bits have been published.

In addition, in two further blog posts published today, Mike Fiedler (PyPI Safety and Security Engineer) details how these findings were remediated in Warehouse, and Ee Durbin (Director of Infrastructure at the Python Software Foundation) similarly details the remediations in cabotage.

We would like to thank the Open Technology Fund for its continued support of PyPI, and specifically for this important security milestone for the Python ecosystem. We would also like to thank Trail of Bits for being a trusted, thorough and thoughtful partner throughout the process.

Nextcloud as Personal Cloud

Posted by Jiri Eischmann on November 20, 2023 12:34 PM

I have been using Nextcloud as a cloud for personal use for over 7 years. I’ve written a few blog articles about it in Czech, but the last one is from 2018 and a lot has happened in the Nextcloud world since then, so I decided to revisit the topic.

When I started using Nextcloud back in 2016, it was still seen as a replacement for Dropbox: just a place to keep your files, share them with others, etc. It had already outgrown that role back then, and that’s doubly true today. Nextcloud now sees itself as a “content collaboration platform”. I would go further and call it a platform for running web applications in general. This is also reflected in the name change to Nextcloud Hub that took place a few years ago. While I was using Nextcloud quite fully in 2016, today it far exceeds my needs with its capabilities and app offerings.


Many people are still fixated on Nextcloud being used to sync files between computers. The Nextcloud desktop client can indeed sync files between your computer and Nextcloud, so syncing between different devices in this way is possible. But if you just need to sync files between devices, Nextcloud is not the ideal tool for that. Tools designed for this, such as Syncthing, will do a better job.

I’ve actually never used synchronization between a local drive and Nextcloud. I didn’t think of it as a remote copy of the local disk, but as a remote storage that extends the disk on the computer. I typically access files in Nextcloud through the Nautilus file manager, where I have it bookmarked as an additional drive. That means I work with remote files, but unless I’m on a downright slow connection and they are large files like videos, it is almost indistinguishable from working with local files.

The files I have on Nextcloud are those that I want to access quickly from anywhere. Since I’m limited by the 120 GB disk on vpsfree.cz, I can’t have absolutely everything there. I don’t use it as a backup. I also have a Synology NAS at home with 3 TB of disk space, which I use for multimedia and backups. The most important things are then backed up from it to a special remote backup drive.


When I started using Nextcloud, it only had basic document capabilities. That has changed significantly since then. Nextcloud hasn’t gained more advanced document handling capabilities on its own, but through collaboration with others. The integration with OnlyOffice was the first to appear. Eventually Collabora Online, which is LibreOffice Online from Collabora, was added. This became the default choice over time and is also my preferred solution.

The two solutions work very differently. OnlyOffice converts documents to JSON and then sends that to your browser, where it loads a thick client that contains most of the logic for working with documents. The advantage is lower bandwidth requirements; the disadvantage was that OnlyOffice kept the files in its own database, not as files in Nextcloud. If you wanted to keep a document as a file, you had to export it. So OnlyOffice sort of ran in parallel alongside Nextcloud instead of being integrated into it and working with its files. This has changed, I was told, but when I tried OnlyOffice again now, it still didn’t feel as integrated as Collabora Online.

Collabora Online behaves like a standard office application, working directly with files in Nextcloud and saving changes to them as well. You can combine it with the desktop LibreOffice (or any other office suite), too. You just have to make sure that you don’t have several users working on the document at once, because in that case Collabora Online doesn’t handle conflicts the way it does when users access the document only through it.

Collabora Online has additional advantages: in cooperation with Nextcloud, it has acquired an interface in the form of Nextcloud Office, which visually fits very nicely into Nextcloud and adapts to small displays, so it can also be used in the Nextcloud mobile app. You can install Collabora Online in the form of a Built-in CODE Server with one click, which was not possible before. This solution is sufficient for a few users. If you want something more scalable, you need to connect Nextcloud to a full-fledged Collabora Online server instance, but that requires significantly more resources.

<figure class="wp-block-image aligncenter size-large has-custom-border"><figcaption class="wp-element-caption">Nextcloud Office</figcaption></figure>

The downside of Collabora Online compared to OnlyOffice is the bandwidth requirements. The editor itself runs on the server and only sends the thin client what to display. Sending bitmaps is of course more demanding than sending JSON text, and on slow connections this can be noticeable. But when I need to work more with a document, I open it in the desktop client. I use the web interface for quick browsing and editing, and minor lags on slow connections don’t bother me.


Today, photo management is one of the obvious parts of a personal cloud. With a mobile phone, we generate tons of photos and need to back them up and organize them somehow. I used to use Nextcloud primarily for photos, but as I wrote above, due to limited available space, I moved my multimedia to the Synology NAS. Today I use Nextcloud primarily for photo sharing. When we have an event, I make a folder where everyone uploads photos and shares them together. Nextcloud in the data center has much better connectivity for this than the NAS at home.

I used to back up photos to Nextcloud. The Nextcloud mobile app takes care of that and it works nicely. I took a photo and as I connected to WiFi, it backed up to Nextcloud. The photo management itself was really basic in Nextcloud for a long time, but a few releases back they completely redesigned it and now it has pretty much everything you’d expect from such a tool: built-in editor, albums, sorting by location, tags, date… It can also recognize people, objects and famous monuments in photos.

<figure class="wp-block-image size-large has-custom-border"><figcaption class="wp-element-caption">Nextcloud Photos</figcaption></figure>

Nextcloud can do all this on your own machine; you don’t have to hand your photos over to anyone for analysis. But analyzing photos and recognizing people and objects is also computationally intensive. It requires 4 GB of RAM, which is exactly what I have for the entire VPS, which is why I don’t use it. Otherwise, if you’re not comfortable with the default app, you can use an app like Memories, which organizes photos primarily not in albums and folders but in a timeline, like we’re used to from mobile apps.


I have maintained my contact list exclusively in Nextcloud since the very beginning. And it just works. I can access them from the web interface, but Nextcloud also supports the CardDAV protocol flawlessly, so they sync to my desktop and mobile as well.

On Linux, I have contacts thanks to evolution-data-server. After I log in to Nextcloud in GNOME Online Accounts, I can work with my contacts in the Contacts app or in Evolution, or I can search them directly in the GNOME Shell.

<figure class="wp-block-image size-large has-custom-border"><figcaption class="wp-element-caption">GNOME Contacts</figcaption></figure>

Android does not support CardDAV by default, but you can install the DAVx5 app, which Nextcloud officially recommends. It’s free in F-Droid, and for a small fee in Google Play. DAVx5 made not only my contacts, but also my calendars and tasks available in Android. I can then work with my contacts on my phone in any dedicated app, Samsung Contacts in my case.

When it comes to contacts, I also learned in a very real situation that Nextcloud is not a backup. My mom somehow deleted all the contacts on her phone, and this deletion synced very quickly to Nextcloud and the other clients. I had to retrieve a backup of Nextcloud itself to get back her contact list, which had been built up for years.


A personal calendar is an important element of time organization for me. Like contacts, I have been using it in Nextcloud since the very beginning. And as with contacts, Nextcloud supports the standard protocol, in this case CalDAV, very well. Thanks to GNOME Online Accounts and evolution-data-server, my calendars are then automatically displayed in Evolution or GNOME Calendar on the desktop and, thanks to DAVx5, which handles CalDAV in addition to CardDAV, also in the mobile calendar app.

We’ve been using Google Workspace at work for a few years now, so I can compare, and I have to say that while I’ve had countless problems with Google’s calendars in third-party tools, Nextcloud’s calendars have always worked flawlessly with them. Of the other features, I mainly use the sharing of calendars with other family members. However, Nextcloud calendars can do much more thanks to the integration with other tools. For example, you can create a conference room for a meeting in Nextcloud Talk, etc.

<figure class="wp-block-image size-large has-custom-border"><figcaption class="wp-element-caption">Nextcloud Calendars</figcaption></figure>

Actually, there’s only one thing missing: when I add an external calendar to Nextcloud via a link to an .ics file, Nextcloud won’t propagate it to third-party clients. So I have to set it up again in each of them.


I also use Nextcloud for tasks. I don’t need anything sophisticated, just a classic to-do list where I can quickly enter a task and set a deadline. I use it for both personal and work tasks. Nextcloud treats tasks like a special calendar, so they too can be synced between different devices thanks to the CalDAV protocol.

On Linux, I have tasks available again thanks to evolution-data-server. I can work with them either in Evolution or in Endeavour, which focuses solely on tasks. I could find use for a few extra features, but it handles the basics I require. The problem with Endeavour is that development has stopped. It uses out-of-date versions of libraries and I may have to abandon it soon. Recently, a new application with Nextcloud support has appeared – Errands. However, it can’t set completion dates yet, which is essential for me, and it also can’t work with multiple pre-existing to-do lists, but creates a new one instead, which is not ideal either.

I used to use the OpenTasks app on Android, but it never really worked for me. I switched to Tasks.org a while ago and it suits me much better. It has a clearer interface and has more features. And one is really a killer feature: it can alert on tasks not only based on time, but also based on location, so it handles even tasks that are defined by location and not time of completion, like: when you’re in town, stop to buy…


I have been using RSS to follow news and articles for many years. I used to use the built-in reader in Opera, but then I switched to Firefox and as I started reading news on different devices, it didn’t make sense to use something that couldn’t sync between them.

<figure class="wp-block-image size-large has-custom-border"><figcaption class="wp-element-caption">Web interface of Nextcloud News</figcaption></figure>

So I switched to Feedly, but as I started using Nextcloud, I moved my RSS reader there as well and started using Nextcloud News. I still use it to this day and I’m perfectly happy with it. I can read my feeds from my laptop, my mobile phone, or anywhere via the web interface. The read/unread state syncs, so I can always pick up on one device where I left off on another.

<figure class="wp-block-image aligncenter size-large is-resized has-custom-border"><figcaption class="wp-element-caption">Nextcloud News for Android</figcaption></figure>

There is a Nextcloud News app for Android, and on Linux I use the NewsFlash app, which is, in a word, excellent. It has a modern and simple interface that also adapts well to the screen size, so I use it to my full satisfaction on my Linux phone too.

<figure class="wp-block-image size-large has-custom-border"><figcaption class="wp-element-caption">NewsFlash for Linux</figcaption></figure>


I try to follow basic security rules, so I deploy two-factor authentication where possible, and I have used unique strong passwords for years. This can’t be done without a password manager. At first I used 1Password for this, but with the move to Nextcloud I switched to Passman. It still exists today, but it had development activity issues at some point, and I decided to switch to Nextcloud Passwords, which I like better in both the web interface and the mobile app.

Nextcloud Passwords has the usual features, like warnings that a password has been leaked somewhere. It allows different levels of encryption, up to end-to-end encryption. I also like sharing passwords with other Nextcloud users. There’s also a web browser extension that autocompletes login credentials, but I use the login manager built into Firefox for that, so I don’t use it. There are three Android apps for Passwords, and the newest one in particular is excellent – small, simple, fast. There’s also a Linux app, but again its interface doesn’t suit me, so I use the web interface on my computer.

<figure class="wp-block-image aligncenter size-large is-resized"><figcaption class="wp-element-caption">Nextcloud Passwords for Android</figcaption></figure>


I also use Nextcloud for personal notes. Nextcloud Notes are quite simple compared to Evernote, but they are enough for me. They can be formatted decently, I can add pictures and other attachments. Actually, the only thing I miss is that they don’t have subcategories to organize them better. On the other hand, I like that they are saved in the /Notes folder as markdown files and can be worked with outside of the notes app itself.

<figure class="wp-block-image size-large has-custom-border"><figcaption class="wp-element-caption">The web interface of Nextcloud Notes</figcaption></figure>

Previously, Nextcloud Notes worked in GNOME Notes (Bijiben), but that application has been out of development for a few years now and support for Nextcloud has broken. So I have to work with the web interface on the desktop. For mobile, there is a Nextcloud Notes app that provides the same functionality, just in an interface tailored for small screens.

Calls & Chat

Nextcloud also offers the ability to chat and call using the Talk app. I primarily use Telegram and Signal, so I admit I don’t use this option much. Until Telegram was able to make (video) calls, I used Talk for that. For calls to work through NAT, you still have to run a TURN server such as coturn alongside it. Then it works reliably and nicely.

Talk makes sense especially in an organization that already uses Nextcloud and needs something for internal communication. The authors have recently been focusing on integration with other tools in Nextcloud. For example, you can upload a document to a chat room etc. It’s not that useful for private chatting, because even though it uses XMPP internally, it can’t federate, and in general XMPP is rather rare nowadays compared to other services. There is a mobile app available.


This is not a complete list of what I use in Nextcloud. I also use the polling app from time to time, like when we need to arrange a meetup. Instead of Doodle, I can just create a poll in Nextcloud and others can vote in it without having to register.

I’m not much of a cook, but I use Nextcloud Cookbook for my few recipes. The nice thing about it is that I can write the recipe on my computer, but it also has a mobile app so I can just look at my mobile while I’m cooking.

<figure class="wp-block-image size-large has-custom-border"><figcaption class="wp-element-caption">The Nextcloud folder on my phone</figcaption></figure>

Many Options

This is what I use, but Nextcloud has become a really broad platform with dozens of apps, and everyone can find their own mix. What remains true is that, unlike other personal cloud solutions, you're in control of the data. I run Nextcloud in-house and I have to say that once things are set up, it is a nearly maintenance-free thing. The installation has gone through countless upgrades over the past 7 years and has been virtually problem-free.

Not everyone has the knowledge and desire to run Nextcloud in-house. For them there is Nextcloud as a service. So far, the best capacity/price ratio I have seen is Storage Share from Hetzner. You can get 1TB of storage with unlimited users and 50 concurrent connections for 125 CZK per month, which is very good value even compared to offers from giants like Google or Microsoft. You just need to take into account that this really is Nextcloud as a service: you can only do administration inside the installation, but you can't get to the installation files, database, etc., so there's no easy migration elsewhere. If you want more control, you can choose one of the dozens of other providers.

The article was originally published on my Czech blog.

Next Open NeuroFedora meeting: 20 November 1300 UTC

Posted by The NeuroFedora Blog on November 20, 2023 12:29 PM
Photo by William White on Unsplash


Please join us at the next regular Open NeuroFedora team meeting on Monday 20 November at 1300 UTC. The meeting is a public meeting, and open for everyone to attend. You can join us over:

You can use this link to convert the meeting time to your local time. Or, you can also use this command in the terminal:

$ date --date='TZ="UTC" 1300 2023-11-20'

The meeting will be chaired by @ankursinha. The agenda for the meeting is:

We hope to see you there!

Week 46 in Packit

Posted by Weekly status of Packit Team on November 20, 2023 12:00 AM

Week 46 (November 14th – November 20th)

  • Packit now supports pre-release versions in propose_downstream and pull_from_upstream. A spec file update might be required, see the documentation for more details. (packit#2149)
    • In relation to that, specfile library has a new method, Specfile.update_version(), that allows updating spec file version even if it is a pre-release. (specfile#317)
  • Packit can now check, using the new update_version_mask configuration option, that the proposed version of a new release and the current version on the dist-git branch are compatible, and sync the dist-git branch only in that case. (packit#2156)
  • Packit is now able to get the version from the spec file even if the Version tag is not present in the spec file directly, but is e.g. imported from another file. (packit#2157)
  • PACKIT_COPR_PROJECT env var that is exposed to Testing Farm now includes the Copr project of the additional build specified in comment, if present. (packit-service#2253)

Episode 402 – The EU’s eIDAS regulation is a terrible idea

Posted by Josh Bressers on November 20, 2023 12:00 AM

Josh and Kurt talk about the new EU eIDAS regulation. This is a bill that will force web browsers to add root certificates based on law instead of technical merits, which is how it’s currently done. This is concerning for a number of reasons that we discuss on the show. This proposal is not a good idea.

<audio class="wp-audio-shortcode" controls="controls" id="audio-3254-2" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_402_The_EUs_eIDAS_regulation_is_a_terrible_idea.mp3?_=2" type="audio/mpeg">https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_402_The_EUs_eIDAS_regulation_is_a_terrible_idea.mp3</audio>

Show Notes

Untitled Post

Posted by Zach Oglesby on November 19, 2023 11:36 PM

Finished reading: A Second Chance by Jodi Taylor 📚

Book #3 was action-packed. I was surprised to find how much of the book was left when I thought it was nearly done (an ebook problem), but the story was great and the ending was worth it. It really pulls out all the stops in the time-travel genre.

Some Fedora Infra stats in Nov 2023

Posted by Kevin Fenzi on November 19, 2023 11:11 PM

Things have been crazy busy of late, but with Fedora 39 out the door and a week of vacation coming up I am finally starting to feel caught up. So, I thought I would share a quick post on some stats:

Number of instances in fedora-infra ansible inventory: 448

Instances here means bare metal machines, VMs on those bare metal machines, and some AWS instances. It doesn't include containers or the like.

Breakdown by OS:

273 are Fedora of some version and 175 are some RHEL version

Of the Fedora ones, 1 is f40 (rawhide-test), 39 are f39, 210 are f38 (all the builders are still on f38 and are going to be reinstalled with f39 soon), 13 are f37, and the rest are odd older things (our OSBS cluster, which is slated for retirement).

Of the RHEL ones, 70 are 9, 59 are 8 and 46 are 7. Many of the ones still on RHEL7 are services we are working to retire or waiting on applications to be ported to RHEL9 (mirrormanager, badges, mailman3, osbs, pdc, mbs). Many of the RHEL8 ones are just tricky ones to upgrade like database servers or virthosts that house those database servers.

Likely in the coming weeks I will try to get a bunch more of those uplifted before the end of the year. There may be some downtime upgrading the databases, but hopefully it will be minimal.

Infra & RelEng Update – Week 46 2023

Posted by Fedora Community Blog on November 17, 2023 12:00 PM

This is a weekly report from the I&R (Infrastructure & Release Engineering) Team. It also contains updates for the CPE (Community Platform Engineering) Team, as the CPE initiatives are in most cases tied to I&R work.

We provide both an infographic and a text version of the weekly report. If you just want to quickly see what we did, look at the infographic. If you are interested in more in-depth details, look below it.

Week: 13 – 17 Nov 2023

<figure class="wp-block-image size-full is-style-default">CPE infographic</figure>

Infrastructure & Release Engineering

The purpose of this team is to take care of the day-to-day business regarding CentOS and Fedora infrastructure and Fedora release engineering work.
It is responsible for services running in Fedora and CentOS infrastructure and for preparing things for the new Fedora release (mirrors, mass branching, new namespaces, etc.).
Planning board

Fedora Infra

CentOS Infra including CentOS CI

Release Engineering

CPE Initiatives


Extra Packages for Enterprise Linux (or EPEL) is a Fedora Special Interest Group that creates, maintains, and manages a high quality set of additional packages for Enterprise Linux, including, but not limited to, Red Hat Enterprise Linux (RHEL), CentOS, Scientific Linux (SL), and Oracle Linux (OL).


ARC Investigations

The ARC (which is a subset of the CPE team) investigates possible initiatives that CPE might take on.


List of new releases of apps maintained by CPE

The post Infra & RelEng Update – Week 46 2023 appeared first on Fedora Community Blog.

New badge: Freetober 2023 !

Posted by Fedora Badges on November 17, 2023 10:13 AM
Freetober 2023: You participated in #freetober 2023!

New Computer and x86_64 builder for 2024, 2025...

Posted by Remi Collet on November 16, 2023 08:17 AM

My new computer is operational:

  • Gigabyte B760 AORUS Elite AX
  • Intel(R) Core(TM) i9-13900K CPU
  • Memory: 2x32GiB DDR5
  • NVMe disk of 2TB

Retrieved from my previous computer:

  • Corsair Graphite series 760T box
  • WaterCooling
  • SSD disks, 180GB and 480GB
  • SATA disk of 4TB

This computer is now my main development workstation and also my x86_64 builder.

First test, boot time < 5", PHP 8.3 build takes 20" instead of 55". Nice.

The CPU benchmark shows a tripling of results; see i9-9900K vs i9-13900K.

Another comparison, building grpc extension RPM, with the old i9-9900K

  • 4'12" for a single build using -j16
  • 3'29" average for 2 builds using -j8
  • 3'25" average for 3 builds using -j5

With the new i9-13900K

  • 2'20" for a single build using -j32
  • 1'20" average for 2 builds using -j16
  • 1'05" average for 3 builds using -j10

I plan to extend it quickly:

  • +64GiB memory

Thanks to all who want to participate, see the Merci/Thanks page.

Of course, my previous machine, from 2019, will be re-used.

Pagure Exporter is now available

Posted by Fedora Community Blog on November 16, 2023 08:00 AM

Pagure Exporter is now available in the Fedora package repositories and PyPI. The Pagure Exporter tool enables the migration of a Pagure repository (including metadata and issues) to a GitLab repository. Install pagure-exporter from Fedora Linux repositories or PyPI to get started.

With most of the projects used in the Fedora Project community moving away from Pagure to the Fedora Project’s namespace on GitLab.com, Justin W. Flory proposed an initiative to the Community Platform Engineering team. The objective was to investigate and develop a self-service tool capable of helping contributors to the community migrate their project assets from Pagure to GitLab as reliably as possible, similar to the pagure-importer tool from Trac in 2015.

After investigation and development of a prototype by ARC investigators Michal Konecny and Akashdeep Dhar, as well as a backlog refinement process, the project was scoped for implementation as part of the Fedora Infrastructure and Release Engineering work for Q4 2023 by the team's then-product owner, Aoife Moloney.

After around a month and a half of designing, developing and testing by Akashdeep Dhar, the Pagure Exporter tool is now published for use in Fedora Linux repositories and PyPI.

Install Pagure Exporter on Fedora Linux

A package is available in the official Fedora repositories. As of publishing time, the package is available on Fedora Linux 38, 39, and Rawhide.

sudo dnf install pagure-exporter

Install Pagure Exporter from PyPI

A package is available as a PyPI project. This can be used from any distribution:

pip install pagure-exporter

Future plans for Pagure Exporter

While the application has a set of known issues, it already provides the following features, and the list will expand as development progresses:

  • Transferring repository files from Pagure projects to GitLab
  • Filtering repository transfer operation by branch names
  • Transferring issue tickets from Pagure projects to GitLab
  • Filtering issue ticket transfer operation by statuses and identities
  • Migrating states, comments, and tags/labels for issues
  • The built-in logging library is used for better compatibility with journaling

Apart from the features mentioned above, the following things encourage community members to participate in the project’s development:

  • Excellent overall codebase quality is ensured with 100% coverage of functional code
  • Over 75 checks are provided for unit-based and integration-based codebase testing
  • GitHub Actions and Pre-Commit CI are enabled to automate maintaining code quality
  • Documentation related to usage, development, and testing is provided

Get involved with Pagure Exporter

In case you encounter any issues or have a suggestion to make the project better, please let us know on the Pagure Exporter issue tracker. To contribute, please make your pull requests on GitHub.

The post Pagure Exporter is now available appeared first on Fedora Community Blog.

Music of the week: the Cello

Posted by Peter Czanik on November 15, 2023 12:22 PM

I love the melodies of Metallica songs. However, I strongly prefer instrumental music. That’s why I was very happy, when someone brought Apocalyptica to my attention: they played Metallica on four cellos. Over the years I discovered that metal or any other music sounds nice on cellos, as I learned about two more bands: 2cellos and Mozart Heroes.

<figure> </figure>

But I should not rush so far ahead. In 2000, someone introduced me to Metallica. I loved the melodies, but I'm not a great fan of singing. A few months later, another friend who had learned about my problem introduced me to Apocalyptica: the same wonderful melodies, but purely instrumental music. First I bought their debut album, “Apocalyptica plays Metallica by Four Cellos”, and soon after also the second one, “Inquisition Symphony”.

<iframe allowfullscreen="allowfullscreen" src="https://www.youtube.com/embed/Etns5DS3Txo" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%; border:0;" title="YouTube Video"></iframe>

TIDAL: https://listen.tidal.com/album/248123832 and https://listen.tidal.com/album/109813970

The next album I heard from Apocalyptica also featured a singer. That’s not something for me. That’s when I learned from a colleague about 2Cellos, a Croatian cellist duo. They played a wide variety of arrangements, everything from classical, through rock to pop. I quickly listened to all of their albums on TIDAL, and watched some of their videos on YouTube. This is my favorite:

<iframe allowfullscreen="allowfullscreen" src="https://www.youtube.com/embed/uT3SBzmDxGk" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%; border:0;" title="YouTube Video"></iframe>

TIDAL: https://listen.tidal.com/album/38440446/track/38440450

I learned about Mozart Heroes from a friend whose son plays the cello. It is not purely cello music, as the other member of the band plays the guitar. Still, it was instant love. They also play arrangements, often combining a classical piece with something modern in the very same song. Sometimes the transition from one melody to the other is completely seamless. In the video below they combine Mozart and Metallica in one song:

<iframe allowfullscreen="allowfullscreen" src="https://www.youtube.com/embed/UBfsS1EGyWc" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%; border:0;" title="YouTube Video"></iframe>

TIDAL: https://listen.tidal.com/artist/9200105

For almost two decades I did not follow Apocalyptica, as the new music I heard from them was not purely instrumental. When Covid broke out, many concert tours were canceled, and some were replaced by free online streaming. I do not remember how I learned that Apocalyptica would also be performing an online concert, but as I did not have anything better to do, I watched it. It was purely instrumental, and love at first sight, so I bought the new album as soon as it became available in Hungary. Below I link the whole concert, which I watched live 3.5 years ago.

<iframe allowfullscreen="allowfullscreen" src="https://www.youtube.com/embed/mwuX1fq5h5s" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%; border:0;" title="YouTube Video"></iframe>

TIDAL: https://listen.tidal.com/album/125174900

Nobody is perfect, so there is a little twist in my story. The original reason I fell in love with cello arrangements was that they were all instrumental; there was no singing. A good friend mentioned that Apocalyptica was coming to our part of Europe, but unfortunately playing together with another band, with singing. I listened to it, and to my greatest surprise, despite the vocals, it was absolutely beautiful. To me, anyway :-)

<iframe allowfullscreen="allowfullscreen" src="https://www.youtube.com/embed/cEgsUUhqdlg" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%; border:0;" title="YouTube Video"></iframe>

TIDAL: https://listen.tidal.com/album/261401831

Writing useful terminal TUI on Linux with dialog and jq

Posted by Fedora Magazine on November 15, 2023 08:00 AM

Why a Text User Interface?

Many use the terminal on a daily basis. A Text User Interface (TUI) is a tool that will minimize user errors and allow you to become more productive with the terminal interface.

Let me give you an example: I connect on a daily basis from my home computer to my physical PC, using Linux. All remote networking is protected by a private VPN. After a while it became irritating to repeat the same commands over and over when connecting.

Having a bash function like this was a nice improvement:

export REMOTE_RDP_USER="myremoteuser"
function remote_machine() {
  /usr/bin/xfreerdp /cert-ignore /sound:sys:alsa /f /u:"$REMOTE_RDP_USER" /v:"$1" /p:"$2"
}

But then I was constantly doing this (on a single line):

remote_pass=$(/bin/cat $HOME/.mypassfile) && remote_machine machine1.domain.com "$remote_pass"

That was annoying. Not to mention that I had my password in clear text on my machine (I have an encrypted drive but still…)

So I decided to spend a little time and came up with a nice script to handle my basic needs.

What information do I need to connect to my remote desktop?

Not much information is needed. It just needs to be structured so a simple JSON file will do:

{
  "machines": [
    {
      "name": "machine1.domain.com",
      "description": "Personal-PC"
    },
    {
      "name": "machine2.domain.com",
      "description": "Virtual-Machine"
    }
  ],
  "remote_user": "MYUSER@DOMAIN",
  "title": "MY COMPANY RDP connection"
}

JSON is not the best format for configuration files (as it doesn’t support comments, for example) but it has plenty of tools available on Linux to parse its contents from the command line. A very useful tool that stands out is jq. Let me show you how I can extract the list of machines:

/usr/bin/jq --compact-output --raw-output '.machines[]| .name' \
$HOME/.config/scripts/kodegeek_rdp.json
machine1.domain.com
machine2.domain.com

Documentation for jq is available here. You can try your expressions at jq play just by copying and pasting your JSON files there and then use the expression in your scripts.
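If you want to prototype the same extraction outside the shell, Python's standard json module can mirror those jq expressions. A quick sketch (the inline config_text is just a stand-in for the real kodegeek_rdp.json file):

```python
import json

# Stand-in for $HOME/.config/scripts/kodegeek_rdp.json,
# using the structure shown in the article.
config_text = """
{
  "machines": [
    {"name": "machine1.domain.com", "description": "Personal-PC"},
    {"name": "machine2.domain.com", "description": "Virtual-Machine"}
  ],
  "remote_user": "MYUSER@DOMAIN",
  "title": "MY COMPANY RDP connection"
}
"""

config = json.loads(config_text)

# Same idea as: jq --raw-output '.machines[]| .name'
names = [m["name"] for m in config["machines"]]
print(names)  # -> ['machine1.domain.com', 'machine2.domain.com']

# Same idea as: jq --raw-output '.machines[]| join(",")'
rows = [",".join(m.values()) for m in config["machines"]]
print(rows)  # -> ['machine1.domain.com,Personal-PC', 'machine2.domain.com,Virtual-Machine']
```

This is handy for double-checking what a jq filter should return before wiring it into a script.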

So now that I have all the ingredients I need to connect to my remote computer, let’s build a nice TUI for it.

Dialog to the rescue

Dialog is one of those underrated Linux tools that you wish you knew about a long time ago. You can build a very nice and simple UI that will work perfectly on your terminal.

For example, to create a simple checkbox list with my favorite languages, selecting Python by default:

dialog --clear --checklist "Favorite programming languages:" 10 30 7 \
1 Python on 2 Java off 3 Bash off 4 Perl off 5 Ruby off
<figure class="wp-block-image size-full"></figure>

We told dialog a few things:

  • Clear the screen (all the options start with --)
  • Create a checklist with title (first positional argument)
  • A list of dimensions (height width list height, 3 items)
  • Then each element of the list is a triple: a tag, a label, and its initial status.

Surprisingly, it is very concise and intuitive: you get a nice selection list with a single line of code.

Full documentation for dialog is available here.

Putting everything together: Writing a TUI with Dialog and JQ

I wrote a TUI that uses jq to extract my configuration details from my JSON file and organized the flow with dialog. I ask for the password every single time and save it in a temporary file which gets removed after the script is done using it.

The script is pretty basic but is more secure and also lets me focus on more serious tasks 🙂

So what does the script look like? Let me show you the code:

#!/bin/bash
# Author Jose Vicente Nunez
# Do not use this script on a public computer. It is not secure...
# https://invisible-island.net/dialog/
# Below some constants to make it easier to handle Dialog
# return codes
: ${DIALOG_OK=0}
: ${DIALOG_CANCEL=1}
: ${DIALOG_HELP=2}
: ${DIALOG_EXTRA=3}
: ${DIALOG_ITEM_HELP=4}
: ${DIALOG_ESC=255}
# Temporary file to store sensitive data. Use a 'trap' to remove
# it at the end of the script or if it gets interrupted
declare tmp_file=$(/usr/bin/mktemp 2>/dev/null) || declare tmp_file=/tmp/test$$
trap "/bin/rm -f $tmp_file" QUIT EXIT INT
/bin/chmod go-wrx ${tmp_file} > /dev/null 2>&1
# Extract details like title, remote user and machines using jq from the JSON file
# Use a subshell for the machine list
declare TITLE=$(/usr/bin/jq --compact-output --raw-output '.title' $HOME/.config/scripts/kodegeek_rdp.json)|| exit 100
declare REMOTE_USER=$(/usr/bin/jq --compact-output --raw-output '.remote_user' $HOME/.config/scripts/kodegeek_rdp.json)|| exit 100
declare MACHINES=$(
    declare tmp_file2=$(/usr/bin/mktemp 2>/dev/null) || declare tmp_file2=/tmp/test$$
    # trap "/bin/rm -f $tmp_file2" 0 1 2 5 15 EXIT INT
    declare -a MACHINE_INFO=$(/usr/bin/jq --compact-output --raw-output '.machines[]| join(",")' $HOME/.config/scripts/kodegeek_rdp.json > $tmp_file2)
    declare -i i=0
    while read line; do
        declare machine=$(echo $line| /usr/bin/cut -d',' -f1)
        declare desc=$(echo $line| /usr/bin/cut -d',' -f2)
        declare toggle=off
        if [ $i -eq 0 ]; then
            toggle=on
            ((i=i+1))
        fi
        echo $machine $desc $toggle
    done < $tmp_file2
    /bin/cp /dev/null $tmp_file2
) || exit 100
# Create a dialog with a radio list and let the user select the
# remote machine
/usr/bin/dialog \
    --clear \
    --title "$TITLE" \
    --radiolist "Which machine do you want to use?" 20 61 2 \
    $MACHINES 2> ${tmp_file}
declare -i return_value=$?
# Handle the return codes from the machine selection in the
# previous step
export remote_machine=""
case $return_value in
  $DIALOG_OK)
    export remote_machine=$(/bin/cat ${tmp_file})
    ;;
  $DIALOG_CANCEL)
    echo "Cancel pressed.";;
  $DIALOG_HELP)
    echo "Help pressed.";;
  $DIALOG_EXTRA)
    echo "Extra button pressed.";;
  $DIALOG_ITEM_HELP)
    echo "Item-help button pressed.";;
  $DIALOG_ESC)
    if test -s $tmp_file ; then
      /bin/rm -f $tmp_file
    fi
    echo "ESC pressed."
    ;;
esac

# No machine selected? No service ...
if [ -z "${remote_machine}" ]; then
  /usr/bin/dialog \
    --clear  \
    --title "Error, no machine selected?" --clear "$@" \
    --msgbox "No machine was selected!. Will exit now..." 15 30
  exit 100
fi

# Send 4 packets to the remote machine. I assume your network
# administration allows ICMP packets
# If there is an error show a message box
/bin/ping -c 4 ${remote_machine} >/dev/null 2>&1
if [ $? -ne 0 ]; then
  /usr/bin/dialog \
    --clear  \
    --title "VPN issues or machine is off?" --clear "$@" \
    --msgbox "Could not ping ${remote_machine}. Time to troubleshoot..." 15 50
  exit 100
fi

# Remote machine is visible, ask for credentials and handle user
# choices (like password with a password box)
/bin/rm -f ${tmp_file}
/usr/bin/dialog \
  --title "$TITLE" \
  --clear  \
  --insecure \
  --passwordbox "Please enter your Windows password for ${remote_machine}\n" 16 51 2> $tmp_file
declare -i return_value=$?
case $return_value in
  $DIALOG_OK)
    # We have all the information, try to connect using RDP protocol
    /usr/bin/mkdir -p -v $HOME/logs
    /usr/bin/xfreerdp /cert-ignore /sound:sys:alsa /f /u:$REMOTE_USER /v:${remote_machine} /p:$(/bin/cat ${tmp_file})| \
    /usr/bin/tee $HOME/logs/$(/usr/bin/basename $0)-$remote_machine.log
    ;;
  $DIALOG_CANCEL)
    echo "Cancel pressed.";;
  $DIALOG_HELP)
    echo "Help pressed.";;
  $DIALOG_EXTRA)
    echo "Extra button pressed.";;
  $DIALOG_ITEM_HELP)
    echo "Item-help button pressed.";;
  $DIALOG_ESC)
    if test -s $tmp_file ; then
      /bin/rm -f $tmp_file
    fi
    echo "ESC pressed."
    ;;
esac

<figure class="wp-block-image size-full"></figure>

You can see from the code that dialog expects positional arguments and also allows you to capture user responses in variables. This effectively makes it an extension to Bash to write text user interfaces.

My small example above only covers some of the available widgets; there is plenty more documentation on the official dialog site.

Are dialog and JQ the best options?

You can skin this rabbit in many ways (Textual, GNOME Zenity, Python Tkinter, etc.). I just wanted to show you one nice way to accomplish this in a short time, with only about 100 lines of code.

It is not perfect. Specifically, integration with Bash makes the code very verbose, but it is still easy to debug and maintain. It is also much better than copying and pasting the same long command over and over.

One last thing: If you liked jq for JSON processing from Bash, then you will appreciate this nice collection of jq recipes.

Rewriting nouveau’s Website

Posted by Hari Rana (TheEvilSkeleton) on November 15, 2023 12:00 AM


We spent a whole week rewriting nouveau’s website — the drivers for NVIDIA cards. It started as a one-person effort, but it led to a few people helping me out. We addressed several issues in the nouveau website and improved it a lot. The redesign is live on nouveau.freedesktop.org.

In this article, we’ll go over the problems with the old site and the work we’ve done to fix them.

Problems With Old Website

I’m going to use this archive as a reference for the old site.

The biggest problem with the old site was that the HTML and CSS were written 15 years ago and have never been updated since. So in 2023, we were relying on outdated HTML/CSS code. Obviously, this was no fun from a reader’s perspective. With the technical debt and lack of interest, we were suffering from several problems. The only good thing about the old site was that it didn’t use JavaScript, which I wanted to keep for the rewrite.

Fun fact: the template was so old that it could be built for browsers that don’t support HTML5!

Not Responsive

“Responsive design” in web design means making the website accessible on a variety of screen sizes. In practice, a website should adapt to work on mobile devices, tablets, and laptops/computer monitors.

In the case of the nouveau website, it didn’t support mobile screen sizes properly. Buttons were hard to tap and text was small. Here are some screenshots taken in Firefox on my Razer Phone 2:

<figure> <figcaption>

Small buttons and text in the navigation bar that are difficult to read and tap.

</figcaption> </figure> <figure> <figcaption>

Small text in a table that forces the reader to zoom in.

</figcaption> </figure>

No Dark Style

Regardless of style preferences, a dark style/theme helps people who are sensitive to light and saves battery life on AMOLED displays. For those who absolutely need it, a dark style is not a nicety but a necessity.


No SEO

Search Engine Optimization (SEO) is the process of making a website more discoverable on search engines like Google. We use various elements such as title, description, icon, etc. to increase the ranking in search engines.

In the case of nouveau, there were no SEO efforts. If we look at the old nouveau homepage’s <head> element, we get the following:

<title>nouveau</title>
<meta charset="utf-8">
<link rel="stylesheet" href="style.css" type="text/css">
<link rel="stylesheet" href="xorg.css" type="text/css">
<link rel="stylesheet" href="local.css" type="text/css">
<link rel="alternate" type="application/x-wiki" title="Edit this page" href="https://gitlab.freedesktop.org/nouveau/wiki/-/edit/main/sources/index.mdwn">

The only thing there was a title, which is, obviously, far from desirable. The rest were CSS stylesheets, wiki source link, and character set.
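By comparison, a minimally SEO-friendly <head> adds at least a description and a viewport declaration alongside the title. The sketch below is illustrative only; the description text and icon path are assumptions, not the markup the rewritten site actually uses:

```html
<head>
  <meta charset="utf-8">
  <!-- needed for responsive rendering on mobile -->
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <title>nouveau: open-source NVIDIA drivers</title>
  <!-- shown as the snippet under the result in search engines -->
  <meta name="description" content="nouveau, the accelerated open-source driver for NVIDIA cards">
  <link rel="icon" href="favicon.ico">
  <link rel="stylesheet" href="style.css" type="text/css">
</head>
```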

Readability Issues

One of the biggest problems with nouveau’s website (apart from the homepage) is the lack of a maximum width. Large paragraphs stretch across the screen, making it difficult to read.
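A cap on line length is usually a one-rule fix. As a sketch, assuming the page content lives in a <main> element:

```css
/* Limit line length to roughly 80 characters and center the column */
main {
  max-width: 80ch;
  margin: 0 auto;
  padding: 0 1rem;
}
```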

Process of Rewriting

Before I started the redesign, I talked to Karol Herbst, one of the nouveau maintainers. He had been wanting to redesign the nouveau site for ages, so I asked myself, “How hard can it be?” Well… mistakes were made.

The first step was to look at the repository and learn about the tools freedesktop.org uses for their website. freedesktop.org uses ikiwiki to generate the wiki. Problem is: it’s slow and really annoying to work with. The first thing I did was create a Fedora toolbox container. I installed the ikiwiki package to generate the website locally.

The second step was to rewrite the CSS and HTML template. I took a look at page.tmpl — the boilerplate. While looking at it, I discovered another problem: the template is unreadable. So I worked on that as well.

I ported to modern HTML elements, like <nav> for the navigation bar, <main> for the main content, and <footer> for the footer.

The third step was to rewrite the CSS. In the <head> tag above, we can see that the site pulls CSS from many sources: style.css, xorg.css, and local.css. So what I did was to delete xorg.css and local.css, delete the contents of style.css, and rewrite it from scratch. I copied a few things from libadwaita, namely its buttons and colors.

And behold… merge request !29!

Despite the success of the rewrite, I ran into a few roadblocks. I couldn’t figure out how to make the freedesktop.org logo dark style. Luckily, my friend kramo helped me out by providing an SVG file of the logo that adapts to dark style, based on Wikipedia’s. They also adjusted the style of the website to make it look nicer.

I also couldn’t figure out what to do with the tables because the colors were low contrast. Also, the large table on the Feature Matrix page was limited in maximum width, which would make it uncomfortable on large monitors. Lea from Fyra Labs helped with the tables and fixed the problems. She also adjusted the style.

After that, the rewrite was mostly done. Some reviewers came along and suggested some changes. Karol wanted the rewrite so badly that he opened a poll asking if he should merge it. It was an overwhelming yes, so… it got merged!


As Karol puts it:

<figure> <figcaption>

“check out the nouveau repo, then cry, then reconsider your life choices”

</figcaption> </figure>

In all seriousness, I’ve had a great time working on it. While this is the nouveau site in particular, I plan to eventually rewrite the entire freedesktop.org site. However, I started with nouveau because it was hosted on GitLab. Meanwhile, other sites/pages are hosted on freedesktop.org’s cgit instance, which were largely inaccessible for me to contribute to.

Ideally, we’d like to move from ikiwiki to something more modern, like a framework or a better generator, but we’ll have to see who’s willing to work on it and maintain it.

F39 Elections now open

Posted by Fedora Community Blog on November 14, 2023 05:49 PM

Today we are starting the nomination & campaign period during which we accept nominations to the “steering bodies” of the following teams:

This period is open until 2023-11-27 at 23:59:59 UTC.

Candidates may self-nominate. If you nominate someone else, please check with them to ensure that they are willing to be nominated before submitting their name.

The steering bodies are currently selecting interview questions for the candidates.

Nominees submit their questionnaire answers via a private Pagure issue. The Election Wrangler or their backup will publish the interviews to the Community Blog before the start of the voting period.

Please note that the interview is mandatory for all nominees. Nominees not having their interview ready by end of the Interview period (2023-11-29) will be disqualified and removed from the election.

As part of the campaign, people may also ask questions of specific candidates on the appropriate mailing list.

The full schedule of the elections is available on the Elections schedule. For more information about the elections process, see the Elections docs.

The post F39 Elections now open appeared first on Fedora Community Blog.

The final release of Fedora Linux 39 is out

Posted by Fedora fans on November 14, 2023 01:12 PM


It was on November 6, 2003 that the Fedora Project published its first release, Fedora Core 1; now, twenty years later, it has published the final release of Fedora Linux 39.
Fedora Linux is a community-based operating system for desktops, laptops, servers, cloud environments, and edge devices.

Desktop news


Fedora Workstation now features the GNOME 45 desktop, which brings better performance and many notable improvements, including a new workspace switcher and a much-improved image viewer.

If you are looking for a different desktop experience, our Budgie team has created Fedora Onyx, an "atomic" Budgie-based desktop in the spirit of Fedora Silverblue.

Of course, that is not all: the other editions with various desktops, including KDE Plasma Desktop, Xfce, Cinnamon, and more, have been updated as well.


Fedora in the Cloud


Fedora Cloud images will be officially available on Microsoft Azure (in addition to Google Cloud and AWS). Cloud images are now also configured so that cloud-init can (at your option) install updates and reboot when a machine is first provisioned, so you know you are running the latest security updates.


Other updates


As always, many other packages have been updated, as we try to bring you the best of everything the world of free and open source software has to offer.

Fedora Linux 39 includes gcc 13.2, binutils 2.40, glibc 2.38, gdb 13.2, and rpm 4.19, as well as programming language updates such as Python 3.12 and Rust 1.73.

Notably, Fedora 39 includes the latest version of Inkscape, the popular vector illustration and graphic design tool. Inkscape also turned 20 yesterday; we are digital twins! Congratulations to everyone on that great project as well.


Download Fedora 39

As always, you can download Fedora Linux from the official Fedora Project website:



Stay happy and Fedora!


The post The final release of Fedora Linux 39 is out first appeared on طرفداران فدورا (Fedora Fans).