Fedora People

Elections Voting is now Open!

Posted by Fedora Community Blog on May 19, 2024 11:01 PM

The voting period has now opened for the Fedora Linux 40 elections cycle. Please read the candidate interviews and cast your votes in our Elections App. Visit our Voting Guide to learn how our voting system works. Voting closes at 23:59:59 UTC on May 30th.

Fedora Mindshare Committee

Fedora Mindshare has one seat open. Below is the candidate eligible for election.

Fedora Council

The Fedora Council has two seats open. Below are the candidates eligible for election.

Fedora Engineering Steering Committee (FESCo)

The Fedora Engineering Steering Committee (FESCo) has four seats open. Below are the candidates eligible for election.

EPEL Steering Committee

The EPEL Steering Committee has four seats open. Below are the candidates eligible for election.

The post Elections Voting is now Open! appeared first on Fedora Community Blog.

EPEL Steering Committee Election: Troy Dawson (tdawson)

Posted by Fedora Community Blog on May 19, 2024 10:56 PM

This is a part of the Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Monday, 20th May and closes promptly at 23:59:59 UTC on Thursday, 30th May 2024.

Interview with Troy Dawson

  • FAS ID: tdawson

Questions

What is your background in EPEL? What have you worked on and what are you doing now?

I started contributing to EPEL 11 years ago with some nodejs packages for OpenShift. I later added rubygems and golang packages as OpenShift changed languages. Later, RHEL 8 did not have KDE, so I added KDE to epel8, and have been maintaining KDE in epel ever since. I have picked up many other packages during the years, but I think my KDE contributions are what I am most known for.

I’ve been the EPEL Steering Committee chair since 2020, taking over from Stephen Smoogen. A lot of changes have happened since then, most of them for the better. I’m not responsible for all the changes, but it’s been wonderful being part of the committee as these changes have come through.

Why are you running for EPEL Steering Committee Member?

EPEL has grown to be part of my professional and personal life. I not only want to contribute to it, but also help steer its growth and progression. I think that as an EPEL Steering Committee member, I can help keep EPEL healthy and thriving.

The post EPEL Steering Committee Election: Troy Dawson (tdawson) appeared first on Fedora Community Blog.

EPEL Steering Committee Election: Interview with Robby Callicotte (rcallicotte)

Posted by Fedora Community Blog on May 19, 2024 10:55 PM

This is a part of the Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Monday, 20th May and closes promptly at 23:59:59 UTC on Thursday, 30th May 2024.

Interview with Robby Callicotte

  • FAS ID: rcallicotte

Questions

What is your background in EPEL? What have you worked on and what are you doing now?

I’ve been an EPEL user since the mid 2000s and a package maintainer since 2021. I brought Saltstack back into EPEL and added a nifty linter for salt states called salt-lint. I am currently working on building all the dependencies for the upcoming pagure 6 release.

Why are you running for EPEL Steering Committee member?

EPEL packages account for millions of downloads per week and are a vitally important part of the Enterprise Linux ecosystem. I earnestly believe in the project’s mission and am eager to contribute my skills and acumen to its success. I am excited about the opportunity to serve the EPEL community and help shape its future direction.

The post EPEL Steering Committee Election: Interview with Robby Callicotte (rcallicotte) appeared first on Fedora Community Blog.

EPEL Steering Committee Election: Interview with Neil Hanlon (neil)

Posted by Fedora Community Blog on May 19, 2024 10:55 PM

This is a part of the Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Monday, 20th May and closes promptly at 23:59:59 UTC on Thursday, 30th May 2024.

Interview with Neil Hanlon

  • FAS ID: neil

What is your background in EPEL? What have you worked on and what are you doing now?

I’ve been a consumer of EPEL for more than a decade as I learned Linux and began my career—first in web hosting, then “DevOps”, and eventually being responsible for the network architecture and infrastructure for a large travel company. In the past four years, I helped to found the Rocky Linux project, where I became friends with Pablo Greco who encouraged me to participate in EPEL more actively, and I continue to maintain its various components, which involves significant Fedora and EPEL work as well as contributions to other upstream projects.

For over a year now, I have been a Fedora and EPEL packager and actively participate in EPEL Steering Committee meetings weekly. Most of the packages I maintain or contribute to are related to needs in the Rocky Linux project or are personally useful to me—such as remind, a calendar and alarm program that helps me stay organized, or passwdqc, a powerful password quality and policy enforcement toolkit that is especially useful in managing EL-based FreeIPA environments.

While I enjoy packaging, I find the most rewarding aspect of my work in EPEL to be mentoring and encouraging others to become Fedora and/or EPEL packagers. Growing the community and ecosystem of EL developers is crucial to the longevity of Enterprise Linux as a whole. Expanding the base of EPEL contributors not only strengthens EPEL but also benefits the broader Fedora developer community and can lead to more employer-sponsored contributions to Open Source.

In the coming months, I plan to introduce several cloud and HPC-related packages to Fedora and EPEL. I also aim to finalize a number of packages that are being introduced to Fedora following discussions in the #epel IRC channel, which I hope will bring in new Fedora and EPEL package maintainers.

Why are you running for EPEL Steering Committee member?

I am running for the EPEL Steering Committee to help foster growth and increase activity within Extra Packages for Enterprise Linux (EPEL). EPEL is unique in its commitment to stability and freedom from unnecessary disruptions, mirroring the principles of RHEL. While enabling Fedora packages for Enterprise Linux is a significant goal, EPEL goes further by ensuring that updates and newly introduced packages uphold this stability.

Maintaining packages for EPEL can be daunting for many maintainers, particularly those new to Fedora packaging guidelines, policies, and workflows. Clearing this initial hurdle is crucial to understanding the different layers of policy between Fedora and EPEL, which make EPEL a unique and valuable project. EPEL has a large user base because of the immense utility it provides, and growing the footprint of EPEL maintainers is crucial for the project’s long-term health.

The post EPEL Steering Committee Election: Interview with Neil Hanlon (neil) appeared first on Fedora Community Blog.

EPEL Steering Committee Elections: Interview with Kevin Fenzi (kevin)

Posted by Fedora Community Blog on May 19, 2024 10:54 PM

This is a part of the Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Monday, 20th May and closes promptly at 23:59:59 UTC on Thursday, 30th May 2024.

Interview with Kevin Fenzi

  • FAS ID: kevin

Questions

What is your background in EPEL? What have you worked on and what are you doing now?

I’ve been involved in EPEL since the epel-5/6 days. With epel7 I wrangled a bunch of the work to get things set up, including a beta before the RHEL release, lots of release engineering work, and mirroring/content. I maintain a lot of packages in EPEL, both for others’ use and for use in Fedora Infrastructure, which relies heavily on EPEL for managing things that are not in RHEL proper. I’ve been happy to see EPEL continue to grow and flourish, and these days I tend to just provide some light releng work for EPEL along with package maintenance. EPEL is a wonderful service and I think it really increases the usefulness of RHEL and its downstreams.

Why are you running for EPEL Steering Committee member?

I think I provide a useful historical perspective on how things were setup and why, along with a release engineering perspective on how changes could be done or might affect composes.

The post EPEL Steering Committee Elections: Interview with Kevin Fenzi (kevin) appeared first on Fedora Community Blog.

EPEL Steering Committee Election: Interview with Jonathan Wright (jonathanspw)

Posted by Fedora Community Blog on May 19, 2024 10:53 PM

This is a part of the Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Monday, 20th May and closes promptly at 23:59:59 UTC on Thursday, 30th May 2024.

Interview with Jonathan Wright

  • FAS ID: jonathanspw

Questions

What is your background in EPEL? What have you worked on and what are you doing now?

I’ve been working with EPEL since its inception – having been a user of “Enterprise Linux” since CentOS 4. I became a Fedora packager a few years ago with an explicit interest in the EPEL side of things. Little did I know I’d also end up caring greatly about the Fedora side and I came back from the dark side (Arch) and am a Fedora desktop user again.

I’m the infrastructure lead for AlmaLinux, and through that I care greatly about EPEL and everything it empowers users to do – not just for AlmaLinux but for the EL community at large.

Why are you running for EPEL Steering Committee member?

I feel strongly that I can help steer the direction of EPEL due to my experience in the ecosystem.

The post EPEL Steering Committee Election: Interview with Jonathan Wright (jonathanspw) appeared first on Fedora Community Blog.

EPEL Steering Committee Election: Interview with Carl George (carlwgeorge)

Posted by Fedora Community Blog on May 19, 2024 10:52 PM

This is a part of the Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Monday, 20th May and closes promptly at 23:59:59 UTC on Thursday, 30th May 2024.

Interview with Carl George

Questions

What is your background in EPEL? What have you worked on and what are you doing now?

I got my start in Fedora and EPEL back in 2014. I joined a new team at Rackspace whose primary purpose was to maintain IUS, a third-party package repository for RHEL. Packages in this repository were typically based on Fedora packages. Some of these packages had dependencies in EPEL. We also maintained several other packages in EPEL that were important to our customers. I continued on as a regular EPEL contributor, even after I left Rackspace in 2019 to join the CentOS team at Red Hat. In 2021, I started a new team at Red Hat to specifically focus on EPEL activities.

In 2020 and 2021, I designed and led the implementation of EPEL Next, an optional repository for EPEL maintainers to perform builds against CentOS Stream when necessary. Also in 2021, I led the implementation of EPEL 9. This brought some significant improvements over previous EPEL versions. We utilized CentOS Stream 9 to launch EPEL 9 about six months before RHEL 9. This resulted in RHEL 9 having more EPEL packages available at its launch (~5700) than any previous release, helping ensure a positive user and customer experience. If you want to learn more about EPEL Next and EPEL 9, I suggest watching my conference presentation “The Road to EPEL 9”.

Currently I’m leading the implementation of EPEL 10, which will include more improvements over previous EPEL versions. The plan is for EPEL 10 to have minor versions, obsoleting the previous EPEL Next model. This will allow us to better target specific minor versions of RHEL 10, as well as CentOS Stream 10 as the leading minor version. You can learn more about this plan in my conference presentation “EPEL 10 Overview”.

Aside from the hands-on technical work of EPEL architecture and packaging, I have engaged in various other efforts to promote visibility and awareness of EPEL, and to communicate with our users and contributors.

  • Organized the first EPEL survey, which provided valuable feedback to improve packager workflows and the onboarding experience
  • Started a monthly “EPEL office hours” video call, which is open for anyone to attend to discuss EPEL topics
  • Presented about EPEL and related topics at conferences
  • Interviewed about EPEL and related topics on podcasts

Why are you running for EPEL Steering Committee member?

I have been on the EPEL Steering Committee since I was appointed to it in 2020. As part of implementing elections for the committee, the other members and I agreed to “step down” and essentially run for re-election. I have enjoyed my four years serving on the committee, and hope to have the opportunity to continue to serve the EPEL community. I am passionate about EPEL and I am committed to continuing to find ways to improve the EPEL experience for both users and contributors.

The post EPEL Steering Committee Election: Interview with Carl George (carlwgeorge) appeared first on Fedora Community Blog.

FESCo Elections: Interview with Tom Stellard (tstellar)

Posted by Fedora Community Blog on May 19, 2024 10:47 PM

This is a part of the FESCo Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Monday, 20th May and closes promptly at 23:59:59 UTC on Thursday, 30th May 2024.

Interview with Tom Stellard

  • FAS ID: tstellar
  • Matrix Rooms: devel, fedora-ci

Questions

Why do you want to be a member of FESCo and how do you expect to help steer the direction of Fedora?

I have a background in compilers and toolchains, and I would like to use some of the knowledge I’ve gained over the years of building and troubleshooting applications to help make Fedora better. Specifically, I’m interested in helping packagers avoid making common mistakes through standardized macros and packaging practices and also by increasing the reliance on CI.

How do you currently contribute to Fedora? How does that contribution benefit the community?

I’m currently one of the maintainers of the LLVM packages in Fedora, which are a set of 14 packages that provide C, C++, and Fortran compilers as well as a set of reusable compiler libraries used for developing other languages and for developer tools, like IDEs.

I’ve also worked on two system-wide change requests to help standardize the use of make within Fedora packages. These changes helped to make spec files more consistent across all of Fedora and also made it possible to remove make from the default buildroot.

How do you handle disagreements when working as part of a team?

When I am in a leadership role, like FESCo, and there is a disagreement, the first thing I do is make sure I understand the problem and the potential solutions. This usually requires having a discussion between all interested parties either on a mailing list, chat platform, or video call. Many times disagreements are simply the result of misunderstandings, so getting everyone together to discuss the issue in the same place can lead to a consensus decision and avoid the need for someone in leadership to get involved.

However, if consensus cannot be reached, I like to try to get some third-party opinions from people who have not been directly involved with the discussions. Once I feel comfortable that I have enough information and am ready to make a decision, I make sure I am able to explain in writing why the decision was made, and then I communicate the decision to all the stakeholders. It’s always important to have a written record of why a decision was made in case it needs to be revisited in the future.

What else should community members know about you or your position?

I work for Red Hat on the Platform Tools team. I am the technical lead for our LLVM team and the overall technical lead for the Go/Rust/LLVM compiler group. This means that I work on packaging, bug fixing and upstream feature development for LLVM and work on high-level technical issues common across all 3 compilers.

The post FESCo Elections: Interview with Tom Stellard (tstellar) appeared first on Fedora Community Blog.

FESCo Elections: Interview with Stephen Gallagher (sgallagh)

Posted by Fedora Community Blog on May 19, 2024 10:46 PM

This is a part of the FESCo Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Monday, 20th May and closes promptly at 23:59:59 UTC on Thursday, 30th May 2024.

Interview with Stephen Gallagher

Questions

Why do you want to be a member of FESCo and how do you expect to help steer the direction of Fedora?

I’ve been a member of FESCo for many years now, and it’s been a great experience. It gives me the opportunity to see a much wider view of the project than just the pieces I would otherwise contribute to.

As for steering the direction of Fedora, I think I would mostly just continue to do as I have been doing: pushing for Fedora to continue to be both the most advanced and one of the most stable open-source distributions in the world.

How do you currently contribute to Fedora? How does that contribution benefit the community?

Aside from my work on FESCo, I am also acting as the Lead on the Fedora ELN project, which is a prototype of what will eventually be the next major release of Red Hat Enterprise Linux. We recently branched off CentOS Stream 10 from Fedora ELN, so we’re now looking to the future (and CentOS Stream 11!). Performing these activities in public provides both an opportunity for the community to be involved with the creation of Red Hat Enterprise Linux and a clear picture of Fedora’s value to Red Hat, our primary sponsor.

Additionally, I have been acting as a primary administrator for the CentOS Stream GitLab instance, and I intend to volunteer my experience to help with the upcoming dist-git migration discussions.

How do you handle disagreements when working as part of a team?

First and foremost, I always strive for consensus. Most disagreements are not fundamental differences between people. Instead, they tend to be more nuanced. My goal (particularly within my FESCo service) is to make sure that everyone’s opinion is heard and considered; I then try to figure out how to meet in the middle.

Of course, not every decision can be resolved with consensus. In the event that a true impasse is reached, that’s the point where I usually advocate for calling a vote and proceeding with the majority opinion. On the whole, I believe that democratic decision-making is the best solution that humanity has come up with for resolving otherwise-insoluble disagreements.

What else should community members know about you or your positions?

Just so it’s very clear, I’m a Red Hat employee. My day-job at Red Hat is to organize and improve the processes we use to kick off development of the next major RHEL release. As such, my stances on FESCo will often represent my opinion of what will make that effort operate more smoothly. So, no matter how entertaining it might be, we’re not going to be replacing the entire contents of /usr/share/icons with the Beefy Miracle icon.

The post FESCo Elections: Interview with Stephen Gallagher (sgallagh) appeared first on Fedora Community Blog.

FESCo Elections: Interview with Neal Gompa (ngompa)

Posted by Fedora Community Blog on May 19, 2024 10:45 PM

This is a part of the FESCo Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Monday, 20th May and closes promptly at 23:59:59 UTC on Thursday, 30th May 2024.

Interview with Neal Gompa

  • FAS ID: ngompa
  • Matrix Rooms: #devel:fedoraproject.org, #asahi:fedoraproject.org, #asahi-devel:fedoraproject.org, #kde:fedoraproject.org, #workstation:fedoraproject.org, #cloud:fedoraproject.org, #kernel:fedoraproject.org, #centos-hyperscale:fedoraproject.org, #okd:fedoraproject.org, #budgie:fedoraproject.org, #multimedia:fedoraproject.org, #miracle:fedoraproject.org, #cosmic:fedoraproject.org, #centos-kernel:fedora.im, #admin:opensuse.org, #chat:opensuse.org, #bar:opensuse.org, #obs:opensuse.org, #RedHat:matrix.org, #networkmanager:matrix.org, #rpm:matrix.org, #rpm-ecosystem:matrix.org, #yum:matrix.org, #manatools:matrix.org, #lugaru:matrix.org, #buddiesofbudgie-dev:matrix.org, #PackageKit:matrix.org, #mir-server:matrix.org, #mageia-dev:matrix.org

(There’s quite a bit more, but I think that sort of covers it. 😉)

Questions

Why do you want to be a member of FESCo and how do you expect to help steer the direction of Fedora?

As a long-time member of the Fedora community, both as a user and as a contributor, I have benefited from the excellent work of many FESCo members before me to ensure Fedora continues to evolve as an amazing platform for innovation. For the past few years, I have had the wonderful privilege of serving as a member of FESCo for the first time. I enjoyed my time helping to steer Fedora into the future, and I wish to continue to contribute my expertise to help analyze and make good decisions on evolving the Fedora platform.

How do you currently contribute to Fedora? How does that contribution benefit the community?

The bulk of my contributions to Fedora lately are on the desktop side of things. Following on from the work I did to improve Fedora’s multimedia capabilities, we now have a SIG that takes care of bringing multimedia software into Fedora and keeping it updated. The successful bring-up of the Fedora Asahi Remix by the Asahi SIG to support Fedora Linux on Apple Silicon Macs has brought recognition and several new contributors to the community. Most recently, I am mentoring new contributors to support the development of the Fedora Miracle Window Manager and the Fedora COSMIC efforts, including assisting in setting up SIGs and guiding them through the processes to support their work.

Beyond the desktop and more into the clouds, I helped revamp the tooling for creating Fedora Cloud images, as well as the base Vagrant and container images, to leverage the new KIWI image build tool, which enables both Fedora Cloud contributors and third parties to much more easily build things leveraging Fedora Cloud content. This has led to renewed interest from folks who work in public clouds, and we’ve started seeing contributions, notably from Microsoft Azure.

My hope is that the work I do helps make the experience of using and contributing to Fedora better than ever before, and that Fedora’s technical leadership in open source draws in more users and contributors.

How do you handle disagreements when working as part of a team?

I attempt to explain my viewpoint and try to build consensus through persuasion and collaboration. If there isn’t a path to consensus as-is, I try to identify the points of disagreement and see if there is a way to compromise to resolve the disagreement. Generally, this ultimately results in a decision that FESCo can act on.

What else should community members know about you or your positions?

To me, the most important thing about Fedora is that we’re a community with a bias for innovation. Our community looks to solve problems and make solutions available as FOSS, and this is something that Fedora uniquely does when many others take the easy path to ship old software or nonfree software everywhere. We work with tens of thousands of projects to deliver an amazing platform in an easily accessible and open fashion, built on FOSS infrastructure and tools. This makes Fedora special to me, and we should continue to hold ourselves to that high standard.

I’m also a big believer in community bonds and collaboration, which is why people tend to find me all over the place. I’m involved in Asahi Linux, CentOS, openSUSE, and several other similar projects in leadership roles as well as a contributor in order to demonstrate my commitment to this philosophy.

The post FESCo Elections: Interview with Neal Gompa (ngompa) appeared first on Fedora Community Blog.

FESCo Elections: Interview with Michel Lind (salimma)

Posted by Fedora Community Blog on May 19, 2024 10:44 PM

This is a part of the FESCo Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Monday, 20th May and closes promptly at 23:59:59 UTC on Thursday, 30th May 2024.

Interview with Michel Lind

Questions

Why do you want to be a member of FESCo and how do you expect to help steer the direction of Fedora?

I have been a Fedora contributor since the very beginning – circa 2003-4 – and over the years have contributed on the packaging side, as an Ambassador (I can still remember burning CDs and shipping them via the local post office), and more recently on the policy side with Fedora (starting the Lua SIG) and EPEL (reestablishing the EPEL Packagers SIG, establishing a workflow for escalating stalled EPEL requests).

I am fortunate enough to have a day job at a company (Meta) that heavily uses CentOS Stream and contributes to many core Linux projects, where my main focus is on upstream work – so if the Fedora community entrusts me with this position, I hope to advocate for Change Proposals that advance Fedora’s four foundations (Freedom, Friends, Features, and First) – while accepting that not all Fedora features can be supported in CentOS Stream and RHEL. I feel that having experience in packaging software and working on tooling and workflows across Fedora, EPEL and CentOS Stream gives me a vantage point where I can understand where different parts of the community are coming from.

How do you currently contribute to Fedora? How does that contribution benefit the community?

I am a proven packager (using it to help with mass rebuilds or to help fix major issues) and a packager sponsor, a member of several packaging SIGs (Go, Lua, Python, Rust) and several SIGs that bridge the gaps between Fedora, CentOS Stream, and RHEL (ELN, which is a rebuild of Rawhide with EL macros; EPEL, which provides extra packages for EL distros; and on the CentOS side, the CentOS Hyperscale SIG, which provides faster-moving backports and additional functionality (typically sourced from Fedora) for use on top of CentOS Stream).

I wrote ebranch to help with branching Fedora packages to EPEL and used it heavily in bootstrapping EPEL 9; a rewrite is in the works to turn it into a suite of related tools for dependency graph management, branching, and inventory tracking.

I have done, and am working on, various Change Proposals.

Oh, and I helped migrate my company’s Linux desktops to Fedora (presented at Flock 2019 and FOSDEM 2021).

How do you handle disagreements when working as part of a team?

I have been vilified because of where I work; I have learned to accept that people might have very strong opinions on certain issues that they won’t budge on, whether the opinions are (IMHO) justified or not, and still find a way to be civil and collaborate on topics of shared interest.

What else should community members know about you or your positions?

I have been a part of the Fedora community (and before that, a user of Red Hat Linux doing my own rebuilds) for several decades; my involvement predates and outlasts working for several employers (some of them not RPM shops). I am also a Debian Maintainer – and I hope to bring my experience contributing to different distributions to inform FESCo discussions. I dislike NIH (not invented here); ideas should stand on their own merits and it is good to learn from others (both adopting good ideas and avoiding past mistakes).

The post FESCo Elections: Interview with Michel Lind (salimma) appeared first on Fedora Community Blog.

Council Election: Interview with Sumantro Mukherjee (sumantrom)

Posted by Fedora Community Blog on May 19, 2024 10:39 PM

This is a part of the Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Monday, 20th May and closes promptly at 23:59:59 UTC on Thursday, 30th May 2024.

Interview with Sumantro Mukherjee

  • Fedora Account: sumantrom
  • Matrix Rooms: #fedora, #fedora-kde, #fedora-mindshare, #fedora-social, #fedora-social-hour, #fedora-arm, #fedora-devel, #fedora-qa, #fedora-workstation, #fedora-test-days, #fedora-i3, #fedora-badges, #fedora-docs, #fedora-l18n, #fedora-ambassadors

Questions

Why are you running for Fedora Council?

I hail from APAC (India) and would like to focus on bringing in more non-US perspectives, which includes bringing in more contributors from diverse backgrounds. That means making efficient use of our brand new design assets, which are now available in multiple languages (Hindi, for example), to onboard a variety of users (general and power users) to the Fedora community as contributors, whether on the functional side (QA, packaging, etc.) and/or in outreach.
In the recent past, the Council charter has been expanded to include an Executive Sponsor role. This role enables people like me to work closely with an upcoming initiative.
In parallel, I am working closely on Blueprints (https://discussion.fedoraproject.org/t/introducing-fedora-change-proposal-blueprints-in-association-with-fedora-qa/115903). This is a long-term effort and will benefit from my connections with people on the Council (i.e. the FESCo representative and FESCo at large).

The Fedora Strategy guiding star is that the project is going to double its contributor base by 2028. As a council member, how would you help the project deliver on that goal?

As a council member, I would work closely with teams which will need more attention building pipelines for onboarding, mentoring and retention of contributors. In Fedora QA, I have worked extensively to handle a high influx of contributors and retain them for at least a Release Cycle or two. I have optimized workflows that have been helping Fedora QA to grow optimally without burning out contributors or team members. I could help solve problems and coordinate closely with CommOps and other teams to build data-driven contributor pipelines to help Fedora Project double its contributors by 2028.

How can we best measure Fedora’s success?

There is no single way to measure it, but here is what I think defines Fedora’s success best:

  • Fedora Linux’s success is measured by the people who use Fedora.
  • The Fedora Project stands as the reference for people who want to build a diverse and strong upstream community.
  • The Fedora Project at its core is composed of people who take pride in packaging, testing, designing, translating, blogging, evangelizing and collaborating across the globe.
  • The Fedora Project takes pride in being a trendsetter, bringing technologies which impact a large downstream ecosystem and strengthen its trust in our four foundations.

What is your opinion on Fedora Editions – what purpose do they serve? Are they achieving this purpose? What would you change?

Fedora Editions are the Fedora Project’s flagships. They are extensively tested and tried by Fedora’s in-house QA team to deliver on the expectations of tens of thousands of users. These platforms serve as building blocks for the Fedora Project to share innovation across multiple downstream ecosystems. Fedora Editions have more room to grow in terms of adoption in CI platforms and pre-installs from hardware makers. I would love to drive the adoption of Fedora not only at an organizational level, but also in schools, universities and among next-gen digital influencers.

The Fedora Council is intended to be an active working body. How will you make room for Council work?

The Council is a body which, in the coming days, will be working across many facets of the Project, taking bold decisions to make our community more robust, productive and healthy. As a long-term member, I support these goals, and I have planned my upcoming work cycles so that I have room to spend time on Council work.
The aforementioned changes in line with the 2028 strategy affect my role in the Fedora Project at large, and I will always want to be a stakeholder. This will help drive decisions which help the Quality team’s day-to-day operations.

The post Council Election: Interview with Sumantro Mukherjee (sumantrom) appeared first on Fedora Community Blog.

Mindshare Election: Interview with Sumantro Mukherjee (sumantrom)

Posted by Fedora Community Blog on May 19, 2024 10:38 PM

This is a part of the Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Monday, 20th May and closes promptly at 23:59:59 UTC on Thursday, 30th May 2024.

Interview with Sumantro Mukherjee

  • FAS ID: sumantrom
  • Matrix Rooms: #fedora, #fedora-kde, #fedora-mindshare, #fedora-social, #fedora-social-hour, #fedora-arm, #fedora-devel, #fedora-qa, #fedora-workstation, #fedora-test-days, #fedora-i3, #fedora-badges, #fedora-docs, #fedora-l18n, #fedora-ambassadors

Questions

What is your background in Fedora? What have you worked on and what are you doing now?

I’ve been in the Fedora Project for more than 10 years now, and I have been a part of the Fedora QA team the whole time. As a part of Fedora QA, I have signed off on multiple releases.
Apart from that, I have been an avid advocate of the Fedora Project to the community at large, which has driven me to take up onboarding, ambassadorship and mentoring as part of the Fedora Project. Recently, I have been taking part in the a11y (accessibility) initiative and in a few parts of the mentorship initiative.

Please elaborate on the personal “Why” which motivates you to be a candidate for Mindshare.

Fedora has been in the conversation in recent years, be it at conferences or in tech news. I run my own YouTube channel, promote a lot of QA test day events on social media, and firmly believe in the power of digital and social media. In this day and age, where recognition is highly sought after and retention of effort is directly linked to rewards, it’s important for a project like Fedora to become more efficient with resource allocation. My “why” is to figure out and open up more ways in which the Fedora Project can reward its contributors.

How would you improve Mindshare Committee visibility and awareness in the Fedora community?

Mindshare is a very visible committee to begin with, but we can really benefit from efforts like CommOps 2.0. I believe the more stats we have about our community and contributors, the more visibility we can gain into the obscure corners. I would like to be someone who helps bring more clarity for my own team (Fedora QA) to start with, and then expand to other parts of the project.

What part of Fedora do you think needs the most attention from the Mindshare Committee during your term?

Recognition and community health are the two key areas where Mindshare can focus its attention. Recognition brings fresh blood into the community, adding more ideas and diverse ways of doing things. Community health helps us with retention and with avoiding burnout. Outreach teams of all kinds will benefit from both, and Fedora contributors will have a long-lasting impact on the open source space for years to come.

The post Mindshare Election: Interview with Sumantro Mukherjee (sumantrom) appeared first on Fedora Community Blog.

Fedora Linux 40 press review

Posted by Charles-Antoine Couret on May 18, 2024 10:43 PM

Since Fedora 19, I have been publishing a press review on the Fedora-fr mailing list for each release of a new version, summarizing which sites talk about it and how. I always do it two weeks after the release (so that everyone has time to cover it). Now, on to Fedora Linux 40! OK, this time there is a slight delay. ;)

Of course, I am leaving out my own blog and the fedora-fr forum.

News websites

That makes 6 sites.

Videos

That makes 2 videos.

Summary

The number of sites covering Fedora Linux 40 is slightly down, as is the number of videos.

During the week of its release, we saw an overall increase in visits compared to the previous week, roughly as follows:

  • Forums: up 9.7% (more than 300 additional visits)
  • Documentation: up about 23.8% (around 500 additional visits)
  • The Fedora-fr website: up 29.6% (60 additional visits)
  • Borsalinux-fr: 0 visits since the end of March versus around twenty usually, most likely a bug

If you know of another link, feel free to share it!

See you for Fedora Linux 41.

GNOME maintainers: here’s how to keep your issue tracker in good shape

Posted by Allan Day on May 17, 2024 03:29 PM

One of the goals of the new GNOME project handbook is to provide effective guidelines for contributors. Most of the guidelines are based on recommendations that GNOME already had, which were then improved and updated. These improvements were based on input from others in the project, as well as by drawing on recommendations from elsewhere.

The best example of this effort was around issue management. Before the handbook, GNOME’s issue management guidelines were seriously out of date, and were incomplete in a number of areas. Now we have shiny new issue management guidelines which are full of good advice and wisdom!

The state of our issue trackers matters. An issue tracker with thousands of open issues is intimidating to a new contributor. Likewise, lots of issues without a clear status or resolution make it difficult for potential contributors to know what to do. My hope is that, with effective issue management guidelines, GNOME can improve the overall state of its issue trackers.

So what magic sauce does the handbook recommend to turn an out of control and burdensome issue tracker into a source of calm and delight, I hear you ask? The formula is fairly simple:

  • Review all incoming issues, and regularly conduct reviews of old issues, in order to weed out reports which are ambiguous, obsolete, duplicates, and so on
  • Close issues which haven’t seen activity in over a year
  • Apply the “needs design” and “needs info” labels as needed
  • Close issues that have been labelled “needs info” for 6 weeks
  • Issues labelled “needs design” get closed after 1 year of inactivity, like any other
  • Recruit contributors to help with issue management

To some readers this is probably controversial advice, and likely conflicts with their existing practice. However, there’s nothing new about these issue management procedures. The current incarnation has been in place since 2009, and some aspects of them are even older. Also, personally speaking, I’m of the view that effective issue management requires taking a strong line (being strong doesn’t mean being impolite, I should add – quite the opposite). From a project perspective, it is more important to keep the issue tracker focused than it is to maintain a database of every single tiny flaw in its software.

The guidelines definitely need some more work. There will undoubtedly be some cases where an issue needs to be kept open despite it being untouched for a year, for example, and we should figure out how to reflect that in the guidelines. I also feel that the existing guidelines could be simplified, to make them easier to read and consume.

I’d be really interested to hear what changes people think are necessary. It is important for the guidelines to be something that maintainers feel that they can realistically implement. The guidelines are not set in stone.

That said, it would also be awesome if more maintainers were to put the current issue management guidelines into practice in their modules. I do think that they represent a good way to get control of an issue tracker, and this could be a really powerful way for us to make GNOME more approachable to new contributors.

Install PHP 8.3 on Fedora, RHEL, CentOS, Alma, Rocky or other clone

Posted by Remi Collet on May 17, 2024 12:11 PM

Here is a quick howto for upgrading the default PHP version provided on Fedora, RHEL, CentOS, AlmaLinux, Rocky Linux or other clones to the latest version, 8.3.

You can also follow the Wizard instructions.

 

Repositories configuration:

On Fedora, the standard repositories are enough; on Enterprise Linux (RHEL, CentOS), the Extra Packages for Enterprise Linux (EPEL) and CodeReady Builder (CRB) repositories must be configured.

Fedora 40

dnf install https://rpms.remirepo.net/fedora/remi-release-40.rpm

Fedora 39

dnf install https://rpms.remirepo.net/fedora/remi-release-39.rpm

RHEL version 9.4

dnf install https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm
dnf install https://rpms.remirepo.net/enterprise/remi-release-9.rpm
subscription-manager repos --enable codeready-builder-for-rhel-9-x86_64-rpms

RHEL version 8.9

dnf install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
dnf install https://rpms.remirepo.net/enterprise/remi-release-8.rpm
subscription-manager repos --enable codeready-builder-for-rhel-8-x86_64-rpms

Alma, CentOS Stream, Rocky version 9

dnf install https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm
dnf install https://rpms.remirepo.net/enterprise/remi-release-9.rpm
crb install

Alma, CentOS Stream, Rocky version 8

dnf install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
dnf install https://rpms.remirepo.net/enterprise/remi-release-8.rpm
crb install

 

PHP module usage

 

With Fedora and EL 8, you can simply use the remi-8.3 stream of the php module

dnf module reset php
dnf module install php:remi-8.3

 

PHP upgrade

 

By choice, the packages have the same name as in the distribution, so a simple update is enough:

yum update

That's all :)

$ php -v
PHP 8.3.7 (cli) (built: May  7 2024 16:35:26) (NTS gcc x86_64)
Copyright (c) The PHP Group
Zend Engine v4.3.7, Copyright (c) Zend Technologies
    with Zend OPcache v8.3.7, Copyright (c), by Zend Technologies

 

Known issues

The upgrade can fail (by design) when some installed extensions are not yet compatible with PHP 8.3.

See the compatibility tracking list: PECL extensions RPM status

If these extensions are not mandatory, you can remove them before the upgrade, otherwise, you must be patient.

Warning: some extensions are still under development, but it seems useful to provide them to upgrade more people and allow users to give feedback to the authors.

 

More information

If you prefer to install PHP 8.3 beside the default PHP version, this can be achieved using the php83 prefixed packages, see the PHP 8.3 as Software Collection post.

You can also try the configuration wizard.

The packages available in the repository were used as sources for Fedora 40.

By providing a full-featured PHP stack, with about 130 available extensions, 10 PHP versions, as base and SCL packages, for Fedora and Enterprise Linux, and with 300,000 downloads per day, the remi repository has become, over the last 18 years, a reference for PHP users on RPM-based distributions, maintained by an active contributor to the projects (Fedora, PHP, PECL...).

See also:

Registration Open: Fedora 40 Release Party on May 24-25

Posted by Fedora Magazine on May 17, 2024 08:00 AM

Join us next weekend on Friday and Saturday, May 24-25 to celebrate the release of Fedora Linux 40! We’re going to be hearing from community members inside and outside of the Fedora Project on what is new in Fedora 40, what we can look forward to next, and how we come together as a community.

Register Here

Register to attend the Fedora 40 Release Party here!

What topics will be discussed?

  • We’ll learn about upgrades like KDE Plasma 6 and Fedora Asahi Remix 40
  • Community updates, such as those from the Fedora DEI Team and the Mentored Project Initiative
  • Infrastructure changes, such as Kiwi for Fedora Cloud and the Git Forge investigation
  • Updates from our downstream friends at Universal Blue and Ultramarine
  • And more! Here’s the schedule of topics and speakers

Join us!


What is a release party?

Fedora Release Parties are virtual, user-focused conferences where the community comes together to talk about what’s new in the latest release of Fedora and where we’re going for future releases. Topics we’ve covered include the process of working through implementing a change and roadmaps for what different teams want to do next in Fedora. Sometimes there are updates from Fedora-associated groups who have something to share, like Amazon or Lenovo. We also have breaks for socials where we can talk to each other in video calls (you don’t have to share video or speak if you don’t want to). If you have an interest in a behind-the-scenes look at your favorite distro, come learn and hang out with the contributors who make it!

Where will it happen?

In previous years we used Hopin to run virtual conferences, but the Fedora 40 Release Party will be the first that we do in Matrix! We’ve wanted to do this since the Creative Freedom Summit showed how it could be done a couple of years ago. This is a step that allows us to lean more on open source for outreach.

However, we also want to be open in another way, and that’s with livestreaming. We will be streaming the talks on our Fedora Project YouTube channel. That way anyone can watch and the streams will be immediately available afterwards!

Please register for the release party! Once you do and provide your Matrix ID, the organizers will invite you to the Matrix channel and we’ll be on our way to a great celebration.

We hope to see you there!

Learn more

Check out the Fedora 39 Release Party to get an idea of the kinds of topics we cover.

Get hyped on social media with hashtag #FedoraReleaseParty!

Visualizing potential solar PV generation vs smart meter electricity usage

Posted by Daniel Berrange on May 16, 2024 10:35 PM

For many years now, energy suppliers have been rolling out “smart meters” to homes across the UK. It has largely been promoted as a way to let consumers reduce their consumption by watching their energy usage on an in-home display (IHD) gadget. This massively undersells the potential benefits of smart meters, and combined with various other reasons, leaves many unenthusiastic about the change. A few more adventurous suppliers, most prominently Octopus Energy, are taking advantage of smart meters to offer innovative dynamic tariffs, which particularly benefit those with electric vehicles.

I have been exploring the possibility of a solar PV installation at home, and after talking to various suppliers I have been left feeling quite underwhelmed by the reports they provide with their quotations. Typically these are a selection of crude 2-dimensional charts showing potential generation per month, plus some rough estimate of how much grid import will be reduced and how much grid export will be available, based on your yearly kWh usage figure. They’ll only illustrate one or two scenarios, so it is hard to understand the impact of altering the number of panels requested, or the sizing of any battery storage.

The European PVGIS service provides a good site where you can visualize the solar generation at your home based on actual recorded atmospheric conditions from history (typically their data lags the current day by 2-3 years). They still only make use of your yearly kWh consumption figure with a generic model to estimate benefits. With a smart meter I have historic electricity consumption for my home in kWh at 30 minute granularity. Match that against the PVGIS data and it’s possible to produce a fine-grained simulation of solar PV generation potential against consumption.

And so came forth the idea of a simple Python script I’m calling “smartsolar” to combine PVGIS data with smart meter data and produce reports evaluating potential deployment options. It can use smart meter records downloaded from n3rgy (compatible with any UK supplier once registered), or from Octopus Energy‘s customer portal. Using plotly, the reports contain 3-dimensional interactive charts showing data day-by-day, hour-by-hour. Changing the configuration file lets you quickly evaluate many different deployment scenarios to understand the tradeoffs involved.
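
To make the data combination concrete, here is a minimal sketch of the alignment step. The file and column names ("consumption.csv" with a "kwh" column, a PVGIS hourly CSV with a "P" column in watts) are assumptions for illustration, not the actual smartsolar code:

import pandas as pd

# Half-hourly smart meter export (n3rgy/Octopus style): kWh per 30-minute slot.
cons = pd.read_csv("consumption.csv", parse_dates=["timestamp"], index_col="timestamp")
cons_hourly = cons["kwh"].resample("1h").sum()          # 30-minute slots -> hourly kWh

# PVGIS hourly series: PV output power in watts, averaged over each hour.
gen = pd.read_csv("pvgis_hourly.csv", parse_dates=["time"], index_col="time")
gen_hourly = gen["P"] / 1000.0                          # W over one hour ~ kWh

# The two datasets cover different years, so align them on (day of year, hour).
cons_dh = cons_hourly.groupby([cons_hourly.index.dayofyear, cons_hourly.index.hour]).mean()
gen_dh = gen_hourly.groupby([gen_hourly.index.dayofyear, gen_hourly.index.hour]).mean()

df = pd.DataFrame({"consumption_kwh": cons_dh, "generation_kwh": gen_dh}).dropna()
print(df.head())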

In the charts (screenshots of a fixed POV only, non-interactive) that follow I’m evaluating an arbitrary installation with 10 panels on a south-east orientation and 6 panels on a north-west orientation, panels rated at 400W, along with 10 kWh of battery storage. The smart meter consumption data covers primarily 2023, while the PVGIS generation data is from 2019, the latest currently available.
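
As an aside, a day-by-day, hour-by-hour surface chart of the kind described here could be rendered with plotly roughly as follows. This is a sketch only, continuing from the assumed df above, not the actual report code:

import plotly.graph_objects as go

# Pivot the (day of year, hour) series into a 2-D grid: rows are days, columns are hours.
grid = df["consumption_kwh"].unstack(level=1)

fig = go.Figure(data=[go.Surface(
    z=grid.values,
    x=grid.columns,          # hour of day
    y=grid.index,            # day of year
    colorscale="Viridis",
)])
fig.update_layout(
    title="Electricity consumption (kWh)",
    scene=dict(xaxis_title="Hour", yaxis_title="Day of year", zaxis_title="kWh"),
)
fig.write_html("consumption.html")   # interactive chart, viewable in a browser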

The first chart shows the breakdown of consumption data and exposes some interesting points. The consumption is broadly consistent from day-to-day across the whole year and is spread fairly flat across the time period from 8am to 11pm. This reflects the effects of full-time remote working. Someone office-based would likely see consumption skewed towards the evenings with less in the middle of the day. The enormous nighttime peaks in consumption in the last month of the year reflect the fact that I acquired an electric vehicle and started charging it at home on a Zappi. The huge daytime usage peak was a side effect of replacing our heating system in Dec, which required temporary use of electric heaters. NB, the flat consumption around the Sept timeframe is a result of me accidentally losing about 25 days of smart meter data.

The next two charts give the real world PV generation of the two hypothesized solar arrays, based on the 2019 PVGIS data source. The larger south east facing array starts generating as early as 5am in the middle of summer, but drops off sharply around 3pm. There is still useful generation in mid-winter from 8am till around 2pm, but obviously the magnitude is far below summer peak.

Conventional wisdom is that PV is a waste of time on predominantly north facing roofs, but what is the real world performance like? The small north west facing array shows a gradual ramp in mid-summer from 6am to reach a peak at around 6pm, and drops off to nothing by 8pm. For the 2 months either side of mid-winter, the generation is so negligible as to be called zero. The absolute peak is 1/2 that of the south-east array, but there are 10 panels south east and only 6 panels north west. So peak generation per panel is not too terrible on the north west side. The limitations are really about the terrible winter performance, and the skewing of generation towards the evening hours. The evening bias, however, is potentially quite useful since it could match quite well with some people’s consumption patterns, and early evening is commonly when the national grid has the highest CO2 intensity per kWh.

Seeing generation of each array separately is interesting in order to evaluate their relative performance. For comparison with consumption data though, a chart illustrating the combined generation of all arrays is more useful. This combined chart shows the south east and north west arrays complementing each other very well to expand the width of the peak generation time in the summer months, with peak summer output of around 4 kWh per hour and covering the main hours of consumption. There is still useful improvement in the spring and autumn months, but winter is unchanged.

By taking the intersection of consumption and generation data, we can evaluate what subset of the generated energy is capable of being consumed at home, ignoring the possibility of battery storage. This chart shows that the generated energy can make a significant contribution to recorded consumption across the whole year. Obviously generated energy is only being consumed during daylight hours, since this chart discounts the usage of any hypothetical home storage battery.
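
In the no-battery case this intersection is just an hour-by-hour minimum; a sketch, continuing with the assumed df from above:

import numpy as np

# Without a battery, generation can only offset demand in the same hour.
df["self_consumed"] = np.minimum(df["generation_kwh"], df["consumption_kwh"])
df["grid_import"] = df["consumption_kwh"] - df["self_consumed"]
df["grid_export"] = df["generation_kwh"] - df["self_consumed"]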

It gets more interesting when taking into account the battery storage. This new chart shows that the house can run from self-generated energy around the clock for most of the year. It especially shows how the battery can “time shift” generated energy into the evening, once daylight has faded but the house still has significant energy needs, and it continues to supply the house throughout the night, even for some days in winter.
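
A simplified version of such a battery model might look like the sketch below: an hour-by-hour loop that charges on surplus and discharges on deficit, assuming a 10 kWh usable capacity with no rate limits and no round-trip losses (the real smartsolar script may well model this differently):

import pandas as pd

CAPACITY_KWH = 10.0

def simulate_battery(df, capacity=CAPACITY_KWH):
    soc = 0.0                            # state of charge in kWh at the start
    rows = []
    for gen, cons in zip(df["generation_kwh"], df["consumption_kwh"]):
        surplus = gen - cons             # positive: spare PV, negative: unmet demand
        if surplus >= 0:
            charge = min(surplus, capacity - soc)
            soc += charge
            rows.append({"charge": charge, "discharge": 0.0,
                         "grid_import": 0.0, "grid_export": surplus - charge,
                         "soc": soc})
        else:
            discharge = min(-surplus, soc)
            soc -= discharge
            rows.append({"charge": 0.0, "discharge": discharge,
                         "grid_import": -surplus - discharge, "grid_export": 0.0,
                         "soc": soc})
    return pd.DataFrame(rows, index=df.index)

battery = simulate_battery(df)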

Another way of looking at the data is to consider how much energy is being imported from the national grid, as this indicates an inability to generate sufficient from the local PV arrays. First without the battery storage, it can be seen that in the middle of the day grid import is negligible outside of the winter months, but there is still considerable import in the evenings. The spikes from EV charging and temporary electric heaters are also still present. The winter months still show some significant grid import even in the middle of the day.

Adding in the battery storage calculations has a really dramatic impact on grid import. With 10 kWh of storage, there is enough buffer to be 100% self-sufficient in energy generation for 7 months of the year, which is a really astonishing result. A battery of this size obviously can’t address the peaks demanded by the EV charging periods, since the car battery capacity is well in excess of the home storage battery, but that’s to be expected.

The total PV generation capacity over the year with the 16 panels is 5300 kWh, while yearly consumption is only 3000 kWh, so clearly there is a lot of excess electricity going unused. To visualize this, a chart showing exports to the grid is useful to consider. Unsurprisingly, with no battery, we see strong grid export rates across all the day time hours when the panels are generating.

If we now add in the effects of a home battery, the grid export pattern changes somewhat. In winter months there is absolutely no grid export, with 100% of solar generation now able to be consumed locally, since demand far exceeds what is able to be generated in these months. In the rest of the months, export doesn’t start until an hour or two later in the morning. This shows the battery being recharged after overnight usage, until it is full and exporting of energy resumes.

An alternative way to consider the grid import and export is to combine the two charts to illustrate the flow, with positive being export and negative being import. This nicely visualizes when the direction of flow flips during the day.

With battery storage taken into account, it is very apparent when there is neither grid import nor grid export during summer months.

After considering the import and export rates for the grid, the final thing to look at is the charge and discharge rate of the battery storage. In terms of charging, the pattern broadly reflects the initial period of daylight hours throughout the year, as there is always either a slight mismatch in generation vs demand, or a huge surplus available. The battery typically fills up very quickly at the start of the day and remains that way for most of the day. This might suggest the need for a bigger battery, but the grid import chart shows that the house can run entirely from locally generated energy for 8 months of the year, and in winter months all PV generation is consumed, so there is not much to gain from a bigger battery.

The battery discharge pattern is perhaps the more interesting of the two charts, as it shows exactly when the benefit of the battery is most felt. In summer there are some peaks of discharge at the start of the day; this reflects some temporary loads which exceed the available generation, such as running the dishwasher or washing machine. In the middle of the day there is very little discharge except for winter months, but the evening is where it really shines. The solar PV generation has peaked, but there is still major consumption demand for cooking dinner on an electric induction hob, and/or oven, and the periodic use of a tumble dryer. Finally the battery takes up the load throughout the night when there is zero PV generation, but fairly low baseline demand.

Again the charge and discharge charts can be combined to show the flow in (positive) and out (negative) of the battery.

The final chart to look at is the battery charge level, which will give an idea of how well sized the battery is. If it never reaches a 100% full state, then it is likely oversized, but if it spends the whole year at 100% full state, then it is likely undersized. The ideal will be a tradeoff somewhere in between, perhaps with a battery large enough to eliminate grid import for the middle 50% of the year, showing periods of strong charge and discharge.
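
One crude way to check that sizing question against the simulated data is to count how often the modelled battery sits pegged at full or empty, using the hypothetical battery frame from the sketch above:

# Fraction of hours the simulated battery spends completely full or completely empty.
full = (battery["soc"] >= CAPACITY_KWH - 1e-6).mean()
empty = (battery["soc"] <= 1e-6).mean()
print(f"hours at 100% charge: {full:.0%}, hours empty: {empty:.0%}")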

With this walkthrough complete, the potential benefits of having fine grained electricity consumption data from smart meters is becoming more apparent. Having access to both consumption and generation data for your home and its location allows evaluation of an arbitrary number of different solar PV + battery storage deployment options, where a commercial installer might only show 2-3 scenarios at most.

There are still many limitations to the visualization that should be kept in mind:

  • The electricity consumption data reflects a point in time before solar PV is installed, and it is commonly reported that people tend to change their usage patterns to take most advantage of the free electricity they have available. In other words, the level of self-consumption in the no-battery scenario is probably understated, while the potential gain in self-consumption from adding a battery is slightly overstated.
  • The electricity consumption data reflects one year, while the PVGIS solar irradiation data is from a different year. Electricity consumption may well vary depending on weather, for example increased use of tumble dryers when it is cloudy and wet, or decreased use of ovens when it is hot. Or the year chosen for either consumption or generation data may have quirks that make it non-representative of a typical year. It could be worth trying different years for the PVGIS data to see if it impacts the results produced. A possible enhancement would be for the tool to average PVGIS data across a number of years.
  • The data is being processed at 1 hour granularity, with an assumption that generation and consumption are spread evenly across the hour. In reality this is unlikely to line up so well, and so self-consumption in the no-battery scenario is likely overstated. The with-battery charts, however, are likely to be fairly unaffected, as the battery will easily compensate for short-term mismatches in generation and consumption.
  • In houses with hot water storage cylinders, it is very common to fit a solar diverter, such that when there is excess generation, it will be used to heat hot water instead of being exported to the grid. Thus the level of grid export is likely overstated, and self-consumption understated. There is also no visualization of the reduction in gas bill from the use of free electricity to heat water instead of a gas heater. Thus the potential benefits from having home storage batteries will be overstated to some degree.
  • In houses with EV chargers, it is also typical to divert excess generation into the car, so again the level of grid export is likely overstated and self-consumption understated. Again this will have the effect of overstating the benefits of a home storage battery.
  • The generation figures don’t take into account losses from the equipment, or localized degradation from shading on individual panels.
  • The consumption figures don’t reflect potential future changes in usage. For example, if the gas boiler were to be replaced by a heat pump, demand in the winter months in particular would massively increase, and summer months would increase to some extent for heating of hot water. This might push towards oversizing the PV array in the short term.

Despite these caveats, the visualization should still be very beneficial in evaluating different solar PV and battery installation scenarios.

Using the ATEN CV211 (all-in-one KVM adapter) with Fedora Linux

Posted by Andreas Haerter on May 16, 2024 05:32 PM

The ATEN CV211 is an all-in-one KVM (Keyboard, Video, Mouse) adapter that turns your laptop into a KVM console, combining the functionality of a wormhole switch, capture box, external DVD-ROM, keyboard, mouse, and monitor, all in one compact and convenient unit. I really like the hardware in daily operations, especially when I have to take over new environments with “historically grown” cabling. It is nice to have the ability to get screen and keyboard control of an as-yet-unknown server without hassle—all with a small USB adapter in your backpack:

<figure>ATEN CV211 KVM switch: photo of the hardware</figure>

If you connect the adapter, you’ll get a 10 MiB drive mounted with the following contents, containing a Microsoft Windows Client WinClient.exe (basically a Runtime Environment and wrapper) and the real application JavaClient.jar:

$ ll
total 9,1M
drwxr-xr-x. 2 user user  16K  1. Jan 1970  .
drwxr-x---+ 3 root root   60 30. Apr 19:08 ..
-rw-r--r--. 1 user user 3,7M 30. Dez 2019  JavaClient.jar
-rw-r--r--. 1 user user 2,0M 30. Dez 2019  Vplayer.jar
-rwxr-xr-x. 1 user user 3,5M 30. Dez 2019  WinClient.exe

The “login failed” problem

The JavaClient.jar KVM console is mostly the same as ATEN uses for all their IP KVM stuff. They just bind the service to some high port on localhost and use the hardcoded credentials -u administrator -p password to connect (which is obvious in several places):

<figure>ATEN CV211 KVM switch: credentials</figure>

Sadly, the Java application is not able to run out-of-the-box on Fedora Linux 40 with OpenJDK / Java SE. The application will start but sometimes does not even list the device. And if there is a device to connect to, the login will fail:

<figure>ATEN CV211 KVM switch: login failed with OpenJDK</figure>

The JavaClient.jar will not be able to connect with any supported OpenJDK or Azul Zulu Java RE:

# incompatible Java version :-(
$ java -version
openjdk version "17.0.9" 2023-10-17

Solution: Oracle JDK 7

For anybody having the same problem, the following should help:

  1. Use a copy of the Oracle JDK 7 (the patch level does not matter) and the application will work without flaws.1
  2. Make sure the current working directory is the USB mount point so the .jar files are in ./.

For example, if you just extract jdk-7u80-linux-x64.tar.gz to /tmp, you can use the application as follows:

tar -xvf jdk-7u80-linux-x64.tar.gz -C /tmp
cd /run/media/user/disk # or wherever the ATEN CV211 storage was mounted
sudo /tmp/jdk1.7.0_80/bin/java -jar ./JavaClient.jar
<figure>ATEN CV211 KVM switch: screenshot of the working application</figure>

You can download the Oracle JDK 7 from https://www.oracle.com/de/java/technologies/javase/javase7-archive-downloads.html, but keep in mind to check the license conditions, especially if you are operating in a commercial environment.


  1. Do not use this old, unpatched Java RE for anything else because of known security vulnerabilities. ↩︎

Use copyleft licenses for open source or live with the consequences

Posted by Andreas Haerter on May 16, 2024 03:26 PM

A good open-source license allows reuse of source code while retaining copyright. But you should also think about copyleft when starting an open-source project or company.

Licenses like the General Public License (GPL) are usually better for the open-source ecosystem than permissive ones like Apache 2 or MIT as they require that any modifications or derivative works are shared, promoting a cycle of continuous contributions and improvements. Enhancements are distributed, benefiting the entire community rather than allowing the exploitation of open source code without giving back (looking at you, Amazon Web Services).

Community > Hypergrowth

Licenses like the GNU Affero General Public License (AGPL) might prevent some corporations from using an open-source project because they do not want to release the source code of their own modifications to it. Sadly, corporate compliance often prohibits the usage of copyleft projects altogether, even if nobody plans to modify anything. Especially the legal departments of large “enterprizy” organizations often prefer software with licenses like MIT as they want it simple and “risk”-free.

In light of license changes, one gets the impression that many start-ups use open source not because of freedom but as an argument for adoption in the enterprise ecosystem. They avoid (A)GPLv3 licenses to facilitate easier corporate adoption while being funded by venture capital, without generating enough revenue, and without getting contributions back from organizations that could easily afford to give something back. Then, after being adopted, they complain.

While the open-source contributions from corporations like HashiCorp are impressive, the overall situation is complex. There’s a reason why Linux (GPL licensed) is still around, growing, and making money for so many, while companies behind widespread open source projects often fail financially while burning insane amounts of money. It might work out for the individual founders and owners when the company gets bought, but it hurts the users and ecosystems that relied on the project.

Your SaaS will not compete against AWS or internal IT staff

So don’t be surprised if licenses like MIT attract large corporations and users who don’t care about you or the community, making it difficult to find fair cooperation (including financial resources) with them later. Stick with a real copyleft license that has less adoption by other enterprises and focus on organic growth with people who care about the project.

Alternatively, be prepared for the consequences: Amazon or other hyperscalers will attract a large number of customers using your product without giving anything back—something you declared acceptable by choosing, e.g., the MIT license.

If you still want to go that route, you must establish another source of income right away. One has to be realistic: competing with your open-source product’s own SaaS against any SaaS by Amazon or other hyperscalers—or even a well-trained on-premises operations team—will not work out if that is your only way to make money. Services that support users in operating the software in-house, or paid development (e.g., prioritizing features for a fee), are also possible with open source and are the better choice.

Server Updates/Reboots

Posted by Fedora Infrastructure Status on May 15, 2024 09:00 PM

We will be applying updates to all our servers and rebooting into newer kernels. Services may be intermittently unavailable during the outage window.

Experimental syslog-ng packages for Amazon Linux 2023

Posted by Peter Czanik on May 15, 2024 11:43 AM

Last year, I received many requests about syslog-ng for Amazon Linux 2023, but I could not find an easy way to create syslog-ng packages. Recently, however, I found that Fedora Copr supports building packages for Amazon Linux 2023. So, with a little bit of experimentation, I got a cut down version of syslog-ng compiled.

Read more at https://www.syslog-ng.com/community/b/blog/posts/experimental-syslog-ng-packages-for-amazon-linux-2023

<figure><figcaption>

syslog-ng logo

</figcaption> </figure>

Database Migrations

Posted by Fedora Infrastructure Status on May 14, 2024 08:00 PM

We will be migrating a number of our database servers to RHEL9, newer versions of database software and more resources. During the migration services that use these databases may be offline completely. The small servers ( db-fas01 and db03 ) should move and have service restored sooner than the two larger hosts …

CentOS Stream 8 END OF LIFE : 2024-05-31

Posted by Stephen Smoogen on May 14, 2024 02:51 PM

CentOS Stream 8 EOL

CentOS Stream 8 will reach its end of life on May 31st, 2024. When that happens, the CentOS Infrastructure will follow the standard practice it has been doing since the early days of CentOS Linux:
  1. Move all content to https://vault.centos.org/
  2. Stop the mirroring software from responding to EL-8 queries. 

The first change usually causes any mirror doing an rsync with delete options to remove the contents from their mirror. The second change will cause the dnf or yum commands to break with errors. The vault system is also a single server with few active mirrors of its contents. As such, it is likely to be overloaded and very slow to respond as additional requests are made of it.

In total, if you are using CentOS Stream 8 in either production or a CI/CD system, you can expect a lot of errors on June 1st or shortly afterwards.

What you can do!

There are several steps you can take to get ahead of a possible tsunami of problems:
  1. You can look at moving to a newer release of CentOS Stream before the end of the month. This will usually require deploying new images or fresh installs rather than straight updates.
  2. You can see if any of the 'move my Enterprise Linux' tools have added support for moving from CentOS Stream 8 to their EL8.10. For releases before 8.10, this was very hard because CentOS Stream 8 was usually ahead of the released point releases, but 8.10 is at a point where Alma, Oracle, Red Hat Enterprise Linux, and Rocky are at the same revisions or newer.
  3. You can start mirroring the CentOS Stream 8 content into your infrastructure and point any CI/CD or other systems to that mirror, which will allow you to continue to function.

Mirroring CentOS Stream

Of the three options, I recommend the third. However, in working out what is needed to mirror CentOS Stream, I realized I needed newer documentation, and that would probably be a long post in itself. For a shorter version for self-starters, I recommend the documentation on the CentOS wiki, https://wiki.centos.org/HowTos(2f)CreateLocalMirror.html. While that information was written for CentOS Linux 6, which reached end of life in 2020, it covers most of the instructions needed. The part which may need updating is the amount of disk space required for CentOS Stream, which seems to be about 280 GB for everything and maybe around 120 GB for any one architecture.
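As a rough sketch of what such a mirror could look like (the mirror host, local paths, and repository list below are illustrative and depend on which upstream mirror you pick), you could rsync one repository and architecture and then point clients at the local copy:

# sync one repo/arch from a nearby mirror that still carries CentOS Stream 8
rsync -avSH --delete \
    rsync://mirror.example.org/centos/8-stream/BaseOS/x86_64/os/ \
    /srv/mirror/centos/8-stream/BaseOS/x86_64/os/

# on the clients, a repo file pointing at the local copy (sketch)
cat > /etc/yum.repos.d/local-stream8-baseos.repo <<'EOF'
[local-stream8-baseos]
name=Local CentOS Stream 8 BaseOS mirror
baseurl=http://mirror.internal.example/centos/8-stream/BaseOS/x86_64/os/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
EOF

Repeat for AppStream and any other repositories you rely on.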


CentOS Linux 7: End of Life 2024-06-30

Posted by Stephen Smoogen on May 14, 2024 02:51 PM

CentOS 7 EOL

If this looks a lot like the CentOS Stream 8 EOL content, well, that is because they aren't too different. It is just that instead of doing this once every four years, we get to do it twice in one year.
 
The last CentOS Linux release, CentOS Linux 7, will reach its end of life on June 30th, 2024. When that happens, the CentOS Infrastructure will follow the standard practice it has been doing since the early days of CentOS Linux:
  1. Move all content to https://vault.centos.org/
  2. Stop the mirroring software from responding to EL-7 queries.  
  3. Additional software for EL-7 may also be removed from other locations.

The first change usually causes any mirror doing an rsync with delete options to remove the contents from their mirror. The second change will cause the yum commands to break with errors. The vault system is also a single server with few active mirrors of its contents. As such, it is likely to be overloaded and very slow to respond as additional requests are made of it. 

 At the same time, the EPEL software on https://dl.fedoraproject.org/pub/epel/7 will be moved to /pub/archive/epel/7 and the mirrormanager for that will be updated appropriately.

In total, if you are using CentOS Linux 7 in either production or a CI/CD system, you can expect a lot of errors on July 1st or shortly afterwards.

What you must do!

There are several steps you can take to get ahead of the tsunami of problems:
  1. You can convert to Red Hat Enterprise Linux 7 and see about getting an Extended Lifecycle Support contract for the system. This is a stop-gap measure while you move toward a newer release of a Linux operating system.
  2. You can move your system to a replacement Enterprise Linux distribution. The Alma Project, Rocky Linux, Oracle and Red Hat all offer tools which can transition an EL7 system to a version of the operating system they will support for several years.
  3. If you are not able to move your systems in the next 45 days, you should look at mirroring the CentOS Linux 7 operating system to a more local location and moving your update configs to use your mirror. With the large number of systems which could potentially try to use the vault, I would not expect relying on it to be workable. Since you will probably need to reinstall, add software, or run continuous CI/CD in other areas, you should keep a copy of the operating system local to your networks. A rough sketch of that is shown below.
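As a sketch of that third option (the repo IDs and paths are illustrative; run this from a machine that still has the CentOS 7 and EPEL 7 repositories enabled), reposync from yum-utils can pull the packages down and createrepo can regenerate the metadata for your clients:

# install the mirroring tools
sudo yum -y install yum-utils createrepo

# pull down the repositories you actually use
reposync -l --repoid=base --repoid=updates --repoid=extras --repoid=epel \
    --download_path=/srv/mirror/el7/

# regenerate repodata so clients can use a plain baseurl pointing at the copies
for repo in base updates extras epel; do
    createrepo /srv/mirror/el7/$repo/
done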


New badge: Fedora Week of Diversity 2024 !

Posted by Fedora Badges on May 14, 2024 02:41 PM
Fedora Week of Diversity 2024: You contributed or participated in the Fedora Week of Diversity 2024!

Copr: build your Fedora / RHEL packages for POWER

Posted by Peter Czanik on May 14, 2024 09:10 AM

I’m often asked how I can be an IBM Champion for POWER if I do not own an IBM POWER server or workstation. Yes, life would definitely be easier if I had one. However, I have over 30 years of history with POWER, and there are some fantastic resources available to developers for free. Both help me to stay an active member of the IBM POWER open source community.

<figure><figcaption>

Talos II POWER9 mainboard

</figcaption> </figure>

Last time I introduced you to the openSUSE Build Service. This time I show you Copr, the Fedora build service.

Copr

Just like OBS, Fedora Copr also started out as a (relatively) simple service to build Fedora and CentOS packages for x86. As Copr is a project by Fedora, the public instance maintained by Fedora at https://copr.fedorainfracloud.org/ only allows you to build open source software. However, you can also install Copr yourself on your own infrastructure. The source code of Copr is available at https://copr.fedorainfracloud.org/, where you can also find links to the documentation.

Today you can use Copr to build packages not just for Fedora x86, but for almost all RPM distributions, including openSUSE and OpenMandriva. In addition to x86, you can build packages for 64-bit ARM (aarch64), IBM mainframes (s390x), and 64-bit little-endian IBM POWER (ppc64le).

<figure><figcaption>

Platform selection in Fedora Copr

</figcaption> </figure>

You can access Copr using its web interface. There is also a command-line utility, but it was very limited when I last checked. Enabling support for POWER in your project is easy: just select the POWER architecture versions of distributions when you set up the project. You can also enable support for POWER later, but Copr does not automatically build your existing packages for the newly added architecture. TL;DR: enable support for POWER before building any packages to make your life easier.
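For example, with the copr-cli command-line tool (the project name, chroot list and source RPM below are only illustrative), enabling the ppc64le chroots from the start looks roughly like this:

# create a project with x86_64 and POWER (ppc64le) chroots enabled from day one
copr-cli create my-syslog-ng-test \
    --chroot fedora-40-x86_64 --chroot fedora-40-ppc64le \
    --chroot epel-9-x86_64 --chroot epel-9-ppc64le

# submit a source RPM; it is built for every enabled chroot
copr-cli build my-syslog-ng-test syslog-ng-4.7.1-1.fc40.src.rpm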

How do I use Copr?

Just as with the openSUSE Build Service, my first use of Copr was to make up-to-date syslog-ng packages available to the community. Along the way I used Copr to build some syslog-ng dependencies not yet available in Fedora or RHEL. Some of these are already part of the official distributions.

I have not yet had a chance to benchmark syslog-ng on POWER10; however, in the POWER9 era POWER was the best platform to run syslog-ng. I measured syslog-ng collecting over 3 million log messages a second on a POWER9 box when x86 servers could barely go above the 1 million mark.

When I make the latest syslog-ng versions available, I build my EPEL (Extra Packages for Enterprise Linux) packages not just for x86, but also for POWER. I do not know how accurate Copr download statistics are, but for some syslog-ng releases it shows that almost a fourth of all downloads were for POWER syslog-ng packages: https://copr.fedorainfracloud.org/coprs/czanik/syslog-ng44/.

Why Copr?

If your primary focus is to build packages for the Red Hat family of operating systems, Copr provides you with the widest range of possibilities. You can regularly test if your software still compiles on Fedora Rawhide, while providing your users with packages for all the Fedora and RHEL releases. Best of all: even if you do not have a POWER server to work on, you can serve your users with packages built for POWER.

Recent EPEL dnf problems with some EL8 systems

Posted by Stephen Smoogen on May 13, 2024 05:12 PM

Last week there were several reports about various systems having problems with EPEL-8 repositories. The problems started shortly after various systems in Fedora had been updated from Fedora Linux 38 to Fedora 40, but the problems were not happening to all EL-8 systems. The problems were root-caused to the following:

  1. Fedora 40 had createrepo_c 1.0 installed, which defaults to using zstd compression for various repositories.
  2. The EL8 systems which were working had some version of libsolv-0.7.20 installed, which links against libzstd and so works.
  3. The EL8 systems which did not work either had versions of libsolv before 0.7.20 (and were any EL before 8.4), OR they had versions of libsolv-0.7.22.

The newer version of libsolv was traced down to coming from repositories of either Red Hat Satellite or a related tool, pulp, and was a rebuild of the EL9 package. However, it wasn't a complete rebuild: the EL9 version has libzstd support, but the EL8 rebuild did not.

A band-aid fix was made by Fedora Release Engineering to have EPEL repositories use the xz compression method instead of zstd for various files in /repodata/. This is a slower compression method, but it can be read by all of the libsolv versions reported in the various bugs and issue trackers.

This is only a band-aid fix because users will run into the same problem with any other repositories using the newer createrepo_c. A fuller fix will require affected systems to do one of the following:

  1. If the system is Red Hat Enterprise Linux, Rocky Linux, Alma Linux, or Oracle Enterprise Linux and is not running EL-8.9 or later, it needs to upgrade to that, OR point to an older version of the repository in https://dl.fedoraproject.org/pub/archive/epel/
  2. If the system has versions of libsolv-0.7.22 without libzstd support, they should contact the repository to see if a rebuild with libzstd support can be made. 
  3. If the system is CentOS Stream 8, then you should be making plans to upgrade to a different operating system as that version will be EOL on May 31st 2024.
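To check whether a given EL8 system is affected, one quick diagnostic is to see whether its installed libsolv links against libzstd (the library path below assumes an x86_64 system):

# which libsolv build is installed?
rpm -q libsolv

# does it link against libzstd? no output here suggests missing zstd support
ldd /usr/lib64/libsolv.so.1 | grep -i zstd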


Fedora Ops Architect Weekly

Posted by Fedora Community Blog on May 13, 2024 01:11 PM

Hi folks, welcome to the weekly roundup from the wrangler Operations Architect 🙂 Read on for some information on upcoming events, F41 dates and some reading recommendations from around the project!

Events

F40 Release Party!

The F40 Release Party will be on Friday 24th & Saturday 25th May. More details can be found in the magazine article. Looking forward to seeing you there!

Fedora Week of Diversity

The Fedora Week of Diversity is running from June 17th – 22nd. You can read more about the event on their latest blog post.

Flock to Fedora

Flock to Fedora is August 7th – 10th in Rochester, NY, USA. Our review panel is currently working through all of the talk submissions we received to build a great schedule for the conference. Thank you to everyone who submitted their ideas, and notifications will be sent in the coming weeks.

Devconf.cz

If you are attending devconf.cz next month and want to connect with some fellow fedorans, feel free to join the Fedora matrix room Fedora@devconf.cz to coordinate meetups, etc.

Fedora Linux 41

F41 is approaching quickly, so here are some important dates for change proposals; other key dates and milestones can be found on the release schedule.

  • June 19th – Changes requiring infrastructure changes
  • June 25th – Changes requiring mass rebuild
  • June 25th – System Wide changes
  • July 16th – Self Contained changes

Announced Changes

Changes with FESCo

Hot Topics

There are a lot of conversations happening around Fedora, and it can be hard to keep track of them all! Below are the top two on my own list from both discussion.fpo and devel@lists.fedoraproject.org, in case you need some inspiration.

Help Wanted

The post Fedora Ops Architect Weekly appeared first on Fedora Community Blog.

OpenShift upgrade

Posted by Fedora Infrastructure Status on May 13, 2024 08:00 AM

We will be upgrading our production OpenShift cluster that runs many of our applications. Normally, this would just be a zero-downtime event, but in this case we are switching networking models, so we need to completely reboot all the nodes, causing some applications to be unavailable for a short time …

An alternative way of saving toolboxes

Posted by Fedora Magazine on May 13, 2024 08:00 AM

A previous article in the Fedora Magazine describes how toolboxes can be saved in a container image repository and restored on the same or a different machine. The method described there works well for complex scenarios where setting up the toolbox takes considerable time or effort. But most of the time, toolboxes are simpler than that, and saving them as container images is… well, wasteful. Let’s see how we can store and use our toolboxes in a cheaper way.

Problem statement

  1. Define toolboxes in simple text files so that they are easy to transfer or version in a GIT repository.
  2. Have the ability to describe a toolbox in a declarative way.
  3. Have everything set up automatically so that the toolbox is ready to use upon initial entry.
  4. The system should support customizations or extensions to accommodate any corner cases that were not foreseeable from the start.

Proposed solution

Toolboxes are just instances of a container image. However, they have access to the home directory and use the same configuration as the host shell. On the other hand, each toolbox has a special file, visible only inside the toolbox, that keeps the name of the toolbox. This is /run/.containerenv. Combining these two facts presents a possible route to the desired state: simply keep whatever initialization or configuration steps are needed as simple rc files and use a convention to source them only for a specific toolbox. Consider, step by step, how this can be accomplished (using bash as an example because it is the default shell in Fedora; other shells should support the same mechanisms).

For the system to work, two things are needed:

  • A few toolbox definitions.
  • Something to take those definitions and turn them into toolboxes.

Storing toolbox Definitions

The first point is easy. Put the files anywhere. For example, they can reside in ~/.bashrc.d/toolboxes. (This path is used throughout the rest of the discussion.) Each rc file is named after the toolbox it configures. For example, ~/.bashrc.d/toolboxes/taskwarrior.rc for a toolbox named taskwarrior to be used for tasks and time tracking. The contents of this file are described later in this article.

Converting Definitions into toolboxes

For the second point, to keep things nice and tidy, place the orchestration logic in a separate file in the ~/.bashrc.d directory. For example, call it toolbox.rc. It should have the following content:

# create a small wrapper script inside the toolbox that forwards a command to the host
function expose(){
    [ -f "$1" ] || echo -e "#!/bin/sh\nexec /usr/bin/flatpak-spawn --host $(basename $1) \"\$@\"" | sudo tee "$1" 1>/dev/null && sudo chmod +x "$1"
}

# install the given packages, but only on the first entry into the toolbox
function install_dependencies(){
    [ -f /.first_run ] || sudo dnf -y install $@
}

# only act when running inside a toolbox
if [ -f "/run/.toolboxenv" ]
then
    # read the toolbox name from /run/.containerenv and source its rc file, if present
    TOOLBOX_NAME=$( grep -oP "(?<=name=\")[^\";]+" /run/.containerenv )
    if [ -f "$HOME/.bashrc.d/toolboxes/${TOOLBOX_NAME}.rc" ]
    then
        . "$HOME/.bashrc.d/toolboxes/${TOOLBOX_NAME}.rc"
    fi

    # on the first entry, run the optional setup() hook and mark the toolbox as initialized
    if ! [ -f /.first_run ] ; then
        [[ $(type -t setup) == function ]] && setup
        sudo touch /.first_run
    fi
fi

With the orchestration logic set up, some toolboxes are needed to orchestrate. Continuing with the taskwarrior toolbox example, a definition is added. Create a file called ~/.bashrc.d/toolboxes/taskwarrior.rc and add the following line in it:

install_dependencies task timew

This effectively says that two packages are to be installed in this toolbox: task and timew.

Creating toolboxes

Now, create and enter the toolbox:

$ toolbox create taskwarrior
$ toolbox enter taskwarrior

When the newly created toolbox is entered, dnf runs automatically to install the declared dependencies. Exit the toolbox and enter it again and you will get the prompt directly because all the packages are already installed.

It is now possible to stop the container and recreate the toolbox and everything will be installed back automatically.

$ podman stop taskwarrior
$ toolbox rm taskwarrior
$ toolbox create taskwarrior
$ toolbox enter taskwarrior

Getting More Complicated

Now complicate things a bit. What if some random commands need to run to set up the toolbox? This is done using a special function, called setup(), which is called from toolbox.rc, as shown above. As an example, here is the setup for a toolbox for working with AWS resources. It is named awscli and the associated configuration file is ~/.bashrc.d/toolboxes/awscli.rc:

setup() {
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
}

Another use case to cover is exposing host commands inside the toolbox. Consider a toolbox for file management that opens pdf files in a browser previously installed as a flatpak on the host. Employ the expose() function, also defined in toolbox.rc, to give access to flatpak inside the toolbox. Below is the vifm.rc file used to create the vifm toolbox:

install_dependencies vifm fuse-zip curlftpfs shared-mime-info ImageMagick poppler-utils
expose /usr/bin/flatpak

With that, from inside the toolbox, start the browser using:

$ flatpak run com.vivaldi.Vivaldi

Extending the toolbox

The last point in the problem statement (wish list) was extensibility. What is shown thus far is just a basis that already works pretty well. However, one can see how this system is adaptable and can evolve to meet other requirements. For example, new functions can be defined, both as implementations in toolbox.rc or as hooks to be called if defined inside the toolbox configuration file. Also, the existing functionality can be mixed and matched depending on current needs. The last example, for instance, demonstrates a combination of dependencies and host command access. Do not forget that the toolbox configurations defined in ~/.bashrc.d/toolboxes are ultimately just shell files and you can add everything that the shell supports: aliases, custom functions, environment variables, etc.
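As a small sketch of such an extension (the hook name on_enter is made up for illustration), toolbox.rc could call an optional per-toolbox hook on every entry, not just the first one:

# in ~/.bashrc.d/toolbox.rc, right after sourcing the per-toolbox rc file:
# run an optional hook on every entry into the toolbox, if the rc file defines one
[[ $(type -t on_enter) == function ]] && on_enter

A toolbox rc file could then define on_enter() to, for example, export project-specific environment variables or activate a language runtime.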

Troubleshooting

Problem: nothing is happening when I enter a new toolbox.

The ~/.bashrc might not be set up to source files from ~/.bashrc.d. Check that the following block exists in ~/.bashrc:

# User specific aliases and functions
if [ -d ~/.bashrc.d ]; then
    for rc in ~/.bashrc.d/*; do
        if [ -f "$rc" ]; then
            . "$rc"
        fi
    done
fi
unset rc

Problem: when I enter a toolbox I get a command not found error.

The order in which files are sourced might not be correct. Try renaming ~/.bashrc.d/toolbox.rc to ~/.bashrc.d/00-toolbox.rc, then delete and recreate the toolbox.

PHP version 8.2.19 and 8.3.7

Posted by Remi Collet on May 13, 2024 07:32 AM

RPMs of PHP version 8.3.7 are available in the remi-modular repository for Fedora ≥ 38 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...) and in the remi-php83 repository for EL 7.

RPMs of PHP version 8.2.19 are available in the remi-modular repository for Fedora ≥ 38 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...) and in the remi-php82 repository for EL 7.

The Fedora 39, 40, EL-8 and EL-9 packages (modules and SCL) are available for x86_64 and aarch64.

There is no security fix this month, so no update for version 8.1.28.

PHP version 8.0 has reached its end of life and is no longer maintained by the PHP project.

These versions are also available as Software Collections in the remi-safe repository.

Version announcements:

Installation: use the Configuration Wizard and choose your version and installation mode.

Replacement of default PHP by version 8.3 installation (simplest):

dnf module switch-to php:remi-8.3/common

or, the old EL-7 way:

yum-config-manager --enable remi-php83
yum update php\*

Parallel installation of version 8.3 as Software Collection

yum install php83

Replacement of default PHP by version 8.2 installation (simplest):

dnf module switch-to php:remi-8.2/common

or, the old EL-7 way:

yum-config-manager --enable remi-php82
yum update

Parallel installation of version 8.2 as Software Collection

yum install php82

And soon in the official updates:

To be noticed:

  • EL-9 RPMs are built using RHEL-9.3
  • EL-8 RPMs are built using RHEL-8.9
  • EL-7 RPMs are built using RHEL-7.9
  • intl extension now uses libicu73 (version 73.2)
  • mbstring extension (EL builds) now uses oniguruma5php (version 6.9.9, instead of the outdated system library)
  • oci8 extension now uses the RPM of Oracle Instant Client version 21.13 on x86_64, 19.23 on aarch64
  • a lot of extensions are also available, see the PHP extensions RPM status (from PECL and other sources) page

Information:

Base packages (php)

Software Collections (php81 / php82 / php83)

Episode 428 – GitHub artifact attestation

Posted by Josh Bressers on May 13, 2024 12:00 AM

Josh and Kurt talk about a new way to sign artifacts on GitHub. It’s in beta, it’s not going to be easy to use, it will have bugs. But that’s all OK. This is how we start. We need infrastructure like this to enable easier-to-use features in the future. Someday, everything will be signed by default.

Audio: https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_428_GitHub_artifact_attestation.mp3

Show Notes

changing author on lots of commits at once

Posted by Adam Young on May 10, 2024 08:18 PM
git rebase HEAD~4 --exec 'git commit --amend --author "Adam Young <admiyo@os.amperecomputing.com>" --no-edit'

Week 19 update

Posted by Ankur Sinha "FranciscoD" on May 10, 2024 07:46 PM

I'm trying to get back to blogging, and to make it a regular occurrence in my work week, I thought I'd write a weekly work update.

Infra and RelEng Update – Week 19 2024

Posted by Fedora Community Blog on May 10, 2024 10:00 AM

This is a weekly report from the I&R (Infrastructure & Release Engineering) Team. It also contains updates for CPE (Community Platform Engineering) Team as the CPE initiatives are in most cases tied to I&R work.

We provide both an infographic and a text version of the weekly report. If you just want a quick look at what we did, look at the infographic. If you are interested in more in-depth details, look below the infographic.

Week: 06 May – 10 May 2024

<figure class="wp-block-image size-full is-style-default">Infra&releng Infographic</figure>

Infrastructure & Release Engineering

The purpose of this team is to take care of day-to-day business regarding CentOS and Fedora Infrastructure and Fedora release engineering work.
It’s responsible for services running in Fedora and CentOS infrastructure and preparing things for the new Fedora release (mirrors, mass branching, new namespaces, etc.).
List of planned/in-progress issues

Fedora Infra

CentOS Infra including CentOS CI

Release Engineering

CPE Initiatives

Community Design

CPE has a few members who are working as part of the Community Design Team. This team works on anything related to design in the Fedora community.

Updates

The post Infra and RelEng Update – Week 19 2024 appeared first on Fedora Community Blog.

Unlocking the power of Fedora CoreOS

Posted by Fedora Magazine on May 10, 2024 08:00 AM

Fedora CoreOS is an automatically updating, immutable operating system built on the trusted Fedora Linux distribution. It allows containerized workloads to run securely and at scale. It combines the benefits of containerization with the reliability and security of an immutable infrastructure. In this article, we’ll explore the unique capabilities of Fedora CoreOS and its use cases.

The Essence of Immutability in Fedora CoreOS

One of the core principles of Fedora CoreOS is immutability. In traditional operating systems, individual packages are updated. In Fedora CoreOS, updates are applied atomically as a complete replacement of the entire OS. This approach ensures consistency and reliability across deployments. It also eliminates potential drift or configuration issues caused by incremental updates.

Automated Deployments with Ignition and Butane

Fedora CoreOS leverages Ignition configuration files for automated provisioning and configuration of instances during the initial boot process. However, instead of manually creating these JSON-formatted Ignition files, you can write the configurations in a simpler, human-readable, YAML based format called Butane config. Butane files are then converted into the corresponding Ignition files using the Butane tool.

By automating the provisioning process with Ignition configs generated from Butane specifications, you can consistently deploy and configure Fedora CoreOS instances across different environments. These configuration files ensure repeatable and reliable deployments. The instances apply the configuration defined in the Ignition config on the first boot.
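As a small illustration (the SSH key is a placeholder), a Butane config that authorizes an SSH key for the default core user might look like the following, and can be converted to an Ignition file with the containerized Butane tool:

# example.bu
variant: fcos
version: 1.5.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAAC3Nza... you@example.com

# convert the Butane config into an Ignition file
podman run --rm -i quay.io/coreos/butane:release --pretty --strict < example.bu > example.ign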

This automation capability is particularly valuable for infrastructure components that require high availability and minimal downtime, such as load balancers, firewalls, and other critical systems.

Container orchestration and Kubernetes cluster setup

Fedora CoreOS is optimized for running containerized workloads and seamlessly integrates with container orchestration platforms like Kubernetes. It comes pre-installed with both Podman and Moby-engine (Docker) for all your container needs.

Fedora CoreOS is also at the core of OKD, the community distribution of Kubernetes. It is built from the same projects as Red Hat OpenShift. Additionally, you can customize Fedora CoreOS for specific workloads or environments. This makes it particularly useful for setting up dedicated Kubernetes clusters for different applications or environments.

One notable example is Typhoon, a free and open-source project that provides declarative Kubernetes infrastructure management and integrates with Fedora CoreOS. With Typhoon, you can define your desired Kubernetes cluster configuration in a human-readable language, and it will provision and configure the cluster components, including Fedora CoreOS machines serving as worker nodes. This integration enables efficient and flexible Kubernetes deployments tailored to your needs. It ensures consistent and repeatable configurations across diverse environments like bare metal, cloud providers, and local networks.

Customization and workload optimization

While Fedora CoreOS is immutable, it is still customizable for specific workloads or environments. This capability enables you to optimize Fedora CoreOS instances for particular applications or use cases by adding necessary packages, configurations, or services.

By tailoring Fedora CoreOS to your workloads, you can strike a balance between the benefits of an immutable operating system and the flexibility to meet your requirements. This approach ensures that your applications run in a consistent and optimized environment while still leveraging the security and reliability advantages of CoreOS.

Automatic Updates and Resilience

Fedora CoreOS follows a structured release cycle. Update releases typically occur every two weeks after undergoing extensive testing and validation through multiple update streams, such as “testing” and “next”. This automatic update mechanism ensures that Fedora CoreOS instances stay up-to-date with the latest stable releases. This mechanism also minimizes security risks and provides access to new features and enhancements.

You can opt to run instances on these testing streams to automatically evaluate upcoming releases before deploying to production. This way, you can identify and mitigate potential issues or incompatibilities. If an update introduces problems or vulnerabilities, the ability to roll back to a previous version further enhances the resilience of Fedora CoreOS-based infrastructure.
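Fedora CoreOS deployments are managed with rpm-ostree, so inspecting them and rolling back are single commands run on the node itself; a minimal sketch:

# show the currently booted deployment and the previous one kept for rollback
rpm-ostree status

# boot back into the previous deployment
sudo rpm-ostree rollback --reboot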

Hybrid cloud and General-Purpose Server Capabilities

While Fedora CoreOS is optimized for running containerized workloads and Kubernetes clusters, it is also designed to be operable as a standalone, general-purpose server operating system. This versatility makes it a compelling choice for a wide range of server workloads, beyond just containerized applications or Kubernetes clusters. It can provide benefits such as improved security, reliability, and reduced maintenance overhead, even for traditional server applications.

Exploration and learning on Fedora CoreOS

As an open-source project, Fedora CoreOS serves as a valuable resource for exploration and learning about immutable operating systems, containerization, and modern infrastructure practices. The rich Fedora CoreOS documentation includes articles like “Getting Started with Fedora CoreOS” and a host of other useful information. The Fedora CoreOS FAQ provides a solid starting point for understanding and experimenting with Fedora CoreOS. If you are new to Fedora CoreOS, the tutorial section is a great place to start. For further information about Ignition files and how they are made from Butane files, check out this section of the Fedora CoreOS documentation.

Additionally, the open-source nature of Fedora CoreOS fosters community collaboration and contribution, enabling knowledge sharing and collective advancement of the project. This inclusive ecosystem encourages users to explore, learn, and contribute to its development, further enhancing its capabilities and adoption.

In conclusion

Fedora CoreOS offers a powerful combination of immutability, automatic updates, container optimization, and customization capabilities. All of this makes it a versatile choice for modern infrastructure and application deployment scenarios. Fedora CoreOS provides a robust foundation to meet diverse needs, whether you’re deploying containerized applications, setting up Kubernetes clusters, exploring edge computing or IoT, or building secure and resilient infrastructure.

By embracing the principles of immutable infrastructure, automated deployments, and containerization, you can unlock the full potential of Fedora CoreOS and drive innovation in your organization’s infrastructure and application delivery pipelines.

Configuring and using iSCSI

Posted by Dusty Mabe on May 10, 2024 12:00 AM
Recently I looked into enabling and testing multipath on top of iSCSI for Fedora and Red Hat CoreOS. As part of that process I had the opportunity to learn about iSCSI, which I had never played with before. I’d like to document for my future self how to go about setting up an iSCSI server and how to then access the exported devices from another system. Setting up an iSCSI server First off there are a few good references that were useful when setting this up.

How not to waste time developing long-running processes

Posted by Adam Young on May 09, 2024 06:22 PM

Developing long running tasks might be my least favorite coding activity. I love writing and debugging code…I’d be crazy to be in this profession if I did not. But when a task takes long enough, your attention wanders and you get out of the zone.

Building the Linux Kernel takes time. Even checking the Linux Kernel out of git takes a non-trivial amount of time. The Ansible work I did back in the OpenStack days to build and tear down environments took a good bit of time as well. How do I keep from getting out of the zone while coding on these? It is hard, but here are some techniques.

Put everything in the container ahead of time

Build processes necessarily need a lot of tools. On a Fedora System, I often start by doing

yum groupinstall "Development Tools"

And then I still may need to install a few more things: Java, Rust, or Python libraries, meson, Dwarves, bison, and others.

If you are doing CI inside a container, as is the norm for the git hosting services, you can specify a custom container. Anything that is not part of your build should be in the container.

Note that by doing this, you are putting another variable in your build process: which version of the container was used. By installing packages at build time, you are using the latest. By using a pre-built container, some of those packages may have gone stale. This is kind of the reason FOR using containers, as you can change package versions at your own cadence, but make sure you are deliberate about it.
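A sketch of such a prebuilt CI image (the package list here is just the Fedora packages mentioned above, and the base image tag is an assumption to adapt to your needs):

# Containerfile for a prebuilt CI image
FROM registry.fedoraproject.org/fedora:40
RUN dnf -y groupinstall "Development Tools" && \
    dnf -y install meson dwarves bison flex java-17-openjdk rust cargo python3-devel && \
    dnf clean all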

Get it working, then disable it

If part of a long running task works, hold on to the output of that task, and disable the task itself. This implies that any information generated from this stage of the task can be deduced from the artifacts.

A Linux Kernel build takes a long time, even with all 100+ processors working on it. Skip the build until you need it to be fresh again.

Use mkdir to simulate git checkouts

If you are doing a git checkout, but don’t actually need that checkout for the follow-on stage, you can short-circuit that checkout by doing a mkdir of the root directory. This is a more useful hack than I originally realized. Pulling down a huge tree like the Linux kernel can take quite some time. Skipping it until you are ready to test it can save that time.
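For example (the directory name is whatever the later stages expect; here it is hypothetical):

# stand-in for "git clone <kernel repo> linux" while iterating on later stages
mkdir -p linux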

Use touch or fallocate to simulate build artifacts

If your automation is primarily for moving artifacts around, you can temporarily skip the stage that builds the artifacts until you get the follow-on stages working. This is often done in conjunction with the git checkout skip above.
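A sketch of that, with made-up artifact names:

# fake the artifacts that the packaging/upload stages expect
mkdir -p artifacts
touch artifacts/vmlinuz-test            # zero-byte placeholder
fallocate -l 64M artifacts/modules.tar  # sized placeholder, handy when testing transfers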

There are many variations to these work-arounds, as well as several more I have not yet documented. The point is to identify where you are waiting on tasks, and to figure out ways to avoid that time while developing the rest of your automation.

If you are working with Ansible, using the flag to start on a specific task can be a powerful time saver. However, that means structuring your playbook such that you don’t gather up all the information you need for a task at the beginning.
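For example (the playbook and task names are hypothetical):

ansible-playbook build-and-deploy.yml --start-at-task "Copy kernel artifacts to the test host"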

The same general idea is true for bash-based operations: you can use functions that you call directly, but make sure you don’t need too much context for that function call. Make it easy to start in the middle, and end in the middle.
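A minimal sketch of that structure (the function names are made up):

# functions.sh
build_kernel()      { make -j"$(nproc)"; }
package_artifacts() { tar -cf artifacts/kernel.tar vmlinux modules/; }

# from an interactive shell, jump straight to the step under development
source ./functions.sh
package_artifacts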

The syslog-ng Insider 2024-05: documentation; grouping-by(); PAM Essentials; health

Posted by Peter Czanik on May 09, 2024 11:06 AM

The May syslog-ng newsletter is now on-line:

  • The official syslog-ng OSE documentation got a new look

The syslog-ng Administration Guide received a new look and easier navigation. Not only that, but it is also up-to-date now. Besides, there are now contributor guides available both for the documentation and for syslog-ng developers.

The admin guide is available at: https://syslog-ng.github.io/admin-guide/README

You can reach all syslog-ng OSE-related documentation at: https://syslog-ng.github.io/

If you find any issues, pull requests and problem reports are welcome. The contributor guide describes how you can fix / extend the documentation. You can report issues at: https://github.com/syslog-ng/syslog-ng.github.io/issues

  • Aggregating messages in syslog-ng using grouping-by()
  • Alerting on One Identity Cloud PAM Essentials logs using syslog-ng
  • The syslog-ng health check

It is available at https://www.syslog-ng.com/community/b/blog/posts/the-syslog-ng-insider-2024-05-documentation-grouping-by-pam-essentials-health

<figure><figcaption>

syslog-ng logo

</figcaption> </figure>

libwacom and Huion/Gaomon devices

Posted by Peter Hutterer on May 09, 2024 12:01 AM

TLDR: Thanks to José Exposito, libwacom 2.12 will support all [1] Huion and Gaomon devices when running on a 6.10 kernel.

libwacom, now almost 13 years old, is a C library that provides a bunch of static information about graphics tablets that is not otherwise available by looking at the kernel device. Basically, it's a set of APIs in the form of libwacom_get_num_buttons and so on. This is used by various components to be more precise about initializing devices, even though libwacom itself has no effect on whether the device works. It's only a library for historical reasons [2], if I were to rewrite it today, I'd probably ship libwacom as a set of static json or XML files with a specific schema.

Here are a few examples on how this information is used: libinput uses libwacom to query information about tablet tools. The kernel event node always supports tilt but the individual tool that is currently in proximity may not. libinput can get the tool ID from the kernel, query libwacom and then initialize the tool struct correctly so the compositor and Wayland clients will get the right information. GNOME Settings uses libwacom's information to e.g. detect if a tablet is built-in or an external display (to show you the "Map to Monitor" button or not, if builtin), and GNOME's mutter uses the SVGs provided by libwacom to show you an OSD where you can assign keystrokes to the buttons. All these features require that the tablet is supported by libwacom.

Huion and Gaomon devices [3] were not well supported by libwacom because they re-use USB ids, i.e. different tablets from seemingly different manufacturers have the same vendor and product ID. This is understandable, the 16-bit product id only allows for 65535 different devices and if you're a company that thinks about more than just the current quarterly earnings you realise that if you release a few devices every year (let's say 5-7), you may run out of product IDs in about 10000 years. Need to think ahead! So between the 140 Huion and Gaomon devices we now have in libwacom I only counted 4 different USB ids. Nine years ago we added name matching too to work around this (i.e. the vid/pid/name combo must match) but, lo and behold, we may run out of unique strings before the heat death of the universe so device names are re-used too! [4] Since we had no other information available to userspace, this meant that if you plugged in e.g. a Gaomon M106 it might be detected as an S620 and given wrong button numbers, a wrong SVG, etc.

A while ago José got himself a tablet and started contributing to DIGIMEND (and upstreaming a bunch of things). At some point we realised that the kernel actually had the information we needed: the firmware version string from the tablet which conveniently gave us the tablet model too. With this kernel patch scheduled for 6.10 this is now exported as the uniq property (HID_UNIQ in the uevent) and that means it's available to userspace. After a bit of rework in libwacom we can now match on the trifecta of vid/pid/uniq or the quadrella of vid/pid/name/uniq. So hooray, for the first time we can actually detect Huion and Gaomon devices correctly.
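On a 6.10 kernel you should be able to see that string yourself; something along these lines (device paths vary per system):

# show the firmware/uniq string the kernel exports for attached HID devices
grep HID_UNIQ /sys/class/hidraw/hidraw*/device/uevent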

The second thing José did was to extract all model names from the .deb packages Huion and Gaomon provide and auto-generate libwacom descriptions for all supported devices. That meant that in one pull request we added around 130 devices. Nice!

As said above, this requires the future kernel 6.10 but you can apply the patches to your current kernel if you want. If you do have one of the newly added devices, please verify the .tablet file for your device and let us know so we can remove the "this is autogenerated" warnings and fix any issues with the file. Some of the new files may now take precedence over the old hand-added ones so over time we'll likely have to merge them. But meanwhile, for a brief moment in time, things may actually work.

[1] fsvo of all, but it should be all current and past ones provided they were supported by Huion's driver
[2] anecdote: in 2011 Jason Gerecke from Wacom and I sat down to and decided on a generic tablet handling library independent of the xf86-input-wacom driver. libwacom was supposed to be that library but it never turned into more than a static description library, libinput is now what our original libwacom idea was.
[3] and XP Pen and UCLogic but we don't yet have a fix for those at the time of writing
[4] names like "HUION PenTablet Pen"...

Keeping the CI logic in bash

Posted by Adam Young on May 08, 2024 02:46 PM

As much as I try to be a “real” programmer, the reality is that we need automation, and setting up automation is a grind. A necessary grind.

One thing that I found frustrating was that, in order to test our automation, I needed to kick off a pipeline in our git server (gitlab, but the logic holds for others) even though the majority of the heavy lifting was done in a single bash script.

In order to get to the point where we could run that script in a gitlab runner, we needed to install a bunch of packages (Dwarves, Make, and so forth) as well as do some SSH key provisioning in order to copy the artifacts off to a server. The gitlab-ci.yml file ended up being a couple dozen lines long, and all those lines were bash commands.

So I pulled the lines out of gitlab-ci.yml and put them into the somewhat intuitively named file workflow.sh. Now my gitlab-ci.yml file is basically a one liner that calls workflow.sh.
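The resulting pipeline definition is then little more than this (the job name is illustrative):

# .gitlab-ci.yml
build:
  script:
    - ./workflow.sh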

But I also made it so workflow.sh can be called from the bash command line of a new machine. This is the key part. By doing this, I am creating automation that the rest of my team can use without relying on gitlab. Since the automation will be run from gitlab, no one can check in a change that breaks the CI, but they can make changes that will make life easier for them on the remote systems.

The next step is to start breaking apart the workflow into separate pipelines, due to CI requirements. To do this, I do three things:

  • Move the majority of the logic into functions, and source a functions.sh file. This lets me share across top-level bash scripts
  • Make one top-level function for each pipeline.
  • Replace workflow.sh with a script per pipeline. These are named pipeline_<stage>. These scripts merely change to the source directory, and then call top-level functions in functions.sh.

The reason for the last split is to keep logic from creeping into the pipeline functions. They are merely interfaces to the single set of logic in functions.sh.
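A sketch of one such per-pipeline script (the stage name and the top-level function it calls are hypothetical):

#!/bin/bash
# pipeline_build.sh -- thin wrapper; all real logic lives in functions.sh
set -euo pipefail
cd "$(dirname "$0")"
source ./functions.sh
run_build_pipeline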

The goal of having the separate functions source-able is to be able to run interior steps of the overall processing without having to run the end-to-end work. This is to save the sitting-around time spent waiting for a long-running process to complete… more on that in a future article.

Fedora Strategy 2028: High-Level View

Posted by Fedora Community Blog on May 08, 2024 01:37 PM

As described in Fedora Strategy 2028: April 2024 Update, we came out of our annual face-to-face meeting with a new presentation for our strategy for the next five years. That article gave the background — this is the high-level strategy itself.

Our Guiding Star

We’re going to double the number of contributors who are active every week.

What we’re measuring — and why

Our goal is to ensure that Fedora is healthy and sustainable. As a project, we’re generally in great shape. However, there are many areas where everyone feels under-resourced, and we have too many places where we have a very poor “yak farm factor” — if one or two people are ready for a change and go off to start new lives, will the areas they’re working in collapse? Plus, there’s always so much more exciting new stuff that we could be doing, and maybe need to do to remain relevant as the computing landscape changes.

We can measure aspects of this in many different ways: interconnectedness, onboarding, burnout, team resilience, and so many more. But, the weekly-active-contributor number gives us a simple, basic check. If that number is going up, we must be doing something right.

The metric isn’t the goal in itself. We don’t want to merely inflate a number, after all. So, we also plan to watch those other community health metrics, and we’ll adjust as needed to make sure that the Guiding Star is really leading us to the right path.

What is a contributor?

This means different things to different people and is often different across projects. However, for this purpose, we’re using a broad definition.

A Fedora Project contributor is anyone who:

  1. Undertakes activities
  2. which sustain or advance the project towards our mission and vision
  3. intentionally as part of the Project,
  4. and as part of our community in line with our shared values.

Fedora has numerous already-public data sources for activity, and we plan to use those as widely as possible. Unlike smaller projects, we can’t simply count commits in a git repo — and, I think that’s a good thing, because in order to get a meaningful number, we need to count more than just code and other technical contributions.

Next: Foundations and Focus Areas

Upcoming posts:

  • Freedom Foundation: Accessibility; Cross-Community Collaboration
  • Friends Foundation: Mentorship; Local Communities; Collaboration Tooling
  • Features Foundation: Preinstalled Systems; SIG Revamp; AI; Marketing
  • First Foundation: Atomic (“Immutable”); Language Stacks; Spins & Rebuilds

The post Fedora Strategy 2028: High-Level View appeared first on Fedora Community Blog.

Fedora Asahi Remix 40 is now available

Posted by Fedora Magazine on May 08, 2024 08:00 AM

We are happy to announce the general availability of Fedora Asahi Remix 40. This release brings the newly released Fedora Linux 40 to Apple Silicon Macs.

Fedora Asahi Remix is developed in close collaboration with the Fedora Asahi SIG and the Asahi Linux project. It was unveiled at Flock 2023 and first released in December 2023 with Fedora Asahi Remix 39.

In addition to all the exciting improvements brought by Fedora Linux 40, Fedora Asahi Remix brings conformant OpenGL 4.6 support to Apple Silicon. It also continues to provide extensive device support, including high quality audio out of the box.

Fedora Asahi Remix offers KDE Plasma 6 as our flagship desktop experience. It also features a custom Calamares-based initial setup wizard. A GNOME variant is also available, featuring GNOME 46, with both desktop variants matching what Fedora Linux offers. Fedora Asahi Remix also provides a Fedora Server variant for server workloads and other headless deployments. Finally, we offer a Minimal image for users who wish to build their own experience from the ground up.

You can install Fedora Asahi Remix today by following our installation guide. Existing systems running Fedora Asahi Remix 39 can be upgraded by following the usual Fedora upgrade process, sketched below.
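
For reference, the usual upgrade path with the standard dnf system-upgrade plugin looks roughly like this. These are stock Fedora commands, not Asahi-specific ones, so check the Asahi documentation for any Remix-specific notes first.

```bash
# Stock Fedora upgrade path (not Asahi-specific); back up your data first.
sudo dnf upgrade --refresh                        # bring Fedora Asahi Remix 39 fully up to date
sudo dnf install dnf-plugin-system-upgrade        # install the upgrade plugin if it is missing
sudo dnf system-upgrade download --releasever=40  # download the Fedora Linux 40 packages
sudo dnf system-upgrade reboot                    # reboot into the offline upgrade
```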

Please report any Remix-specific issues in our tracker, or reach out in our Discourse forum or our Matrix room for user support.

amazon-ec2-utils in Fedora

Posted by Major Hayden on May 08, 2024 12:00 AM

The amazon-ec2-utils package in Fedora makes it a bit easier to find devices in an AWS EC2 instance. 🌤️
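
As a hedged sketch of what that looks like on a Fedora EC2 instance (the exact device and symlink names depend on the instance and volume types, so treat these paths as examples):

```bash
# Install the package; it ships udev rules that add friendlier names for EC2 block devices.
sudo dnf install amazon-ec2-utils
# See which symlinks the rules created for attached EBS/NVMe volumes.
ls -l /dev/disk/by-id/
# Inspect what udev knows about one NVMe device (device name is an example).
udevadm info /dev/nvme0n1 | head
```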

Get Involved with Fedora Bootable Containers

Posted by Fedora Magazine on May 07, 2024 08:00 AM

For quite a while now, we’ve had image-based Fedora Linux variants—starting with Fedora Atomic Host and Atomic Desktop. The original variants evolved into Fedora CoreOS, Fedora IoT, a whole family of Fedora Atomic Desktops, and the awesome Universal Blue project. Bootable containers make it much simpler to create and collaborate on image-based Fedora systems. Here’s how you can get involved.

If you’ve used one of these image-based Fedora systems, you know how easy they are to update, upgrade, or, if things aren’t working quite right, to roll back. However, creating your own custom image-based Fedora system has always been a bit tricky, requiring special tools, processes and infrastructure.

Over the past couple of years, the tools and methods available for building image-based systems in Fedora have evolved. They now natively support OCI/Docker containers as a transport and delivery mechanism for operating system content. 

With these changes, and the introduction of bootc, we now have the tools to build image-based systems using ordinary Containerfiles and regular OCI-container build tools. We also have the infrastructure to define, build, deploy and manage Linux systems. 
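
As a minimal sketch of that workflow, the snippet below writes a Containerfile against a Fedora bootc base image and builds it with podman. The base image reference and the package installed are illustrative assumptions; check the Fedora bootc documentation for the currently supported bases.

```bash
# Build a custom bootable container image from a Fedora bootc base (illustrative).
cat > Containerfile <<'EOF'
FROM quay.io/fedora/fedora-bootc:40
RUN dnf -y install tmux && dnf clean all
EOF
podman build -t localhost/my-fedora-bootc:40 .
```

The result is an ordinary OCI image that can be pushed to a registry and then installed or switched to with the bootc tooling, just like an application container.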

For instance, the Fedora IoT WG plans to deliver two bootc containers for Fedora IoT users: a cut-down minimal image that users can use as a base for their own vision of Fedora IoT, and a second image that delivers the traditional Fedora IoT user experience.

Taking the Initiative

By working together on bootable container technologies, we have a great opportunity to enhance collaboration among image-based Fedora variants and to empower other projects and individual users to create their own Fedora-based derivatives.

That’s why I’m excited to announce that we’ve proposed a new Fedora Community Initiative to seize this opportunity. This initiative aims to bring together the Fedora working groups that build and promote image-based Fedora variants.

The contributors working on the proposed initiative will:

  • Identify opportunities to share base images
  • Identify and work through Fedora Infra issues
  • Document use of bootable container tools and processes
  • Promote the use of Fedora bootable containers with blog and mailing list posts, social media, and coordination with the Fedora Marketing Team
  • Reach out to projects outside of Fedora proper that might want to collaborate

Check out the initiative wiki page for more information, including who to reach out to if you’d like to get involved in the effort in the context of one of the Fedora variants. You can also join the discussion on Fedora’s Matrix instance in the fedora-bootc room, or on Fedora Discussion by following or posting with the bootc-initiative tag. Finally, check out this doc page to kick the tires on Fedora bootc for yourself.

Save the Date: Fedora 40 Release Party on May 24-25

Posted by Fedora Magazine on May 06, 2024 08:00 AM

After the hard work of pushing out the Fedora Linux 40 release, it’s time to celebrate with a release party! The Fedora 40 Release Party will take place on Friday and Saturday, May 24-25.

What is a release party?

Fedora Release Parties are virtual, user-focused conferences where the community comes together to talk about what’s new in the latest release of Fedora and where we’re going for future releases. Topics we’ve covered include the process of working through implementing a change and roadmaps for what different teams want to do next in Fedora. Sometimes there are updates from Fedora-associated groups who have something to share, like Amazon or Lenovo. We also have breaks for socials where we can talk to each other in video calls (you don’t have to share video or speak if you don’t want to). If you have an interest in a behind-the-scenes look at your favorite distro, come learn and hang out with the contributors who make it!

Where will it happen?

In previous years we used Hopin to run virtual conferences, but the Fedora 40 Release Party will be the first that we do in Matrix! We’ve wanted to do this since the Creative Freedom Summit showed how it could be done a couple of years ago. This is a step that allows us to lean more on open source for outreach.

However, we also want to be open in another way, and that’s with livestreaming. We will be streaming the talks on our Fedora Project YouTube channel. That way anyone can watch and the streams will be immediately available afterwards!

Details for registering will come soon, but for now please save the date for May 24-25!

We hope to see you there!

Learn more

Check out the Fedora 39 Release Party to get an idea of the kinds of topics we cover.

Get hyped on social media with the hashtag #FedoraReleaseParty!

Week 18 in Packit

Posted by Weekly status of Packit Team on May 06, 2024 12:00 AM

Week 18 (April 30th – May 6th)

  • During release sync, Packit will now upload to the lookaside cache any sources that are not specified by URLs but are present in the dist-git repo. Additionally, all actions run during release syncing now provide the PACKIT_PROJECT_VERSION environment variable (see the sketch after this list). (packit#2297)
  • We have introduced a new status_name_template option that allows you to configure the status name for a Packit job. For further details, have a look at our docs. This feature is still experimental, and at the moment it is not possible to retry such jobs via GitHub Checks' re-run. (packit-service#2402)
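
As a small illustration of the first item above, a script invoked from a Packit action during release sync can simply read the new environment variable. The script below is hypothetical and only shows that the variable is available, nothing more.

```bash
#!/bin/bash
# Hypothetical helper called from a Packit action during release sync.
# PACKIT_PROJECT_VERSION is exported by Packit for these actions (per the note above);
# everything else in this script is an illustrative assumption.
set -euo pipefail
echo "Syncing release for version ${PACKIT_PROJECT_VERSION}"
```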

Episode 427 – Will run0 replace sudo?

Posted by Josh Bressers on May 06, 2024 12:00 AM

Josh and Kurt talk about run0, a sudo replacement going into systemd. It sounds like it’ll get a lot right, but systemd is a pretty big attack surface and not everyone is a fan. We shall have to see if it ends up replacing sudo.
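
For context, run0 is invoked much like sudo. The examples below are a hedged sketch based on systemd v256, and behaviour may vary between versions.

```bash
# Run a single command with elevated privileges (authentication goes through polkit).
run0 dnf check-update
# With no command given, run0 opens an interactive privileged shell.
run0
```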

https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_427_Will_run0_replace_sudo.mp3

Show Notes