Fedora People

Fedora 31 elections voting now open

Posted by Fedora Community Blog on November 21, 2019 12:00 AM

Voting in the Fedora 31 elections is now open. Go to the Elections app to cast your vote. Voting closes at 23:59 UTC on Thursday 5 December. Don’t forget to claim your “I Voted” badge when you cast your ballot. Links to candidate interviews are below.

Fedora Council

There is one seat open on the Fedora Council.

Fedora Engineering Steering Committee (FESCo)

There are five seats open on FESCo.

Mindshare Committee

There is one seat open on the Mindshare Committee.

The post Fedora 31 elections voting now open appeared first on Fedora Community Blog.

FESCo election: Interview with Justin Forbes (jforbes)

Posted by Fedora Community Blog on November 20, 2019 11:55 PM

This is a part of the FESCo Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Thursday, 21 November and closes promptly at 23:59:59 UTC on Thursday, 5 December 2019.

Interview with Justin Forbes

  • Fedora account: jforbes
  • IRC nick: jforbes (found in #fedora-kernel #fedora-devel #fedora-qa #fedora-arm)
  • Fedora user wiki page

Questions

Describe some of the important technical issues you foresee affecting the Fedora community. What insight do you bring to these issues?

There is no question that modularity is the biggest technical issue affecting the Fedora community at the moment, and probably over the next year. I believe my insight comes from a few places. I was involved with rPath quite some time ago, where we tackled some of the issues that modularity is trying to solve. And as a kernel maintainer in my day-to-day job, I don’t have any particular stake in modularity, so I can view it objectively, with an eye to what is best for Fedora over the long term. I have been involved with Fedora for a very long time, and I have a vested interest in the continued improvement of Fedora and the success and growth of the community.

What objectives or goals should FESCo focus on to help keep Fedora on the cutting edge of open source development?

Right now, I think working out the issues with modularity should be our top priority, as the current state has become a problem. Outside of that, there is really a long list of details, but it can all be summarized with “ensure our processes for introducing new technologies are managed in such a way as to not alienate existing contributors, users, and use cases, while publishing stable releases with useful new features.”

This includes things like:

  • Improving our QA processes
  • Taking the time to improve our release process
  • Ensuring new changes are ready before wedging them into a release
  • Making sure changes are not too disruptive to existing workflows

What are the areas of the distribution and our processes that, in your opinion, need improvement the most? Do you have any ideas how FESCo would be able to help in those “trouble spots”?

Modularity continues to be a trouble spot. It is a great feature in theory, adding capabilities to the distro as shipped, but even more so for people who are using or deploying the distro in their own environments. In practice, it has not been managed as well as it could be. There was always some backlash from community members who just don’t care about modularity or don’t understand what it brings, but now there is also justified backlash about how things are currently being deployed. FESCo can help to ensure that further features around modularity are properly planned and executed, though there are a few fires to be put out first.

I also believe we can work to bake more security process automation into our workflow, to ensure that good practices are used for package builds, and CVEs are addressed in a timely manner.

FESCo election: interview with Peter Walter (pwalter)

Posted by Fedora Community Blog on November 20, 2019 11:55 PM

This is a part of the FESCo Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Thursday, 21 November and closes promptly at 23:59:59 UTC on Thursday, 5 December 2019.

Interview with Peter Walter

  • Fedora account: pwalter
  • IRC nick: pwalter (found in #fedora-devel)
  • Fedora user wiki page (none)

Questions

Describe some of the important technical issues you foresee affecting the Fedora community. What insight do you bring to these issues?

A lot of people are unhappy with how Modularity was “forced” on them in Fedora. I’d like to be a voice for this community and an advocate for going back to simple yum repos to ship the default package set, leaving Modularity strictly as an add-on one can choose, but doesn’t have to use.

What objectives or goals should FESCo focus on to help keep Fedora on the cutting edge of open source development?

Simplify Fedora packaging. Remove Modularity, which makes everything more complicated. Help more people get involved in Fedora development by simplifying processes.

What are the areas of the distribution and our processes that, in your opinion, need improvement the most? Do you have any ideas how FESCo would be able to help in those “trouble spots”?

In my opinion, Modularity is a “trouble spot”. It needs to go.

FESCo election: Interview with Kevin Fenzi (kevin)

Posted by Fedora Community Blog on November 20, 2019 11:55 PM

This is a part of the FESCo Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Thursday, 21 November and closes promptly at 23:59:59 UTC on Thursday, 5 December 2019.

Interview with Kevin Fenzi

  • Fedora account: kevin
  • IRC nick: nirik (found in #fedora-noc, #fedora-apps, #fedora-admin, #fedora-devel, #fedora, many more…)
  • Fedora user wiki page

Questions

Describe some of the important technical issues you foresee affecting the Fedora community. What insight do you bring to these issues?

I think that modularity and the issues around it are going to continue for a while. I hope I can provide some help in bringing the ‘let’s drop modularity and forget it happened’ and the ‘let’s modularize everything’ camps together on some solution that works not only for Fedora, but for our downstream distros too.

What objectives or goals should FESCo focus on to help keep Fedora on the cutting edge of open source development?

I think we really should look at a rework of our packaging workflow. Many of the things we set up in the past don’t make as much sense as they did then. Something like using signed tags to let the automation know what you want (build for X Y Z, or please run the CI on a build of this, or whatever). It’s not going to be easy, but I think if we design things right it will be a lot less work for our packagers.
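
As a minimal sketch, a tag-driven request like the one described could look as follows. Everything here is hypothetical: the “build/f32” tag naming scheme and the automation reacting to it are invented for illustration, and an unsigned annotated tag stands in for the GPG-signed tag a real setup would use.

```shell
# Hypothetical sketch of a tag-driven packager workflow: the packager
# pushes a tag whose name encodes a request, and automation watching
# dist-git reacts to it. The "build/f32" naming scheme is invented here.
set -e
workdir=$(mktemp -d)
cd "$workdir"
git init -q pkg
cd pkg
git config user.name packager
git config user.email packager@example.org
git commit -q --allow-empty -m "Update to 1.2.3"
# A real setup would use "git tag -s" (GPG-signed); -a is used here so
# the sketch runs without a signing key.
git tag -a build/f32 -m "please build this commit for f32"
git tag -l 'build/*'    # automation would see this tag and fire a build
```

The advantage of this shape is that the request travels with the repository itself and is attributable to a key, rather than living in a separate ticket system.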

What are the areas of the distribution and our processes that, in your opinion, need improvement the most? Do you have any ideas how FESCo would be able to help in those “trouble spots”?

Well, the workflow as noted. I think we just need to come up with a good framework and let the community help fill in the blanks. We did gather some good info on the devel list recently.

FESCo election interview: Zbigniew Jędrzejewski-Szmek (zbyszek)

Posted by Fedora Community Blog on November 20, 2019 11:55 PM

This is a part of the FESCo Elections Interviews series. Voting is open to all Fedora contributors who are a member of at least one non-CLA group. The voting period starts on Thursday, 21 November and closes promptly at 23:59:59 UTC on Thursday, 5 December 2019.

Interview with Zbigniew Jędrzejewski-Szmek

  • Fedora account: zbyszek
  • IRC nick: zbyszek (found in #fedora-devel, #systemd, #fedora-rust, #fedora-python, #fedora-neuro)
  • Fedora user wiki page

Questions

Describe some of the important technical issues you foresee affecting the Fedora community. What insight do you bring to these issues?

We have plenty of new stuff happening in package creation and
delivery: automatic generation of build requirements and reuse of
upstream project metadata, new packaging macros, library ABI checks,
packaging automation like Packit and Modularity, more CI capabilities,
rawhide side-tags and gating, new delivery
formats like Silverblue, containers, flatpaks, and modules. The challenge
is how to integrate those technologies into existing workflows so that
the old approaches can coexist and be gradually replaced; how to allow
hawkish packagers who are ready to switch to the latest and greatest
to move forward quickly, without breaking workflows of more
conservative maintainers. We also need to keep in mind packaging
outside of the core distribution: copr users, spins, distributions
which rebuild our packages, people who maintain in-house packages.

We declare that allowing people to build solutions on top of Fedora is
our goal, and we need to weigh the changes that we introduce in this
light.

FESCo does not have the ability to tell people what to do. It is only
useful as a place where questions are asked, and voices of interested
parties are recorded. As a FESCo member my goal will be to ask
questions and gather feedback until any given issue is clear. We can’t
achieve unanimity, and we can’t always satisfy everyone, but we need
to anticipate problems and minimize disruption while still allowing
the technology to move forward.

What objectives or goals should FESCo focus on to help keep Fedora on the cutting edge of open source development?

In the next two release cycles:

  • Make gating finally widely used, by concentrating on general tests
    that are applicable to a wide range of packages, to catch
    install/uninstall/upgrade issues, library ABI compatibility breaks,
    missing dependencies. This will make packaging more robust, and empower
    packagers to move on to more interesting tasks.
  • Finish the transition to Python 3, keep only those Python 2 packages
    that we absolutely must.
  • Transition to Java 11 and make the Java ecosystem healthy again.
  • Figure out how to enable the good parts of Modularity, as implemented
    now or as equivalent functionality, without making things difficult for
    everyone else.

On a slightly longer timescale: automate the packaging of language-specific
stacks like Python or Rust by reusing upstream metadata and making the
package creation and update mechanism as simple as possible. Make the
flow from upstream releases to distro packages fully automatic.

What are the areas of the distribution and our processes that, in your opinion, need improvement the most? Do you have any ideas how FESCo would be able to help in those “trouble spots”?

No big surprises considering what I wrote above: we have a lot of
friction where new technologies are being introduced, and the new
conflicts with the old. Not every project that is brought forward is
ultimately worth integrating in Fedora, and FESCo should ask hard
questions and require contingency plans and coordinate reverts if need
be.

Another problem is the stagnation or even shrinking of the number of
contributors. I believe the technical problems discussed above are
significantly contributing to this: the “packaging story” is nowadays
much more complicated and uncertain and less documented than it used
to be; at the same time many experienced packagers are busy fighting
fires and fighting on the mailing lists, so we don’t have time for
newcomers and docs and polish. This is a problem now, but the upside
is that once those pressing issues are solved, the experience for
new packagers will improve and we can hope to grow the project again.

FESCo election interview: Randy Barlow (bowlofeggs)

Posted by Fedora Community Blog on November 20, 2019 11:55 PM

This is a part of the FESCo Elections Interviews series. Voting is open to all Fedora contributors who are a member of at least one non-CLA group. The voting period starts on Thursday, 21 November and closes promptly at 23:59:59 UTC on Thursday, 5 December 2019.

Interview with Randy Barlow

  • Fedora account: bowlofeggs
  • IRC nick: bowlofeggs (found in #fedora-apps, #fedora-admin, #fedora-noc, #fedora-devel, #redhat-cpe)
  • Fedora user wiki page

Questions

Describe some of the important technical issues you foresee affecting the Fedora community. What insight do you bring to these issues?

There have been many regressions over the past few years in the ease of use of the tooling packagers need to deliver software to Fedora’s users. Quite a few things are manual now that used to be automatic. As a member of the infrastructure group, I have some first-hand knowledge of how and why these changes happened, and I have ideas on how we can improve them.

There is also a project on the horizon aimed at bringing the CentOS and Fedora dist-gits together. I’ve been working on gathering requirements for this project with some other folks, and it has the potential to lead to many proposed technical changes.

What objectives or goals should FESCo focus on to help keep Fedora on the cutting edge of open source development?

I think Fedora is more difficult to contribute to than it needs to be. Some of this is due to the usability regressions I mentioned above, but I also think the barrier to entry is fairly high to join the package maintainers. There are some other distributions that allow contributors to simply open pull requests on GitHub, as an example of how low the barrier to entry could be. We, in comparison, require contributors to sign a CLA, submit a package review, perform several practice package reviews, and find a sponsor to join. I suspect that this limits our contributor pool significantly. I think it would be wise to rethink how we operate here, and find ways to lower the barrier to entry for new contributors. One example is to make it easier for contributors to send patches to the project without requiring them to be packagers.

What are the areas of the distribution and our processes that, in your opinion, need improvement the most? Do you have any ideas how FESCo would be able to help in those “trouble spots”?

As we’ve seen on the devel list recently, there is friction within the community around the modularity project. Some of the issues are technical, some of them are related to policy, and likely most of them are due to misunderstandings of communication and of the project itself. I think FESCo can help here by working to clarify these misunderstandings so that everybody can see the problem statements and proposed solutions more clearly.

FESCo election: interview with Miro Hrončok (churchyard)

Posted by Fedora Community Blog on November 20, 2019 11:55 PM

This is a part of the FESCo Elections Interviews series. Voting is open to all Fedora contributors who are a member of at least one non-CLA group. The voting period starts on Thursday, 21 November and closes promptly at 23:59:59 UTC on Thursday, 5 December 2019.

Interview with Miro Hrončok

  • Fedora account: churchyard
  • IRC nick: mhroncok (found in #fedora-devel, #fedora-python, #fedora-ambassadors, #fedora-3dprinting, #fedora-meeting, #fedora-meeting-1)
  • Fedora user wiki page

Questions

Describe some of the important technical issues you foresee affecting the Fedora community. What insight do you bring to these issues?

I think that the most important issue the Fedora community is facing at the moment, and will keep facing for the foreseeable future, is not really technical but instead a communication problem of how to talk about our technical changes and challenges. At FESCo level I will continue to work on:

  • enforcing policies that aren’t being followed (which leads to stagnation), for example problems where nobody communicated failures and breakage in a timely manner;
  • adjusting our policies to reflect community feedback, focusing on better notifications about relevant issues;
  • encouraging Fedora contributors (e.g. change owners) to communicate their intent and impact properly and making sure that the plans are adjusted based on the feedback they get from the community;
  • making it easier to contribute when blocked by nonresponsive maintainers;
  • assuring that impactful changes are communicated through the Fedora Change Process (and, if needed, changing the process itself to make it easier);
  • redirecting technical discussions from FESCo tickets to the appropriate mailing lists (most of the time the devel list), where there’s a greater chance of engaging community members in the discussion.

One of the biggest things that is currently happening in Fedora is Modularity. It has high-impact technical challenges. In my opinion, FESCo must assure that the technical issues our community is experiencing are addressed, even if it means we decide not to deliver certain Modularity features until that happens. I’m quite concerned that alienating a big part of the contributor community would be much harder to remedy than not delivering some features in the next Fedora release.

I’ll end this answer in the same way I did in the previous interview, because I still stand by that opinion: things in Fedora Engineering are moving forward at great speed. However, I think we should regularly take a moment to slow down and look back. How do the changes we made affect the life of an average Fedora packager? Tester? Translator? How does this affect our packaging standards? Is there buy-in from the community, or is it just for the ones involved?

What objectives or goals should FESCo focus on to help keep Fedora on the cutting edge of open source development?

I see two main concerns in this question:

First: to keep Fedora on the cutting edge of open source development. For Fedora contributors, we must make sure that they can contribute to Fedora as easily as possible. As an example, I believe we’ve managed to make it easier to unblock contributors when it comes to nonresponsive maintainers or packages that fail to build from sources. As FESCo, we can hardly go and fix all the ignored packages and build failures, but instead we have made the processes easier for individual contributors by adjusting the existing policies. We should continue in this and strive to make Fedora a more collaborative environment. One thing that I would like to see changed is that packages or modules (components generally) are still pretty much owned by individuals. Most of them are awesome and open to collaboration, yet some of them are unfortunately acting like roadblocks rather than as experts open to suggestions. We should make sure that nothing in Fedora is private property, but rather implement a collaborative workflow and environment.

Throughout my past year at FESCo, I’ve helped to change some of our policies to improve the situation and I’d like to continue to do so.

Secondly, to keep Fedora on the cutting edge of open source development for people using Fedora as their platform of choice, we must try to hear from them and model their use cases. When considering a change (or the status quo) in Fedora, we should ask how it impacts a Rust developer, a web designer, a Lua hacker, a Linux kernel engineer, a JavaScript student, a Python data analyst, a hardware specialist, a QA engineer, or any other kind of developer. For example, what happens if we deliberately delay a Fedora release for half a year? Will those people get the cutting edge tools they need? Will we provide the tools for them via regular updates or in some other way? How do we achieve it without telling everyone to use COPR or rawhide, without exhausting our contributors? And so on. We should figure such things out before making a tough decision.

During my past year at FESCo, I’ve asked myself such questions when approving or rejecting change proposals or other kinds of tickets. In several cases, I have even asked the change owners similar questions.

What are the areas of the distribution and our processes that, in your opinion, need improvement the most? Do you have any ideas how FESCo would be able to help in those “trouble spots”?

As for the distribution, I think it’s the modularity/non-modularity gap between people. For our processes, the nonresponsive-maintainers and fail-to-build-from-source / fail-to-install policies have recently been adapted, as they needed improvement the most. But the job is never done; we need to keep looking at these policies and adapting them to make Fedora not only a healthier tech stack, but also a healthier community. FESCo should assess new Fedora changes not only on the basis of how they make Fedora superior, but also on how they affect current Fedora contributors and whether the tooling for them is ready (or at least planned as part of a Fedora change).

FESCo election: interview with Fabio Valentini (decathorpe)

Posted by Fedora Community Blog on November 20, 2019 11:55 PM

This is a part of the FESCo Elections Interviews series. Voting is open to all Fedora contributors who are a member of at least one non-CLA group. The voting period starts on Thursday, 21 November and closes promptly at 23:59:59 UTC on Thursday, 5 December 2019.

Interview with Fabio Valentini

  • Fedora account: decathorpe
  • IRC nick: decathorpe (found in #fedora-{golang,java,python,ruby,rust,stewardship,meeting*})
  • Fedora user wiki page

Questions

Describe some of the important technical issues you foresee affecting the Fedora community. What insight do you bring to these issues?

One of the big issues I see today is the increasingly large number of packages that fail to build or install in Fedora, which seems to have roughly doubled between Fedora 29 and rawhide, according to my data. I am trying to reintroduce a regular dependency check report for rawhide (and maybe stable/testing as well), which would at least make the problem more visible, and provide pointers to the most problematic missing dependencies.
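
As a hedged sketch, a report like this could be built on the repoclosure plugin from dnf-plugins-core, which lists packages whose dependencies cannot be satisfied from the given repositories (the repository URL below is a placeholder):

```
$ dnf repoclosure \
      --repofrompath=rawhide,https://example.org/fedora/rawhide/Everything/x86_64/os/ \
      --repo rawhide
```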

There’s also the fallout from the – currently incomplete (or broken, depending on who you ask) – implementation of Modularity, which has caused upgrade issues (the “libgit2 issue”), various issues around the Java stack, including the broken eclipse packages in Fedora 31+ and the “forced move” to modules (or even the recommendation to use the flatpak version instead), and so on. I’ve been actively working to keep the non-modular Java stack maintained under the umbrella of the Stewardship SIG, so packagers who can’t (or won’t) move their packages into modules don’t suffer from this current, broken situation.

What objectives or goals should FESCo focus on to help keep Fedora on the cutting edge of open source development?

With stagnating contributor numbers and an ever-growing package base, I think the most important thing to tackle is making the packager workflow more streamlined. I already see progress being made in the right direction here (for example, making package un-orphaning a push-button action in Pagure instead of having to go through a releng ticket manually). This both makes working on Fedora packages easier for existing maintainers, and should make Fedora a more attractive, modern project to contribute to for newcomers. The multi-build update workflow for rawhide with on-demand side-tags also seems like a good improvement, both for packagers, and for the stability of rawhide. I think a gating check that rejects packages which introduce broken dependencies into the repository would be a good addition, as well (and it might help reduce the number of broken dependencies over time).

Improvements in this area directly benefit the stability of Fedora and the ability of packagers to deliver new features faster – which means Fedora can continue to be the best platform for developers.

What are the areas of the distribution and our processes that, in your opinion, need improvement the most? Do you have any ideas how FESCo would be able to help in those “trouble spots”?

I see a lot of issues surrounding Modularity (both regarding policy and implementation), but that topic has already sparked more debates than any other change in Fedora since I joined a few years ago, so let’s not focus on that for now 😉

Other than that, I see the large (and growing) number of packages with broken dependencies as a big issue, and a sign that the Fedora repositories aren’t in a great state – not even mentioning the fact that dependencies that aren’t satisfiable from the official repos might pose a legal risk for Fedora, as well.

I think the updated FTBFS and Non-responsive maintainer processes already help here, since they make the problem of broken packages more visible for dependent packagers, and introduce an automatic cleanup of packages that fail to build and that nobody cares about.

Another area that I sometimes find a bit lacking is documentation for some common tasks (like getting a package’s dependency tree, or querying things that need to be rebuilt in case of ABI changes, …). While there is documentation and policy covering these tasks, it might be good to give concrete examples for correctly determining the impact of a change. This might also help to reduce the number of broken updates (and surprise SONAME bumps) in Fedora, making development of rawhide more stable and predictable.
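
Hedged examples of the two tasks mentioned above could look something like the following (the package and SONAME names are placeholders):

```
# Which source packages need a rebuild when a library's SONAME bumps?
$ dnf repoquery --whatrequires 'libexample.so.1()(64bit)' --qf '%{sourcerpm}'

# Which packages does a package require, resolved to concrete packages?
$ dnf repoquery --requires --resolve example-package
```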

FESCo election: interview with David Cantrell (dcantrel)

Posted by Fedora Community Blog on November 20, 2019 11:55 PM

This is a part of the FESCo Elections Interviews series. Voting is open to all Fedora contributors who are a member of at least one non-CLA group. The voting period starts on Thursday, 21 November and closes promptly at 23:59:59 UTC on Thursday, 5 December 2019.

Interview with David Cantrell

  • Fedora account: dcantrel
  • IRC nick: dcantrell (found in #fedora-devel, #fedora-qa, and #fedora-ambassadors and other channels for projects I am involved in)
  • Fedora user wiki page

Questions

Describe some of the important technical issues you foresee affecting the Fedora community. What insight do you bring to these issues?

  • Developer controls for gating and CI. A lot of work has been happening in the context of continuous integration. We created new services, developed processes, and wrote tests. These are all beneficial. I think Fedora needs to ensure we implement developer tools that do not disrupt workflows and which are stable. In my project rpminspect, a Koji RPM and module build analysis tool, I think about developers who are running it to compare builds. A comparison of builds of zlib is very different from comparing two kernel builds, yet I still have a desire to make the tool work for both use cases, so I have added functionality to ensure it will. As we work on projects for gating and CI, we need to keep in mind the broad range and types of software that make up Fedora.
  • Modularity and the many ways we make software available to users. Modularity is a topic that continues to get a lot of attention on the mailing lists. I want to see modularity discussions focus on goals and technical requirements. We can look at modularity as something contributed by Red Hat, a Fedora Project community member. Red Hat had a set of use cases and a deadline. Now we have it in Fedora because Fedora serves as the upstream for RHEL. We should remember that Red Hat relies on Fedora for technical leadership upstream. We should view modularity as a new starting point for a technical discussion on how we deliver software to users. Remember that RHEL-5 shipped with Xen virtualization, and the migration to KVM was worked out in Fedora, so RHEL-6 shipped with KVM.
  • Improve integration between our tools and services. I will use the Packit project as an example. Packit is an exciting project because it offers a bridge between upstream development and building for Fedora. And they are interested in improving that developer workflow and making it easier for software to be available in Fedora. Just like Packit, I see tools and infrastructure that we could focus on to improve the experience for package maintainers and other contributors. I would like to see the fedpkg command work for packages that pull their source archives from an upstream git repo. As a package maintainer, I would like to specify a git repo and a tag, and then have fedpkg take care of the rest. That reduces the work I need to do with dist-git and leaves me more time to focus on bug reports, tracking upstream changes, and submitting patches there. There are a lot of examples like this where small improvements can contribute to improving the developer experience. Fedora needs to treat these problems with a high enough priority to both keep developers interested in working on the project and to avoid process fatigue, where we feel we have no time to improve things because there’s so much work to do, which is itself a result of not improving our tools.

What objectives or goals should FESCo focus on to help keep Fedora on the cutting edge of open source development?

There is a lot I think FESCo can do here. FESCo should continue to communicate that it is a policy setting body and not a design body. Where a technical decision needs to be made that affects the project as a whole, FESCo should lead that process and get to a conclusion. They do that, but I think this is a definition that bears repeating.

Goals:

  • Release policy. I think it is time to revisit how we define and make releases. There is no question that the release cycle is a significant stress factor for the community. I feel that with the introduction of automated testing services and gating tests, we can discuss redefining what a release means and make more of the development asynchronous. For example, if we were to define Fedora releases as coming from a stable tag in git, release engineering and QA could trust that building a release from that tag would be usable. Releases could then be happening at any point in time that we see fit. They could be rollups of a dot zero release plus updates. This example is hypothetical, but I think the classic Linux distribution release model is something worth discussing and possibly redefining given our new workflow abilities.
  • Contributing changes to upstream projects to help with the minimization effort. I have been at Red Hat for a while, and the topic of “minimizing the install” is a joint discussion for RHEL and Fedora. Nearly all talks end with it being an intractable problem because you have to agree on what minimal means. That goes beyond just defining the files. It also represents functionality. The minimization efforts underway look like good starts, but I would like to see us work with upstream projects to make more functionality conditional at build time if it is not required, or to contribute support for building with other libraries. For example, if a program includes HTTP download support, but we decide that it is not part of minimal, we should work to contribute patches that let us build it without libcurl (or whatever the library is).
    Another example is software with line editing capabilities using readline or libedit. A minimization effort should work to have one line editing library installed, so patches to use that other line editing library help. Too many discussions focus on the built RPMs when minimizing them, but we have made those to enable nearly all functionality and be interdependent. To truly get minimal, we need to think deeper.
  • DNF integration with language package managers. I was toying with this idea earlier in the year: I think it would be neat to be able to do “dnf install --pip PIP_PACKAGE” or “dnf install --cpan CPAN_PACKAGE”. Integrating these per-language package managers into our system-wide package management could be interesting.
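The build-time conditionality idea above can be sketched in C. This is a hedged illustration only: the HAVE_LIBCURL macro and the fetch_url() function are hypothetical names invented for the example, not from any real project.

```c
#include <stdio.h>

/* HAVE_LIBCURL would normally be set by ./configure or meson;
 * the macro name is hypothetical and only for illustration. */
#ifdef HAVE_LIBCURL
#include <curl/curl.h>

/* Full build: fetch the URL with libcurl. */
int fetch_url(const char *url)
{
    CURL *curl = curl_easy_init();
    if (curl == NULL)
        return -1;
    curl_easy_setopt(curl, CURLOPT_URL, url);
    CURLcode rc = curl_easy_perform(curl);
    curl_easy_cleanup(curl);
    return rc == CURLE_OK ? 0 : -1;
}
#else
/* Minimal build: the feature compiles out and libcurl is never linked. */
int fetch_url(const char *url)
{
    (void)url;
    fprintf(stderr, "HTTP download support not built in\n");
    return -1;
}
#endif
```

A minimal build simply omits -DHAVE_LIBCURL and the -lcurl link flag, and the HTTP code path compiles out entirely, which is the kind of upstream patch the minimization effort could contribute.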

I’m sure I can think of more here, but I have already written a lot of text.

What are the areas of the distribution and our processes that, in your opinion, need improvement the most? Do you have any ideas how FESCo would be able to help in those “trouble spots”?

Aside from everything I have already mentioned, I think Fedora should
be working to improve the contributor experience. Helping a new
community member get started in Fedora is confusing for me, and I have
been doing this for a long time. Having a getting started guide, and
current documentation is part of that, but our tools and services
should be more discoverable for new and existing contributors. I
think FESCo could help here by defining some minimum requirements for
developer tools and services. That could also help identify existing
tools and services we should prioritize to bring them up to a better
state.

The post FESCo election: Interview with Justin Forbes (jforbes) appeared first on Fedora Community Blog.

Council election: Interview with Alberto Rodríguez Sánchez (bt0dotninja)

Posted by Fedora Community Blog on November 20, 2019 11:50 PM

This is a part of the Council Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Thursday, 21 November and closes promptly at 23:59:59 UTC on Thursday, 5 December 2019.

Interview with Alberto Rodríguez Sánchez

  • Fedora account: bt0dotninja
  • IRC nick: bt0 (found in #fedora-commops, #fedora-latam, #fedora-join, #fedora-neuro, #fedora-mindshare, and more)
  • Fedora user wiki page

Questions

Why are you running for Council?

I know that it is a great responsibility, and I also know that the time of my fellow contributors is very valuable, so I don’t want to waste it. I will be at every meeting and comment on every ticket, always doing my best.

Why should people vote for you?

I think we can reproduce some of the ideas that come from the experience of running the monthly User/Contributor meetings in Mexico. Those experiences are helping me grow as a contributor and improve my teamwork skills. I feel that I can do good work on the Council.

What do you want to accomplish as a member of the Fedora Council?

My goal is to propose actions that help us increase the number of active collaborators, particularly in packaging, design, and D&I; this will have an impact (at least in theory) on our user base.

The post Council election: Interview with Alberto Rodríguez Sánchez (bt0dotninja) appeared first on Fedora Community Blog.

Council election: Interview with John M. Harris, Jr. (johnmh)

Posted by Fedora Community Blog on November 20, 2019 11:50 PM

This is a part of the Council Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Thursday, 21 November and closes promptly at 23:59:59 UTC on Thursday, 5 December 2019.

Interview with John M. Harris, Jr.

  • Fedora account: johnmh
  • IRC nick: johnmh (found in #fedora-commops, #fedora-games, #fedora-kde, #fedora-3dprinting, #fedora-security, #fedora-devel, #openblox)
  • Fedora user wiki page

Questions

Why are you running for Council?

I believe that we’ve been rushing to make change where there is no call for it recently. We may be inadvertently ostracizing users and developers by moving from conventional tools, and moving away from our Four Foundations: Freedom, Friends, Features and First.

For example, recently users were provided with easy ways to install proprietary software on Fedora (NVIDIA proprietary drivers, Google Chrome browser), without being told why we don’t have proprietary software (other than firmware) in the repositories to begin with. More and more, we often seem to be overlooking the first of the Four Foundations, Freedom.

Why should people vote for you?

I hope to represent the views of those who believe that Free Software is still important.

What do you want to accomplish as a member of the Fedora Council?

I want to work towards ensuring that Freedom is always considered when considering changes to Fedora, and that, when possible, Fedora is not used as a testing ground for projects like Modularity.

The post Council election: Interview with John M. Harris, Jr. (johnmh) appeared first on Fedora Community Blog.

All systems go

Posted by Fedora Infrastructure Status on November 20, 2019 10:39 PM
Service 'The Koji Buildsystem' now has status: good: Everything seems to be working.

There are scheduled downtimes in progress

Posted by Fedora Infrastructure Status on November 20, 2019 10:03 PM
Service 'The Koji Buildsystem' now has status: scheduled: Planned koji outage happening now.

Hanging the Red Hat

Posted by Alberto Ruiz on November 20, 2019 06:22 PM

This is an extract of an email I just sent internally at Red Hat that I wanted to share with the wider GNOME and Freedesktop community.

After 6+ wonderful years at Red Hat, I’ve decided to hang the fedora and go try new things. For a while I’ve been craving a new challenge, and I’ve felt the urge to try other things outside the scope of Red Hat, so with great hesitation I’ve finally made the jump.

I am extremely proud of the work done by the teams I have had the honour to run as engineering manager. I met wonderful people, worked with extremely talented engineers, and learned lots. I am particularly proud of the achievements of my latest team: from growing the bootloader team and improving our relationship with GRUB upstream, to our wins teaching Lenovo how to do upstream hardware support, to improvements in Thunderbolt, Miracast, Fedora/RHEL VirtualBox guest compatibility… the list goes on, and credit goes mostly to my amazing team.

Thanks to this job I have been able to reach out to other upstreams beyond GNOME, like Fedora, LibreOffice, the Linux Kernel, Rust, GRUB… it has been an amazing ride and I’ve met wonderful people in each one of them.

I would also like to make a special mention of my manager, Christian Schaller, who has supported me all the way, both professionally and personally. There is this thing people say: “people do not leave companies, they leave managers”. Well, this is certainly not the case here; in Christian I have found not only a great manager but a true friend.


As for my experience at Red Hat: I have never lasted more than two years in the same spot before, but I truly found my place there. Deep in my heart I know I will always be a Red Hatter, but there are some things I want to try and learn elsewhere. This job switch has been the hardest departure I have ever had, and in many ways it breaks my heart to leave. If you are considering joining Red Hat, do not hesitate; there is no better place to write and advocate for Free Software.

I will announce what I will be doing next once I start in December.

Set up single sign-on for Fedora Project services

Posted by Fedora Magazine on November 20, 2019 08:00 AM

In addition to an operating system, the Fedora Project provides services for users and developers. Services such as Ask Fedora, the Fedora Project wiki and the Fedora Project mailing lists help users learn how to best take advantage of Fedora. For developers of Fedora, there are many other services such as dist-git, Pagure, Bodhi, COPR and Bugzilla for the packaging and release process.

These services are available with a free account from the Fedora Accounts System (FAS). This account is the passport to all things Fedora! This article covers how to get set up with an account and configure Fedora Workstation for browser single sign-on.

Signing up for a Fedora account

To create a FAS account, browse to the account creation page. Here, you will fill out your basic identity data:

<figure class="wp-block-image"><figcaption>Account creation page</figcaption></figure>

Once you enter your data, the account system sends an email to the address you provided, with a temporary password. Pick a strong password and use it.

<figure class="wp-block-image"><figcaption>Password reset page</figcaption></figure>

Next, the account details page appears. If you want to contribute to the Fedora Project, you should complete the Contributor Agreement now. Otherwise, you are done and you can use your account to log into the various Fedora services.

<figure class="wp-block-image"><figcaption>Account details page</figcaption></figure>

Configuring Fedora Workstation for single sign-on

Now that you have your account, you can sign into any of the Fedora Project services. Most of these services support single sign-on (SSO), so you can sign in without re-entering your username and password.

Fedora Workstation provides an easy workflow to add your Fedora credentials. The GNOME Online Accounts tool helps you quickly set up your system to access many popular services. To access it, go to the Settings menu.

<figure class="wp-block-image"></figure>

Click on the option labeled Fedora. A prompt opens for you to provide your username and password for your Fedora Account.

<figure class="wp-block-image"></figure>

GNOME Online Accounts stores your password in GNOME Keyring and automatically acquires your single sign-on credentials for you when you log in.

Single sign-on with a web browser

Today, Fedora Workstation supports three web browsers out of the box with single sign-on support for the Fedora Project services: Mozilla Firefox, GNOME Web, and Google Chrome.

Due to a bug in Chromium, single sign-on doesn’t work currently if you have more than one set of Kerberos (SSO) credentials active on your session. As a result, Fedora doesn’t enable this function out of the box for Chromium in Fedora.

To sign on to a service, browse to it and select the login option for that service. For most Fedora services, this is all you need to do; the browser handles the rest. Some services such as the Fedora mailing lists and Bugzilla support multiple login types. For them, select the Fedora or Fedora Account System login type.

That’s it! You can now log into any of the Fedora Project services without re-entering your password.

Special consideration for Google Chrome

To enable single sign-on out of the box for Google Chrome, Fedora takes advantage of certain features in Chrome that are intended for use in “managed” environments. A managed environment is traditionally a corporate or other organization that sets certain security and/or monitoring requirements on the browser.

Recently, Google Chrome changed its behavior and it now reports Managed by your organization or possibly Managed by fedoraproject.org under the ⋮ menu in Google Chrome. That link leads to a page that says, “If your Chrome browser is managed, your administrator can set up or restrict certain features, install extensions, monitor activity, and control how you use Chrome.” However, Fedora will never monitor your browser activity or restrict your actions.

Enter chrome://policy in the address bar to see exactly what settings Fedora has enabled in the browser. The AuthNegotiateDelegateWhitelist and AuthServerWhitelist options will be set to *.fedoraproject.org. These are the only changes Fedora makes.
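Those two policies are plain JSON in Chrome’s managed-policy directory (/etc/opt/chrome/policies/managed/ on Linux). As a sketch, the file Fedora ships would contain something like the following; the exact file name is an assumption, but the policy keys are the two named above:

```json
{
  "AuthServerWhitelist": "*.fedoraproject.org",
  "AuthNegotiateDelegateWhitelist": "*.fedoraproject.org"
}
```

You can verify the effective values yourself on the chrome://policy page mentioned above.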

Growing the fwupd ecosystem

Posted by Richard Hughes on November 19, 2019 12:26 PM

Yesterday I wrote a blog about what hardware vendors need to provide so I can write them a fwupd plugin. A few people contacted me telling me that I should make it more generic, as I shouldn’t be the central point of failure in this whole ecosystem. The sensible thing, of course, is growing the “community” instead, and building up a set of (paid) consultants that can help the OEMs and ODMs, only getting me involved to review pull requests or for general advice. This would certainly reduce my current feeling of working at 100% and trying to avoid burnout.

As a first step, I’ve created an official page that will list any consulting companies that I feel are suitable to recommend for help with fwupd and the LVFS. The hardware vendors would love to throw money at this stuff, so they don’t have to care about upstream project release schedules and dealing with a grumpy maintainer like me. I’ve pinged the usual awesome people like Igalia, and hopefully more companies will be added to this list during the next few days.

If you do want your open-source consultancy to be added, please email me a two paragraph corporate-friendly blurb I can include on that new page, also with a link I can use for the “more details” button. If you’re someone I’ve not worked with before, you should be in a position to explain the difference between a capsule update and a DFU update, and be able to tell me what a version format is. I don’t want to be listing companies that don’t understand what fwupd actually is :)

Stories from the amazing world of release-monitoring.org #8

Posted by Fedora Community Blog on November 19, 2019 06:23 AM

The evening wind was cold, but I protected myself with a fire spell. It was nice to sit outside and look at the whole release-monitoring.org realm in the sunset. One could see the beauty behind all this hard work, and it ignites a nice feeling inside one’s heart. Lately I haven’t had much time to appreciate this beauty. To be honest, I haven’t had much time to work on this realm at all in the last few months. But still, some work was done even here.

I heard footsteps behind me. “Traveler, it’s nice to see you again. Do you want to join me?” The footsteps stopped beside me, and my companion looked at the sunset with me. “I suppose you are here to hear the news from this world. I assure you there are many things I want to share with you. Just listen…”

Wizard’s journey to OpenAlt

At the beginning of this month I visited the symposium of mages, OpenAlt, in the ancient city of Brno (Czech Republic). It was a nice place with interesting people; I learnt plenty of new things, and as every mage knows, magic is about knowledge. If you want to see how it was, there are already some pictures (Saturday and Sunday galleries) painted by a talented artist. If you look at the galleries more closely, you can see that I represented this realm with my pointy hat. 😉

Saturday on OpenAlt

On Saturday I visited a few interesting talks from different realms. The first was about the realm of Bootloader, from wizard Jan Hlaváč. He talked about the history and current state of the Bootloader realm in the Fedora universe. It was interesting to hear about the various solutions introduced in the Bootloader realm: MBR, GPT, GRUB, and Secure Boot, to name a few.

The second really interesting talk was Freedom and openness of the internet in Europe, presented by Marcel Kolaja, who is a Vice-President of the European Parliament. I think the mage of release-monitoring.org still sounds better, but it is a pretty impressive title nonetheless. The talk introduced legislation that poses a serious threat to the freedom and openness of the internet. One piece is the Digital Copyright Law, which unfortunately has already been accepted by the European Parliament; the second, which tries to address terrorist content on the internet, is now in the trilogue phase (a discussion between three parties). This talk revealed much about the processes of the European Parliament and was interesting even for a wizard from a completely different realm.

I spent the rest of the day talking with other mages about various topics from different realms. It was time well spent, and I learnt a few new spells.

Sunday on OpenAlt

On Sunday I was busier; there were so many interesting talks I didn’t know where to go first. But there was one talk I couldn’t miss: my own. Let’s go through the talks one by one.

The first was from Jiří Konečný, about the Anaconda world. This world is part of the Fedora universe, and it’s the first thing you will encounter if you want to become a part of the Fedora universe (Anaconda is an OS installer, not only for Fedora). This talk was about the various other realms that were created thanks to Anaconda. It was nice to see how many things could develop from a world like Anaconda.

Then it was time for my own talk. At this symposium I talked about something other than release-monitoring.org. I started by introducing myself and why I have a wizard hat. Then I started talking about the worlds of Silverblue and LibreELEC and how you could use them to create a vision-impairing spell (a Home Theater PC). I even had some nice tricks (demos) prepared, but one of them didn’t work. In the end we spent most of the time discussing various topics about these worlds. I think the talk went well and was interesting for everyone who attended.

The next talk was again from Jiří Konečný, but this time he talked about the Python universe and its history and current state. It was a funny presentation with plenty of Monty Python humour (Python is named after Monty Python). I learnt a few new things I didn’t know, and in the second part of the talk I also saw what is wrong with some new things in this universe. I think this was an interesting talk for anyone who visits the Python universe.

The next talk was from Pavel Baksy, and it was a little darker: a talk about security and privacy. Security is a pretty important topic for a number of mages, and it is interesting to see how some organizations try to invade people’s privacy. He recommended joining the world of NextCloud and using it to keep all your magical books and manuscripts safely hidden. It was an interesting talk for anyone who wants to take security and privacy seriously.

The last talk on Sunday for me was Gaming on Linux, from Jiří Folta. This was a pretty nostalgic talk for me, because I have played most of the games he covered in his gaming history. I was, and still am, an active Linux gamer myself, although my wizard responsibilities take up much more time these days. He also talked about the current state of the Linux gaming world, and it’s a pretty happy world to live in right now. I should take a vacation and travel there for some time.

I also spent some time with mages from the worlds of Mozilla and Fedora Silverblue; both talked about very interesting topics like Flatpaks and what we can expect from them in the future. To summarize, it was a really nice weekend, although a really tiring one.

Anitya 0.17.2 released

Anitya 0.17.0 was released on 3rd September, and after some time in purgatory (staging) a few issues were identified, which were fixed in 0.17.1 and 0.17.2. This version is now live and available for entities (users) who want to use the latest features, such as:

  • Teach the Anitya scribes new languages (add semantic and calendar version schemes). This will allow us to better understand the news we are receiving (sorting by schemes other than the RPM version scheme).
  • Don’t bother the projects we are watching too often (use the If-Modified-Since HTTP header when checking for new versions)
  • Add a new worker for libraries.io (the libraries.io consumer is now part of Anitya)
  • Lifting curses (Fixing bugs)
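To illustrate why a dedicated semantic version scheme matters: plain string ordering gets versions like “1.2.10” vs “1.2.9” wrong. A minimal, hedged sketch of numeric three-component comparison (Anitya itself is written in Python; this C routine is only an illustration of the idea, not its actual code):

```c
#include <stdlib.h>

/* Compare two "MAJOR.MINOR.PATCH" strings numerically.
 * Returns <0, 0, or >0, like strcmp. Minimal sketch: no handling
 * of pre-release tags or build metadata. */
int semver_cmp(const char *a, const char *b)
{
    for (int i = 0; i < 3; i++) {
        char *ea, *eb;
        long na = strtol(a, &ea, 10);   /* parse the next numeric component */
        long nb = strtol(b, &eb, 10);
        if (na != nb)
            return na < nb ? -1 : 1;
        a = (*ea == '.') ? ea + 1 : ea; /* skip the dot, if any */
        b = (*eb == '.') ? eb + 1 : eb;
    }
    return 0;
}
```

Here semver_cmp("1.2.10", "1.2.9") is positive, whereas lexicographic comparison would order "1.2.10" before "1.2.9" because '1' sorts before '9'.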

the-new-hotness 0.12.0 released

Version 0.12.0 of the-new-hotness was released on 26th September and is currently in purgatory. We need to synchronize with my colleague from the world of Pagure to be able to make it available for everyone, since both releases contain changes which depend on each other. Otherwise, the purgatory trial seems good and no issues have been seen so far. The new version has two main changes:

  • Collaboration with the world of Pagure, which will now replace the Great Oraculum (the fedora-scm-requests Pagure repository) (retrieve the monitoring status from dist-git instead of fedora-scm-requests)
  • Preventive measures against letting the same messenger go to the world of Bugzilla again and again (handle Bugzilla errors by not acknowledging the Fedora messaging message)

What awaits us?

Now to a serious question: why am I not that active on release-monitoring.org anymore? Some time ago our conclave of mages started to change, and these changes are good, because there is more collaboration between mages and the different worlds we are taking care of. But as this progresses, I have much less time to actively work on release-monitoring.org. Right now I’m helping other mages finish the work on Bodhi’s Rawhide Gating initiative, and it’s nice to work with others, finding solutions to problems together and having some discussion in the meantime. But as I said, this takes time away from the realm of release-monitoring.org.

What does this mean for you? Releases will not be as frequent, and the same will be true for this blog, as you can see from the gap between this post and the previous one. I’m still working on release-monitoring.org, so don’t worry, there will still be something new. 🙂

Post scriptum

This is all for now from the world of release-monitoring.org. Do you like this world and want to join our conclave of mages? Seek me (mkonecny) in the magical yellow pages (IRC: #fedora-apps on freenode) and ask how you can help. Or visit the Bugcronomicon (GitHub issues on Anitya or the-new-hotness) directly and pick something to work on.

The post Stories from the amazing world of release-monitoring.org #8 appeared first on Fedora Community Blog.

WT Social

Posted by Gwyn Ciesla on November 18, 2019 04:38 PM

I’m curious if this can actually be a viable replacement for Facebook. If you’re curious, use my invite link!

https://wt.social/gi/gwyn-ciesla/friends/8aby

Building Successful Products

Posted by Russel Doty on November 18, 2019 03:52 PM

Building a new product is hard. Building a successful new product is even harder. And building a profitable new product is the greatest challenge! To make things even more interesting, the fundamental customer requirements for a product change as the product and market mature. The very things that are required for success in an early stage product will hinder or even prevent success later on.

Markets, technologies and products go through a series of predictable stages. Understanding this evolution – and understanding what to do at each stage! – is vital for navigating the shoals of building a successful and profitable product.

Technology Adoption Lifecycle

In 1991 Geoffrey Moore revolutionized technology marketing with his seminal book Crossing the Chasm. This book changed the perception of market growth from a smooth curve to a curve with a large hole. The idea was that there were major differences between early adopters of a technology and mainstream users – and that this difference is so large that there is a chasm between these groups.

Key to the Chasm Model is the idea that early adopters have completely different requirements and expectations than mainstream users. These differences are so large that completely different approaches are required. The things that produce success when dealing with innovators and early adopters will fail with mainstream users. The things that mainstream users require are of little interest to innovators.

Just to make things interesting, new markets start with innovators – without innovators you have no starting point for developing and proving new technologies. Mainstream users, on the other hand, will not adopt new, unproven products and technologies.

Innovators are seeking competitive advantage. They want new capabilities that let them leapfrog several steps ahead of their competitors. They are willing to take risk. They want something that no-one else has, and they want it now. Their metrics are capabilities, features, and time to market. They are willing to accept the chance of failure to gain the chance of success. Innovators are willing to do a lot of work themselves and to accept point solutions.

The majority of the market, on the other hand, is seeking mature products. They expect things to work out of the box. They are looking for proven solutions that they can integrate into their existing environment. They want support, upgrades, and even documentation. They are not willing to accept significant risk.

Part of Moore’s argument is that a product or technology that is successful in the innovator and early adopter markets can create enough momentum to become a defacto standard in the mainstream markets – that success in the early markets has the potential to lock competitors out of the lucrative mainstream markets.

If you are prepared for the Chasm, your goal is to cross the Chasm, establish a beachhead in the majority market, and then build out from this beachhead to conquer the profitable majority markets.

Another possible outcome is for a competitor who is already established in the majority markets to keep a close eye on new entrants into the market. As long as they are in the Innovator and Early Adopter phases they can be monitored with no action taken. When someone successfully crosses the chasm and establishes a beachhead they are vulnerable – at this point a fast follower can swoop in, use their greater resources to address the needs of the majority markets, and take over just as the market becomes profitable.

The challenge for a fast follower is judging where the new entrants are. Move too early, while the market is still Early Adopters, and you waste resources on people who don’t care about your strengths. Go after a market entrant who hasn’t established a proven beachhead and started to move beyond it, and you may waste resources on an unproven market. Move too late, after a new entrant has established their beachhead and moved into the majority markets, and you can find yourself facing an entrenched competitor with adequate resources and newer technology and products.

Of course, there is a lot more detail than this – see the book!

Gartner Hype Cycle

Exciting new technologies are exactly that – exciting! This excitement and inflated expectations for a new technology are often taken as proof of a large market just waiting for new products.

The Gartner Hype Cycle shows what actually happens. The Hype Cycle looks at customer expectations and perception as a technology is introduced and then matures over time.

The Gartner Hype Cycle is critical for understanding the difference between excitement and revenue. As a general rule, the more a new technology is covered in media, conferences, and even airline magazines, the lower the current real market opportunity.

To fully understand and appreciate the Hype Cycle, see Mastering the Hype Cycle: How to Choose the Right Innovation at the Right Time.

Bringing the Pieces Together

Now let’s bring these two models together:

This illustration shows that the greatest excitement around a new technology – the greatest hype – occurs before the profitable market emerges. The innovators represent about 2.5% of the total market and the early adopters another 13.5% – meaning that about 16% of the total market exists before the chasm.

This chart explains why innovation and excitement typically don’t directly lead to large revenues – there simply aren’t enough of the innovators and early adopters, and the majority markets will not accept early stage technology and immature products.


Google and fwupd sitting in a tree

Posted by Richard Hughes on November 18, 2019 03:41 PM

I’ve been told by several sources (but not by Google directly, heh) that from Christmas onwards the “Designed for ChromeBook” sticker requires hardware vendors to use fwupd rather than random non-free binaries. This does make a lot of sense for Google, as the firmware flash tools I’ve seen the source for are often decades old, contain layer upon layer of abstractions, have dubious input sanitisation and are quite horrible to use. Many are setuid, which doesn’t make me sleep well at night, and I suspect the security team at Google feels the same. Most vendor binaries are built for a specific ODM hardware device, and all but one of them don’t use any kind of source control or formal review process.

The requirement from Google has caused mild panic among silicon suppliers and ODMs, as they’re having to actually interact with an open source upstream project and a slightly grumpy maintainer that wants to know lots of details about hardware that doesn’t implement one of the dozens of existing protocols that fwupd supports. These are companies that have never had to deal with working with “outside” people to develop software, and it probably comes as quite a shock to the system. To avoid repeating myself these are my basic rules when adding support for a device with a custom protocol in fwupd:

  • I can give you advice on how to write the plugin if you give me the specifications without signing an NDA, and/or the existing code under an LGPLv2+ license. From experience, we’ll probably not end up using any of your old code in fwupd, but the error defines and function names might be similar, and I don’t want anyone to get “tainted” from looking at non-free code, so it’s safest all round if we have some reference code marked with the right license that actually compiles on Fedora 31. Yes, I know asking the legal team about releasing previously-nonfree code with a GPLish licence is difficult.
  • If you are running Linux, and want our help to debug or test your new plugin, you need to be running Fedora 30 or 31. If you run Ubuntu you’ll need to use the snap version of fwupd, and I can’t help you with random Ubuntu questions or interactions between the snap version and the distro version. I know your customer might be running Debian Stable or Ubuntu LTS, but that’s not what I’m paid to support. If you do use Fedora 29+ or RHEL 7+ you can also use the nice COPR I provide with git snapshots of master.
  • Please reflect the topology of your device. If writes have to go through another interface, passthru or IC, please give us access to documentation about that device too. I’m fed up with having to reverse engineer protocols from looking at the “wrong side” of the client source code. If the passthru is implemented by a different vendor, they’ll need to work on the same terms as this.
  • If you want to design and write all of the plugin yourself, that’s awesome, but please follow the existing style and don’t try to wrap your existing code base with the fwupd plugin API. If your device has three logical children with different version numbers or firmware formats, we want to see three devices in fwupdmgr. If you want to restrict the child devices to a parent vendor, that’s fine, we now support that in fwupd and on the LVFS. If you’re adding custom InstanceIDs, these have to be documented in the README.md file.
  • If you’re using an nonstandard firmware format (as in, not DFU, Intel HEX or Motorola SREC) then you’ll need to write a firmware parser that’s going to be valgrind’ed and fuzzed. We will need all the header/footer documentation so we can verify the parser and add some small redistributable fuzz targets. If the blob is being passed to the hardware without parsing, you still might need to know the format of the header so that the plugin can do a sanity check that the firmware is suitable for the hardware, and that any internal CRC is actually correct. All the firmware parsers have to be paranoid and written defensively, because it’s me that looks bad on LWN if CVEs get issued.
  • If you want me to help with the plugin, I’m probably going to ask for test hardware, and two different versions of the firmware that can actually be flashed to the hardware you sent. A bare PCB is fine, but if you send me something please let me know so I can give you my personal address rather than have to collect it from a Red Hat office. If you send me hardware, ensure you also include a power supply that’s going to work in the UK, e.g. 240V. If you want it back, you’ll also need to provide me with a UPS/DHL collection sticker.
  • You do need to think how to present your device version number. e.g. is 0x12345678 meant to be presented as “12.34.5678” or “18.52.86.120” – the LVFS really cares if this is correct, and users want to see the “same” version numbers as on the OEM web-page.
  • You also need to know if the device is fully functional during the update, or if it operates in a degraded or bootloader mode. We also need to know what happens if flashing fails, e.g. is the device a brick, or is there some kind of A/B partition that makes a flash failure harmless? If the device is a brick, how can it be recovered without an RMA?
  • After the update is complete, fwupd needs to “restart” the device so that the new firmware version can be verified, so there needs to be some kind of command the device understands – we can ask the user to reboot or re-plug the device if this is the only way to do this, although in 2019 we can really do better than that.
  • If you’re sharing a huge LGPLv2+ lump of code, we need access to someone who actually understands it, preferably the person that wrote it in the first place. Typically the code is uncommented and a recipe for a headache so being able to ask a human questions is invaluable. For this, either IRC, email or even just communicating via a shared Google doc (more common than you would believe…) is fine. I can’t discuss this stuff on Telegram, Hangouts or WhatsApp, sorry.
  • Once a plugin exists in fwupd and is upstream, we will expect pull requests to add either more VID/PIDs, #defines or to add variations to the protocol for new versions of the hardware. I’m going to be grumpy if I just get sent a random email with demands about backporting all the VID/PIDs to Debian stable. I have zero control on when Debian backports anything, and very little influence on when Ubuntu does a SRU. I have a lot of influence on when various Fedora releases get a new fwupd, and when RHEL gets backports for new hardware support.
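The version-presentation bullet above is worth a concrete example: both renderings come from the same 32-bit value, just split differently. A quick shell sketch of the two interpretations (the value 0x12345678 is the example from the bullet; how a real plugin formats versions is of course plugin-specific):

```shell
v=0x12345678

# Hex nibble split: two hex bytes plus a 16-bit hex tail -> "12.34.5678"
printf '%02x.%02x.%04x\n' $(( (v >> 24) & 0xff )) $(( (v >> 16) & 0xff )) $(( v & 0xffff ))

# Per-byte decimal split: four decimal bytes -> "18.52.86.120"
printf '%d.%d.%d.%d\n' $(( (v >> 24) & 0xff )) $(( (v >> 16) & 0xff )) \
                       $(( (v >> 8) & 0xff )) $(( v & 0xff ))
```

Neither split is “wrong”; it only matters that the plugin picks the one that matches what the OEM publishes on their web-page.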

Now, if all this makes me sound like a grumpy upstream maintainer then I apologize. I’m currently working with about half a dozen silicon suppliers who all failed some or all of the above bullets. I’m multiplexing myself with about a dozen companies right now, and supporting fwupd isn’t actually my entire job at Red Hat. I’m certainly not going to agree to “signing off a timetable” for each vendor as none of the vendors actually pay me to do anything…

Given interest in fwupd has exploded in the last year or so, I wanted to post something like this rather than have a 10-email back and forth about my expectations with each vendor. Some OEMs and even ODMs are now hiring developers with Linux experience, and I’m happy to work with them as fwupd becomes more important. I’ve already helped quite a few developers at random vendors get up to speed with fwupd and would be happy to help more. As the importance of fwupd and the LVFS grows more and more, vendors will need to hire developers who can build, extend and support their hardware. As fwupd grows, I’ll be asking vendors to do more of the work, as “get upstream to do it” doesn’t scale.

Fedora Toolbox. Unprivileged development environment at maximum

Posted by Luca Ciavatta on November 18, 2019 12:05 PM

Fedora Toolbox is a tool for developing and debugging software that is basically a frontend to the Podman container system: a simple way to test applications without pulling in billions of dependencies and cluttering up your operating system. First, Podman (Pod Manager tool) is a daemonless container engine for developing, managing, and running OCI containers on your Linux system. With[...]

The post Fedora Toolbox. Unprivileged development environment at maximum appeared first on CIALU.NET.

Fedora shirts and sweatshirts from HELLOTUX

Posted by Fedora Magazine on November 18, 2019 08:50 AM

Linux clothes specialist HELLOTUX from Europe recently signed an agreement with Red Hat to make embroidered Fedora t-shirts, polo shirts and sweatshirts. They have been making Debian, Ubuntu, openSUSE, and other Linux shirts for more than a decade and now the collection is extended to Fedora.

<figure class="wp-block-image"><figcaption>Embroidered Fedora polo shirt.</figcaption></figure>

Instead of printing, they use programmable embroidery machines to make the Fedora embroidery. All of the design work is made exclusively with Linux; this is a matter of principle.

Some photos of the embroidering process for a Fedora sweatshirt:

<figure class="wp-block-image"></figure> <figure class="wp-block-image"></figure>

You can get Fedora polos and t-shirts in blue or black and the sweatshirt in gray here.

Oh, “just one more thing,” as Columbo used to say: Now, HELLOTUX pays the shipping fee for the purchase of two or more items, worldwide, if you order within a week from now. Order on the HELLOTUX website.

Extending proprietary PC embedded controller firmware

Posted by Matthew Garrett on November 18, 2019 08:19 AM
I'm still playing with my X210, a device that just keeps coming up with new ways to teach me things. I'm now running Coreboot full time, so the majority of the runtime platform firmware is free software. Unfortunately, the firmware that's running on the embedded controller (a separate chip that's awake even when the rest of the system is asleep and which handles stuff like fan control, battery charging, transitioning into different power states and so on) is proprietary and the manufacturer of the chip won't release data sheets for it. This was disappointing, because the stock EC firmware is kind of annoying (there's no hysteresis on the fan control, so it hits a threshold, speeds up, drops below the threshold, turns off, and repeats every few seconds - also, a bunch of the Thinkpad hotkeys don't do anything) and it would be nice to be able to improve it.

A few months ago someone posted a bunch of fixes, a Ghidra project and a kernel patch that lets you overwrite the EC's code at runtime for purposes of experimentation. This seemed promising. Some amount of playing later and I'd produced a patch that generated keyboard scancodes for all the missing hotkeys, and I could then use udev to map those scancodes to the keycodes that the thinkpad_acpi driver would generate. I finally had a hotkey to tell me how much battery I had left.
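The udev half of that mapping can be done with a hwdb fragment. The scancode, match pattern, and file name below are purely illustrative (the real values come from the EC patch and your machine's DMI strings):

```
# /etc/udev/hwdb.d/61-x210-ec-keys.hwdb  (hypothetical scancode 0x45)
evdev:atkbd:dmi:bvn*:bvr*:bd*:svnLENOVO*:pn*
 KEYBOARD_KEY_45=battery

# rebuild the hwdb and reload:
#   systemd-hwdb update && udevadm trigger
```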

But something else included in that post was a list of the GPIO mappings on the EC. A whole bunch of hardware on the board is connected to the EC in ways that allow it to control them, including things like disabling the backlight or switching the wifi card to airplane mode. Unfortunately the ACPI spec doesn't cover how to control GPIO lines attached to the embedded controller - the only real way we have to communicate is via a set of registers that the EC firmware interprets and does stuff with.

One of those registers in the vendor firmware for the X210 looked promising, with individual bits that looked like radio control. Unfortunately writing to them does nothing - the EC firmware simply stashes that write in an address and returns it on read without parsing the bits in any way. Doing anything more with them was going to involve modifying the embedded controller code.

Thankfully the EC has 64K of firmware and is only using about 40K of that, so there's plenty of room to add new code. The problem was generating the code in the first place and then getting it called. The EC is based on the CR16C architecture, which binutils supported until 10 days ago. To be fair it didn't appear to actually work, and binutils still has support for the more generic version of the CR16 family, so I built a cross assembler, wrote some assembly and came up with something that Ghidra was willing to parse except for one thing.

As mentioned previously, the existing firmware code responded to writes to this register by saving it to its RAM. My plan was to stick my new code in unused space at the end of the firmware, including code that duplicated the firmware's existing functionality. I could then replace the existing code that stored the register value with code that branched to my code, did whatever I wanted and then branched back to the original code. I hacked together some assembly that did the right thing in the most brute force way possible, but while Ghidra was happy with most of the code it wasn't happy with the instruction that branched from the original code to the new code, or the instruction at the end that returned to the original code. The branch instruction differs from a jump instruction in that it gives a relative offset rather than an absolute address, which means that branching to nearby code can be encoded in fewer bytes than going further. I was specifying the longest jump encoding possible in my assembly (that's what the :l means), but the linker was rewriting that to a shorter one. Ghidra was interpreting the shorter branch as a negative offset, and it wasn't clear to me whether this was a binutils bug or a Ghidra bug. I ended up just hacking that code out of binutils so it generated code that Ghidra was happy with and got on with life.

Writing values directly to that EC register showed that it worked, which meant I could add an ACPI device that exposed the functionality to the OS. My goal here is to produce a standard Coreboot radio control device that other Coreboot platforms can implement, and then just write a single driver that exposes it. I wrote one for Linux that seems to work.

In summary: closed-source code is more annoying to improve, but that doesn't mean it's impossible. Also, strange Russians on forums make everything easier.

comment count unavailable comments

LAS 2019: A GNOME + KDE conference

Posted by Julita Inca Chiroque on November 18, 2019 03:21 AM

Thanks to the sponsorship of GNOME, I was able to attend the Linux App Summit 2019 held in Barcelona. This conference was hosted by two free desktop communities, GNOME and KDE. The technologies usually used to create applications are GTK and Qt, and the objective of this conference was to present ongoing application projects that run on many Linux platforms and beyond, on both desktops and mobiles. The ecosystem involved, the commercial side, and the project manager perspective were also presented over three core days. I had the chance to hear some talks as pictured: Adrien Plazas, Jordan and Tobias, and Florian are pictured first. The keynote, titled “The Economics of FOSS”, was given by Mirko Boehm; also pictured are Valentin and Adam Jones from Freedesktop SDK, and Nick Richards, who pointed out the “write” strategy. You might see more details on Twitter.

Women’s presence was very noticeable at this conference. As shown in the following picture, the UX designer Robin presented a communication approach to understand what users want, the developer Heather Ellsworth explained her work at Ubuntu making GNOME Snap apps, and the enthusiastic Aniss from the OpenStreetMap community did a lightning talk about her experiences making a FOSS community stronger. At the bottom of the picture we see the point of view of the database admin Shola, the KDE developer Hannah, and the closing ceremony presented by Muriel (local team organizer).

On Friday, some BoFs were held. The engagement BoF led by Nuritzi is pictured first, followed by the KDE team. The Snap Packaging Workshop happened in the meeting room.

Lightning talks were also part of this event at the end of every day. Nuritzi was given a prize for her effort in running the event. Thanks Julian & Tobias for joining me to see Park Güell. Social events were also arranged: we started a tour from the Casa Batlló and walked towards the Gothic quarter. The tours happened at night after the talks and lasted 1.5 hours. Food expenses were covered by GNOME in my case as well as for other members. Thanks!

My participation basically consisted of a talk in the unconference part; I also organized the GNOME games with Jorge (a local organizer) and wrote some GTK code in C with Matthias. The games started with the “Glotones”, where we used flans to eat quickly; the “wise man”, where lots of Linux questions were asked; and the “Sing or die” game, where the participants changed the lyrics of catchy songs using the words GNOME, KDE and LinuxAppSummit. Some of the moments were pictured as follows: the imagination of the teams was fantastic, and they sang and created “geek” choreographies as we requested. One of the games that lasted until the very end was “Guessing the word”. The words depicted in the photo, LAS, root, and GPL, were played by Nuritzi, Neil, and Jordan, respectively. It was lovely to see again long-time GNOME members such as Florian, who is always supporting my ideas for the GNOME games 🙂, the generous, intelligent and funny Javier Jardon, and the famous GNOME developer Zeeshan, who also loves Rust and airplanes.

It was also delightful to meet new people. I met GNOME people such as Ismael, and Daniel, who helped me debug my mini GTK code. I also met KDE people such as Albert and Muriel. In the last photo, we are pictured with the “wise man” and the “flan man”.

Special thanks to the local organizers Jorge and Aleix, to Ismael for supporting me through almost the whole conference while I had the flu, and to Nuritzi for the sweet chocolates she gave me. The group photo was a success, and generally, I liked the LAS 2019 event in Barcelona.

Barcelona is a place with remarkable architecture and I enjoyed the walking time there…

Thanks again, GNOME. I will finish the image-reconstruction GTK code I started at this event, and make it run in parallel using HPC machines in the near future.

How to debug Smart Card authentication client

Posted by Simon Lukasik on November 18, 2019 12:00 AM

Dummy intro to smart card authentication

Smart card authentication is just like authentication with certificates and private keys (X509, PKI). The difference is that instead of fetching your private key and certificate from disk, you let the smart card do the cryptographic operations for you, and the private key never leaves the card. Special protocols are used on the client to allow communication between the browser and the smart card.

Setting things up can be difficult

Smart card authentication over HTTPS may be a challenging thing to deploy, especially for newcomers. You have to set up all the components properly at once.

  • First of all, you need to prepare your smart card (PIV) and make sure a proper X509 key and certificate are present.
  • Then you need to configure your browser and the whole client stack (nss, opensc, pcscd, p11-kit-proxy, etc.).
  • Finally, your server needs to be set up properly to solicit and validate the inserted card.

When you make even one mistake in one of the components, the authentication will not work, and there will be very few debugging steps available. What you need to do is test each component alone to identify which component is not set up properly.

A good idea is to start by debugging the client parts (the card & the browser).

GnuTLS test server with stock CA & certificates

GnuTLS upstream provides a minimalistic HTTPS server that logs detailed debugging information about what the client system is sending over. You can either compile gnutls from sources, or you can start in no time with a single-purpose container:

podman run -it \
    -p "5556:5556" \
    quay.io/slukasik/gnutls-smart-card-auth-tester

Upon running this command, you will have an HTTPS server running on localhost on port 5556; point your browser to https://127.0.0.1:5556/

You will be presented with a browser warning about an unknown certificate. This is expected. The server running in the container does not have a certificate signed by any of the well-known certificate authorities. You are safe to accept the risk and continue.

warning

After accepting the risk, your connection will fail.

warning

You can now review the server logs in the podman container.

* Received alert '46': Unknown certificate.
Error in handshake: A TLS fatal alert has been received.

The failure here is caused by the fact that the client and server cannot build shared trust. Our server has never seen your smart card in action and thus it should not let the user in.

To fix the above error, the smart card certificate needs to be signed, and the CA that signed it needs to be known by our server.

GnuTLS test server with particular CA and stock certificates

In the easy case, you already have your smart card signed and the CA certificate available to you. You can restart the gnutls container with the CA bind-mounted to the proper location. The following command assumes the CA certificate is in ca.crt.pem.

podman run -it -p "5556:5556" \
    --mount type=bind,source=ca.crt.pem,target=/gnutls/doc/credentials/x509/ca.pem \
    quay.io/slukasik/gnutls-smart-card-auth-tester

GnuTLS test server with custom CA and custom certificates

Getting smart card signed

In case you don’t have your smart card signed and do not have access to a CA, you can generate your own CA and get your smart card secrets signed by it. There are two ways to achieve this: either you generate the keys outside the smart card and upload them to the card, or you let the card generate the keys and a CSR (Certificate Signing Request), and after your CA signs the CSR you upload the resulting certificate to the card. The latter option is preferable as the private key never leaves the card.

Create your own CA

mkdir ca
cd ca/
mkdir newcerts certs crl private requests
touch index.txt
echo 1000 > serial

# Download dummy config for your testing CA
wget http://isimluk.com/blog-data/2019-smart_card/openssl_root.cnf

# Generate private key
openssl ecparam -genkey -name secp384r1 \
    | openssl ec -aes256 -out private/ca.key.pem

# Generate certificate for the private key
openssl req -config openssl_root.cnf -new -x509 -sha384 \
    -extensions v3_ca -key private/ca.key.pem -out certs/ca.crt.pem

# Check created certificate
openssl x509 -noout -text -in certs/ca.crt.pem

Generate CSR on the card

Consult your smart card manual on how to generate Certificate Signing Request.

Sign the CSR by Your CA

# verify CSR is well formed
openssl req -verify -in CSRfromCard.csr -text -noout

# sign CSR
    openssl x509 -req -days 360 -in CSRfromCard.csr \
        -CA certs/ca.crt.pem -CAkey private/ca.key.pem \
        -CAcreateserial -out signedCardCert.crt

    # check signed certificate
openssl x509 -text -noout -in signedCardCert.crt

Upload the signed certificate to your smart card

Consult your smart card manual on how to upload the certificate. Make sure not to remove your private key, which still remains on the smart card.

Create your own server certificates

Now that we have the client keys in place, we can generate keys for the server as well.

openssl req -x509 -newkey rsa:2048 -days 365 -nodes \
    -keyout serverPrivate.key \
    -out serverCert.pem \
    -subj '/CN=mycert'
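Before mounting them into the container, you can sanity-check that the key and certificate actually match by comparing their public keys:

```shell
# The two digests must be identical, otherwise the TLS handshake will fail:
openssl x509 -noout -pubkey -in serverCert.pem | openssl sha256
openssl pkey -pubout -in serverPrivate.key | openssl sha256
```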

GnuTLS test server with custom CA & certificates

podman run -it -p "5556:5556" \
    --mount type=bind,source=$(pwd),target=/certs \
    quay.io/slukasik/gnutls-smart-card-auth-tester \
    ../../src/gnutls-serv -d 4 --require-client-cert \
        --x509cafile /certs/ca/certs/ca.crt.pem \
        --x509certfile /certs/serverCert.pem \
        --x509keyfile /certs/serverPrivate.key

Now, when you visit https://127.0.0.1:5556/ you will be asked for the PIV PIN for your smart card.

success

Upon correct PIV PIN entry, the smart card will offer the key(s) on the card for you to select the appropriate key for this test server.

success

Finally, the success page will show up.

success

Kudos go to Jakub Jelen, who pointed the GnuTLS server out to me. Thank you, Jakube!

Episode 170 - Until that quantum computer is cracking RSA keys, go sit back down!

Posted by Open Source Security Podcast on November 18, 2019 12:00 AM

Josh and Kurt talk about banking and privacy. It's very likely nothing will get better anytime soon; humans will continue to be terrible at understanding certain risks. We also discuss what quantum supremacy means (or doesn't mean) for security.

<iframe allowfullscreen="" height="90" mozallowfullscreen="" msallowfullscreen="" oallowfullscreen="" scrolling="no" src="https://html5-player.libsyn.com/embed/episode/id/12070109/height/90/theme/custom/thumbnail/yes/direction/backward/render-playlist/no/custom-color/6e6a6a/" style="border: none;" webkitallowfullscreen="" width="100%"></iframe>

Show Notes


    Fedora Update Weeks 39--45

    Posted by Elliott Sales de Andrade on November 17, 2019 11:42 PM
    Somehow, my semi-weekly updates turned into monthly things. Mostly, updates per week have been rather light and stable, so it always seemed that there was no need to write an update. Of course, that ends up meaning one really large update after a long time. This past week was pretty busy, so I thought it best to finally write up a post. One small changeset was removing automated Suggests from R packages when they do not exist in Fedora yet.

    Use swap on NVMe to run more dev KVM guests, for when you run out of RAM

    Posted by Christopher Smart on November 17, 2019 07:26 AM

    I often spin up a bunch of VMs for different reasons when doing dev work and unfortunately, as awesome as my little mini-itx Ryzen 9 dev box is, it only has 32GB RAM. Kernel Samepage Merging (KSM) definitely helps, however when I have half a dozen or so VMs running and chewing up RAM, the kernel’s Out Of Memory (OOM) killer will start executing them, like this:

    [171242.719512] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/machine.slice/machine-qemu\x2d435\x2dtest\x2dvm\x2dcentos\x2d7\x2d00.scope,task=qemu-system-x86,pid=2785515,uid=107
    [171242.719536] Out of memory: Killed process 2785515 (qemu-system-x86) total-vm:22450012kB, anon-rss:5177368kB, file-rss:0kB, shmem-rss:0kB
    [171242.887700] oom_reaper: reaped process 2785515 (qemu-system-x86), now anon-rss:0kB, file-rss:68kB, shmem-rss:0kB

    If I had more slots available (which I don’t) I could add more RAM, but that’s actually pretty expensive, plus I really like the little form factor. So, given it’s just dev work, a relatively cheap alternative is to buy an NVMe drive and add a swap file to it (or dedicate the whole drive). This is what I’ve done on my little dev box (actually I bought it with an NVMe drive so adding the swapfile came for free).
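    For reference, creating and enabling a swap file on the NVMe filesystem looks something like this (the path and size are just examples):

    ```shell
    sudo fallocate -l 100G /mnt/nvme/swapfile   # reserve space (path is an example)
    sudo chmod 600 /mnt/nvme/swapfile           # swap files must not be world-readable
    sudo mkswap /mnt/nvme/swapfile              # write the swap signature
    sudo swapon /mnt/nvme/swapfile              # enable it; add to /etc/fstab to persist
    ```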

    Of course the number of VMs you can run depends on the amount of RAM each VM actually needs for what you’re running on it. But whether I’m running 100 small VMs or 10 large ones, it doesn’t matter.

    To demonstrate this, I spun up a bunch of CentOS 7 VMs at the same time and upgraded all packages. Without swap I could comfortably run half a dozen VMs, but with more than that they would start getting killed. With a 100GB swap file I am able to get about 40 going!

    Even with pages swapping in and out, I haven’t really noticed any performance decrease and there is negligible CPU time wasted waiting on disk I/O when using the machines normally.

    The main advantage for me is that I can keep lots of VMs around (or spin up dozens) in order to test things, without having to juggle active VMs or hoping they won’t actually use their memory and have the kernel start killing my VMs. It’s not as seamless as extra RAM would be, but that’s expensive and I don’t have the slots for it anyway, so this seems like a good compromise.

    Installing Vagrant (Laravel Homestead) on Fedora

    Posted by Robbi Nespu on November 16, 2019 04:57 AM

    Assalamualaikum (Peace upon you),

    A few of my friends have been asking me how I manage Laravel (PHP) development locally. Previously I installed every stack component (Nginx/Apache, PHP, MariaDB/MySQL, PostgreSQL, Redis, etc.) that I wanted to use, and yeah, that is overkill, especially when something breaks down and I need to focus on how to fix it.

    “Time is GOLD”: instead of focusing too much on this kind of problem and wasting so much time, I now prefer to use a “ready baked” configuration box.

    If you are using Windows, there are XAMPP, Laragon and other LAMP packages that you can use for local development purposes. Some people use docker too.

    But for me, who loves using Linux and only sometimes boots into Windows… I prefer to use Vagrant.

    Why Vagrant? It is much easier since it is officially supported by Laravel, where it is called HOMESTEAD.

    All you need is good disk storage, a good amount of RAM and a trio of software: VirtualBox, Vagrant and some kernel packages.

    Let’s go step by step, shall we?

    1. Kernel

    You need to prepare the kernel to work with VirtualBox on your machine.

    $ sudo dnf install gcc binutils make glibc-devel \
    patch libgomp glibc-headers  \
    kernel-headers kernel-devel-`uname -r` dkms
    $ sudo dnf groupinstall "Development Tools"
    

    Take note: Fedora is a bleeding edge distro (which means it is always rolling out the latest software, features, drivers, etc.). The newest kernel may not work with the newest VirtualBox. Because of this, I lock my kernel and only unlock (update) it when necessary.
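    One simple way to hold the kernel back, for example, is a dnf exclude (a sketch; `dnf versionlock` is a more targeted alternative):

    ```
    # /etc/dnf/dnf.conf
    [main]
    exclude=kernel*
    # remove the line (or pass --disableexcludes=main) when you want to update again
    ```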

    2. Virtualbox

    I am using VirtualBox from the RPM Fusion repository. I don’t like to use the official VirtualBox repos because they mix packages: currently they mix packages for Fedora 29 with Fedora 30 and Fedora 31. That’s ugly! :expressionless:

    So use the RPM Fusion repos and install VirtualBox:

    $ sudo dnf install VirtualBox akmod-VirtualBox \
    VirtualBox-kmodsrc VirtualBox-server \
    virtualbox-guest-additions
    
    # Add your user account to the vboxusers group.
    $ sudo usermod -a -G vboxusers ${USER}
    

    3. Vagrant

    Always use the official RPM files from HashiCorp. I do not suggest using vagrant from the Fedora repos because it is always outdated and a few versions behind :smirk:

    # Package I installed from HashiCorp is more up-to-date
    $ vagrant -v
    Vagrant 2.2.6
    
    $ rpm -qi vagrant 
    Name        : vagrant
    Epoch       : 1
    Version     : 2.2.6
    Release     : 1
    Architecture: x86_64
    Install Date: Thu 14 Nov 2019 01:26:06 AM +08
    Group       : default
    Size        : 119202403
    License     : MIT
    Signature   : (none)
    Source RPM  : vagrant-2.2.6-1.src.rpm
    Build Date  : Tue 15 Oct 2019 01:01:04 AM +08
    Build Host  : localhost
    Relocations : / 
    Packager    : HashiCorp <support@hashicorp.com>
    Vendor      : root@localhost.localdomain
    URL         : https://www.vagrantup.com
    Summary     : Vagrant is a tool for building and distributing development environments.
    Description :
    Vagrant is a tool for building and distributing development environments.
    
    # Package from Fedora repos is few version behind
    $ sudo dnf provides vagrant
    vagrant-2.2.3-1.fc30.noarch : Build and distribute virtualized development environments
    Repo        : fedora
    Matched from:
    Provide    : vagrant = 2.2.3-1.fc30
    
    vagrant-2.2.5-1.fc30.noarch : Build and distribute virtualized development environments
    Repo        : updates
    Matched from:
    Provide    : vagrant = 2.2.5-1.fc30
    
    vagrant-1:2.2.6-1.x86_64 : Vagrant is a tool for building and distributing development environments.
    Repo        : @System
    Matched from:
    Provide    : vagrant = 1:2.2.6-1
    

    The terminal output above is just proof, in case someone needs it. I experienced a lot of problems using vagrant packages from Fedora. I learned my lesson.

    So go to www.vagrantup.com/downloads, copy the RPM package download link for CentOS, and install it on your machine like this:

    $ sudo rpm -iVh https://releases.hashicorp.com/vagrant/2.2.6/vagrant_2.2.6_x86_64.rpm
    

    If everything is OK, reboot your computer, then visit the Laravel website and follow the instructions on how to download and use Homestead there, because now you are good to follow the official documentation :fire:

    Red Hat IdM as an LDAP Identity Provider in OpenShift Container Platform 4

    Posted by Adam Young on November 15, 2019 11:27 PM

    For my OpenShift Demo, I want to use a Red Hat IdM server as the identity provider. It took a little trial and error to get the mechanism to work right.

    Following the docs didn’t quite work. When I try to log in, I get:

    I1114 20:20:28.598896  122974 helpers.go:198] server response object: [{
      "metadata": {},
      "status": "Failure",
      "message": "Internal error occurred: unexpected response: 500",
      "reason": "InternalError",
      "details": {
        "causes": [
          {
            "message": "unexpected response: 500"
          }
        ]
      },
      "code": 500
    }]
    

    How do I debug? The basic steps are:

    oc project  openshift-authentication
    oc get pods
    oc log $(podname)
    

    For example:

    [ayoung@ayoungP40 ocp4.2]$ oc project openshift-authentication
    Already on project "openshift-authentication" on server "https://api.demo.redhatfsi.com:6443".
    [ayoung@ayoungP40 ocp4.2]$ oc get pods
    NAME                               READY   STATUS    RESTARTS   AGE
    oauth-openshift-5bf5fcf955-dl6h8   1/1     Running   0          17m
    oauth-openshift-5bf5fcf955-mfcs5   1/1     Running   0          17m
    [ayoung@ayoungP40 ocp4.2]$ oc log oauth-openshift-5bf5fcf955-dl6h8
    log is DEPRECATED and will be removed in a future version. Use logs instead.
    Copying system trust bundle
    I1115 23:06:20.525713       1 secure_serving.go:65] Forcing use of http/1.1 only
    I1115 23:06:20.526427       1 secure_serving.go:127] Serving securely on 0.0.0.0:6443
    

    I had two different pods, so sometimes I got nothing and would have to pull the log from the other pod. However, I did see the following errors:


    Error authenticating login “ayoung” with provider “ldapidp”: LDAP Result Code 200 “Network Error”: dial tcp:

    This one was tricky. The error was that my IdM server was in the same domain as the OpenShift cluster. I started with idm.demo.redhatfsi.com as the IdM server. Since the local DNS was trying to resolve that, and failing, I could not connect to it. I ended up creating a new IdM server: idm.infra.redhatfsi.com. With that, I was able to resolve this issue and carry on.

    Error authenticating “ayoung” with provider “ldapidp”: LDAP Result Code 200 “Network Error”: TLS handshake failed (x509: certificate signed by unknown authority)

    This was due to me forgetting to update the config map with the new certificate.

    Error authenticating “ayoung” with provider “ldapidp”: multiple entries found matching “ayoung”

    This had to do with the BaseDN I was using to search. There is a “compat” tree in a FreeIPA server. If you search at a top-level BaseDN, you get two records per user. One starts like this:

    # ayoung, users, compat, infra.redhatfsi.com
    dn: uid=ayoung,cn=users,cn=compat,dc=infra,dc=redhatfsi,dc=com

    To get the more limited set of users, I changed to the equivalent of the following LDAP search:

    ldapsearch -x -H ldap://idm.infra.redhatfsi.com -L -b 'cn=accounts,dc=infra,dc=redhatfsi,dc=com' 'uid=ayoung'

    Here is the ldap.yaml file I used to finally configure the system. Note that I created a non-admin user named “Open Shift” to do the queries.

    apiVersion: config.openshift.io/v1
    kind: OAuth
    metadata:
      name: cluster
    spec:
      identityProviders:
      - name: ldapidp 
        mappingMethod: claim 
        type: LDAP
        ldap:
          attributes:
            id: 
            - dn 
            email: 
            - mail
            name: 
            - cn
            preferredUsername: 
            - uid
          bindDN: "uid=openshift,cn=users,cn=accounts,dc=infra,dc=redhatfsi,dc=com" 
          bindPassword: 
            name: ldap-secret
          ca: 
            name: ca-config-map
          insecure: false 
          url: "ldap://idm.infra.redhatfsi.com./cn=accounts,dc=infra,dc=redhatfsi,dc=com?uid" 
    

    FPgM report: 2019-46

    Posted by Fedora Community Blog on November 15, 2019 09:42 PM
    Fedora Program Manager weekly report on Fedora Project development and progress

    Here’s your report of what has happened in Fedora Program Management this week. Fedora 29 will reach end of life on 26 November. Elections voting begins next week. Candidates must submit their interviews before the deadline or they will not be on the ballot.

    Announcements

    CfPs

    <figure class="wp-block-table">
    Conference              Location               Date            CfP
    CentOS Dojo             Brussels, BE           31 Jan 2020     closes 18 Nov
    FOSDEM Distro Devroom   Brussels, BE           2 Feb 2020      closes 1 Dec
    Indy Cloud Conf         Indianapolis, IN, US   26–27 Mar 2020  closes 21 Dec
    </figure>

    Help wanted

    Upcoming meetings

    Releases

    Fedora 31

    Schedule

    • 21 November — Elections voting begins
    • 26 November — Fedora 29 EOL

    Fedora 32

    Changes

    Announced
    Submitted to FESCo

    Approved by FESCo

    Withdrawn by owner

    CPE update

    Rawhide Gating

    • Bodhi 5.0 has been deployed in production
      • We received some feedback about the changes in the UI and are looking at how we can address them.
      • We are working on documenting how packagers will use Bodhi for multi-build updates in rawhide: https://github.com/fedora-infra/bodhi/issues/2322 https://fedoraproject.org/w/index.php?title=User:Nphilipp/Package_update_HOWTO#Rawhide_and_early_Branched
    • A 5.1 release is scheduled, likely some time next week, to address some of the issues encountered and fixed in the 5.0 release.

    repoSpanner

    • Our team came across some test reliability issues last week, and these are impeding progress on the 83x patch.
    • There was more debugging and discussion around RCM pushes that seem to be either very slow or hanging, but the team is looking into solutions.
    • Some progress was also made on the 83x patch, however, and we managed to fix one unreliable test.

    Community Application Handover & Retirement Updates

    • Elections — Blocking issue was fixed (https://pagure.io/fedora-infrastructure/issue/8253)
    • Fedocal — jlanda is hitting a permission error in communishift https://pagure.io/fedora-infrastructure/issue/8274
    • Nuancier — Benson Muite is now working on OIDC authentication. We are emailing him to check on progress and whether he needs any help.
    • fpaste — Published an article in Fedora Magazine. Sunset date: 1 December 2019.
    • Badges — GDPR query under investigation

    The post FPgM report: 2019-46 appeared first on Fedora Community Blog.

    What's a kernel headers package anyway

    Posted by Laura Abbott on November 15, 2019 07:00 PM

    I've written before about what goes into Fedora's kernel-devel package. Briefly, it consists of files that come out of the kernel's build process that are needed to build kernel modules.

    In contrast to kernel-devel, the headers package is for userspace programs. This package provides #defines and structure definitions that userspace programs use to stay compatible with the kernel. The system libc comes with a set of headers for platform-independent libc purposes (think printf and the like), whereas the kernel headers are focused on providing for the kernel API. There's often some overlap for things like system calls, which are tied to both the libc and the kernel. Sometimes the decision to support them in one place vs. the other comes down to developer choices.

    While the in-kernel API is not guaranteed to be stable, the userspace API must not be broken. There was an effort a few years ago to have a strict split between headers that are part of the userspace API and those that are for in-kernel use only.

    Unlike how kernel-devel gets packaged, there are proper make targets to generate the kernel headers (thankfully). make headers_install will take care of all the magic. These headers get installed under /usr/include.

    Related to kernel-headers is kernel-cross-headers. They are called cross not because using them makes you grumpy and cross, but because they are designed for building for a target other than your native architecture. A classic example is an ARM embedded system, where building anything natively would be dreadfully slow, if possible at all. Josh Boyer originally wrote the cross-headers package with a nice explanation of why we want such a package (spoiler: packaging toolchains is a nightmare, cross toolchains doubly so). Because there isn't a standard way to package this, we end up combining the make headers_install target with each architecture to generate a copy of the headers under /usr/$ARCH-linux-gnu/

    One of the changes Fedora made a few years ago was to split kernel-headers out into a separate repo. This was done for a handful of reasons, but notably to reduce unnecessary rebuilds: if nothing in the uapi directory changed, there is no need to rebuild. The result is that in Fedora you may not see a kernel-headers package for every kernel version. This sometimes gets reported as a bug by end users, but there should be no issue, since the uapi headers are not versioned. This is in contrast to the devel package, which is versioned per kernel and must be rebuilt each time to ensure modules can be built against the correct kernel.
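    The "no uapi change, no rebuild" rule can be sketched as follows. This is a hypothetical illustration, not Fedora's actual packaging tooling; headers are modeled as a mapping of path to file contents:

```python
# Hypothetical sketch of the "no uapi change, no rebuild" rule; this is
# not Fedora's actual tooling. Headers are modeled as path -> contents.

import hashlib

def uapi_fingerprint(headers: dict) -> str:
    """Stable hash over all uapi header paths and contents."""
    h = hashlib.sha256()
    for path in sorted(headers):
        h.update(path.encode())
        h.update(headers[path])
    return h.hexdigest()

def needs_rebuild(headers: dict, last_fingerprint: str) -> bool:
    """Rebuild kernel-headers only if the uapi tree actually changed."""
    return uapi_fingerprint(headers) != last_fingerprint

tree = {"linux/fs.h": b"...", "linux/sched.h": b"..."}
baseline = uapi_fingerprint(tree)

print(needs_rebuild(tree, baseline))   # False: new kernel, same uapi -> skip
tree["linux/fs.h"] = b"... changed"
print(needs_rebuild(tree, baseline))   # True: uapi changed -> rebuild
```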

    If you don't see a new kernel-headers package for a stable kernel update, it's probably not a bug. If you can identify a specific reason why you think the headers need to be rebuilt (i.e. the kernel maintainers missed a change), please file a bug.

    PoC to auto attach USB devices in Qubes

    Posted by Kushal Das on November 15, 2019 09:48 AM

    Here is a PoC based on the qubesadmin API which can auto-attach USB devices to any VM as required. By default, Qubes attaches any USB device to the sys-usb VM, which helps contain bad or malware-laden USB devices. But in special cases we may want selected devices to be auto-attached to certain VMs. In this PoC example we attach any USB storage device, but we could add checks to match only selected devices, or mark a few VMs to which no device may be attached.

    I would love to see what all magical ideas you all come up with. Have fun with the code.

    Btw, you can execute it in dom0 with:

    python3 autoattach.py
    

    Fedora pastebin and fpaste updates

    Posted by Fedora Magazine on November 15, 2019 08:00 AM

    Fedora and EPEL users who use fpaste to paste and share snippets of text might have noticed some changes recently. An update went out which sends pastes made by fpaste to the CentOS Pastebin instead of the Modern Paste instance that Fedora was running. Don’t fear — this was an intentional change, and is part of the effort to lower the workload of the Fedora Infrastructure and Community Platform Engineering teams. Keep reading to learn more about what’s happening with pastebin and your pastes.

    About the service

    A pastebin lets you save text on a website for a length of time. This helps you exchange data easily with other users. For example, you can post error messages for help with a bug or other issue.

    The CentOS Pastebin is a community-maintained service that keeps pastes around for up to 24 hours. It also offers syntax highlighting for a large number of programming and markup languages.

    As before, you can paste files:

    $ fpaste sql/010.add_owner_ip_index.sql 
    Uploading (0.1KiB)...
    https://paste.centos.org/view/6ee941cc

    …or command output…

    $ rpm -ql python3 | fpaste
    Uploading (0.7KiB)...
    https://paste.centos.org/view/44945a99

    …or system information:

    $ fpaste --sysinfo 
    Gathering system info .............Uploading (8.1KiB)...
    https://paste.centos.org/view/8d5bb827

    What to expect from Pastebin

    On December 1st, 2019, Fedora Infrastructure will turn off its Modern Paste servers. It will then redirect fpaste.org, www.fpaste.org, and paste.fedoraproject.org to paste.centos.org.

    If you notice any issues with fpaste, first try updating your fpaste package. On Fedora use this command:

    $ dnf update fpaste

    Or, on machines that use the EPEL repository, use this command:

    $ yum update fpaste

    If you still run into issues, please file a bug on the fpaste issue tracker, and please be as detailed as possible. Happy pasting!


    Photo by Kelly Sikkema on Unsplash.

    On data encoding and complex text shaping

    Posted by Rajeesh K Nambiar on November 15, 2019 07:37 AM

    As part of the historic move of the Janayugom newspaper to a completely libre-software-based workflow, the Kerala Media Academy organized a summit on self-reliant publishing on 31-Oct-2019. I was invited to speak about Malayalam Unicode fonts.

    The summit was inaugurated by Fahad Al-Saidi of Scribus fame, who was instrumental in implementing complex text layout (CTL). Prior to the talks, I got to meet the team who made it possible to switch Janayugom’s entire publishing process to a free software platform — Kubuntu-based ThengOS, Scribus for page layout, Inkscape for vector graphics, GIMP for raster graphics, CMYK color profiling for print, new Malayalam Unicode fonts with traditional orthography, etc. It was impressive to see that the entire production fleet was transformed, the team was trained, and the newspaper is printed every day without delay.

    I also met Fahad later and was pleasantly surprised to realize that he already knew me from my open source contributions. We had a productive discussion about Scribus.

    My talk was on data encoding and text shaping in Unicode Malayalam. The Malayalam publishing industry at large is still trapped in ASCII, which causes numerous issues, and many are still not aware of Unicode and its advantages. I tried to address that in my presentation with examples, so the preface of my talk filled half of the session, while the second half focused on font shaping. Many in the industry seem to be aware that Unicode and traditional Malayalam orthography can be used in computers now; but many in academia still have not realized it, as was evident from the talk of the moderator of the discussion, who is the director of the school of Indian languages. There was a lively discussion with the audience in the Q&A session. After the talk, a number of people gave me feedback and requested that the slides be made available.

    Slides on data encoding and complex text shaping are available under CC-BY-NC license here.

    Fedora 31 : Can be better? part 001.

    Posted by mythcat on November 14, 2019 09:09 PM
    I started using the Fedora distribution several years ago, after trying several distros (SUSE, Red Hat 6, Debian, Gentoo and many more).
    I was pleased with my tests and its speed.
    I must admit that from the first version I used, 9 (Sulphur), until now it has changed a lot and is unknown to many users.
    That's why I decided to present, in this series of mini-tutorials called Can be better?, various lesser-known aspects that underlie it and its more precise use.
    I will not follow a predetermined order in the use of the Fedora distribution.
    I will only point to information useful to any Fedora user.

    Let's start the first part of this tutorial, named Can be better? part 001., with a brief introduction via the Wikipedia page and the official one.
    An interesting aspect of Fedora configuration is the file sysctl.conf.
    You can read the manual page with this command:
    man 8 sysctl
    Part of this file should contain these lines:
    vm.overcommit_memory=2
    vm.overcommit_ratio=100
    kernel.exec-shield=1
    This parameter named vm.overcommit_memory can be set this way:
    0: heuristic overcommit (this is the default)
    1: always overcommit, never check
    2: always check, never overcommit
    For the parameter named vm.overcommit_ratio, setting it to anything less than 100 is almost always incorrect.
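    To see how these two settings work together: with vm.overcommit_memory=2, the kernel enforces a commit limit of swap plus RAM scaled by vm.overcommit_ratio, as documented in proc(5) (huge pages ignored here). A quick sketch of the arithmetic:

```python
# Commit limit enforced when vm.overcommit_memory=2, per proc(5):
# CommitLimit = Swap + RAM * overcommit_ratio / 100 (huge pages ignored).

def commit_limit_kib(ram_kib: int, swap_kib: int, overcommit_ratio: int) -> int:
    return swap_kib + ram_kib * overcommit_ratio // 100

RAM = 8 * 1024 * 1024   # 8 GiB in KiB
SWAP = 2 * 1024 * 1024  # 2 GiB in KiB

# Default ratio of 50: only ~6 GiB may be committed on this box
print(commit_limit_kib(RAM, SWAP, 50))   # 6291456
# Ratio 100, as suggested above: all of RAM plus swap is usable
print(commit_limit_kib(RAM, SWAP, 100))  # 10485760
```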
    The last parameter, kernel.exec-shield, enables NX protection, also called Data Execution Prevention (DEP), which helps prevent stack buffer overflows from taking down your machine.
    You can check this with:
    [root@desk mythcat]# grep nx /proc/cpuinfo 

    rpminspect-0.9 released

    Posted by David Cantrell on November 14, 2019 08:49 PM
    rpminspect-0.9 has been released and it contains a lot of bug fixes found both by the integration test suite and by reporters. For the details, see here:

    https://github.com/rpminspect/rpminspect/releases/tag/v0.9

    Very large packages (VLPs) are something I am working on with rpminspect, the kernel package being a prime example. A full build of the kernel source package generates a lot of files. I am working on improving rpminspect's speed and fixing issues found with individual inspections; these only show up when I do test runs comparing VLPs. The downside is that such runs take a little longer than with any other typical package.

    As I write this, builds are happening in Koji.  As always, I appreciate the feedback from everyone using the program.

    Insider 2019-11: logging to Elasticsearch; PE 6 to 7 upgrade; Elastic 7; in-list(); off-line deb; Splunk conf;

    Posted by Peter Czanik on November 14, 2019 11:39 AM

    Dear syslog-ng users,

    This is the 76th issue of syslog-ng Insider, a monthly newsletter that brings you syslog-ng-related news.

    NEWS

    Logging to Elasticsearch made simple with syslog-ng

    Elasticsearch is gaining momentum as the ultimate destination for log messages. You can store arbitrary name-value pairs coming from structured logging or message parsing, and you can use Kibana as a search and visualization interface. You can use syslog-ng as an all-in-one solution (system and application logs, message parsing, filtering and queuing) to collect and forward log messages to Elasticsearch:

    https://www.syslog-ng.com/community/b/blog/posts/logging-to-elasticsearch-made-simple-with-syslog-ng

    Upgrading syslog-ng PE from version 6 to 7

    Learn the major steps necessary to upgrade your system from syslog-ng Premium Edition version 6 to 7. As you will see, it is no more difficult than any other major software version upgrade, and after the upgrade you can start using all the new and useful features that are available in version 7.

    https://www.syslog-ng.com/community/b/blog/posts/upgrading-syslog-ng-pe-from-version-6-to-7

    syslog-ng and Elasticsearch 7: getting started on RHEL/CentOS

    Version 7 of the Elastic Stack, packed with new features and improved performance, has now been available for some time. Elasticsearch is not the only one to have come up with a major new version recently: starting with version 3.21, syslog-ng features a new Elasticsearch destination driver (based on the http() destination) that does not require Java. In most cases, it is more resource friendly than the Java-based driver and it is definitely easier to configure. It also has the benefit that unlike the Java-based driver, it can be included in Linux distributions. This is a quick how-to guide to get you started with syslog-ng and Elasticsearch 7 on RHEL/CentOS 7.

    https://www.syslog-ng.com/community/b/blog/posts/syslog-ng-and-elasticsearch-7-getting-started-on-rhel-centos

    Handling lists in syslog-ng: the in-list() filter

    Recently, a number of quite complex configurations came up while syslog-ng users were asking for advice. Some of these configurations were even pushing the limits of syslog-ng (regarding the maximum number of configuration objects). As it turned out, these configurations could be significantly simplified using the in-list() filter, one of syslog-ng’s lesser known features.

    https://www.syslog-ng.com/community/b/blog/posts/handling-lists-in-syslog-ng-the-in-list-filter

    Offline syslog-ng DEB package installer

    “How can I install the unofficial syslog-ng packages on a machine without Internet access?” This question has been raised several times recently. As it entails more than simply downloading the repository containing the packages, syslog-ng lead developer Laszlo Budai created a script that solves the problem for Debian and Ubuntu users. The script downloads syslog-ng along with its dependencies using a container, and produces an archive containing all DEB packages necessary to install syslog-ng, as well as a simple script responsible for the installation. Using a container in this case might seem like overcomplication, but it is still the easiest way to ensure that all dependencies are included in the archive.

    https://www.syslog-ng.com/community/b/blog/posts/offline-syslog-ng-deb-package-installer

    syslog-ng at Splunk .conf 2019

    The syslog-ng team participated in Splunk .conf 2019 and came home with positive experiences. They met enthusiastic users who use syslog-ng in front of Splunk as a log management layer. Many of them are already using the HTTP Event Collector and use syslog-ng to load-balance log messages among collectors. The team also demonstrated how syslog-ng can collect over one million log messages a second over UDP.

    https://twitter.com/OneIdentity/status/1187852500222693377

    WEBINARS


    Your feedback and news, or tips about the next issue are welcome.

    Fedora BoF at Ohio LinuxFest 2019

    Posted by Fedora Community Blog on November 14, 2019 07:13 AM

    I held a Fedora Birds-of-a-Feather (BoF) session at Ohio LinuxFest in Columbus, Ohio on November 1. Ohio LinuxFest is a regional conference for free and open source software professionals and enthusiasts. Since it’s just a few hours’ drive from my house, it seemed like an obvious event for me to attend. We had a great turnout and a lively conversation over the course of an hour.

    The session started a little slowly as many people were still in the keynote. But a few minutes later, the room was nearly full. I didn’t take a count, but at the peak, we probably had about two dozen attendees. Some were existing Fedora users and some were there to learn more about Fedora.

    I didn’t plan any particular content, since I wanted to let the group drive the discussion based on what was interesting to them. We ended up talking about documentation a fair amount. Two of the attendees created a FAS account that weekend so they could start contributing to the docs! Several more claimed the OLF BoF badge, and I sent them all a follow-up email directing them to the Join SIG’s Welcome page.

    In addition to docs, we talked about the general Fedora release process—how we determine our schedule and how we decide when to release. I brought some USB sticks with Fedora 31 Workstation for people to try. And of course I had stickers, pens, and pins to give away.

    It can be hard to judge the value of events like this, but I’m encouraged by the fact that we got at least six new people to create FAS accounts. I’m looking forward to seeing contributions from those new contributors as they work with the Join SIG to get started.

    One of the attendees sent me an email a few days later that read in part:

    The encouragement I got at OLF pushed me to make my first pull request last night that was already accepted. I am still struggling with git but I got some wonderful help from a community member yesterday in IRC that lead to my first pull request.

    I’d like to have a Fedora booth at this event next year—it represents a great opportunity to grow our contributor community.

    The post Fedora BoF at Ohio LinuxFest 2019 appeared first on Fedora Community Blog.

    ProcDump for Linux in Fedora

    Posted by ABRT team on November 14, 2019 12:00 AM

    ProcDump is a nifty debugging utility which is able to dump the core of a running application once a user-specified CPU or memory usage threshold is triggered. For instance, the invocation procdump -C 90 -p $MYPID instructs ProcDump to monitor the process with ID $MYPID, waiting for a 90% CPU usage spike. Once it hits, ProcDump creates the coredump and exits. This allows you to later inspect the backtrace and memory state at the moment of the spike without having to attach a debugger to the process, helping you determine which parts of your code might be causing performance issues.
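    The trigger model can be sketched in a few lines. This is only an illustration of the idea, not ProcDump's actual implementation: keep sampling a metric and fire the dump action once when the threshold is crossed.

```python
# Illustration of ProcDump's trigger model, not its actual code:
# sample a metric until it crosses the threshold, then dump once and stop.

from typing import Callable, Iterable, Optional

def watch(samples: Iterable[float], threshold: float,
          dump: Callable[[float], None]) -> Optional[float]:
    """Return the first sample at or above threshold after calling dump()."""
    for value in samples:
        if value >= threshold:
            dump(value)      # e.g. write the coredump here
            return value     # -C style: one dump, then exit
    return None

dumps = []
spike = watch([12.0, 35.5, 91.2, 97.0], threshold=90.0, dump=dumps.append)
print(spike, dumps)  # 91.2 [91.2]
```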

    Originally for Windows, ProcDump has since been ported to Linux and is now available in official package repositories for Fedora 30 and later. This means you can install it on your machine simply using DNF with no additional hassle:

    # dnf install procdump
    

    To see ProcDump in action, head over to its GitHub page and check out the examples in the readme.

    ProcDump was originally created to watch for spikes in CPU usage of Windows applications and to create coredumps when they hit. It was first released in 2009 and has since been developed as part of the Windows Sysinternals family, gaining a wide range of configuration options and triggers along the way.

    Although ProcDump for Linux has a somewhat limited feature set for now, it should be enough to be useful in most situations. The latest version allows you to watch for low and high points in CPU and memory usage in a specified running process. You can also limit how many coredumps you want to create and how much time should elapse between them.

    Although there’s not yet a direct application of ProcDump in the ABRT subsystem, we’ll be on the lookout for possible use cases that might benefit the bug catching and reporting workflow. Stay tuned for future progress on this front.

    Laravel homestead box get corrupted!

    Posted by Robbi Nespu on November 13, 2019 08:02 PM

    I kept getting the same error message when I tried to run vagrant up on my machine:

    $ vagrant up
    Bringing machine 'homestead' up with 'virtualbox' provider...
    ==> homestead: Importing base box 'laravel/homestead'...
    Progress: 90%
    There was an error while executing `VBoxManage`, a CLI used by Vagrant
    for controlling VirtualBox. The command and stderr is shown below.
    
    Command: ["import", "/home/rnm/.vagrant.d/boxes/laravel-VAGRANTSLASH-homestead/8.2.1/virtualbox/box.ovf", "--vsys", "0", "--vmname", "ubuntu-18.04-amd64_1573673848596_76505", "--vsys", "0", "--unit", "11", "--disk", "/home/rnm/VirtualBox VMs/ubuntu-18.04-amd64_1573673848596_76505/ubuntu-18.04-amd64-disk001.vmdk"]
    
    Stderr: 0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
    Interpreting /home/rnm/.vagrant.d/boxes/laravel-VAGRANTSLASH-homestead/8.2.1/virtualbox/box.ovf...
    OK.
    0%...
    Progress state: NS_ERROR_INVALID_ARG
    VBoxManage: error: Appliance import failed
    VBoxManage: error: Code NS_ERROR_INVALID_ARG (0x80070057) - Invalid argument value (extended info not available)
    VBoxManage: error: Context: "RTEXITCODE handleImportAppliance(HandlerArg*)" at line 957 of file VBoxManageAppliance.cpp
    

    Arghh.. this is quite annoying. I tried executing the same command again and again, but no luck. I checked my disk space and there is plenty.

    I had no choice, so I deleted ~/.vagrant.d and executed the vagrant command again:

    $ rm -rfv ~/.vagrant.d
    removed '/home/rnm/.vagrant.d/insecure_private_key'
    removed '/home/rnm/.vagrant.d/rgloader/loader.rb'
    removed directory '/home/rnm/.vagrant.d/rgloader'
    removed directory '/home/rnm/.vagrant.d/tmp'
    removed directory '/home/rnm/.vagrant.d/gems/2.4.9'
    removed directory '/home/rnm/.vagrant.d/gems'
    removed '/home/rnm/.vagrant.d/setup_version'
    removed '/home/rnm/.vagrant.d/boxes/laravel-VAGRANTSLASH-homestead/8.2.1/virtualbox/ubuntu-18.04-amd64-disk001.vmdk'
    removed '/home/rnm/.vagrant.d/boxes/laravel-VAGRANTSLASH-homestead/8.2.1/virtualbox/Vagrantfile'
    removed '/home/rnm/.vagrant.d/boxes/laravel-VAGRANTSLASH-homestead/8.2.1/virtualbox/box.ovf'
    removed '/home/rnm/.vagrant.d/boxes/laravel-VAGRANTSLASH-homestead/8.2.1/virtualbox/metadata.json'
    removed directory '/home/rnm/.vagrant.d/boxes/laravel-VAGRANTSLASH-homestead/8.2.1/virtualbox'
    removed directory '/home/rnm/.vagrant.d/boxes/laravel-VAGRANTSLASH-homestead/8.2.1'
    removed '/home/rnm/.vagrant.d/boxes/laravel-VAGRANTSLASH-homestead/metadata_url'
    removed directory '/home/rnm/.vagrant.d/boxes/laravel-VAGRANTSLASH-homestead'
    removed directory '/home/rnm/.vagrant.d/boxes'
    removed '/home/rnm/.vagrant.d/data/machine-index/index.lock'
    removed directory '/home/rnm/.vagrant.d/data/machine-index'
    removed '/home/rnm/.vagrant.d/data/lock.dotlock.lock'
    removed '/home/rnm/.vagrant.d/data/checkpoint_signature'
    removed '/home/rnm/.vagrant.d/data/checkpoint_cache'
    removed '/home/rnm/.vagrant.d/data/lock.machine-action-13105a2995d25174a20c9b1d206a4f8a.lock'
    removed directory '/home/rnm/.vagrant.d/data'
    removed directory '/home/rnm/.vagrant.d'
    

    When I executed vagrant again, it re-downloaded the box and then suddenly it worked!

    $ vagrant up
    Bringing machine 'homestead' up with 'virtualbox' provider...
    ==> homestead: Box 'laravel/homestead' could not be found. Attempting to find and install...
        homestead: Box Provider: virtualbox
        homestead: Box Version: >= 8.2.0
    ==> homestead: Loading metadata for box 'laravel/homestead'
        homestead: URL: https://vagrantcloud.com/laravel/homestead
    ==> homestead: Adding box 'laravel/homestead' (v8.2.1) for provider: virtualbox
        homestead: Downloading: https://vagrantcloud.com/laravel/boxes/homestead/versions/8.2.1/providers/virtualbox.box
        homestead: Download redirected to host: vagrantcloud-files-production.s3.amazonaws.com
    ==> homestead: Successfully added box 'laravel/homestead' (v8.2.1) for 'virtualbox'!
    ==> homestead: Importing base box 'laravel/homestead'...
    ==> homestead: Matching MAC address for NAT networking...
    ==> homestead: Checking if box 'laravel/homestead' version '8.2.1' is up to date...
    ==> homestead: Setting the name of the VM: homestead
    Vagrant is currently configured to create VirtualBox synced folders with
    the `SharedFoldersEnableSymlinksCreate` option enabled. If the Vagrant
    guest is not trusted, you may want to disable this option. For more
    information on this option, please refer to the VirtualBox manual:
    
      https://www.virtualbox.org/manual/ch04.html#sharedfolders
    
    This option can be disabled globally with an environment variable:
    
      VAGRANT_DISABLE_VBOXSYMLINKCREATE=1
    
    or on a per folder basis within the Vagrantfile:
    
      config.vm.synced_folder '/host/path', '/guest/path', SharedFoldersEnableSymlinksCreate: false
    ==> homestead: Clearing any previously set network interfaces...
    ==> homestead: Preparing network interfaces based on configuration...
        homestead: Adapter 1: nat
        homestead: Adapter 2: hostonly
    ==> homestead: Forwarding ports...
        homestead: 80 (guest) => 8000 (host) (adapter 1)
        homestead: 443 (guest) => 44300 (host) (adapter 1)
        homestead: 3306 (guest) => 33060 (host) (adapter 1)
        homestead: 4040 (guest) => 4040 (host) (adapter 1)
        homestead: 5432 (guest) => 54320 (host) (adapter 1)
        homestead: 8025 (guest) => 8025 (host) (adapter 1)
        homestead: 9600 (guest) => 9600 (host) (adapter 1)
        homestead: 27017 (guest) => 27017 (host) (adapter 1)
        homestead: 22 (guest) => 2222 (host) (adapter 1)
    ==> homestead: Running 'pre-boot' VM customizations...
    ==> homestead: Booting VM...
    ==> homestead: Waiting for machine to boot. This may take a few minutes...
        homestead: SSH address: 127.0.0.1:2222
        homestead: SSH username: vagrant
        homestead: SSH auth method: private key
        homestead: Warning: Connection reset. Retrying...
        homestead: Warning: Remote connection disconnect. Retrying...
        homestead: Warning: Remote connection disconnect. Retrying...
        homestead: Warning: Remote connection disconnect. Retrying...
        homestead: Warning: Connection reset. Retrying...
        homestead: Warning: Remote connection disconnect. Retrying...
        homestead: 
        homestead: Vagrant insecure key detected. Vagrant will automatically replace
        homestead: this with a newly generated keypair for better security.
        homestead: 
        homestead: Inserting generated public key within guest...
        homestead: Removing insecure key from the guest if it's present...
        homestead: Key inserted! Disconnecting and reconnecting using new SSH key...
    ==> homestead: Machine booted and ready!
    ==> homestead: Checking for guest additions in VM...
    ==> homestead: Setting hostname...
    ==> homestead: Configuring and enabling network interfaces...
    ==> homestead: Mounting shared folders...
        homestead: /vagrant => /home/rnm/Homestead
        homestead: /home/vagrant/project1 => /home/rnm/workplace/laravel/project1
    ==> homestead: Running provisioner: file...
        homestead: /home/rnm/Homestead/aliases => /tmp/bash_aliases
    ==> homestead: Running provisioner: shell...
        homestead: Running: inline script
    ==> homestead: Running provisioner: shell...
        homestead: Running: inline script
        homestead: 
        homestead: ssh-rsa
    ==> homestead: Running provisioner: shell...
        homestead: Running: inline script
    ==> homestead: Running provisioner: shell...
        homestead: Running: inline script
    ==> homestead: Running provisioner: shell...
        homestead: Running: inline script
    ==> homestead: Running provisioner: shell...
        homestead: Running: inline script
        homestead: Ignoring feature: mariadb because it is set to false
    ==> homestead: Running provisioner: shell...
        homestead: Running: inline script
        homestead: Ignoring feature: ohmyzsh because it is set to false
    ==> homestead: Running provisioner: shell...
        homestead: Running: inline script
        homestead: Ignoring feature: webdriver because it is set to false
    ==> homestead: Running provisioner: shell...
        homestead: Running: /tmp/vagrant-shell20191114-1822-10idfmj.sh
    ==> homestead: Running provisioner: shell...
        homestead: Running: /tmp/vagrant-shell20191114-1822-1pewpva.sh
    ==> homestead: Running provisioner: shell...
        homestead: Running: script: Creating Certificate: project1.test
    ==> homestead: Running provisioner: shell...
        homestead: Running: script: Creating Site: project1.test
    ==> homestead: Running provisioner: shell...
        homestead: Running: inline script
    ==> homestead: Running provisioner: shell...
        homestead: Running: /tmp/vagrant-shell20191114-1822-yobfu3.sh
    ==> homestead: Running provisioner: shell...
        homestead: Running: script: Checking for old Schedule
    ==> homestead: Running provisioner: shell...
        homestead: Running: script: Clear Variables
    ==> homestead: Running provisioner: shell...
        homestead: Running: script: Restarting Cron
    ==> homestead: Running provisioner: shell...
        homestead: Running: script: Restarting Nginx
    ==> homestead: Running provisioner: shell...
        homestead: Running: script: Creating MySQL Database: project1
    ==> homestead: Running provisioner: shell...
        homestead: Running: script: Creating Postgres Database: project1
    ==> homestead: Running provisioner: shell...
        homestead: Running: script: Update Composer
        homestead: Updating to version 1.9.1 (stable channel).
        homestead:    
        homestead: Use composer self-update --rollback to return to version 1.9.0
    ==> homestead: Running provisioner: shell...
        homestead: Running: /tmp/vagrant-shell20191114-1822-o4c7za.sh
    ==> homestead: Running provisioner: shell...
        homestead: Running: script: Update motd
    ==> homestead: Running provisioner: shell...
        homestead: Running: /tmp/vagrant-shell20191114-1822-13ajt5r.sh
    

    nice!

    $ vagrant global-status
    id       name      provider   state   directory                           
    --------------------------------------------------------------------------
    dd946c7  homestead virtualbox running /home/rnm/Homestead                 
     
    The above shows information about all known Vagrant environments
    on this machine. This data is cached and may not be completely
    up-to-date (use "vagrant global-status --prune" to prune invalid
    entries). To interact with any of the machines, you can go to that
    directory and run Vagrant, or you can use the ID directly with
    Vagrant commands from any directory. For example:
    "vagrant destroy 1a2b3c4d"
    
    $ vagrant ssh
    Welcome to Ubuntu 18.04.3 LTS (GNU/Linux 4.15.0-64-generic x86_64)
    
    Thanks for using 
     _                               _                 _ 
    | |                             | |               | |
    | |__   ___  _ __ ___   ___  ___| |_ ___  __ _  __| |
    | '_ \ / _ \| '_ ` _ \ / _ \/ __| __/ _ \/ _` |/ _` |
    | | | | (_) | | | | | |  __/\__ \ ||  __/ (_| | (_| |
    |_| |_|\___/|_| |_| |_|\___||___/\__\___|\__,_|\__,_|
    
    * Homestead v9.2.2 released
    * Settler v8.2.0 released
    
    0 packages can be updated.
    0 updates are security updates.
    
    
    vagrant@homestead:~$ 
    

    It seems my Vagrant box was corrupted, and re-downloading the box was the solution.

    Neuroscience planets have moved

    Posted by The NeuroFedora Blog on November 13, 2019 11:37 AM

    Just a short notification that the two neuroscience planets have moved to the NeuroFedora URL space. They can now be found at:

    The planet instances on my personal space will remain functional for the next few weeks till the end of November, after which I'll stop updating them. Please update your links/bookmarks. Since this is all hosted on Github pages, setting up permanent redirects is not easy but we'll see what we can do.

    Planet Neuroscientists collects RSS feeds from various neuroscience blogs and web-sites. This is a good read for non-experts. Planet Neuroscience, on the other hand, collects RSS feeds from various publication sources. So, the main content here will consist of peer-reviewed publications and pre-prints. They are updated twice a day at the moment, which seems to work well enough.

    You can add both these to your RSS feed reader by using the "Feeds" drop-down in the top right hand corner. You can also download the complete blog-roll there and choose individual feeds to add.

    These are both hosted on the NeuroFedora Github space. You can suggest more feeds to be added to the planets too, either by opening an issue/pull request in the Github source repositories, or by getting in touch with the NeuroFedora team via any of our communication channels.


    NeuroFedora is a volunteer-driven initiative and contributions in any form are always welcome. You can get in touch with us here. We are happy to help you learn the skills needed to contribute to the project. In fact, that is one of the major goals of the initiative: to spread the technical knowledge that is necessary to develop software for Neuroscience.

    Fedora and the November 12 Hardware Vulnerabilities.

    Posted by Justin M. Forbes on November 13, 2019 08:55 AM

    As all of the news sites are picking up stories on the latest hardware vulnerabilities, I felt it best to give the Fedora update. I won't go into detail on the vulnerabilities themselves, as Red Hat has already done a good write-up on each of the CVEs, which I link to below. There is one case to note where Fedora differs from the Red Hat write-ups. For "Transactional Synchronization Extensions (TSX) Asynchronous Abort", Fedora has chosen to default to tsx=off, which disables the TSX feature. This will likely have no impact on most users, but as Fedora has taken a different stance from the Red Hat documentation here, it should be noted.

    Currently, kernel-5.3.11 is available for all supported releases. You will also need microcode_ctl-2.1-33 to take advantage of these mitigation options.  

    More detailed overviews of these vulnerabilities have been published by Red Hat and are available publicly via the Red Hat Customer Portal: 

    https://access.redhat.com/security/vulnerabilities/ifu-page-mce

    https://access.redhat.com/solutions/tsx-asynchronousabort

    https://access.redhat.com/solutions/i915-graphics


    Edit images on Fedora easily with GIMP

    Posted by Fedora Magazine on November 13, 2019 07:52 AM

    GIMP (short for GNU Image Manipulation Program) is free and open-source image manipulation software. With many capabilities ranging from simple image editing to complex filters, scripting and even animation, it is a good alternative to popular commercial options.

    Read on to learn how to install and use GIMP on Fedora. This article covers basic daily image editing.

    Installing GIMP

    GIMP is available in the official Fedora repository. To install it run:

    sudo dnf install gimp

    Single window mode

    Once you open the application, it shows a dark-themed window with the toolbox and the main editing area. Note that it has two window modes that you can switch between by selecting Windows -> Single Window Mode. By checking this option, all components of the UI are displayed in a single window; otherwise, they will be in separate windows.

    Loading an image

    <figure class="wp-block-image">Fedora 30 Background</figure>

    To load an image, go to File -> Open and choose your image file.

    Resizing an image

    To resize an image, you can scale it based on several units, including pixels and percentage, the two that are most often handy when editing images.

    Let’s say we need to scale down the Fedora 30 background image to 75% of its current size. To do that, select Image -> Scale and then on the scale dialog, select percentage in the unit drop down. Next, enter 75 as width or height and press the Tab key. By default, the other dimension will automatically resize in correspondence with the changed dimension to preserve aspect ratio. For now, leave other options unchanged and press Scale.

    <figure class="wp-block-image">Scale Dialog In GIMP</figure>

    The image scales to 75 percent of its original size.
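    The proportional resize the Scale dialog performs is plain arithmetic. Here is a minimal Python sketch (the 1920x1080 dimensions are just an example, not taken from the article):

    ```python
    def scale_by_percent(width, height, percent):
        """Scale both dimensions by the same percentage to preserve aspect ratio."""
        return round(width * percent / 100), round(height * percent / 100)

    # Scaling an example 1920x1080 image down to 75% of its size
    print(scale_by_percent(1920, 1080, 75))  # (1440, 810)
    ```

    Entering 75 for either dimension in the dialog and pressing Tab does the equivalent computation for the other dimension.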

    Rotating images

    Rotating is a transform operation, so you find it under Image -> Transform in the main menu, where there are options to rotate the image by 90 or 180 degrees. There are also options for flipping the image vertically or horizontally under the same menu.

    Let’s say we need to rotate the image 90 degrees. After applying a 90-degree clockwise rotation and horizontal flip, our image will look like this:

    <figure class="wp-block-image"><figcaption>Transforming an image with GIMP</figcaption></figure>

    Adding text

    Adding text is very easy. Just select the A icon from the toolbox, and click on a point on your image where you want to add the text. If the toolbox is not visible, open it from Windows -> New Toolbox.

    As you edit the text, you might notice that the text dialog has font customization options including font family, font size, etc.

    <figure class="wp-block-image">Add Text To Images<figcaption>Adding text to image in GIMP</figcaption></figure>

    Saving and exporting

    You can save your edit as a GIMP project with the xcf extension from File -> Save or by pressing Ctrl+S. Or you can export your image in formats such as PNG or JPEG. To export, go to File -> Export As or hit Ctrl+Shift+E and you will be presented with a dialog where you can select the output format and file name.

    Fedora Women’s Day (FWD) 2019

    Posted by Solanch96 on November 12, 2019 03:19 PM

    FWD (Fedora Women's Day) is an annual event committed to fostering diversity and inclusion in the Fedora community.


    Fedora status updates: November 2019

    Posted by Fedora Community Blog on November 12, 2019 06:36 AM
    Fedora community elections

    Welcome to the monthly set of updates on key areas within Fedora. This update includes Fedora Council representatives, Fedora Editions, and Fedora Objectives. The content here is based on the weekly updates submitted to the Fedora Council, published to the project dashboard.

    Minimization objective

    The Minimization objective submitted an updated proposal to the Fedora Council. Objective lead Adam Šamalík developed a logic model to help define the future work of the Minimization objective.

    Silverblue

    The Fedora Silverblue team was not able to get the necessary changes into Fedora 31 to support having Flatpak pre-installed. They are looking at the possibility of re-spinning the Silverblue ISO to incorporate the changes. But they did update the Fedora 31 Flatpak runtime. The team updated the Flatpak’ed GNOME applications to GNOME 3.34 and built them against the Fedora 31 runtime.

    The team is planning for Fedora 32. See the Kanban board for more information.

    Workstation WG

    The Workstation WG is looking at how to increase membership and participation in the working group. Discussion is ongoing in issue #106. But the WG has started a subgroup investigating the possibility of encrypting user home directories by default. Meeting agendas and notes are available on the GNOME etherpad. Discussion is also occurring on the desktop mailing list.

    Of course, Fedora 31 released at the end of the month. Fedora 31 Workstation included better support for non-English users, improved H.264 support, and many other enhancements. The working group hopes to support Wayland by default on Nvidia GPUs in Fedora 32.

    The post Fedora status updates: November 2019 appeared first on Fedora Community Blog.

    Upgrade Fedora 30 to Fedora 31

    Posted by Robbi Nespu on November 12, 2019 05:42 AM
    <script src="https://gist.github.com/30fac19b83d93eca068f1295ef33cc0f.js"> </script>

    How to override bug details in Kiwi TCMS

    Posted by Kiwi TCMS on November 11, 2019 09:11 AM

    Starting with version 7.0, Kiwi TCMS pages displaying URLs to bugs also contain an info icon which shows additional information as a tooltip. These are designed to provide more contextual information about the bug. By default the tooltip shows the OpenGraph metadata for that URL. This article will explain how to override this in two different ways.

    bug details shown

    Option #1: using the caching layer

    Additional bug information is cached on the application layer. The cache key is the bug URL! By default Kiwi TCMS uses local-memory caching, which isn't accessible to external processes but can be reconfigured very easily. This example changes the CACHES setting to use a directory on the file system, like so:

    CACHES = {
        'default': {
            'BACKEND': 'django.core.cache.backends.filebased.FileBasedCache',
            'LOCATION': '/tmp/kiwi-cache',
            'TIMEOUT': 3600,
        }
    }
    

    Then you need to poll your 3rd party bug tracker (and/or other systems) and update the cache for each URL:

    from django.core.cache import cache
    from tcms.core.contrib.linkreference.models import LinkReference
    
    for reference in LinkReference.objects.filter(is_defect=True):
        # possibly filter objects coming only from your own bug tracker
        # in case there are multiple trackers in use
    
        # custom methods to grab more information. Must return strings
        title = fetch_title_from_bug_tracker(reference.url)
        description = fetch_description_from_bug_tracker(reference.url)
    
        # store the information in Kiwi TCMS cache
        cache.set(reference.url, {'title': title, 'description': description})
    

    Then execute the Python script above regularly. For example use the following as your cron script

    #!/bin/bash
    export VIRTUAL_ENV=/venv
    export PATH=/venv/bin:${PATH}
    cat /path/to/cache_updater.py | /Kiwi/manage.py shell
    

    bug details from customized cache

    IMPORTANT

    • Kiwi TCMS expires cache entries after an hour. Either change the TIMEOUT setting shown above or run your script more frequently
    • How to modify default Kiwi TCMS settings is documented here
    • The Python + Bash scripts above don't need to be on the same system where Kiwi TCMS is hosted. However they need the same Python 3 virtualenv and cache settings as Kiwi TCMS does
    • Information about Django's cache framework and available backends can be found here
    • memcached is a supported cache backend option, see here
    • django-elasticache is a backend for Amazon ElastiCache which provides several configuration examples
    • Both django-redis and django-redis-cache are good libraries which support Redis
    • Any 3rd party libraries must be pip3 install-ed into your own docker image

    Option #2: extend bug tracker integration

    Let's say you are already running a customized Docker image of Kiwi TCMS. Then you may opt to extend the existing bug tracker integration code, which provides the information shown in the tooltip. In this example I've extended the KiwiTCMS bug tracker implementation, but you can even provide your own from scratch:

    # import needed only if this class lives outside tcms/issuetracker/kiwitcms.py
    from tcms.issuetracker.kiwitcms import KiwiTCMS


    class ExtendedBugTracker(KiwiTCMS):
        def details(self, url):
            result = super().details(url)

            result['title'] = 'EXTENDED: ' + result['title']
            result['description'] += '<h1>IMPORTANT</h1>'

            return result
    

    Then import the new ExtendedBugTracker class inside tcms/issuetracker/types.py like so

    index 9ad90ac..2c76621 100644
    --- a/tcms/issuetracker/types.py
    +++ b/tcms/issuetracker/types.py
    @@ -17,6 +17,9 @@ from django.conf import settings
    
     from tcms.issuetracker.base import IssueTrackerType
     from tcms.issuetracker.kiwitcms import KiwiTCMS  # noqa
    +from tcms.issuetracker.kiwitcms import ExtendedBugTracker
    

    and change the bug tracker type, via https://tcms.example.com/admin/testcases/bugsystem/, to ExtendedBugTracker.

    bug details extended internally

    IMPORTANT

    • ExtendedBugTracker may live anywhere on the filesystem but Python must be able to import it
    • It is best to bundle all of your customizations into a Python package and pip3 install it into your customized docker image
    • ExtendedBugTracker must be imported into tcms/issuetracker/types.py in order for the admin interface and other functions to find it. You may also place the import at the bottom of tcms/issuetracker/types.py
    • API documentation for bug tracker integration can be found here
    • Rebuilding the docker image is outside the scope of this article. Have a look at this Dockerfile for inspiration

    Happy testing!

    Understanding “disk space math”

    Posted by Fedora Magazine on November 11, 2019 07:44 AM

    Everything in a PC, laptop, or server is represented as binary digits (a.k.a. bits, where each bit can only be 1 or 0). There are no characters as we write them, or numbers as we write them, anywhere in a computer's memory or secondary storage such as disk drives. For general purposes, the unit of measure for groups of binary bits is the byte: eight bits. Bytes are an agreed-upon measure that helped standardize computer memory, storage, and how computers handle data.

    There are various terms in use to specify the capacity of a disk drive (either magnetic or electronic). The same measures are applied to a computer's random access memory (RAM) and other memory devices that inhabit your computer. So now let's see how the numbers are made up.

    Prefixes are used with the number that specifies the capacity of the device. The prefixes designate a multiplier that is to be applied to the number that preceded the prefix. Commonly used prefixes are:

    • Kilo = 10³ = 1,000 (one thousand)
    • Mega = 10⁶ = 1,000,000 (one million)
    • Giga = 10⁹ = 1,000,000,000 (one billion)
    • Tera = 10¹² = 1,000,000,000,000 (one trillion)

    As an example, 500 GB (gigabytes) is 500,000,000,000 bytes.

    The units used to specify memory and storage capacity in advertisements, on boxes in the store, and so on are in the decimal system as shown above. However, since computers only use binary bits, the actual capacity of these devices differs from the advertised capacity.

    You saw that the decimal numbers above were shown with their equivalent powers of ten. In the binary system, numbers can be represented as powers of two. The table below shows how bits are used to represent powers of two in an 8-bit byte. At the bottom of the table there is an example of how the decimal number 109 can be represented as a binary number held in a single byte of 8 bits (01101101).

    <figure class="wp-block-table">

    Eight bit binary number

                       Bit 7   Bit 6   Bit 5   Bit 4   Bit 3   Bit 2   Bit 1   Bit 0
    Power of 2         2⁷      2⁶      2⁵      2⁴      2³      2²      2¹      2⁰
    Decimal Value      128     64      32      16      8       4       2       1
    Example Number     0       1       1       0       1       1       0       1

    </figure>

    The example bit values comprise the binary number 01101101. To get the equivalent decimal value just add the decimal values from the table where the bit is set to 1. That is 64 + 32 + 8 + 4 + 1 = 109.

    By the time you get out to 2³⁰ you have decimal 1,073,741,824 with just 31 bits (don't forget the 2⁰). You've got a large enough number to start specifying memory and storage sizes.
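    You can check these conversions with a couple of lines of Python:

    ```python
    # The example byte 01101101 from the table, converted to decimal
    print(int("01101101", 2))   # 109

    # The same value as a sum of the set bits' place values
    print(64 + 32 + 8 + 4 + 1)  # 109

    # 2 raised to the 30th power
    print(2 ** 30)              # 1073741824
    ```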

    Now comes what you have been waiting for. The table below lists common designations as they are used for labeling decimal and binary values.

    <figure class="wp-block-table">

    Decimal                                       Binary
    1 KB (Kilobyte) = 1,000 bytes                 1 KiB (Kibibyte) = 1,024 bytes
    1 MB (Megabyte) = 1,000,000 bytes             1 MiB (Mebibyte) = 1,048,576 bytes
    1 GB (Gigabyte) = 1,000,000,000 bytes         1 GiB (Gibibyte) = 1,073,741,824 bytes
    1 TB (Terabyte) = 1,000,000,000,000 bytes     1 TiB (Tebibyte) = 1,099,511,627,776 bytes

    </figure>

    Note that all of the quantities of bytes in the table above are expressed as decimal numbers. They are not shown as binary numbers because those numbers would be more than 30 characters long.

    Most users and programmers need not be concerned with the small differences between the binary and decimal storage size numbers. If you’re developing software or hardware that deals with data at the binary level you may need the binary numbers.
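    This difference is why a drive advertised as 500 GB shows up as roughly 465 GiB in tools that report binary units; a quick Python check:

    ```python
    advertised = 500 * 10**9     # a "500 GB" drive, measured in decimal bytes

    in_gib = advertised / 2**30  # the same capacity expressed in binary units
    print(f"{in_gib:.2f} GiB")   # 465.66 GiB
    ```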

    As for what this means to your PC: your PC will make use of the full capacity of your storage and memory devices. If you want to see the capacity of your disk drives, thumb drives, etc., the Disks utility in Fedora will show you the actual capacity of the storage device, in bytes, as a decimal number.

    There are also command line tools that can provide you with more flexibility in seeing how your storage bytes are being used. Two such command line tools are du (for files and directories) and df (for file systems). You can read about these by typing man du or man df at the command line in a terminal window.
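    If you prefer Python over du and df, the standard library exposes the same file system totals. A minimal sketch (the "/" mount point is just an example):

    ```python
    import shutil

    # Named tuple with total, used, and free space, all in bytes
    usage = shutil.disk_usage("/")
    for name, value in zip(("total", "used", "free"), usage):
        # Show each figure in decimal (GB) and binary (GiB) units side by side
        print(f"{name}: {value / 10**9:.2f} GB = {value / 2**30:.2f} GiB")
    ```

    Comparing the GB and GiB columns makes the decimal/binary gap from the table above concrete.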


    Photo by Franck V. on Unsplash.

    Episode 169 - What happens when leadership doesn't care about security?

    Posted by Open Source Security Podcast on November 11, 2019 01:19 AM

    Josh and Kurt talk about government security incidents. The security concerns at the government level often have real life and death consequences. What happens when the leadership knowingly disregards security policy?

    <iframe allowfullscreen="" height="90" mozallowfullscreen="" msallowfullscreen="" oallowfullscreen="" scrolling="no" src="https://html5-player.libsyn.com/embed/episode/id/11981144/height/90/theme/custom/thumbnail/yes/direction/backward/render-playlist/no/custom-color/6e6a6a/" style="border: none;" webkitallowfullscreen="" width="100%"></iframe>

    Show Notes