Fedora People

New openQA tests: update installer tests and desktop app start/stop test

Posted by Adam Williamson on January 23, 2019 10:20 AM

It’s been a while since I wrote about significant developments in Fedora openQA, so today I’ll be writing about two! I wrote about one of them a bit in my last post, but that was primarily about a bug I ran into along the way, so now let’s focus on the changes themselves.

Testing of install media built from packages in updates-testing

We have long had a problem in Fedora testing that we could not always properly test installer changes. This is most significant during the period of development after a new release has branched from Rawhide, but before it is released as the new stable Fedora release (we use the name ‘Branched’ to refer to a release in this state; in a month or so, Fedora 30 will branch from Rawhide and become the current Branched release).

During most of this time, the Bodhi update system is enabled for the release. New packages built for the release do not immediately appear in any repositories, but – as with stable releases – must be submitted as “updates”, sometimes together with related packages. Once submitted as an update, the package(s) are sent to the “updates-testing” repository for the release. This repository is enabled on installed Branched systems by default (this is a difference from stable releases), so testers who have already installed Branched will receive the package(s) at this point (unless they disable the “updates-testing” repository, which some do). However, the package is still not truly a part of the release at this point. It is not included in the nightly testing composes, nor will it be included in any Beta or Final candidate composes that may be run while it is in updates-testing. That means that if the actual release media were composed while the package was still in updates-testing, it would not be a part of the release proper. Packages only become part of these composes once they pass through Bodhi and are ‘pushed stable’.

This system allows us to back out packages that turn out to be problematic, and hopefully to prevent them from destabilizing the test and release composes by not pushing them stable if they turn out to cause problems. It also means more conservative testers have the option to disable the “updates-testing” repository and avoid some destabilizing updates, though of course if all the testers did this, no-one would be finding the problems. In the last few years we have also been running several automated tests on updates (via Taskotron, openQA and the CI pipeline) and reporting results from those to Bodhi, allowing packagers to pull the update if the tests find problems.

However, there has long been a bit of a problem in this process: if the update works fine on an installed system but causes problems if included in (for example) an installer image or live image, we have no very good way to find this out. There was no system for automatically building media like this that include the updates currently in testing so they could be tested. The only way to find this sort of problem was for testers to manually create test media – a process that is not widely understood, is time consuming, and can be somewhat difficult. We also of course could not do automated testing without media to test.

We’ve looked at different ways of addressing this in the past, but ultimately none of them came to much (yet), so last year I decided to just go ahead and do something. And after a bit of a roadblock (see that last post), that something is now done!

Our openQA now has two new tests it runs on all the updates it tests. The first test – here’s an example run – builds a network install image, and the second – example run – tests it. Most importantly, any packages from the update under testing are both used in the process of building the install image (if they are relevant to that process) and included in the installer image (if they are packages which would usually be in such an image). Thus if the update breaks the production of the image, or the basic functionality of the image itself, this will be caught. This (finally) means that we have some idea whether a new anaconda, lorax, pykickstart, systemd, dbus, blivet, dracut or any one of dozens of other key packages might break the installer. If you’re a packager and you see that one of these two tests has failed for your update, we should look into that! If you’re not sure how to go about that, you can poke me, bcl, or the anaconda developers in Freenode #anaconda, and we should be able to help.

It is also possible for a human tester to download the image produced by the first test and run more in-depth tests on it manually; I haven’t yet done anything to make that possibility more visible or easier, but will try to look into ways of doing that over the next few weeks.

GNOME application start/stop testing

My colleague Lukáš Růžička has recently been looking into what we might be able to do to streamline and improve our desktop application testing, something I’d honestly been avoiding because it seemed quite intractable! After some great work by Lukáš, one major fruit of this work is now visible in Fedora openQA: a GNOME application start/stop test suite. Here’s an example run of it – note that more recent runs have a ton of failures caused by a change in GNOME, Lukáš has proposed a change to the test to address that but I have not yet reviewed it.

This big test suite just tests starting and then exiting a large number of the default installed applications on the Fedora Workstation edition, making sure they both launch and exit successfully. This is of course pretty easy for a human to do – but it’s extremely tedious and time-consuming, so it’s something we don’t do very often at all (usually only a handful of times per release cycle), meaning we may not notice that an application which perhaps we don’t commonly use has a very critical bug (like failing to launch at all) for some time.

Making an automated system like openQA do this is actually quite a lot of work, so it was a great job by Lukáš to get it working. Now by monitoring the results of this test on the nightly composes closely, we should find out much more quickly if one of the tested applications is completely broken (or has gone missing entirely).

Fedora – I’m coming back…..

Posted by Paul Mellors [MooDoo] on January 23, 2019 10:04 AM

It’s been a while since I’ve done anything Fedora, but I’ve decided that my sabbatical has to end. With this in mind, I’m going to start slowly coming back to the Fedora ecosystem. It’ll be slow to start with, but I’ve missed it and I need to come back. I hope you’ll have me.

See you soon.

My info if you’re interested :

https://fedoraproject.org/wiki/User:Paulmellors

Mind map yourself using FreeMind and Fedora

Posted by Fedora Magazine on January 23, 2019 08:00 AM

A mind map of yourself sounds a little far-fetched at first. Is this process about neural pathways? Or telepathic communication? Not at all. Instead, a mind map of yourself is a way to describe yourself to others visually. It also shows connections among the characteristics you use to describe yourself. It’s a useful way to share information with others in a clever but also controllable way. You can use any mind map application for this purpose. This article shows you how to get started using FreeMind, available in Fedora.

Get the application

The FreeMind application has been around a while. While the UI is a bit dated and could use a refresh, it’s a powerful app that offers many options for building mind maps. And of course it’s 100% open source. There are other mind mapping apps available for Fedora and Linux users, as well. Check out this previous article that covers several mind map options.

Install FreeMind from the Fedora repositories using the Software app if you’re running Fedora Workstation. Or use this sudo command in a terminal:

$ sudo dnf install freemind

You can launch the app from the GNOME Shell Overview in Fedora Workstation. Or use the application start service your desktop environment provides. FreeMind shows you a new, blank map by default:

<figure class="wp-block-image"><figcaption>FreeMind initial (blank) mind map
</figcaption></figure>

A map consists of linked items or descriptions — nodes. When you think of something related to a node you want to capture, simply create a new node connected to it.

Mapping yourself

Click in the initial node. Replace it with your name by editing the text and hitting Enter. You’ve just started your mind map.

What would you think of if you had to fully describe yourself to someone? There are probably many things to cover. How do you spend your time? What do you enjoy? What do you dislike? What do you value? Do you have a family? All of this can be captured in nodes.

To add a node connection, select the existing node, and hit Insert, or use the “light bulb” icon for a new child node. To add another node at the same level as the new child, use Enter.

Don’t worry if you make a mistake. You can use the Delete key to remove an unwanted node. There are no rules about content. Short nodes are best, though. They allow your mind to move quickly when creating the map. Concise nodes also let viewers scan and understand the map easily later.

This example uses nodes to explore each of these major categories:

<figure class="wp-block-image"><figcaption>Personal mind map, first level</figcaption></figure>

You could do another round of iteration for each of these areas. Let your mind freely connect ideas to generate the map. Don’t worry about “getting it right.” It’s better to get everything out of your head and onto the display. Here’s what a next-level map might look like.

<figure class="wp-block-image"><figcaption>Personal mind map, second level</figcaption></figure>

You could expand on any of these nodes in the same way. Notice how much information you can quickly understand about John Q. Public in the example.

How to use your personal mind map

This is a great way to have team or project members introduce themselves to each other. You can apply all sorts of formatting and color to the map to give it personality. These are fun to do on paper, of course. But having one on your Fedora system means you can always fix mistakes, or even make changes as you change.

Have fun exploring your personal mind map!


Photo by Daniel Hjalmarsson on Unsplash.

Cockpit 186

Posted by Cockpit Project on January 23, 2019 12:00 AM

Cockpit is the modern Linux admin interface. We release regularly. Here are the release notes from version 186.

Services: The Services page has been redesigned to make it easier to understand

services screenshot

Responsive System Overview page

These UI elements now adjust their layout to small and mobile browser windows. This improvement will come to more pages in the near future.

system page screenshot

Try it out

Cockpit 186 is available now:

AwesomeSlides for converting LibreOffice ODP into revealjs

Posted by Pablo Iranzo Gómez on January 22, 2019 07:51 PM

Introduction

For some time now, I’ve wanted to make the presentations I gave in the past available, and since I added support to my blog to render revealjs slides, I also wanted to publish other presentations I did in the past, probably (or for sure) outdated, that were sitting on my computer drive.

The presentations have already gone through several transformations, but currently they are stored as LibreOffice ODP files, which made this a bit difficult.

Some of the conversion software generated a ‘screenshot’ of each slide; this, however, had some pros and cons:

  • Pros:
      • Format was kept almost 100%
      • Easy to showcase with any ‘gallery’ plugin
  • Cons:
      • Text was lost, so no links, no indexing, etc.

Alternatively, I could attach a link to the ODP file for end users to download and view, but that would increase the blog’s size (no hard constraints, but it sounded like nonsense to me), so I continued my search for a solution or workaround.

The approach

Thanks to https://github.com/cliffe/AwesomeSlides, which uses Perl and a set of commonly available libraries on any distro to do the job, I was able to convert my presentations from ODP to HTML quite easily:

for file in *.odp; do
    perl convert-to-awesome.pl "$file"
done

This resulted in ‘master’ html files in the slides_out folder, plus a folder containing the images and other media used by the presentation.

AwesomeSlides does the conversion to the ‘revealjs’ format, plus adds extra features, transitions, etc. to make the slides fancier, but in my case I was interested in plain Markdown, so the next tool to come to the rescue was pandoc:

for file in *.html; do
    pandoc -t markdown "$file" -o "$file.md"
done

The end result is of course not clean at all, and not directly usable by the Pelican plugin to render the images, etc.

The post-processing

One of the things needed (and which I also did for other slides) was to move the resulting ‘.md’ file to the same folder as the images, and move both into the content/presentations folder of my website source.

Once there, a set of find/replace operations was required:

| find          | replacement  | description                                                                              |
|---------------|--------------|------------------------------------------------------------------------------------------|
| $folder       | {attach}     | Define images included as {attach} for Pelican to pick them up                            |
| ————-         | (empty)      | Remove underlining after titles                                                            |
| strange chars | normal chars | Some characters were lost (accents, etc.) and replaced by another one, to spellcheck later |
| \n\n\n        | \n\n         | Remove extra new lines                                                                     |
| -⠀⠀⠀⠀⠀        | -⠀           | Remove extra spaces before paragraph                                                       |

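For what it's worth, a rough Python sketch of how these replacements could be scripted over the generated Markdown files might look like the following; the file glob, the folder prefix and the exact regular expressions are assumptions of mine, not the actual script used:

import glob
import re

FOLDER = "slides_out"  # hypothetical prefix used in the generated image links

for path in glob.glob("*.md"):
    with open(path, encoding="utf-8") as handle:
        text = handle.read()

    # Point image links at {attach} so Pelican picks them up
    text = text.replace(FOLDER + "/", "{attach}")
    # Drop the dashed underlining pandoc leaves after titles
    text = re.sub(r"^-{4,}\s*$", "", text, flags=re.MULTILINE)
    # Collapse runs of blank lines down to a single blank line
    text = re.sub(r"\n{3,}", "\n\n", text)

    with open(path, "w", encoding="utf-8") as handle:
        handle.write(text)
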
Other manual steps involved

  • Put ## in front of each title
  • Adjust empty slides (—- followed by another —-)
  • And a lot more :-)

The good thing, in the end, is that with some additional work, I was able to bring back ‘online’ my older presentations, now listed in the presentations category.

Hello world

Posted by The NeuroFedora Blog on January 22, 2019 07:27 PM

Hello world! The NeuroFedora blog is now live!

Please welcome Star Labs to the LVFS

Posted by Richard Hughes on January 22, 2019 02:11 PM

A few weeks ago Sean from Star Labs asked me to start the process of joining the LVFS. Star Labs is a smaller Linux-friendly OEM based just outside London not far from where I grew up. We identified three types of firmware we could update, the system firmware, the EC firmware and the SSD firmware. The first should soon be possible with the recent pledge of capsule support from AMI, and we’ve got a binary for testing now. The EC firmware will need further work, although now I can get the IT8987 into (and out of) programming mode. The SSD firmware needed a fix to fwupd (to work around a controller quirk), but with the soon-to-be released fwupd it can already be updated:

Sean also shipped me some loaner hardware that could be recovered using manufacturing tools if I broke it, which makes testing the ITE EC flashing possible. The IT89 chip is subtly different to the IT87 chip in other hardware like the Clevo reference designs, but this actually makes it easier to support as there are fewer legacy modes. This will be a blog post all of its own.

In playing with hardware intermittently for a few weeks, I’ve got a pretty good feel for the “Lap Top” and “Star Lite” models. There is a lot to like, the aluminium cases feel both solid and tactile (like an XPS 13) and it feels really “premium” unlike the Clevo reference hardware. Star Labs doesn’t use the Clevo platform any more, which allows it to take some other bolder system design choices. Some things I love: the LED IPS screen, USB-C charging, the trackpad and keyboard. The custom keyboard design is actually a delight to use; I actually prefer it to my Lenovo P50 and XPS 13 for key-placement and key-travel. The touchpad seems responsive, and the virtual buttons work well unlike some of the other touchpads from other Linux-friendly OEMs. The battery life seems superb, although I’ve not really done enough discharge→charge→discharge cycles to be able to measure it properly. The front-facing camera is at the top of the bezel where it belongs, which is something the XPS 13 has only just fixed in the latest models. Nobody needs to see up my nose.

There are a few things that could be improved with the hardware in my humble opinion: The shiny bezel around the touchpad is somewhat distracting on an otherwise beautifully matte chassis. There is also only a microSD slot, when all my camera cards are full sized. The RAM is soldered in, and so can’t be upgraded in the future, and the case screws are not “captive” like the new Lenovos. It also doesn’t seem to have a Thunderbolt interface, which might matter if you want to use this thing docked with a bazillion things plugged in. Some of these are probably cost choices; the Lap Top is significantly cheaper than the XPS 13 developer edition I keep comparing it against in my head.

I was also curious to try the vendor-supplied customized Ubuntu install which was supplied with it. It just worked, in every way, and for those installing other operating systems like Fedora or Arch all the different distros have been pre-tested with extra notes – a really nice touch. This is where Star Labs really shine, these guys really care about Linux and it shows. I’ve been impressed with the Lab Top and I’ll be sad to return it when all the hardware is supported by fwupd and firmware releases are available on the LVFS.

So, if you’re using Star Drive hardware already then upgrade fwupd to the latest development version, enable the LVFS testing remote using fwupdmgr enable-remote lvfs-testing and tell us how the process goes. For technical reasons you need to power down the machine and power it back up rather than just doing a “warm” reboot. In a few weeks we’ll do a proper fwupd release and push the firmware to stable.

Additional properties in .editorconfig

Posted by Matěj Cepl on January 21, 2019 11:30 PM

For some inexplicable reason vim-editorconfig stopped working with my latest build of neovim. I am not sure why, and I haven’t had enough time to debug it properly. As a workaround I have temporarily (?) switched to editorconfig-vim. The former plugin is written entirely in VimL, so it was no problem to extend the properties it supports by two more, spell_enabled and spell_language, corresponding to the spell and spelllang vim options respectively. The latter plugin is in Python and it is a bit more complicated, but fortunately it has an explicit hook for custom plugins. So, I could write the hook into a special plugin/ file (not into ~/.vimrc, because commands from plugins in ~/.vim/pack are not yet available at that point).
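
For context, with these extensions a project's .editorconfig can then opt in to spell checking with something like this (a minimal sketch; the glob and the language value are just examples, not taken from the post):

[*.md]
spell_enabled = true
spell_language = en_us

The hook itself looks like this: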

function! FiletypeHook(config)
     if has_key(a:config, 'spell_enabled')
         let spell_enabled = a:config['spell_enabled']
         echom printf("EditorConfig: spell_enabled = %s",
            \ spell_enabled)
         if spell_enabled == "true"
             let &spell = 1
         else
             let &spell = 0
         endif
     endif

     if has_key(a:config, 'spell_language')
        let s:languages = map(filter(globpath(&runtimepath,
          \ 'spell/*', 1, 1),
          \ '!isdirectory(v:val)'), 'fnamemodify(v:val, '':t'')')
        echom printf("EditorConfig: s:languages = %s",
          \ s:languages)

         let spell_language = a:config['spell_language']

         " set bomb if necessary
         if spell_language[-3:] == "BOM"
             let &bomb = 1
             let spell_language = spell_language[:-4]
         endif

         echom printf("EditorConfig: spell_language = %s",
           \ spell_language)
        " We need to accept even dialects of languages, e.g., en_gb
         let lang = split(spell_language, '_')[0]
         echom printf("EditorConfig: spell_language = %s",
           \ lang)
         if !empty(filter(copy(s:languages),
           \ 'stridx(v:val, lang) == 0'))
             echom printf("EditorConfig: spell_language = %s",
               \ spell_language)
             let &spelllang = spell_language
         endif
     endif

     return 0   " Return 0 to show no error happened
endfunction
call editorconfig#AddNewHook(function('FiletypeHook'))

Seems to work like a charm. Comments on the code are, of course, more than welcome.

Fedora Strategy FAQ Part 1: What is Actually Changing?

Posted by Fedora Community Blog on January 21, 2019 07:49 PM

This post is part of an FAQ series about the updated strategic direction published by the Fedora Council.

What is actually changing?

This strategy is an ideal we’d like the project to work towards, but it’s also a path that we’re already on. Some parts of this aren’t a change at all, but a clarification of the general consensus over the last few years.

Project Terminology and Teams

We want “Fedora” to be the Project. For things Fedora makes, we’ll describe them as “Fedora Thingname”, like “Fedora IoT” or “the Fedora RPM Package Collection”. I’ve been saying this for a while, but I want to make it official. (Just as “Red Hat” isn’t RHEL.)

We’re going to set up a hosted Taiga instance (more on this soon). Anyone in Fedora will be able to create a project there, and every team in Fedora should. This will create a directory listing which will supplant The Subprojects wiki page and have three significant advantages:

  1. Available all in one place
  2. Searchable in a reasonable, non-wiki way
  3. Not a wiki: inherently self-updating and sortable by activity

This is the minimum required to be a Fedora Team. Teams may have more formalized membership, a charter with rules for voting, regular meetings, regular reports to FESCo or Mindshare, etc. We decided against formalizing rules for words like “Working Group” or “Subproject” and instead agreed that any Team can use those labels if they like.

Teams providing services should formalize a menu of offerings. They should describe the criteria by which those offerings are provided. I think the Council should provide some standard ideas for reasonable criteria. These could include team structure and activity level, user base, a link to current Fedora Objective, etc.

More Autonomy

We need a way for Teams to release artifacts in a self-service way. The Astronomy spin (to pick a random example) should not need to wait for release engineering to make a release. In fact, they might choose to make new releases around the cadence of their most important included software. At the same time, since the Team can do it themselves, Release Engineering might also not be on the hook for building artifacts for spins with a niche audience or that don’t meet some other criteria.

We need to allow for non-RPM building blocks somehow. I mean, not in a technical way but in a rules way. We need to allow for RPMs built in non-traditional ways (the source git idea). No solution is forced to consume these, but we shouldn’t block someone from providing them, either. For example, Silverblue may want to provide some Flatpaks that are built (in an open and transparent from-source way) straight to Flatpak rather than going through RPM. Or AI/ML may want to generate and ship container images with python wheels built specially.


We need to relax policies on what Solutions are allowed to do. We also want to drop the Spins/Labs naming thing — it’s confusing to people even within the project. Solution-makers should know their audiences best and should be able to make more technical choices — the things currently known as Spins should be able to offer different defaults and presets, even including enabling different module streams by default. Solutions should not require Spin SIG (or FESCo) approval on technical merit or perceived feasibility.

Overall

We need to be able to adapt to the changing world without taking 18 months to do so each time.

The post Fedora Strategy FAQ Part 1: What is Actually Changing? appeared first on Fedora Community Blog.

Fedora Toolbox — Under the hood

Posted by Debarshi Ray on January 21, 2019 07:20 PM


A few months ago, we had a glimpse at Fedora Toolbox setting up a seamlessly integrated RPM based environment, complete with dnf, on Fedora Silverblue. But isn’t dnf considered a persona non grata on Silverblue? How is this any different from using the existing Fedora Workstation then? What’s going on here?

Today we shall look under the covers to answer some of these questions.

The problem

The immutable nature of Silverblue makes it difficult to install arbitrary RPMs on the operating system. It’s designed to install graphical applications as Flatpaks, and that’s it. This has many advantages. For example, robust upgrades.

However, there are legitimate cases when one does want to install some random RPMs. For example, when you need things like *-devel packages, documentation, GCC, gofmt, strace, valgrind or whatever else is necessary for your development workflow. While rpm-ostree does offer a way around this, it’s painful to have to reboot every time you change the set of packages on the system, and it negates the advantages of immutability in the first place.

Containers

By this time some of you are surely thinking that containers ought to solve this somehow; and you’d be right. Fedora Toolbox uses containers to set up an elaborate chroot-like environment that’s separate from the immutable OSTree based operating system.

And once you are down to containers, Docker isn’t far away — surely this can be hacked together with Docker; and you’d be right again. Almost. You can hack it up with Docker but it wouldn’t be ideal.

The problem with Docker is that it requires root privileges. Every time you invoke the docker command, it has to be prefixed with sudo or be run as root. That’s fine if all you want is a place to install some RPMs. It would’ve required root anyway. However, it’s annoying if you want GNOME Terminal to default to running a shell inside your RPM based development environment. You’d have to enter the root password to even get to an unprivileged shell prompt.

So, instead of using Docker, Fedora Toolbox uses something called Podman. Podman is a fully-featured container engine that aims to be a drop-in replacement for Docker. Thanks to the Open Container Initiative (or OCI) standardizing the interfaces to Docker images and runtimes, every OCI container and image can be used with either Docker or Podman.

The good thing about Podman is that it can be used rootless — that is, without root privileges. So, that’s great.

Containers are weird, though

Containers are pretty widely popular these days, but not everybody who is transitioning from the current RPM based Fedora Workstation to Silverblue can be expected to set things up from first principles using nothing but the podman command line. It will surely increase the cognitive load of undergoing the transition, hindering Silverblue adoption.

Even if someone familiar with the technology is able to set things up, pitfalls abound. For example, why is the display server not working, why is the SSH agent not working, why are OpenGL and Vulkan programs not working, why is sshfs not working, or why are LLVM and LibreOffice failing to build, etc.

Let’s be honest. The number of people who understand both container technology and the workings of a modern graphical operating system well enough to sort these problems in a jiffy is vanishingly small. I know that at least I don’t belong to that group.

Container images are optimized for non-interactive use and size, whereas we are talking about the interactive shell running in your virtual terminal emulator. For example the fedora OCI image comes with the coreutils-single RPM, which doesn’t have the same user experience as the coreutils package that we are all familiar with.

So, it’s clear that we need a pre-configured, and at times, opinionated, solution on top of Podman.

The solution

Fedora Toolbox starts with the similarly named fedora-toolbox OCI image hosted on the Fedora Container Registry. There’s one for every Fedora branch. Currently those are Fedoras 28, 29 and 30. These images are based on the fedora image, with an altered package set to offer an interactive user experience that’s similar to the one on Silverblue.

When you invoke the fedora-toolbox create command, it pulls the image from the registry, and then tailors it to the local user. It creates a user with a UID matching $UID, a home directory matching $HOME and the right group memberships; and it ensures that various bits and pieces from the host, such as the home directory, the display server, the D-Bus instances, various pieces of hardware, etc. are available inside the container. These customizations are saved as another image named fedora-toolbox-user. Finally, an OCI container, also named fedora-toolbox-user, is created out of this image.

If you are curious, run podman images and podman ps --all to verify the above.

Once the toolbox container has been created, subsequent fedora-toolbox enter commands execute the user’s preferred shell inside it, giving the impression of being in an alternate RPM flavoured reality on a Silverblue system.

If you are still curious, then open /usr/bin/fedora-toolbox and have a peek. It’s just a shell script, after all.

Inclusion is a necessary part of good coding

Posted by Ben Cotton on January 21, 2019 01:18 PM

Too often I see comments like “some people would rather focus on inclusion than write good code.” Not only is that a false dichotomy, but it completely misrepresents the relationship between the two. Inclusion doesn’t come at the cost of good code, it’s a necessary part of good code.

We don’t write code for the sake of writing code. We write code for people to use it in some way. This means that the code needs to work for the people. In order to do that, the people designing and implementing the technology need to consider different experiences. The best way to do that is to have people with different experiences be on the team. As my 7th grade algebra teacher was fond of reminding us: garbage in, garbage out.

But it’s not like the tech industry has a history of bad decision making. Or soap dispensers not working with dark-skinned people. Or identifying black people as gorillas. Or voice recognition not responding to female voices. What could go wrong with automatically labeling “suspicious” people?

I’ll grant that not all of these issues are with the code itself. In fact a lot of it isn’t the code, it’s the inputs given to the code. So when I talk about “good coding” here, I’m being very loose with the wording as a shorthand for the technology we produce in general. The point holds because the code doesn’t exist in a vacuum.

It’s not just about the outputs and real world effect of what we make. There’s also the matter of wanting to build the best team. Inclusion opens you up to a broader base of talent that might otherwise self-select out.

<figure class="wp-block-embed-twitter wp-block-embed is-type-rich is-provider-twitter">
<script async="async" charset="utf-8" src="https://platform.twitter.com/widgets.js"></script>
</figure>

Being inclusive takes effort. It sometimes requires self-examination and facing unpleasant truths. But it makes you a better person and if you don’t care about that, it makes your technology better, too.

The post Inclusion is a necessary part of good coding appeared first on Blog Fiasco.

Why did Fedora Modularity fail in 2017? A brief reflection

Posted by Justin W. Flory on January 21, 2019 08:30 AM

For the ISTE-430 Information Requirements Modelling course at the Rochester Institute of Technology, students are asked to analyze an example of a failed software project and write a short summary on why it failed. For the assignment, I evaluated the December 2017 announcement on Fedora Modularity. I thought it was an interesting example of a project that experienced initial difficulty but re-calibrated and succeeded in the end. And it is a project I am biased towards, as a Fedora user and sysadmin.

I thought sharing it on my blog might be interesting for others. Don’t read into this too much – it was a quick analysis from a single primary source and a few secondary references.

What is Fedora Modularity?

The Fedora Project is a common Linux operating system which ships software in “packages”. Since June 2015, project members built a pipeline to ship modular versions of existing packages, known as Modularity. Modularity allows someone to use different versions of the same software (e.g. a programming library) on the same system without dependency conflicts.

In a way, Fedora Modularity shifts some responsibility from the Linux distribution (as a provider of known, good combinations of co-dependent packages) to the sysadmin (as a decision-maker to use different combinations of software versions in their environment). It’s more flexible for a variety of deployment requirements. I see it as a net-positive win for the sysadmin experience since its final release.

Why did Modularity fail in 2017?

However, in this post-mortem from December 2017, a project lead (Stephen Gallagher) announced that the Fedora Modularity effort was being scrapped for a total redesign. Even though it was not a final release, it was regarded as a failure because of how late the project was running since its proposal in mid-2015. The post-mortem explained that the amount of effort required of software packagers was significant, and also noted the wide engagement necessary from different stakeholders. New requirements and additional packaging-guideline steps were not well understood by the community of software packagers, and project milestones were not met because of low participation in the packager community.

Their report reveals the number of moving parts Fedora Modularity must account for. It demonstrated a flawed understanding of user and developer needs during initial feedback from the beta implementation. In other words, the level of complexity of the project exceeded the employee resources available to accomplish it. The redesigned model (proposed in the reflection) pivoted by utilizing existing tools and infrastructure to support the new features, which required fewer changes for new software package updates. Thus, the packager community was better able to participate in providing new functionality in existing packages.

Additionally, sufficient documentation of new guidelines was unavailable to users, stemming from lack of engagement and feedback by existing users. This was later remedied with user experience testing through events like Test Days, which allowed any community member to try out new features and functionality with their own packaging workflows.

Since the project was finally implemented in late 2018, it was better received in the community than the failed launch in December 2017. Most success since the first failure came by simplifying project requirements (e.g. by leveraging how existing infrastructure was designed instead of reinventing the pipeline) and getting more user feedback on a regular basis (e.g. with community outreach events like Test Days and classroom sessions hosted live on YouTube).


Gallagher, S. (2017). Modularity is Dead, Long Live Modularity! Fedora Community Blog. Retrieved from https://communityblog.fedoraproject.org/modularity-dead-long-live-modularity/

The post Why did Fedora Modularity fail in 2017? A brief reflection appeared first on Justin W. Flory's Blog.

Build a Django RESTful API on Fedora.

Posted by Fedora Magazine on January 21, 2019 08:00 AM

With the rise of Kubernetes and microservices architectures, being able to quickly write and deploy a RESTful API service is a good skill to have. In this first part of a series of articles, you’ll learn how to use Fedora to build a RESTful application and deploy it on OpenShift. Together, we’re going to build the back-end for a “To Do” application.

The APIs allow you to Create, Read, Update, and Delete (CRUD) a task. The tasks are stored in a database and we’re using the Django ORM (Object Relational Mapping) to deal with the database management.

Django App and Rest Framework setup

In a new directory, create a Python 3 virtual environment so that you can install dependencies.

$ mkdir todoapp && cd todoapp
$ python3 -m venv .venv
$ source .venv/bin/activate

After activating the virtual environment, install the dependencies.

(.venv)$ pip install djangorestframework django

Django REST Framework, or DRF, is a framework that makes it easy to create RESTful CRUD APIs. By default it gives access to useful features like browseable APIs, authentication management, serialization of data, and more.

Create the Django project and application

Create the Django project using the django-admin CLI tool provided.

(.venv) $ django-admin startproject todo_app . # Note the trailing '.'
(.venv) $ tree .
.
├── manage.py
└── todo_app
    ├── __init__.py
    ├── settings.py
    ├── urls.py
    └── wsgi.py
1 directory, 5 files

Next, create the application inside the project.

(.venv) $ cd todo_app
(.venv) $ django-admin startapp todo
(.venv) $ cd ..
(.venv) $ tree .
.
├── manage.py
└── todo_app
    ├── __init__.py
    ├── settings.py
    ├── todo
    │   ├── admin.py
    │   ├── apps.py
    │   ├── __init__.py
    │   ├── migrations
    │   │   └── __init__.py
    │   ├── models.py
    │   ├── tests.py
    │   └── views.py
    ├── urls.py
    └── wsgi.py

Now that the basic structure of the project is in place, you can enable the REST framework and the todo application. Let’s add rest_framework and todo to the list of INSTALLED_APPS in the project’s settings.py.

todoapp/todo_app/settings.py
# Application definition

INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'rest_framework',
    'todo_app.todo',
]

Application Model and Database

The next step of building our application is to set up the database. By default, Django uses the SQLite database management system. Since SQLite works well and is easy to use during development, let’s keep this default setting. The second part of this series will look at how to replace SQLite with PostgreSQL to run the application in production.

The Task Model

By adding the following code to todo_app/todo/models.py, you define which properties a task has. The application defines a task with a title, a description and a status. The status of a task can only be one of the three following states: Backlog, Work in Progress and Done.

from django.db import models

class Task(models.Model):
STATES = (("todo", "Backlog"), ("wip", "Work in Progress"), ("done", "Done"))
title = models.CharField(max_length=255, blank=False, unique=True)
description = models.TextField()
status = models.CharField(max_length=4, choices=STATES, default="todo")

Now create the database migration script that Django uses to update the database with changes.

(.venv) $ PYTHONPATH=. DJANGO_SETTINGS_MODULE=todo_app.settings django-admin makemigrations

Then you can apply the migration to the database.

(.venv) $ PYTHONPATH=. DJANGO_SETTINGS_MODULE=todo_app.settings django-admin migrate

This step creates a file named db.sqlite3 in the root directory of the application. This is where SQLite stores the data.
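
At this point (not covered in the article itself, but handy for a quick sanity check) you could open a Django shell with PYTHONPATH=. DJANGO_SETTINGS_MODULE=todo_app.settings django-admin shell and exercise the ORM directly; the task values below are made up:

from todo_app.todo.models import Task

# Create a task and save it to the SQLite database
Task.objects.create(
    title="Write the second article",
    description="PostgreSQL and OpenShift deployment",
    status="todo",
)

# Query it back
Task.objects.filter(status="todo").count()
Task.objects.get(title="Write the second article").status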

Access to the data

Creating a View

Now that you can represent and store a task in the database, you need a way to access the data.  This is where we start making use of Django REST Framework by using the ModelViewSet. The ModelViewSet provides the following actions on a data model: list, retrieve, create, update, partial update, and destroy.

Let’s add our view to todo_app/todo/views.py:

from rest_framework import viewsets

from todo_app.todo.models import Task
from todo_app.todo.serializers import TaskSerializer


class TaskViewSet(viewsets.ModelViewSet):
    queryset = Task.objects.all()
    serializer_class = TaskSerializer

Creating a Serializer

As you can see, the TaskViewSet is using a Serializer. In DRF, serializers convert the data modeled in the application models to a native Python datatype. This datatype can be later easily rendered into JSON or XML, for example. Serializers are also used to deserialize JSON or other content types into the data structure defined in the model.

Let’s add our TaskSerializer object by creating a new file in the project todo_app/todo/serializers.py:

from rest_framework.serializers import ModelSerializer
from todo_app.todo.models import Task


class TaskSerializer(ModelSerializer):
    class Meta:
        model = Task
        fields = "__all__"

We’re using the generic ModelSerializer from DRF to automatically create a serializer with the fields that correspond to our Task model.
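
For illustration, once the application is running, a single task serialized by this class and rendered to JSON would look roughly like this (the id and field values here are hypothetical):

{
    "id": 1,
    "title": "Write the second article",
    "description": "PostgreSQL and OpenShift deployment",
    "status": "todo"
}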

Now that we have a data model, a view and a way to serialize/deserialize data, we need to map our view actions to URLs. That way we can use HTTP methods to manipulate our data.

Creating a Router

Here again we’re using the power of the Django REST Framework with the DefaultRouter. The DRF DefaultRouter takes care of mapping actions to HTTP methods and URLs.

Before we see a better example of what the DefaultRouter does for us, let’s add a new URL to access the view we have created earlier. Add the following to todo_app/urls.py:

from django.contrib import admin
from django.conf.urls import url, include

from rest_framework.routers import DefaultRouter

from todo_app.todo.views import TaskViewSet

router = DefaultRouter()
router.register(r"todo", TaskViewSet)

urlpatterns = [
url(r"admin/", admin.site.urls),
url(r"^api/", include((router.urls, "todo"))),
]

As you can see, we’re registering our TaskViewSet to the DefaultRouter. Then later, we’re mapping all the router URLs to the /api endpoint. This way, DRF takes care of mapping the URLs and HTTP method to our view actions (list, retrieve, create, update, destroy).

For example, accessing the api/todo endpoint with a GET HTTP request calls the list action of our view. Doing the same but using a POST HTTP request calls the create action.

To get a better grasp of this, let’s run the application and start using our API.

Running the application

We can run the application using the development server provided by Django. This server should only be used during development. We’ll see in the second part of this tutorial how to use a web server better suited for production.

(.venv)$ PYTHONPATH=. DJANGO_SETTINGS_MODULE=todo_app.settings django-admin runserver
Django version 2.1.5, using settings 'todo_app.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.

Now we can access the application at the following URL: http://127.0.0.1:8000/api/

<figure class="wp-block-image"></figure>

DRF provides an interface to the view actions, for example listing or creating tasks, using the following URL: http://127.0.0.1:8000/api/todo

<figure class="wp-block-image"></figure>

Or updating/deleting an existing task with this URL: http://127.0.0.1:8000/api/todo/1

<figure class="wp-block-image"></figure>

Conclusion

In this article you’ve learned how to create a basic RESTful API using the Django REST Framework. In the second part of this series, we’ll update this application to use the PostgreSQL database management system, and deploy it in OpenShift.

The source code of the application is available on GitHub.


Automating with Ansible is the key

Posted by Alvaro Castillo on January 21, 2019 03:25 AM

What happens if you are a waiter and you have to fill 100 glasses of water in 5 minutes? How could you solve this problem without spilling any water and still have time to fill those 100 glasses? The answer is not to hurry; the answer is called ANSIBLE.

Ansible is a platform that lets you automate and also run ad-hoc commands (against any machine), manage virtualization platforms such as VMware Sphere, container orchestration platforms like Kubernetes, dis...

That missing paragraph

Posted by Kushal Das on January 21, 2019 01:30 AM

In my last blog post, I wrote about a missing paragraph. I did not keep that text anywhere, I just deleted it while reviewing the post. Later Jason asked me in the comments to actually post that paragraph too.

So, I will write about it. 2018 was an amazing year, all told: good, great, and terrible moments all together. There were certain highs, and a few really low moments. Some things take time to heal, and some moments make a lifelong impact.

The second part of 2018 went downhill at a pretty alarming rate, personally. Just after coming back from PyCon US 2018, from the end of May to the beginning of December, within 6 months we lost 4 family members. On the night of 30th May, my uncle called, telling me that my dad had been admitted to the hospital, and the doctor wanted to talk to me. He told me to come back home as soon as possible. There was a very real chance that I wouldn’t be able to talk to him again. Anwesha and I managed to reach Durgapur by 9AM and dad passed away within a few hours. From the time of that phone call, my brain suddenly became quite detached, very calm and thinking about next steps. Things to be handled, official documents to be taken care of, what needs to be done next.

I felt a few times that I’d burst into tears, but the next thing that sprang to mind was that if I started crying, that would affect my mother and the rest of the family too. Somehow, I managed not to cry, and every time I got emotionally overwhelmed, I started thinking about the next logical steps. I actually made sure I did not talk about the whole incident much until recently, after things settled down. I also spent time in my village and then in Kolkata.

Over the next 4 months, there were 3 more deaths. Every time the news came, I did not show any reaction, but it hurt.

Our education system is supposed to help us grow in life. But I feel it is more likely that school is just training for society to work cohesively and to make sure that the machines are well oiled. Nothing prepares us to deal with real life incidents. Moreover, death is a taboo subject for most of us.

Coming back to the effect of these demises, for a moment it created a real panic in my brain. What if I just vanish tomorrow? In my mind, our physical bodies are amazingly complex robots / programs. When one fails, the rest of them try to cope, try to fill in the gaps. But the nearby endpoints never stay the same. I am working as usual, but somehow my behavior has changed. I know that I have a long-lasting problem with emails, but that has grown a little out of hand in the last 5 months. I am putting in a lot of extra effort to reply to the emails I actually managed to notice. Before that, I was opening the editor to reply, but my mind blanked, and I could not type anything.

I don’t quite know how to end the post. The lines above are almost like a stream of consciousness in my mind and I don’t even know if they make sense in the order I put them in. But, at the same time, it makes sense to write it down. At the end of the day, we are all human, we make mistakes, we all have emotions, and often times it is okay to let it out.

In a future post, I will surely write about the changes I am bringing into my life to cope.

Episode 130 - Chat with Snyk co-founder Danny Grander

Posted by Open Source Security Podcast on January 21, 2019 01:12 AM
Josh and Kurt talk to Danny Grander, one of the co-founders of Snyk, about Zip Slip: what it is, how to fix it, and how they disclosed everything. We also touch on plenty of other open source security topics, as Danny is involved in many aspects of open source security.




Show Notes


    Optimizing Conway

    Posted by Toshio Kuratomi on January 20, 2019 10:42 PM

    Conway’s Game of Life seems to be a common programming exercise. I had to program it in Pascal when in High School and in C in an intro college programming course. I remember in college, since I had already programmed it before, that I wanted to optimize the algorithm. However, a combination of writing in C and having only a week to work on it didn’t leave me with enough time to implement anything fancy.

    A couple years later, I hiked the Appalachian Trail. Seven months away from computers, just hiking day in and day out. One of the things I found myself contemplating when walking up and down hills all day was that pesky Game of Life algorithm and ways that I could improve it.

    Fast forward through twenty intervening years of life and experience with a few other programming languages to last weekend. I needed a fun programming exercise to raise my spirits so I looked up the rules to Conway’s Game of Life, sat down with vim and python, and implemented a few versions to test out some of the ideas I’d had kicking around in my head for a quarter century.

    This blog post will only contain a few snippets of code to illustrate the differences between each approach.  Full code can be checked out from this github repository.

    The Naive Approach: Tracking the whole board

    The naive branch is an approximation of how I would have first coded Conway’s Game of Life way back in that high school programming course. The grid of cells is what I would have called a two dimensional array in my Pascal and C days. In Python, I’ve more often heard it called a list of lists. Each entry in the outer list is a row on the grid, each represented by another list. Each entry in an inner list is a cell in that row of the grid. If a cell is populated, then the list entry contains True. If not, then the list entry contains False.

    One populated cell surrounded by empty cells would look like this:

    board = [
             [False, False, False],
             [False, True, False],
             [False, False, False],
            ]
    

    Looking up an individual cell’s status is a matter of looking up an index in two lists: First the y-index in the outer list and then the x-index in an inner list:

    # Is there a populated cell at x-axis 1, y-axis 0 (board[y][x])?
    if board[0][1] is True:
        pass
    

    Checking for changes is done by looping through every cell on the Board, and checking whether each cell’s neighbors made the cell fit a rule to populate or depopulate the cell on the next iteration.

    for y_idx, row in enumerate(board):
        for x_idx, cell in enumerate(row):
            if cell:
                if not check_will_live((x_idx, y_idx), board, max_x, max_y):
                    next_board[y_idx][x_idx] = False
            else:
                if check_new_life((x_idx, y_idx), board, max_x, max_y):
                    next_board[y_idx][x_idx] = True
    

    This is a simple mapping of the two-dimensional grid that Conway’s takes place on into a computer data structure and then a literal translation of Conway’s ruleset onto those cells. However, it seems dreadfully inefficient. Even in college I could see that there should be easy ways to speed this up; I just needed the time to implement them.
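
    The snippets above call a few helper functions that aren't shown in the post. To make the naive version easier to follow, here is a minimal sketch of how they might look for the list-of-lists board; the names and signatures match the calls above, but the bodies are my reconstruction rather than the repository's exact code:

    def find_neighbors(cell, max_x, max_y):
        """Return the coordinates of every valid neighbor of a cell."""
        x, y = cell
        return [(x + dx, y + dy)
                for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                if not (dx == 0 and dy == 0)
                and 0 <= x + dx < max_x and 0 <= y + dy < max_y]

    def count_live_neighbors(cell, board, max_x, max_y):
        """Count how many neighboring cells are populated (board is indexed [y][x])."""
        return sum(1 for (nx, ny) in find_neighbors(cell, max_x, max_y) if board[ny][nx])

    def check_will_live(cell, board, max_x, max_y):
        """A populated cell survives if it has two or three populated neighbors."""
        return count_live_neighbors(cell, board, max_x, max_y) in (2, 3)

    def check_new_life(cell, board, max_x, max_y):
        """An empty cell becomes populated if it has exactly three populated neighbors."""
        return count_live_neighbors(cell, board, max_x, max_y) == 3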

    Intermediate: Better checking

    The intermediate branch rectifies inefficiencies with checking of the next generation cells. The naive branch checks every single cell that is present in the grid. However, thinking about most Conway setups, most of the cells are blank. If we find a way to ignore most of the blank cells, then it would save us a lot of work. We can’t ignore all blank cells, though; if a blank cell has exactly three populated neighbors then the blank cell will become populated in the next generation.

    The key to satisfying both of those is to realize that all the cells we’re going to need to change will either be populated (in which case, they could die and become empty in the next generation) or they will be a neighbor of a populated cell (in which case, they may become populated next generation). So we can loop through our board and ignore all of the unpopulated cells at first. If we find a populated cell, then we must both check that cell to see if it will die and also check its empty neighbors to see if they will be filled in the next generation.

    The major change to implement that is here:

           checked_cells = set()
           # We still loop through every cell in the board but now
           # the toplevel code to do something if the cell is empty
           # has been removed.
           for y_idx, row in enumerate(board):
                for x_idx, cell in enumerate(row):
                    if cell:
                        if not check_will_live((x_idx, y_idx), board, max_x, max_y):
                            next_board[y_idx][x_idx] = False
                        # Instead, inside of the conditional block to
                        # process when a cell is populated, there's
                        # a new loop to check all of the neighbors of
                        # a populated cell.
                        for neighbor in (n for n in find_neighbors((x_idx, y_idx), max_x, max_y)
                                         if n not in checked_cells):
                            # If the cell is empty, then we check whether
                            # it should be populated in the next generation
                            if not board[neighbor[1]][neighbor[0]]:
                                checked_cells.add(neighbor)
                                if check_new_life(neighbor, board, max_x, max_y):
                                    next_board[neighbor[1]][neighbor[0]] = True
    

    Observant readers might also notice that I’ve added a checked_cells set. This tracks which empty cells we’ve already examined to see if they will be populated next generation. Making use of that means that we will only check a specific empty cell once per generation no matter how many populated cells it’s a neighbor of.

    These optimizations to the checking code made things about 6x as fast as the naive approach.

    Gridless: Only tracking populated cells

    The principle behind the intermediate branch of only operating on populated cells and their neighbors seemed like it should be applicable to the data structure I was storing the grid in as well as the checks. Instead of using fixed length arrays to store both the populated and empty portions of the grid, I figured it should be possible to simply store the populated portions of the grid and then use those for all of our operations.

    However, C is a very anemic language when it comes to built-in data structures. If I was going to do that in my college class, I would have had to implement a linked list or a hash map data structure before I even got to the point where I could implement the rules of Conway’s Game of Life. Today, working in Python with its built-in data types, it was very quick to implement a data structure of only the populated cells.

    For the gridless branch, I replaced the 2d array with a set. The set contained tuples of (x-coordinate, y-coordinate) which defined the populated cells. One populated cell surrounded by empty cells would look like this:

    board = set((
                 (1,1),
               ))
    

    Using a set had all sorts of advantages:

    • Setup became a matter of adding tuples representing just the populated cells to the set instead of creating the list of lists and putting True or False into every one of the list’s cells:
          - board = []
          - for y in range(0, max_y):
          -     for x in range(0, max_x):
          -         board[x][y] = (x, y) in initial_dataset
          + board = set()
          + for x, y in initial_dataset:
          +     board.add((x, y))
      
    • For most cases, the set will be smaller than the list of lists as it only has to store entries for the populated cells, not for all of the cells in the grid. Some patterns may be larger than the list of lists (because the overhead of the set is more than the overhead for a list of lists) but those patterns are largely unstable; after a few generations, the number of depopulated cells will increase to a level where the set will be smaller.
    • Displaying the grid only has to loop through the populated cells instead of looping through every entry in the list of lists to find the populated cells:

          - for y in range(0, max_y):
          -     for x in range(0, max_x):
          -         if board[x][y]:
          -             screen.addstr(y, x, ' ', curses.A_REVERSE)
          + for (x, y) in board:
          +     screen.addstr(y, x, ' ', curses.A_REVERSE)
      
    • Instead of copying the old board and then changing the status of cells each generation, it is now feasible to simply populate a new set of live cells every generation. This works because the set contains only the populated cells, whereas the list of lists would need every cell, populated or empty, set to True or False.

               - next_board = copy.deepcopy(board)
               + next_board = set()
      
    • Similar to the display loop, the main loop which checks what happens to the cells can now just loop through the populated cells instead of having to search the entire grid for them:

              # Perform checks and update the board
              for cell in board:
                  if check_will_live(cell, board, max_x, max_y):
                      next_board.add(cell)
                  babies = check_new_life(cell, board, max_x, max_y)
                  next_board.update(babies)
              board = next_board
      
    • Checking whether a cell is populated now becomes a simple containment check on a set which is faster than looking up the list indexes in the 2d array:

      - if board[cell[1]][cell[0]]:
      + if cell in board:
      

    Gridless made the program about 3x faster than intermediate, or about 20x faster than naive.
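
    To make the set-based version concrete, here is a minimal sketch of what the gridless life-and-death checks could look like. The function bodies are my reconstruction from the snippets above rather than the repository’s actual code, and they assume find_neighbors returns the (up to eight) neighboring coordinate tuples:

        def check_will_live(cell, board, max_x, max_y):
            """A populated cell survives if it has two or three populated neighbors."""
            live = sum(1 for n in find_neighbors(cell, max_x, max_y) if n in board)
            return live in (2, 3)

        def check_new_life(cell, board, max_x, max_y):
            """Return the empty neighbors of a populated cell that come alive
            in the next generation (exactly three populated neighbors)."""
            babies = set()
            for neighbor in find_neighbors(cell, max_x, max_y):
                if neighbor in board:
                    continue
                live = sum(1 for n in find_neighbors(neighbor, max_x, max_y)
                           if n in board)
                if live == 3:
                    babies.add(neighbor)
            return babies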

    Master: Only checking cell changes once per iteration

    Despite being 3x faster than intermediate, gridless was still doing some redundant work. The code in the master branch attempts to correct that.

    The most important change was that in gridless, an empty cell was being checked once for every populated cell it neighbored. Adding a checked_cells set, as the intermediate branch had, to keep track of which cells have already been examined ensures that we only check whether an empty cell should be populated in the next generation one time (a sketch of the reworked check_new_life follows the snippet below):

            checked_cells = set()
            for cell in board:
                if check_will_live(cell, board, max_x, max_y):
                    next_board.add(cell)
                checked_cells.add(cell)
                # Pass checked_cells into check_new_life so that
                # checking skips empty neighbors which have already
                # been checked this generation
                babies, barren = check_new_life(cell, board, checked_cells, max_x, max_y)
                checked_cells.update(babies)
                checked_cells.update(barren)
    

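    For completeness, a hedged sketch of how check_new_life might look once it takes checked_cells and reports both outcomes; again, this is my reconstruction based on the calling code above, not the actual repository code:

        def check_new_life(cell, board, checked_cells, max_x, max_y):
            """Return (babies, barren): the empty neighbors that come alive next
            generation, and the empty neighbors that were examined but stay empty.
            The caller adds both to checked_cells so no empty cell is examined
            twice in the same generation."""
            babies = set()
            barren = set()
            for neighbor in find_neighbors(cell, max_x, max_y):
                if neighbor in board or neighbor in checked_cells:
                    continue
                live = sum(1 for n in find_neighbors(neighbor, max_x, max_y)
                           if n in board)
                if live == 3:
                    babies.add(neighbor)
                else:
                    barren.add(neighbor)
            return babies, barren
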
    The other, relatively small, optimization was to use Python’s built-in least-recently-used cache decorator on the find_neighbors function. This allowed us to skip recomputing the set of neighboring cells when the same cell was requested again soon afterwards. In the set-based code, find_neighbors is called for the same cell back to back quite frequently, so this did have a noticeable impact (a fuller sketch of the function follows the diff below):

    + @functools.lru_cache()
      def find_neighbors(cell, max_x, max_y):
    

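    As an illustration, a cache-friendly find_neighbors might look roughly like the following. The body is an assumption on my part (the real function may differ); it returns a frozenset so the cached value cannot be mutated by callers, and it assumes a bounded, non-wrapping board:

        import functools

        @functools.lru_cache()
        def find_neighbors(cell, max_x, max_y):
            """Return the up-to-eight neighbors of cell, clipped to the
            0..max_x-1 / 0..max_y-1 board (no wrap-around)."""
            x, y = cell
            return frozenset(
                (nx, ny)
                for nx in range(max(0, x - 1), min(max_x, x + 2))
                for ny in range(max(0, y - 1), min(max_y, y + 2))
                if (nx, ny) != (x, y)
            )
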
    These changes sped up the master branch by an additional 30% over what gridless had achieved, making it nearly 30x as fast as the naive implementation we started with.

    What have I learned?

    • I really made these optimizations to Conway’s Game of Life purely to see whether any of the ideas I had had about it in the past 25 years would make a difference. It is pleasing to see that they did.
    • I think that choice of programming language made a huge difference in how I approached this problem. Even allowing that I was a less experienced programmer 25 years ago, I think that the cost in programmer time to implement sets and other advanced data structures in C would likely have taken long enough that I wouldn’t have finished the fastest of these designs as a one week class assignment using C. Implementing the same ideas in Python, where the data structures exist as part of the language, took less than a day.
    • I had originally envisioned using a linked list or set instead of a 2d array as purely a memory optimization. After I started coding it, it quickly became clear that only saving the populated cells also reduced the number of cells that would have to be searched through. I’d like to say I saw that from the beginning but it was just serendipity.

    A new logo for Fedora?

    Posted by Charles-Antoine Couret on January 20, 2019 10:17 PM

    L'équipe design de Fedora est en train de travailler sur un changement du logo de Fedora. Et l'équipe a proposé deux possibilités et souhaite des retours constructifs, en anglais uniquement, pour éventuellement peaufiner ces idées.

    Si vous lisez l'anglais, je vous conseille de lire cet excellent article qui présente le sujet. Ou si vous souhaitez voir l'ensemble des tests intermédiaires. Je me contenterai de résumer l'essentiel.

    Déjà il y a eu 2 versions du logo de Fedora comme vous pouvez le voir ci-dessous. Ce n'est donc pas un changement inédit même si le dernier changement date un peu, à savoir vers l'année 2005.

    Premier logo :

    Fedora_Core.png

    Second and current logo:

    fedora-2005.png

    Why this change?

    A logo that is hard to work with

    First of all, there is a rendering problem. The current logo uses several colors, which makes producing goodies more complicated, or more expensive depending on the vendor. That matters a lot for communication around the project.

    Fedora-décomposition.png

    It also makes the logo harder to see on a dark background, particularly a blue one. This is especially an issue for wallpapers and CD sleeves. For CD sleeves, the design team would often work around it by using a blue gradient and placing the Fedora logo on the lighter part. But when rendering a website, it is trickier to guarantee the logo’s position relative to how light the blue page background is, since that depends on the size of the visitor’s screen.

    Jacquette-F12.png

    Because of the logo’s composition, text plus the bubble with the famous F, it is difficult to center the elements and to work out the spacing between them, whether centering vertically or horizontally.

    Finally, the typeface chosen at the time has a flaw: the final a looks too much like an o, which obviously hinders communication.

    Possible confusion

    The Fedora bubble with its F looks too much like the Facebook logo. While this may raise a smile, since the logos are after all different, it is in fact common (I have experienced it myself, as have other ambassadors) for people outside the community to confuse the two. It has apparently been a recurring remark since 2009/2010, when the social network started to spread widely.

    A matter of consistency, for freedom’s sake!

    Fedora aims to be a free (libre) distribution; that is an important part of the project. Yet until now the typeface used for the logo text was not free: it is the 2005 version of Bryant.

    That was justifiable at the time, when there were rather few high-quality free fonts, but times have changed. Red Hat, Google and other companies, as well as hobbyists, have worked on the question, and the choice today is much wider.

    So, to respect the very principles of the project, abandoning a proprietary typeface seems obvious.

    How we got here

    The work was started following a discussion within the Council in October 2018, which led to a ticket being opened with the design team.

    There have been quite a few experiments and a lot of thinking around the logo: playing with the F, with the infinity symbol, with the perspective, or modifying the bubble.

    Why give our opinion?

    This is not a decision taken lightly; changing a logo has a big impact. Every reference to the logo will gradually have to be updated: on the project website, and of course on unofficial but Fedora-related sites such as fedora-fr.org.

    Because of the current logo’s inertia, adopting the next one will take time, whether on news sites, on the goodies in use and being distributed, or on the various sites where Fedora is mentioned, such as Wikipedia. It is therefore essential that this change not have to be followed shortly afterwards by another one to correct certain elements.

    And of course it is important that the Fedora community be comfortable with this new logo, so that its adoption is a success. It also keeps the change from looking like something imposed by the leadership.

    So what does it look like?

    The team has proposed two illustrations to show the different variants of the logo and give an example of their use.

    In both cases, the typeface chosen is Comfortaa from Google Fonts, slightly modified for the occasion.

    Here is proposal 1:

    Proposition1.png

    And proposal 2:

    Proposition2.png

    What do you think?

    Fixing pelican revealjs plugin

    Posted by Pablo Iranzo Gómez on January 20, 2019 10:12 PM

    Introduction

    After my recent talk about blog-o-matic, I was trying to find somewhere to upload the slides I used.

    Some time ago I started using Reveal-MD so I could use Markdown to create and show slides, but I also wanted a way to upload them for consumption.

    Pelican-revealmd plugin seemed to be the answer.

    It uses the pypandoc library and ‘pandoc’ to do the conversion.

    The problems found

    After some testing, I found three issues:

    • Images, specified with whatever Pelican formatting, were not rendered alongside the HTML
    • Code blocks were not properly shown
    • The title was shown as ‘Untitled’ in the generated HTML

    For the first one, after reaching out on #pelican on Freenode, it was clear that images should be placed alongside the HTML, and that this required using ‘{static}’, but pandoc was escaping it.

    The fixes

    The first patch was to use a replace function on the text to turn { and } back into the characters that Pelican will recognize and interpret.
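
    The idea is roughly the sketch below; the exact escaped sequences pandoc emits, and the variable name, are assumptions on my part, and the real patch simply matches whatever pandoc actually produces:

    # Sketch only: restore the braces pandoc escaped so that Pelican can later
    # expand its {static}/{filename} placeholders. The escaped forms are guesses.
    for escaped, literal in (("\\{", "{"), ("\\}", "}"), ("%7B", "{"), ("%7D", "}")):
        html = html.replace(escaped, literal)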

    The second patch was for the title: initially I used Python’s Beautiful Soup to replace the title tag in the HTML. It took a while, but it worked.

    For the third one, there was no clear approach: the rendered HTML worked on its own, but was not shown after going through Pelican. When I also tried to use a newer pandoc version, the results were even worse.

    Finally, I dissected what I wanted from the pypandoc module and, instead of using it, went on to call directly the same shell command I had been using.

    As the templates for the rendered pages should be similar, I moved to the ‘non-standalone’ version of the pandoc conversion; instead of generating full HTML, I could reuse the headers, CSS loading, etc., and only put in the content relevant for the slides, while at the same time reusing article metadata like article.title, author and date to fill some values in the rendered HTML.

    This also had two outcomes: the second patch was no longer needed in that form, and some other dependencies were removed (no more pypandoc, no more BeautifulSoup, etc.).
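
    As an illustration of the shape of that call, here is a minimal sketch; the helper name and exact flags are assumptions, and the plugin’s real invocation may differ. Without -s/--standalone, pandoc emits only an HTML fragment, which the Pelican template can then wrap:

    import subprocess

    def render_revealjs_body(source_path):
        """Convert a reveal-md Markdown file into an HTML fragment (no -s, so
        headers, CSS loading and the reveal.js bootstrap are left to the
        Pelican template, which also fills in title, author and date)."""
        result = subprocess.run(
            ["pandoc", "-f", "markdown", "-t", "revealjs", source_path],
            capture_output=True, text=True, check=True)
        return result.stdout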

    The new version of the plugin has been contributed via a PR; while it awaits acceptance by the original author, you can find the relevant version in the ‘master’ branch of https://github.com/iranzo/pelican-revealmd.

    Outcomes

    This, of course, has brought some outcomes:

    • My blog can now render my reveal-md slides stored as filename.revealjs
    • I’ve learned a bit on how Pelican plugins work
    • Blog-o-Matic has been updated to include that support too, out of the box

    Enjoy!

    [draft] Content ideas for business blogs

    Posted by Sarup Banskota on January 19, 2019 12:00 AM

    1. Illustrate micro-efficiencies you’ve discovered

    Your tech team discovers them every day.
    It doesn’t always need to be an engineering feat.
    If you can process one type of complaint faster, share it.
    If any team has major improvements on any KPI, share how.

    2. Simplify complex, non-core business processes

    If you sell finance tools, explain regulatory hurdles.
    If you sell electronic gear, explain shipping related gotchas.
    Protip - You can repurpose well-performing Quora answers.

    3. List major business lessons you’ve learned

    Lessons learned through observing customer behaviour.
    Or from controversial decisions, especially those that didn’t work out.

    4. Restore what has worked in the past

    Use BuzzSumo to find trending articles against a keyword.
    Look for older articles that performed well, then repurpose.
    Personalise it, but don’t change its format.

    5. Engage people through 4-part email exercises

    Break a large lesson into 4 sub-parts.
    Share each one with an assignment for the reader.
    Release lessons with exactly a day’s interval apart.
    Include assignments that encourage users to invest time.
    Design rewards for these assignments, such as access to derived insights.

    6. Showcase unintended use-cases

    This usually manifests in at least two ways.
    1️⃣ Customers use your product in ways you didn’t anticipate.
    2️⃣ Some feature on your product is used less than expected.
    Both make good candidates for content.

    7. Write about your most difficult business experience

    Maybe you were sued.
    Maybe you had to admit a big lie.
    Maybe you had to let go of half your team.
    Maybe you had to decide against launching a big feature.
    1️⃣ Explain what happened.
    2️⃣ Explain why it made you stronger.
    3️⃣ Explain how you’re preventing it from happening again.

    8. Hunt forums for common struggles and solve them

    Hang out in forums from your industry.
    Look for what many people aren’t able to do.
    Pay close attention if it appears to worry them.
    Pay more attention if they haven’t been responded to.
    For one, that’s a good indicator of a problem that needs solving.
    If it’s solved, write out and illustrate the solution.

    9. Make industry predictions

    What types of jobs do you see going obsolete?
    What types of jobs do you see getting created?
    What regulations do you see changing in x years?
    What strategies do you see big players taking this decade?
    Nobody will remember if you get it wrong.
    If you get it right, talk about it.

    10. Tools that help achieve micro-efficiencies

    You can market it as a “lesser-known secret of tool X”.

    11. Curate resources, then get sources to share

    Write about people you admire, then tell them.
    Write about tools you use and love, then tell them.
    Write about resources you learned from, then tell them.
    It’s a win-win.

    12. Interview makers from your curation

    You wrote about tools, now interview their makers.
    You wrote about resources, now interview the teachers.
    You wrote about people, now interview them.
    It’s a win-win. https://nomadlist.com/blog/maptia https://www.shopify.com.sg/blog/5451862-4-lessons-learned-while-building-a-shopify-store

    13. Interview customer-facing team members

    That’s usually the support team.
    Show a personal side to those your customers already interact with.
    Also allows sharing roadmap-oriented FAQs or common recent concerns.

    14. Elaborate on your most frequently asked question

    Your support team knows that question any day.
    Their answers are already on your CRM, in several flavours.
    Now stitch together all past responses and publish a coherent story.

    Viewing the weather in your terminal

    Posted by Alvaro Castillo on January 18, 2019 09:20 PM

    Would you like to see the weather in your terminal? Maybe you don’t like desktop environment plugins, or you use a window manager like i3wm that doesn’t have an icon in the i3status notification tray.

    Well, no problem: you can see the weather forecast for your city just by running curl against the wttr.in domain, and through GeoIP it will give you the weather for your location.
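
    For example, a small Python equivalent of that curl call could look like the sketch below. It assumes wttr.in keys the plain-text output off a curl-like User-Agent (otherwise it serves HTML), and with no city in the path GeoIP picks your location:

    import urllib.request

    # Fetch the plain-text forecast the way curl would (a sketch, not the only way).
    req = urllib.request.Request("https://wttr.in/",
                                 headers={"User-Agent": "curl/7.61.1"})
    print(urllib.request.urlopen(req).read().decode("utf-8"))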

    What can you do?

    • Get the returned information translated into a specific language
    • Define how many days we want it to...

    FPgM report: 2019-03

    Posted by Fedora Community Blog on January 18, 2019 06:50 PM
    Fedora Program Manager weekly report on Fedora Project development and progress

    Here’s your report of what has happened in Fedora Program Management this week.

    I’ve set up weekly office hours in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else.

    Announcements

    Fedora 30 Status

    Fedora 30 Change Proposal deadlines are approaching

    • Change proposals for Self-Contained Changes are due 2019-01-29.

    Fedora 30 includes a Change that will cause ambiguous python shebangs to error.  A list of failing builds is available on Taskotron.

    Fedora 30 includes a Change that will remove glibc langpacks from the buildroot. See the devel mailing list for more information and impacted packages.

    Changes

    Announced

    Submitted to FESCo

    Approved by FESCo

    The post FPgM report: 2019-03 appeared first on Fedora Community Blog.

    SUSE Open Build Service cheat sheet

    Posted by William Brown on January 18, 2019 02:00 PM

    SUSE Open Build Service cheat sheet

    Part of starting at SUSE has meant that I get to learn about Open Build Service. I’ve known that the project existed for a long time but I have never had a chance to use it. So far I’m thoroughly impressed by how it works and the features it offers.

    As A Consumer

    The best part of OBS is that it’s trivial on OpenSUSE to consume content from it. Zypper can add projects with the command:

    zypper ar obs://<project name> <repo nickname>
    zypper ar obs://network:ldap network:ldap
    

    I like to make the repo nickname (your choice) the same as the project name so I know what I have enabled. Once you run this you can easily consume content from OBS.

    Package Management

    As someone who has started to contribute to the SUSE 389-ds package, I’ve been slowly learning how this workflow works. OBS, similar to GitHub/GitLab, allows a branching and request model.

    On OpenSUSE you will want to use the osc tool for your workflow:

    zypper in osc
    

    You can branch from an existing project to make changes with:

    osc branch <project> <package>
    osc branch network:ldap 389-ds
    

    This will branch the project to my home namespace. For me this will land in “home:firstyear:branches:network:ldap”. Now I can checkout the content on to my machine to work on it.

    osc co <project>
    osc co home:firstyear:branches:network:ldap
    

    This will create the folder “home:…:ldap” in the current working directory.

    From here you can now work on the project. Some useful commands are:

    Add new files to the project (patches, new source tarballs etc).

    osc add <path to file>
    osc add feature.patch
    osc add new-source.tar.xz
    

    Edit the change log of the project (I think this is used in release notes?)

    osc vc
    

    Build your changes locally, matching the system you are on. Packages normally build on all or most OpenSUSE versions and architectures; this will build just for your local system and arch.

    osc build
    

    Commit your changes to the OBS server, where a complete build will be triggered:

    osc commit
    

    View the results of the last commit:

    osc results
    

    Enable people to use your branch/project as a repository. You edit the project metadata and enable repo publishing:

    osc meta prj -e <name of project>
    osc meta prj -e home:firstyear:branches:network:ldap
    
    # When your editor opens, change this section to enabled (disabled by default):
    <publish>
      <enabled />
    </publish>
    

    NOTE: In some cases, if you already have the package installed and you add the repo and update, it won’t install from your repo. This is because SUSE packages have a notion of “vendoring”: they continue to update from the same repo they were originally installed from. If you want to change this, use:

    zypper [d]up --from <repo name>
    

    You can then create a “request” to merge your branch changes back to the project origin. This is:

    osc sr
    

    This is as far as I have gotten with OBS so far, but I already appreciate how great this workflow is for package maintainers, reviewers and consumers. It’s a pleasure to work with software this well built.

    Structuring Rust Transactions

    Posted by William Brown on January 18, 2019 02:00 PM

    Structuring Rust Transactions

    I’ve been working on a database-related project in Rust recently, which takes advantage of my concurrently readable data structures. However, I ran into the problem of how to structure read/write transaction types that share the reader code and contain multiple inner read/write types.

    Some Constraints

    To be clear, there are some constraints. A “parent” write will only ever contain write transaction guards, and a read will only ever contain read transaction guards. This means we aren’t going to hit any deadlocks in the code (Rust can’t protect us from mis-ordering locks). An additional requirement is that readers and a single writer must be able to proceed simultaneously, but an rwlock-style writer-or-readers behaviour would still work here.

    Some Background

    To simplify this, imagine we have two concurrently readable datastructures. We’ll call them db_a and db_b.

    struct db_a { ... }
    
    struct db_b { ... }
    

    Now, each of db_a and db_b has its own way to protect its inner content, but they’ll return a DBReadGuard or a DBWriteGuard when we call db_a.read() or db_a.write() respectively.

    impl db_a {
        pub fn read(&self) -> DBReadGuard {
            ...
        }
    
        pub fn write(&self) -> DBWriteGuard {
            ...
        }
    }
    

    Now we make a “parent” wrapper transaction such as:

    struct server {
        a: db_a,
        b: db_b,
    }
    
    struct server_read {
        a: DBReadGuard,
        b: DBReadGuard,
    }
    
    struct server_write {
        a: DBWriteGuard,
        b: DBWriteGuard,
    }
    
    impl server {
        pub fn read(&self) -> server_read {
            server_read {
                a: self.a.read(),
                b: self.b.read(),
            }
        }

        pub fn write(&self) -> server_write {
            server_write {
                a: self.a.write(),
                b: self.b.write(),
            }
        }
    }
    

    The Problem

    Now the problem is that for both server_read and server_write I want to implement a “search” function that uses the same code. Search on a read or a write should behave identically! I also wanted to avoid the use of macros, as they can hide issues while stepping through a debugger like LLDB/GDB.

    Often the answer with Rust is “traits”: creating an interface that types adhere to. Rust also allows default trait implementations, which sounds like it could be a solution here.

    pub trait server_read_trait {
        fn search(&self) -> SomeResult {
            let result_a = self.a.search(...);
            let result_b = self.b.search(...);
            SomeResult(result_a, result_b)
        }
    }
    

    In this case, the issue is that &self in a trait is not aware of the fields in the struct - traits don’t define that fields must exist, so the compiler can’t assume they exist at all.

    Second, the type of self.a/b is unknown to the trait - because in a read it’s a “a: DBReadGuard”, and for a write it’s “a: DBWriteGuard”.

    The first problem can be solved by adding getter methods to the trait. Rust will also compile these out as inlines, so the correct thing for the type system is also the optimal thing at run time. So we’ll update this to:

    pub trait server_read_trait {
        fn get_a(&self) -> ???;
    
        fn get_b(&self) -> ???;
    
        fn search(&self) -> SomeResult {
            let result_a = self.get_a().search(...); // note the change from self.a to self.get_a()
            let result_b = self.get_b().search(...);
            SomeResult(result_a, result_b)
        }
    }
    
    impl server_read_trait for server_read {
        fn get_a(&self) -> &DBReadGuard {
            &self.a
        }
        // get_b is similar, so omitted
    }
    
    impl server_read_trait for server_write {
        fn get_a(&self) -> &DBWriteGuard {
            &self.a
        }
        // get_b is similar, so omitted
    }
    

    So now we have the second problem remaining: for server_write we have DBWriteGuard, and for the read we have DBReadGuard. There was a much longer experimentation process, but eventually the answer was simpler than I was expecting: Rust allows traits to have associated types that must themselves implement a trait.

    So provided that DBReadGuard and DBWriteGuard both implement “DBReadTrait”, we can give server_read_trait an associated type that enforces this. It looks something like:

    pub trait DBReadTrait {
        fn search(&self) -> ...;
    }
    
    impl DBReadTrait for DBReadGuard {
        fn search(&self) -> ... { ... }
    }
    
    impl DBReadTrait for DBWriteGuard {
        fn search(&self) -> ... { ... }
    }
    
    pub trait server_read_trait {
        type GuardType: DBReadTrait; // Say that GuardType must implement DBReadTrait
    
        fn get_a(&self) -> &Self::GuardType; // implementors must return that type implementing the trait.
    
        fn get_b(&self) -> &Self::GuardType;
    
        fn search(&self) -> SomeResult {
            let result_a = self.get_a().search(...);
            let result_b = self.get_b().search(...);
            SomeResult(result_a, result_b)
        }
    }
    
    impl server_read_trait for server_read {
        type GuardType = DBReadGuard;

        fn get_a(&self) -> &DBReadGuard {
            &self.a
        }
        // get_b is similar, so omitted
    }

    impl server_read_trait for server_write {
        type GuardType = DBWriteGuard;

        fn get_a(&self) -> &DBWriteGuard {
            &self.a
        }
        // get_b is similar, so omitted
    }
    

    This works! We now have a way to write a single “search” type for our server read and write types. In my case, the DBReadTrait also uses a similar technique to define a search type shared between the DBReadGuard and DBWriteGuard.

    How Do You Fedora: Journey into 2019

    Posted by Fedora Magazine on January 18, 2019 08:00 AM

    Fedora had an amazing 2018. The distribution saw many improvements with the introduction of Fedora 28 and Fedora 29. Fedora 28 included third-party repositories, making it easy to get software like the Steam client, Google Chrome and Nvidia’s proprietary drivers. Fedora 29 brought support for automatic updates for Flatpak.

    One of the four foundations of Fedora is Friends. Here at the Magazine we’re looking back at 2018, and ahead to 2019, from the perspective of several members of the Fedora community. This article focuses on what each of them did last year, and what they’re looking forward to this year.

    Fedora in 2018

    Radka Janekova attended five events in 2018. She went to FOSDEM as a Fedora Ambassador, gave two presentations at devconf.cz and three presentations on dotnet in Fedora. Janekova started using DaVinci Resolve in 2018: “DaVinci Resolve which is very Linux friendly video editor.” She did note one drawback, saying, “It may not be entirely open source though!”

    Julita Inca went to many places in the world in 2018. “I took part of the Fedora 29 Release Party in Poland where I shared my experiences of being an Ambassador of Fedora these years in Peru.” She is currently at the University of Edinburgh. “I am focusing in getting a Master in High Performance Computing in the University of Edinburgh using ARCHER that has CentOS as Operating System.” As part of her master’s degree she is using a lot of new software. “I am learning new software for parallel programming I learned openMP and MPI.” To profile code in C and Fortran she is using Intel’s VTune.

    Jose Bonilla went to a DevOps event hosted by a company called Rancher. Rancher is an open source company that provides a container orchestration framework which can be hosted in a variety of ways, including in the cloud or self-hosted. “I went to this event because I wished to gain more insight into how I can use Fedora containerization in my organization and to teach students how to manage applications and services.” This event showed that the power of open source is less about competition and more about completion. “There were several open source projects at this event working completely in tandem without ever having this as a goal. The companies at this event were Google, Rancher, Gitlab and Aqua.” Jose used a variety of open source applications in 2018. “I used Cockpit, Portainer and Rancher OS. Portainer and Rancher are both services that manage Docker containers, which only proves the utility of containers. I believe this to be the future of compute environments.” He is also working on tools for data analytics. “I am improving on my knowledge of Elasticsearch and the Elastic Stack — Kibana, which is an extraordinarily powerful open source set of tools for data analytics.”

    Carlos Enrique Castro León has not been to a Fedora event in Peru, but listens to Red Hat’s Command Line Heroes podcast. “I really like to listen to him since I can meet people related to free code.” Last year he started using Kdenlive and Inkscape. “I like them because there is a large community in Spanish that can help me.”

    Akinsola Akinwale started using VSCode, Calligra and Qt5 Designer in 2018. He uses VSCode for Python development. For editing documents and spreadsheets he uses Calligra. “I love Vscode for its embedded VIM, terminal & ease of use.” He started using Calligra just for a change of pace. He likes the flexibility of Qt5 Designer for creating graphical user interfaces instead of coding it all in VSCode.

    Kevin Fenzi went to several Fedora events in 2018. He enjoyed all of them, but liked Flock in Dresden the best of them all. “At Flock in Dresden I got a chance to talk face to face with many other Fedora contributors that I only talk to via IRC or email the rest of the time. The organizers did an awesome job, the venue was great and it was all around just a great time. There were some talks that made me think, and others that made me excited to see what would happen with them in the coming year. Also, the chance to have high bandwidth talks really helped move some ideas along to reality.” There were two applications Kevin started using in 2018. “First, after many years of use, I realized it was time to move on from using rdiff-backups for my backups. It’s a great tool, but it’s in python2 and very inactive upstream. After looking around I settled on borg backup and have been happily using that since. It has a few rough edges (it needs lots of cache files to do really fast backups, etc) but it has a very active community and seems to work pretty nicely.” The other application that Kevin started using is OpenShift. “Secondly, 2018 was the year I really dug into OpenShift. I understand now much more about how it works and how things are connected and how to manage and upgrade it. In 2019 we hope to move a bunch of things over to our OpenShift cluster. The OpenShift team is really doing a great job of making something that deploys and upgrades easily and are adding great features all the time (most recently the admin console, which is great to watch what your cluster is doing!).”

    Fedora in 2019

    Radka plans to do similar presentations in 2019. “At FOSDEM this time I’ll be presenting a story of an open source project eating servers with C#.” Janekova targets pre-university students in an effort to encourage young women to get involved in technology. “I really want to help dotnet and C# grow in the open source world, and I also want to educate the next generation a little bit better in terms of what women can or can not do.”

    Julita plans on holding two events in 2019. “I can promote the use of Fedora and GNOME in Edinburgh University.” When she returns to Peru she plans on holding a conference on writing parallel code on Fedora and GNOME.

    Jose plans on continuing to push open source initiatives such as cloud and container infrastructures. He will also continue teaching advanced Unix systems administration. “I am now helping a new generation of Red Hat Certified Professionals seek their place in the world of open source. It is indeed a joy when a student mentions they have obtained their certification because of what they were exposed to in my class.” He also plans on spending some more time with his art again.

    Carlos would like to write for Fedora Magazine and help bring the magazine to the Latin American community. “I would like to contribute to Fedora Magazine. If possible I would like to help with the magazine in Spanish.”

    Akinsola wants to hold a Fedora release party in 2019. “I want make many people aware of Fedora, make them aware they can be part of the release and it is easy to do.” He would also like to ensure that new Fedora users have an easy time adapting to their new OS.

    Kevin is excited about 2019 being a time of great change for Fedora. “In 2019 I am looking forward to seeing what and how we retool things to allow for lifecycle changes and more self service deliverables. I think it’s going to be a ton of work, but I am hopeful we will come out of it with a much better structure to carry us forward to the next period of Fedora success.” Kevin also had some words of appreciation for everyone in the Fedora community. “I’d like to thank everyone in the Fedora community for all their hard work on Fedora, it wouldn’t exist without the vibrant community we have.”


    Photo by Perry Grone on Unsplash.

    AdamW’s Debugging Adventures: The Mysterious Disappearing /proc

    Posted by Adam Williamson on January 18, 2019 03:15 AM

    Yep, folks, it’s that time again – time for one of old Grandpa Adam’s tall tales of root causing adventure…

    There’s a sort of catch-22 situation in Fedora that has been a personal bugbear for a very long time. It mainly affects Branched releases – each new Fedora release, when it has branched from Rawhide, but before it has been released. During this period the Bodhi update system is in effect, meaning all new packages have to go through Bodhi review before they are included in the composes for the release. This means, in theory, we should be able to make sure nothing really broken lands in the release. However, there’s a big class of really important updates we have never been able to test properly at all: updates that affect the installer.

    The catch-22 is this – release engineering only builds install media from the ‘stable’ package set, those packages that have gone through review. So if a package under review breaks the installer, we can’t test whether it breaks the installer unless we push it stable. Well, you can, but it’s quite difficult – you have to learn how to build an installer image yourself, then build one containing the packages from the update and test it. I can do that, but most other people aren’t going to bother.

    I’ve filed bugs and talked to people about ways to resolve this multiple times over many years, but a few months back I just got sick of the problem and decided to fix it myself. So I wrote an openQA update test which automates the process: it builds an installer image, with the packages from the update available to the installer image build tool. I also included a subsequent test which takes that image and runs an install with it. Since I already had the process for doing this manually down pat, it wasn’t actually very difficult.

    Only…when I deployed the test to the openQA staging instance and actually tried it out, I found the installer image build would frequently fail in a rather strange way.

    The installer image build process works (more or less) by creating a temporary directory, installing a bunch of packages to it (using dnf’s feature of installing to an alternative ‘root’), fiddling around with that environment a bit more, creating a disk image whose root is that temporary directory, then fiddling with the image a bit to make it into a bootable ISO. (HANDWAVE HANDWAVE). However, I was finding it would commonly fail in the ‘fiddling around with the environment’ stage, because somehow some parts of the environment had disappeared. Specifically, it’d show this error:

    FileNotFoundError: [Errno 2] No such file or directory: '/var/tmp/lorax.q8xfvc0p/installroot//proc/modules'
    

    lorax was, at that point, trying to touch that directory (never mind why). That’s the /proc/modules inside the temporary root, basically. The question was, why was it disappearing? And why had neither myself nor bcl (the lorax maintainer) seen it happening previously in manual use, or in official composes?

    I tried reproducing it in a virtual machine…and failed. Then I tried again, and succeeded. Then I ran the command again…and it worked! That pattern turned out to repeat: I could usually get it to happen the first time I tried it in a VM, but any subsequent attempts in the same VM succeeded.

    So this was seeming really pretty mysterious. Brian couldn’t get it to happen at all.

    At this point I wrote a dumb, short Python script which just constantly monitored the disappearing location and told me when it appeared and disappeared. I hacked up the openQA test to run this script, and upload the result. Using the timestamps, I was able to figure out exactly what bit of lorax was running when the directory suddenly disappeared. But…I couldn’t immediately see why anything in that chunk of lorax would wind up deleting the directory.
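
    The watcher was nothing fancy; a minimal sketch of that kind of script is below (an illustrative reconstruction, not the exact script; the path is the one from the error above, but it changes on every lorax run):

    import os
    import time

    WATCH = "/var/tmp/lorax.q8xfvc0p/installroot/proc/modules"

    present = os.path.exists(WATCH)
    print(time.asctime(), "present" if present else "missing", flush=True)
    while True:
        now = os.path.exists(WATCH)
        if now != present:
            print(time.asctime(), "appeared" if now else "disappeared", flush=True)
            present = now
        time.sleep(0.1)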

    At this point, other work became more important, and I wound up leaving this on the back burner for a couple of months. Then I came back to it a couple days ago. I picked back up where I left off, and did a custom build of lorax with some debug logging statements strewn around the relevant section, to figure out really precisely where we were when things went wrong. But this turned out to be a bit of a brick wall, because it turned out that at the time the directory disappeared, lorax was just…running mksquashfs. And I could not figure out any plausible reason at all why a run of mksquashfs would cause the directory to vanish.

    After a bit, though, the thought struck me – maybe it’s not lorax itself wiping the directory out at all! Maybe something else is doing it. So I thought to look at the system logs. And lo and behold, I found my smoking gun. At the exact time my script logged that the directory had disappeared, this message appeared in the system log:

    Jan 18 01:57:30 ibm-p8-kvm-03-guest-02.virt.pnr.lab.eng.rdu2.redhat.com systemd[1]: Starting Cleanup of Temporary Directories...
    

    Now, remember, our problem directory is in /var/tmp. So this smells very suspicious indeed! So I figured out what that service actually is – to do this, you just grep for the description (“Cleanup of Temporary Directories”) in /usr/lib/systemd/system – and it turned out to be /usr/lib/systemd/system/systemd-tmpfiles-clean.service, which is part of systemd’s systemd-tmpfiles mechanism, which you can read up on in great detail in man systemd-tmpfiles and man tmpfiles.d.

    I had run into it a few times before, so I had a vague idea what I was dealing with and what to look for. It’s basically a mechanism for managing temporary files and directories: you can write config snippets which systemd will read and do stuff like creating expected temporary files or directories on boot (this lets packages manage temporary directories without doing it themselves in scriptlets). I poked through the docs again and, sure enough, it turns out another thing the system can do is delete temporary files that reach a certain age:

    Age
    The date field, when set, is used to decide what files to delete when cleaning. If a file or directory is
    older than the current time minus the age field, it is deleted. The field format is a series of integers
    each followed by one of the following suffixes for the respective time units: s, m or min, h, d, w, ms, and
    us, meaning seconds, minutes, hours, days, weeks, milliseconds, and microseconds, respectively. Full names
    of the time units can be used too.
    

    This systemd-tmpfiles-clean.service does that job. So I went looking for tmpfiles.d snippets that cover /var/tmp, and sure enough, found one, in Fedora’s stock config file /usr/lib/tmpfiles.d/tmp.conf:

    q /var/tmp 1777 root root 30d
    

    The 30d there is the ‘age’ field. So this tells the tmpfiles mechanism that it’s fine to wipe anything under /var/tmp which is older than 30 days.

    Of course, naively we might think our directory won’t be older than 30 days – after all, we only just ran lorax! But remember, lorax installs packages into this temporary directory, and files and directories in packages get some of their time attributes from the package. So we (at this point, Brian and I were chatting about the problem as I poked it) looked into how systemd-tmpfiles defines age, precisely:

    The age of a file system entry is determined from its last modification timestamp (mtime), its last access
    timestamp (atime), and (except for directories) its last status change timestamp (ctime). Any of these three
    (or two) values will prevent cleanup if it is more recent than the current time minus the age field.
    

    So since our thing is a directory, its mtime and atime are relevant. So Brian and I both looked into those. He did it manually, while I hacked up my check script to also print the mtime and atime of the directory when it existed. And sure enough, it turned out these were several months in the past – they were obviously related to the date the filesystem package (from which /proc/modules comes) was built. They were certainly longer than 30 days ago.

    Finally, I looked into what was actually running systemd-tmpfiles-clean.service; it’s run on a timer, systemd-tmpfiles-clean.timer. That timer is set to run the service 15 minutes after the system boots, and every day thereafter.

    So all of this hooked up nicely into a convincing story. openQA kept running into this problem because it always runs the test in a freshly-booted VM – that ’15 minutes after boot’ was turning out to be right in the middle of the image creation. My manual reproductions were failing on the first try for the same reason – but then succeeding on the second and subsequent tries because the cleaner would not run again until the next day. And Brian and I had either never or rarely seen this when we ran lorax manually for one reason or another because it was pretty unlikely the “once a day” timer would happen to wake up and run just when we had lorax running (and if it did happen, we’d try again, and when it worked, we’d figure it was just some weird transient failure). The problem likely never happens in official composes, I think, because the tmpfiles timer isn’t active at all in the environment lorax gets run in (haven’t double-checked this, though).

    Brian now gets to deal with the thorny problem of trying to fix this somehow on the lorax side (so the tmpfiles cleanup won’t remove bits of the temporary tree even if it does run while lorax is running). Now I know what’s going on, it was easy enough to work around in the openQA test – I just have the test do systemctl stop systemd-tmpfiles-clean.timer before running the image build.

    Install other software versions on CentOS

    Posted by Alvaro Castillo on January 17, 2019 09:10 PM

    Sometimes choosing CentOS as a server becomes a bit complicated if we don’t know how to administer it. If we want to install new versions of PHP, MariaDB, MongoDB or whatever, it will be practically impossible unless you compile the software yourself or make use of external repositories like Remi or Webtatic, where you already have to trust that things will go well.

    However, CentOS has followed the philosophy Red Hat has of keeping a repository called "CentOS Softw...

    Capture Raspberry Pi kernel crashes

    Posted by Juan Orti Alcaine on January 17, 2019 07:10 PM

    I’m experiencing kernel panics in a headless Raspberry Pi with Fedora 29 Server and need a way to capture what is happening.

    First I tried to enable kdump, but this doesn’t seem possible. If someone has done it, I’d like to hear.

    What I’m using now is enabling netconsole to log all the kernel messages over the network to an rsyslog server. This is the config on the Pi:

    /etc/modules-load.d/netconsole.conf:

    netconsole

    /etc/modprobe.d/netconsole.conf:

    options netconsole netconsole=4444@10.0.0.1/eth0,20514@10.0.0.2/00:11:22:33:44:55

    From the netconsole documentation:

     netconsole=[+][src-port]@[src-ip]/[<dev>],[tgt-port]@<tgt-ip>/[tgt-macaddr]
    
       where
            +             if present, enable extended console support
            src-port      source for UDP packets (defaults to 6665)
            src-ip        source IP to use (interface address)
            dev           network interface (eth0)
            tgt-port      port for logging agent (6666)
            tgt-ip        IP address for logging agent
            tgt-macaddr   ethernet MAC address for logging agent (broadcast)
    

    And in the rsyslog server:

    /etc/rsyslog.d/pi-remote.conf:

    $ModLoad imudp
    $RuleSet remote
    
    if $fromhost-ip=='10.0.0.1' then /var/log/remote/pi-netconsole.log
    & stop
    
    $InputUDPServerBindRuleset remote
    $UDPServerRun 20514
    
    $RuleSet RSYSLOG_DefaultRuleset

    /etc/logrotate.d/remote-netconsole:

    /var/log/remote/*.log
    {
            copytruncate
            rotate 30
            daily
            missingok
            dateext
            notifempty
            delaycompress
            compress
            maxage 31
            postrotate
                    /usr/bin/systemctl kill -s HUP rsyslog.service >/dev/null 2>&1 || true
            endscript
    }
    # firewall-cmd --add-port=20514/udp
    # firewall-cmd --add-port=20514/udp --permanent

     
    I’ve used this documentation:
    https://fedoraproject.org/wiki/Netconsole
    https://www.kernel.org/doc/Documentation/networking/netconsole.txt
    https://michael.stapelberg.de/posts/2013-09-16-remote_syslog/

    Kiwi TCMS is going to FOSDEM 2019

    Posted by Kiwi TCMS on January 17, 2019 03:30 PM

    Hello testers, Kiwi TCMS is going to FOSDEM this year. This is where you can find us:

    Kiwi TCMS sticker

    We would like to meet all of you and talk about test management and test process organization. In case you are stuck for crazy ideas, check out our project mission for inspiration.

    Be part of the community

    We are turning 10 years old and we have presents for you! You will have to complete a small challenge, and then you can get your hands (errr, feet) on a pair of these:

    Kiwi TCMS socks

    Here's what else you can do to help us:

    Happy testing!

    Announce: Entangle “Sodium“ release 2.0 – an app for tethered camera control & capture

    Posted by Daniel Berrange on January 16, 2019 09:44 PM

    I am pleased to announce a new release 2.0 of Entangle is available for download from the usual location:

      https://entangle-photo.org/download/
    

    This release is largely bug fixes, with a couple of small features:

    • Require gobject introspection >= 1.54
    • Require GTK3 >= 3.22
    • Fix dependency on libraw
    • Fix variable name in photobox plugin
    • Document some missing keyboard shortcuts
    • Fix upper bound in histogram to display clipped pixel
    • Refresh translations
    • Option to highlight over exposed pixels in red
    • Disable noisy compiler warning
    • Remove use of deprecated application menu concept
    • Fix image redraw when changing some settings
    • Update mailing list address in appdata
    • Add more fields to appdata content
    • Fix reference counting during window close
    • Use correct API for destroying top level windows
    • Fix unmounting of cameras with newer gvfs URI naming scheme
    • Avoid out of bounds read of property values
    • Fix many memory leaks
    • Workaround for combo boxes not displaying on Wayland
    • Fix race condition in building enums
    • Fix setting of gschema directory during startup
    • Set env to ensure plugins can find introspection typelib Requires

    NOTICE: Epylog has been retired for Fedora Rawhide/30

    Posted by Stephen Smoogen on January 16, 2019 07:44 PM
    Epylog is log analysis code written by Konstantin ("Icon") Ryabitsev when he was working at Duke University in the early 2000s. It was moved to FedoraHosted and then never got moved to other hosting afterwards. The code is written in early Python 2 syntax (maybe 2.2) and has been hacked to work with newer versions over time, but has not seen any major development since 2008. I have been sort of looking after the package in Fedora with the hope of a 'rewrite for Python 3' that I never got done. [This is on me, as I have been licking the cookie here.]

    Because it requires a lot of work, and Python 2's End of Life is coming up in a year, I retired it from rawhide so that it would not branch to Fedora 30. I would recommend that users of epylog look for newer replacements (we in Fedora infrastructure will be doing so and I will post any recommendations as time goes by).

    Detect changes to your files with aide

    Posted by Alvaro Castillo on January 16, 2019 07:20 PM

    Aide is an advanced intrusion detection system that lets us see changes in files. If someone gains unauthorized access to our server and modifies a file they should not touch, this intrusion detection system spots it via the file’s hash.

    It also lets you review newly created, deleted or modified files. When it scans the files it can return various exit codes, such as write errors, invalid arguments, incom...

    Fedora 29-20190115 updated Live isos released

    Posted by Ben Williams on January 16, 2019 06:30 PM

    The Fedora Respins SIG is pleased to announce the latest release of Updated F29-20190115 Live ISOs, carrying the 4.19.14-300 kernel.

    This set of updated ISOs will save a considerable amount of updates after a new install. (New installs of Workstation have 1.2GB of updates.)

    This Set also includes a One-Off Build of the Security Lab.

    We would also like to thank Fedora QA for running the following tests on our ISOs:


    https://openqa.fedoraproject.org/tests/overview?distri=fedora&version=29&build=FedoraRespin-29-updates/20190115.0&groupid=1

    These can be found at http://tinyurl.com/Live-respins. We would also like to thank the following IRC nicks for helping test these ISOs: linuxmodder, Southern_Gentlem, Fred, adingman.

    As always, we need testers to help with our respins. We have a new badge for people who help test. See us in #fedora-respins on Freenode IRC.

    Project automation with Travis, GitHub Pages and Quay

    Posted by Pablo Iranzo Gómez on January 16, 2019 03:00 PM

    Project hosting and automation

    /me: Pablo Iranzo Gómez ( https://iranzo.github.io )


    You got a shiny new project? What now?

    • Code usually also requires a webpage
      • Documentation, General Information, Developer information, etc
    • Web costs money
      • Hosting, Domain, Maintenance, etc

    Some Philosophy

    Empty your mind, be shapeless, formless, like water. Now you put water in a cup, it becomes the cup; you put water into a bottle, it becomes the bottle; you put it in a teapot, it becomes the teapot. Now water can flow or it can crash. Be water, my friend.

    Note: Automation: Be lazy, have someone else doing it for you.


    Git Hub / Gitlab

    • A lot of source code is hosted at GitHub or GitLab, but those are code repositories.
    • We want a website!!

    Pages come to play

    Both provide a ‘static’ webserver to be used for your projects for free.

    GitHub/GitLab serve from a branch in your repo (usually the ‘yourusername.github.io’ repo).

    You can buy a domain and point it to your website.


    Static doesn’t mean end of fun

    There are many ‘static’ content generators that provide rich features:

    • styles
    • links
    • image resizing
    • even ‘search’

    Some static generators

    The language matters for developing ‘plugins’, not for content.

    • Jekyll (Ruby)
    • Pelican (Python)

    They ‘render’ Markdown into HTML.


    There’s even more fun

    • GitHub provides Jekyll support
    • GitHub, GitLab, etc. allow plugging in third-party CI

    Think about endless possibilities!!!


    Some food for thought

    • Repos have branches
    • Repos can have automation
    • External automation like Travis CI can do things for you

    Note: We have all the pieces to push a new Markdown file and have it trigger a website update and publish.


    Is a static webpage ugly?

    • There are lots of templates: http://www.pelicanthemes.com
    • Each theme has a different feature set
    • Choose wisely! (small screens, HTML5, etc.)
      • If not, changing themes is quite easy: update, and ‘render’ using the new one.

    ** DEMO on Pelican + Theme **


    Travis-ci.org

    Automation for projects:

    • Free for OpenSource projects
    • Configured via .travis.yml
    • Some other settings via Web UI (env vars, etc)

    Example (setup environment)


    Example continuation (actions!)


    Publish to remote repo


    Fancy things

    • Build one repo and deploy to another branch/repo
    • Upload pypi package
    • Call triggers
    • etc

    Real world use cases

    • Run ‘tox’ for UT’s
    • Test latest theme and plugins render
    • Render documentation website on docs update
    • Render latest CV
    • Build and publish container

    Wrap up

    OK, automation is ready; our project validates commits, PRs, website generation…

    What else?


    Containers!!


    Container creation - Quay

    Quay and Docker Hub allow automating a build on each branch commit


    Docker Hub

    On each commit, a new container will be built


    You said to be water…

    Yes!

    Try https://github.com/iranzo/blog-o-matic/

    Fork it to your repo and get:

    • minimal setup steps (GitHub token, travis-ci activation)
    • automated setup of Pelican + Elegant theme via a travis-ci job that builds on each commit
    • ready to be submitted to search engines via sitemap and web claiming


    Questions?

    Pablo.Iranzo@gmail.com https://iranzo.github.io


    Fedora Classroom: Getting started with L10N

    Posted by Fedora Magazine on January 16, 2019 08:00 AM

    Fedora Classroom sessions continue with an introductory session on Fedora Localization (L10N). The general schedule for sessions is available on the wiki, along with resources and recordings from previous sessions. Read on for more details about the upcoming L10N Classroom session next week.

    Topic: Getting Started with L10N

    The goal of the Fedora Localization Project (FLP) is to bring everything around Fedora (the Software, Documentation, Websites, and culture) closer to local communities (countries, languages and in general cultural groups).  The session is aimed at beginners. Here is the agenda:

    • What is L10N?
    • Difference between Translation and Localization
    • Overview: How does L10N work?
    • Fedora structure and peculiarities related to L10N
    • Ways to join, help, and contribute
    • Further information with references and links

    When and where

    Instructor

    Silvia Sánchez has been a Fedora community member for a number of years. She currently focuses her contributions on QA, translation, wiki editing, and the Ambassadors teams among others. She has a varied background, having studied systems, programming, design, and photography. She speaks, reads, and writes Spanish, English, and German and further, also reads Portuguese, French, and Italian. In her free time, Silvia enjoys forest walks, art, and writing fiction.

    Share Your Doc, a minimalist version of Pastebin

    Posted by Alvaro Castillo on January 15, 2019 07:40 PM

    A few days ago I finished a small web tool called Share Your Doc, which lets you share source code, messages, scripts, etc. over the web, much like a typical Pastebin or Fpaste service you probably already know.

    However, the nice thing about this one is that it works together with the operating system: it does not require any user authentication, nor does it use FTP connections. You simply add your code, create the token, and off you go.

    It is a tool...

    Security isn’t a feature

    Posted by Josh Bressers on January 15, 2019 03:48 PM

    As CES draws to a close, I’ve seen more than one security person complain that nobody at the show was talking about security. There were an incredible number of consumer devices unveiled, and no doubt there is no security in any of them. I think we sometimes get so caught up in the security world that we forget the VAST majority of people don’t care if something has zero security. People want interesting features that amuse them or make their lives easier. Security is rarely either of these; generally it makes their lives worse, so it’s an anti-feature to many.

    Now the first thing many security people think goes something like this: “if there’s no security they’ll be sorry when their lightbulb steals their wallet and dumps the milk on the floor!!!” The reality is that argument will convince nobody; it’s not even very funny, so they’re laughing at us, not with us. Our thoughts by their very nature blame all the wrong people, and we try to scare them into listening to us. It’s never worked. Ever. That one time you think it worked, they were only pretending to care so you would go away.

    So it brings us to the idea that security isn’t a feature. Turning your lights on is a feature. Cooking your dinner is a feature. Driving your car is a feature. Not bursting into flames is not a feature. Well, it sort of is, but nobody talks about it. Security is a lot like the bursting into flames thing. Security really is about something not happening, and things not happening is the fundamental problem we have when we try to talk about all this. You can’t build a plausible story around an event that may or may not happen. Trying to build a narrative around something that may or may not happen is incredibly confusing. This isn’t how features work: features do positive things, they don’t not do negative things (I don’t even know if that’s right). Security isn’t a feature.

    So the question you should be asking is how we get the products being created to contain more of this thing we keep calling security. The reality is we can’t make this happen given our current strategies. There are two ways products will be produced that are less insecure (see what I did there). The first is that the market demands it, which given current trends isn’t happening anytime soon; people just don’t care about security. The second is that a government creates regulations that demand it. Given the current state of the world’s governments, I’m not confident that will happen either.

    Let’s look at market demand first. If consumers decide that buying horribly insecure products is bad, they could start buying products with more built-in security. But even the security industry can’t define what that really means. How can you measure which product has the best security? Consumers don’t have a way to know which products are safer. How to measure security could be a multi-year blog series, so I won’t get into the details today.

    What if the government regulates security? We sort of end up in a similar place to consumer demand. How do we define security? It’s a bit like defining safety I suppose. We’re a hundred years into safety regulations and still get a lot wrong and I don’t think anyone would argue defining safety is much easier than defining security. Security regulation would probably follow a similar path. It will be decades before things could be good enough to create real change. It’s very possible by then the machines will have taken over (that’s the secret third way security gets fixed, perhaps a post for another day).

    So here we are again, things seem a bit doom and gloom. That’s not the intention of this post. The real purpose is to point out we have to change the way we talk about security. Yelling at vendors for building insecure devices isn’t going to ever work. We could possibly talk to consumers in a way that resonates with them, but does anyone buy the stove that promises to burst into flames the least? Nobody would ever use that as a marketing strategy. I bet it would have the opposite effect, a bit like our current behaviors and talking points I suppose.

    Complaining that companies don’t take security seriously hasn’t ever worked and never will work. They need an incentive to care, and us complaining isn’t an incentive. Stay tuned for some ideas on how to frame these conversations and who the audience needs to be.

    OpenClass: Continuous integration and delivery in software development

    Posted by HULK Rijeka on January 15, 2019 12:44 PM

    The Rijeka branch of the Croatian Linux Users' Association and the Department of Informatics of the University of Rijeka invite you to an OpenClass session to be held on Thursday, January 17, 2019 at 5 PM, in the University Departments building, room O-028. The title:

    Continuous integration and delivery in software development

    The speaker is Kristijan Lenković, a former student of the Department of Informatics and head of a software development team at Coadria/iOLAP in Rijeka.

    Abstract

    The talk will cover continuous integration and delivery as part of the modern software development life cycle, with a particular focus on web application development. The goal of this methodology is stable, efficient, secure, and fast development on a predefined infrastructure and runtime environment, along with a significant reduction in the high costs, time, and risk involved in delivering software to a production environment.

    We hope to see you there!


    Contribute at the Fedora Test Day for kernel 4.20

    Posted by Fedora Magazine on January 14, 2019 06:49 PM

    The kernel team is working on final integration for kernel 4.20. This version was just recently released, and will arrive soon in Fedora. It includes many security fixes. As a result, the Fedora kernel and QA teams have organized a test day for Tuesday, January 15, 2019. Refer to the wiki page for links to the test images you’ll need to participate.

    How do test days work?

    A test day is an event where anyone can help make sure changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.

    To contribute, you only need to be able to do the following things:

    • Download test materials, which include some large files
    • Read and follow directions step by step

    The wiki page for the kernel test day has a lot of good information on what and how to test. After you’ve done some testing, you can log your results in the test day web application. If you’re available on or around the day of the event, please do some testing and report your results.

    Happy testing, and we hope to see you on test day.


    Updating release schedule tasks

    Posted by Fedora Community Blog on January 14, 2019 04:03 PM

    One thing that I noticed as I got settled into this role last summer is that the Fedora release schedule tasks look a lot like they did when I first started contributing almost a decade ago. That’s not necessarily a bad thing (if it’s not broke, don’t fix it), but I suspect it’s less because we’re still getting releases out the same way we did 10 years ago and more because we haven’t captured when reality has drifted from the schedule.

    As I start putting together a draft of the Fedora 31 release schedule, I want to take the opportunity to re-converge on reality. Last week, I sent an email to all of the teams that have a schedule in the main release schedule requesting updates to the tasks they have.

    I’m putting the question to the larger community now. What tasks should be added, changed, or removed from the schedules? Are there teams that should be specifically called out in the release schedule? How can our release schedules better serve the community? I’m open to your feedback via email or as an issue on the schedule Pagure repo.

    The post Updating release schedule tasks appeared first on Fedora Community Blog.

    Home Automation I

    Posted by Zamir SUN on January 14, 2019 02:29 PM

    I’ve been thinking about automating my home for quite some time. Basically, my requirements are:

    • I can control the power of some home appliances no matter which platform I am using, be it Linux or Android or even iOS. And it can be controlled with customized rules.
    • I can power on my workstation without using WOL.

    For the first requirement, I purchased some so-called smart power strips early on. Unfortunately they have a lot of limitations and are not really ‘smart’; most of them only work with a timer. So I’ve been looking for alternatives.

    As for the second requirement, I’ve been thinking about adding a relay to control the power button. However, after some discussion with Shankerwangmiao and z4yx, they told me they have already made a product-level prototype, and Shankerwangmiao kindly offered me the PCIe adapter they made for free. They call it IPMI_TU.

    During the new year holiday, I decided to spend more time on the first requirement, and Sonoff came up. Their products use an app called EWeLink, which seems to have more features than the ones I have. After some research, I learned that Sonoff products are equipped with a SoC called ESP8266, which has become very popular recently in the so-called ‘IoT’ area. I even found an open source firmware for a series of Sonoff products called Sonoff Tasmota, which is appealing to me. The Sonoff Tasmota firmware supports control over MQTT, which is a plus as I can make customized rules anywhere and just make the MQTT call when the rules are met. The Sonoff Tasmota firmware also works on many other ESP8266-based smart plugs, so I checked their list and finally settled on one of the smaller-sized variants, plus a Sonoff Basic smart switch to control my light.
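
    Once a device is running Tasmota, any machine with an MQTT client can toggle it with a plain publish. A minimal sketch, where the broker host mqtt.local and the device topic sonoff-basic are placeholders, using Tasmota's cmnd/<topic>/POWER convention:

    # toggle the relay on a Tasmota-flashed device via MQTT
    $ mosquitto_pub -h mqtt.local -t cmnd/sonoff-basic/POWER -m TOGGLE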

    Now it’s time to flash them. I thought it would be easy, but in fact it took me quite some time.

    In order to learn about the Arduino IDE and ESP8266 flashing, I purchased a NodeMCU board in advance. The Arduino IDE is available in Fedora, so I only needed to run dnf install -y arduino. But this is just the beginning. To make a long story short, I needed to install a bunch of Arduino libraries, which was not a problem at first, but it meant I had to pick some older library versions to work around bugs in newer ones.

    So here are some notes from flashing the Huafan smart plug I purchased.

    • Crystal Frequency needs to be changed to 40MHz
    • Choose “v1.4 Prebuilt” for the lwIP Variant to work around some bug
    • Always connect GPIO0 to ground before powering the ESP8266 on for flashing. It can be disconnected after powering up, but it won’t work if you power the ESP8266 on first and then connect GPIO0.

    The first note is pretty straightforward, but the other two really took me a long time to debug before I figured out the expected way.

    Thanks to imi415 for the suggestions that helped narrow the WiFi problem down.

    And for flashing the Sonoff Basic switch:

    • Remember to change Crystal Frequency back to 26MHz
    • The lwIP Variant still needs to be v1.4 Prebuilt
    • Disable any inessential features you don’t need by editing my_user_config.h

    That’s pretty much it for those two.

    Then comes the IPMI_TU. IPMI_TU is not ESP8266 based; instead it uses an STM32F103, an ARM Cortex-M3 MCU, together with a WIZnet W5500 Ethernet controller. To flash an STM32, a tool called stlink is needed, which is also available in Fedora as stlink or stlink-gui.
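
    For reference, a minimal flashing session with the open source stlink command line tools might look like the following sketch (ipmi_tu.bin is a placeholder for the built firmware image):

    $ sudo dnf install -y stlink
    $ st-info --probe                        # confirm the ST-Link programmer sees the MCU
    $ st-flash write ipmi_tu.bin 0x8000000   # write the firmware at the usual STM32 flash base address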

    Since IPMI_TU was originally designed for other use cases, it uses Protocol Buffers as the data serialization protocol in its firmware. This is overkill for my use case, so I replaced the control function with a plain text one.

    When it came to flashing, I did something wrong in the beginning, and after that I could not flash it again using the open source variant of stlink. I figured out a way to re-flash its bootloader and used the official STM32 flashing tool to flash it. Luckily, after that the STM32 was back to normal.

    One more thing to note: the firmware of IPMI_TU uses the DHCP log server option as a hackish way to determine the MQTT server, so changes to the DHCP configuration are needed.
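
    For example, with ISC dhcpd this can be done by advertising the broker host through DHCP option 7 (log-servers). A sketch, assuming the MQTT host sits at 192.0.2.10:

    # echo 'option log-servers 192.0.2.10;' >> /etc/dhcp/dhcpd.conf
    # systemctl restart dhcpd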

    Now the firmware part is done. I’ll write my experience about the server side later on.

    PHP with the NGINX unit application server

    Posted by Remi Collet on January 14, 2019 02:21 PM

    Official web site: NGINX Unit

    The official repository, for RHEL and CentOS, provides the PHP module only for the PHP version in the official distribution repository (5.3 / 5.4).

    My repository provides various versions of the module, as base packages (unit-php) and as Software Collections (php##-unit-php).

    Here is a small test tutorial to create one application per available PHP version.

     

    1. Official repository installation

    For now, the packages are only available for CentOS / RHEL 6 and 7.

    Create the repository configuration; see CentOS Packages in the official documentation.

    [unit]
    name=unit repo
    baseurl=https://packages.nginx.org/unit/centos/$releasever/$basearch/
    gpgcheck=0
    enabled=1
    

    Edit: the main unit package is now also available in my repository, so the official upstream repository is no longer mandatory. It includes the changes from PR #212 and #215, which are under review.

    2. Remi repository installation

    # yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
    # yum install http://rpms.remirepo.net/enterprise/remi-release-7.rpm

    3. Server and modules installation

    Install the NGINX Unit server and various PHP modules. The unit-php package provides the module for the system default php.

    # yum install unit unit-php php56-unit-php php71-unit-php php72-unit-php php73-unit-php

    4. Test configuration

    4.1 Preparation

    This configuration creates a listener for each PHP version, listening on a different port (8300, 8356, ...), and an application serving the usual web application directory.

    Download the unit.config file:

    {
    	"applications": {
    		"exphp": {
    			"type": "php",
    			"user": "nobody",
    			"processes": 2,
    			"root": "/var/www/html",
    			"index": "index.php"
    		},
    		"exphp56": {
    			"type": "php 5.6",
    			"user": "nobody",
    			"processes": 2,
    			"root": "/var/www/html",
    			"index": "index.php"
    		},
    		"exphp71": {
    			"type": "php 7.1",
    			"user": "nobody",
    			"processes": 2,
    			"root": "/var/www/html",
    			"index": "index.php"
    		},
    		"exphp72": {
    			"type": "php 7.2",
    			"user": "nobody",
    			"processes": 2,
    			"root": "/var/www/html",
    			"index": "index.php"
    		},
    		"exphp73": {
    			"type": "php 7.3",
    			"user": "nobody",
    			"processes": 2,
    			"root": "/var/www/html",
    			"index": "index.php"
    		}
    	},
    	"listeners": {
    		"*:8300": {
    			"application": "exphp"
    		},
    		"*:8356": {
    			"application": "exphp56"
    		},
    		"*:8371": {
    			"application": "exphp71"
    		},
    		"*:8372": {
    			"application": "exphp72"
    		},
    		"*:8373": {
    			"application": "exphp73"
    		}
    	}
    }
    

    4.2 Run the service:

    # systemctl start unit

    4.3 Configuration

    Configuration is managed through a REST API:

    # curl -X PUT --data-binary @unit.config --unix-socket /var/run/unit/control.sock http://localhost/config
    {
        "success": "Reconfiguration done."
    }

    And to check running configuration:

    # curl --unix-socket /var/run/unit/control.sock http://localhost/config

    5. Usage

    You can access the application on each new port:

    • http://localhost:8300/ for default PHP
    • http://localhost:8356/ for PHP version 5.6
    • http://localhost:8372/ for PHP version 7.2
    • etc

    The phpinfo page will display the language information; notice that, in this case, the Server API is unit.
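
    A quick way to check each listener from the command line, assuming a simple phpinfo() page dropped into the shared document root, could be:

    # echo '<?php phpinfo();' > /var/www/html/index.php
    # curl -s http://localhost:8356/ | grep -o 'PHP Version [0-9.]*' | head -n 1
    # curl -s http://localhost:8372/ | grep -o 'PHP Version [0-9.]*' | head -n 1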

    6. Conclusion

    As this is an application server, we'll probably plug it behind a web frontend (Apache HTTP Server or NGINX).

    This project seems interesting, but it is quite young (the first version, 1.2, available on GitHub, was released in June 2018); we'll see what the user feedback will be.

    Current version is 1.7.

    Edit: I think this tool is particularly interesting for managing various languages and various versions, so it is ideal with Software Collections.

    How to Build a Netboot Server, Part 4

    Posted by Fedora Magazine on January 14, 2019 08:00 AM

    One significant limitation of the netboot server built in this series is that the operating system image being served is read-only. Some use cases may require the end user to modify the image. For example, an instructor may want to have the students install and configure software packages like MariaDB and Node.js as part of their course walk-through.

    An added benefit of writable netboot images is the end user’s “personalized” operating system can follow them to different workstations they may use at later times.

    Change the Bootmenu Application to use HTTPS

    Create a self-signed certificate for the bootmenu application:

    $ sudo -i
    # MY_NAME=$(</etc/hostname)
    # MY_TLSD=/opt/bootmenu/tls
    # mkdir $MY_TLSD
    # openssl req -newkey rsa:2048 -nodes -keyout $MY_TLSD/$MY_NAME.key -x509 -days 3650 -out $MY_TLSD/$MY_NAME.pem

    Verify your certificate’s values. Make sure the “CN” value in the “Subject” line matches the DNS name that your iPXE clients use to connect to your bootmenu server:

    # openssl x509 -text -noout -in $MY_TLSD/$MY_NAME.pem

    Next, update the bootmenu application’s listen directive to use the HTTPS port and the newly created certificate and key:

    # sed -i "s#listen => .*#listen => ['https://$MY_NAME:443?cert=$MY_TLSD/$MY_NAME.pem\&key=$MY_TLSD/$MY_NAME.key\&ciphers=AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA'],#" /opt/bootmenu/bootmenu.conf

    Note the ciphers have been restricted to those currently supported by iPXE.

    GnuTLS requires the “CAP_DAC_READ_SEARCH” capability, so add it to the bootmenu application’s systemd service:

    # sed -i '/^AmbientCapabilities=/ s/$/ CAP_DAC_READ_SEARCH/' /etc/systemd/system/bootmenu.service
    # sed -i 's/Serves iPXE Menus over HTTP/Serves iPXE Menus over HTTPS/' /etc/systemd/system/bootmenu.service
    # systemctl daemon-reload

    Now, add an exception for the bootmenu service to the firewall and restart the service:

    # MY_SUBNET=192.0.2.0
    # MY_PREFIX=24
    # firewall-cmd --add-rich-rule="rule family='ipv4' source address='$MY_SUBNET/$MY_PREFIX' service name='https' accept"
    # firewall-cmd --runtime-to-permanent
    # systemctl restart bootmenu.service

    Use wget to verify it’s working:

    $ MY_NAME=server-01.example.edu
    $ MY_TLSD=/opt/bootmenu/tls
    $ wget -q --ca-certificate=$MY_TLSD/$MY_NAME.pem -O - https://$MY_NAME/menu

    Add HTTPS to iPXE

    Update init.ipxe to use HTTPS. Then recompile the ipxe bootloader with options to embed and trust the self-signed certificate you created for the bootmenu application:

    $ echo '#define DOWNLOAD_PROTO_HTTPS' >> $HOME/ipxe/src/config/local/general.h
    $ sed -i 's/^chain http:/chain https:/' $HOME/ipxe/init.ipxe
    $ cp $MY_TLSD/$MY_NAME.pem $HOME/ipxe
    $ cd $HOME/ipxe/src
    $ make clean
    $ make bin-x86_64-efi/ipxe.efi EMBED=../init.ipxe CERT="../$MY_NAME.pem" TRUST="../$MY_NAME.pem"

    You can now copy the HTTPS-enabled iPXE bootloader out to your clients and test that everything is working correctly:

    $ cp $HOME/ipxe/src/bin-x86_64-efi/ipxe.efi $HOME/esp/efi/boot/bootx64.efi

    Add User Authentication to Mojolicious

    Create a PAM service definition for the bootmenu application:

    # dnf install -y pam_krb5
    # echo 'auth required pam_krb5.so' > /etc/pam.d/bootmenu

    Add a library to the bootmenu application that uses the Authen-PAM perl module to perform user authentication:

    # dnf install -y perl-Authen-PAM;
    # MY_MOJO=/opt/bootmenu
    # mkdir $MY_MOJO/lib
    # cat << 'END' > $MY_MOJO/lib/PAM.pm
    package PAM;
    
    use Authen::PAM;
    
    sub auth {
       my $success = 0;
    
       my $username = shift;
       my $password = shift;
    
       my $callback = sub {
          my @res;
          while (@_) {
             my $code = shift;
             my $msg = shift;
             my $ans = "";
       
             $ans = $username if ($code == PAM_PROMPT_ECHO_ON());
             $ans = $password if ($code == PAM_PROMPT_ECHO_OFF());
       
             push @res, (PAM_SUCCESS(), $ans);
          }
          push @res, PAM_SUCCESS();
    
          return @res;
       };
    
       my $pamh = new Authen::PAM('bootmenu', $username, $callback);
    
       {
          last unless ref $pamh;
          last unless $pamh->pam_authenticate() == PAM_SUCCESS;
          $success = 1;
       }
    
       return $success;
    }
    
    return 1;
    END

    The above code is taken almost verbatim from the Authen::PAM::FAQ man page.

    Redefine the bootmenu application so it returns a netboot template only if a valid username and password are supplied:

    # cat << 'END' > $MY_MOJO/bootmenu.pl
    #!/usr/bin/env perl
    
    use lib 'lib';
    
    use PAM;
    use Mojolicious::Lite;
    use Mojolicious::Plugins;
    use Mojo::Util ('url_unescape');
    
    plugin 'Config';
    
    get '/menu';
    get '/boot' => sub {
       my $c = shift;
    
       my $instance = $c->param('instance');
       my $username = $c->param('username');
       my $password = $c->param('password');
    
       my $template = 'menu';
    
       {
          last unless $instance =~ /^fc[[:digit:]]{2}$/;
          last unless $username =~ /^[[:alnum:]]+$/;
          last unless PAM::auth($username, url_unescape($password));
          $template = $instance;
       }
    
       return $c->render(template => $template);
    };
    
    app->start;
    END

    The bootmenu application now looks for the lib directory relative to its WorkingDirectory. However, by default the working directory is set to the root directory of the server for systemd units. Therefore, you must update the systemd unit to set WorkingDirectory to the root of the bootmenu application instead:

    # sed -i "/^RuntimeDirectory=/ a WorkingDirectory=$MY_MOJO" /etc/systemd/system/bootmenu.service
    # systemctl daemon-reload

    Update the templates to work with the redefined bootmenu application:

    # cd $MY_MOJO/templates
    # MY_BOOTMENU_SERVER=$(</etc/hostname)
    # MY_FEDORA_RELEASES="28 29"
    # for i in $MY_FEDORA_RELEASES; do echo '#!ipxe' > fc$i.html.ep; grep "^kernel\|initrd" menu.html.ep | grep "fc$i" >> fc$i.html.ep; echo "boot || chain https://$MY_BOOTMENU_SERVER/menu" >> fc$i.html.ep; sed -i "/^:f$i$/,/^boot /c :f$i\nlogin\nchain https://$MY_BOOTMENU_SERVER/boot?instance=fc$i\&username=\${username}\&password=\${password:uristring} || goto failed" menu.html.ep; done

    The result of the last command above should be three files similar to the following:

    menu.html.ep:

    #!ipxe
    
    set timeout 5000
    
    :menu
    menu iPXE Boot Menu
    item --key 1 lcl 1. Microsoft Windows 10
    item --key 2 f29 2. RedHat Fedora 29
    item --key 3 f28 3. RedHat Fedora 28
    choose --timeout ${timeout} --default lcl selected || goto shell
    set timeout 0
    goto ${selected}
    
    :failed
    echo boot failed, dropping to shell...
    goto shell
    
    :shell
    echo type 'exit' to get the back to the menu
    set timeout 0
    shell
    goto menu
    
    :lcl
    exit
    
    :f29
    login
    chain https://server-01.example.edu/boot?instance=fc29&username=${username}&password=${password:uristring} || goto failed
    
    :f28
    login
    chain https://server-01.example.edu/boot?instance=fc28&username=${username}&password=${password:uristring} || goto failed

    fc29.html.ep:

    #!ipxe
    kernel --name kernel.efi ${prefix}/vmlinuz-4.19.5-300.fc29.x86_64 initrd=initrd.img ro ip=dhcp rd.peerdns=0 nameserver=192.0.2.91 nameserver=192.0.2.92 root=/dev/disk/by-path/ip-192.0.2.158:3260-iscsi-iqn.edu.example.server-01:fc29-lun-1 netroot=iscsi:192.0.2.158::::iqn.edu.example.server-01:fc29 console=tty0 console=ttyS0,115200n8 audit=0 selinux=0 quiet
    initrd --name initrd.img ${prefix}/initramfs-4.19.5-300.fc29.x86_64.img
    boot || chain https://server-01.example.edu/menu

    fc28.html.ep:

    #!ipxe
    kernel --name kernel.efi ${prefix}/vmlinuz-4.19.3-200.fc28.x86_64 initrd=initrd.img ro ip=dhcp rd.peerdns=0 nameserver=192.0.2.91 nameserver=192.0.2.92 root=/dev/disk/by-path/ip-192.0.2.158:3260-iscsi-iqn.edu.example.server-01:fc28-lun-1 netroot=iscsi:192.0.2.158::::iqn.edu.example.server-01:fc28 console=tty0 console=ttyS0,115200n8 audit=0 selinux=0 quiet
    initrd --name initrd.img ${prefix}/initramfs-4.19.3-200.fc28.x86_64.img
    boot || chain https://server-01.example.edu/menu

    Now, restart the bootmenu application and verify authentication is working:

    # systemctl restart bootmenu.service

    Make the iSCSI Target Writeable

    Now that user authentication works through iPXE, you can create per-user, writeable overlays on top of the read-only image on demand when users connect. Using a copy-on-write overlay has three advantages over simply copying the original image file for each user:

    1. The copy can be created very quickly. This allows creation on-demand.
    2. The copy does not increase the disk usage on the server. Only what the user writes to their personal copy of the image is stored in addition to the original image.
    3. Since most sectors for each copy are the same sectors on the server’s storage, they’ll likely already be loaded in RAM when subsequent users access their copies of the operating system. This improves the server’s performance because RAM is faster than disk I/O.

    One potential pitfall of using copy-on-write is that once overlays are created, the images on which they are overlayed must not be changed. If they are changed, all the overlays will be corrupted. Then the overlays must be deleted and replaced with new, blank overlays. Even simply mounting the image file in read-write mode can cause sufficient filesystem updates to corrupt the overlays.

    Due to the potential for the overlays to be corrupted if the original image is modified, mark the original image as immutable by running:

    # chattr +i </path/to/file>

    You can use lsattr </path/to/file> to view the status of the immutable flag and chattr -i </path/to/file> to unset it. While the immutable flag is set, even the root user or a system process running as root cannot modify or delete the file.

    Begin by stopping the tgtd.service so you can change the image files:

    # systemctl stop tgtd.service

    It’s normal for this command to take a minute or so to stop when there are connections still open.

    Now, remove the read-only iSCSI export. Then update the readonly-root configuration file in the template so the image is no longer read-only:

    # MY_FC=fc29
    # rm -f /etc/tgt/conf.d/$MY_FC.conf
    # TEMP_MNT=$(mktemp -d)
    # mount /$MY_FC.img $TEMP_MNT
    # sed -i 's/^READONLY=yes$/READONLY=no/' $TEMP_MNT/etc/sysconfig/readonly-root
    # sed -i 's/^Storage=volatile$/#Storage=auto/' $TEMP_MNT/etc/systemd/journald.conf
    # umount $TEMP_MNT

    Journald was changed from logging to volatile memory back to its default (log to disk if /var/log/journal exists) because a user reported his clients would freeze with an out-of-memory error due to an application generating excessive system logs. The downside to setting logging to disk is that extra write traffic is generated by the clients, and might burden your netboot server with unnecessary I/O. You should decide which option — log to memory or log to disk — is preferable depending on your environment.

    Since you won’t make any further changes to the template image, set the immutable flag on it and restart the tgtd.service:

    # chattr +i /$MY_FC.img
    # systemctl start tgtd.service

    Now, update the bootmenu application:

    # cat << 'END' > $MY_MOJO/bootmenu.pl
    #!/usr/bin/env perl
    
    use lib 'lib';
    
    use PAM;
    use Mojolicious::Lite;
    use Mojolicious::Plugins;
    use Mojo::Util ('url_unescape');
    
    plugin 'Config';
    
    get '/menu';
    get '/boot' => sub {
       my $c = shift;
    
       my $instance = $c->param('instance');
       my $username = $c->param('username');
       my $password = $c->param('password');
    
       my $chapscrt;
       my $template = 'menu';
    
       {
          last unless $instance =~ /^fc[[:digit:]]{2}$/;
          last unless $username =~ /^[[:alnum:]]+$/;
          last unless PAM::auth($username, url_unescape($password));
          last unless $chapscrt = `sudo scripts/mktgt $instance $username`;
          $template = $instance;
       }
    
       return $c->render(template => $template, username => $username, chapscrt => $chapscrt);
    };
    
    app->start;
    END

    This new version of the bootmenu application calls a custom mktgt script which, on success, returns a random CHAP password for each new iSCSI target that it creates. The CHAP password prevents one user from mounting another user’s iSCSI target by indirect means. The app only returns the correct iSCSI target password to a user who has successfully authenticated.

    The mktgt script is prefixed with sudo because it needs root privileges to create the target.

    The $username and $chapscrt variables are also passed to the render command so they can be incorporated into the templates returned to the user when necessary.

    Next, update our boot templates so they can read the username and chapscrt variables and pass them along to the end user. Also update the templates to mount the root filesystem in rw (read-write) mode:

    # cd $MY_MOJO/templates
    # sed -i "s/:$MY_FC/:$MY_FC-<%= \$username %>/g" $MY_FC.html.ep
    # sed -i "s/ netroot=iscsi:/ netroot=iscsi:<%= \$username %>:<%= \$chapscrt %>@/" $MY_FC.html.ep
    # sed -i "s/ ro / rw /" $MY_FC.html.ep

    After running the above commands, you should have boot templates like the following:

    #!ipxe
    kernel --name kernel.efi ${prefix}/vmlinuz-4.19.5-300.fc29.x86_64 initrd=initrd.img rw ip=dhcp rd.peerdns=0 nameserver=192.0.2.91 nameserver=192.0.2.92 root=/dev/disk/by-path/ip-192.0.2.158:3260-iscsi-iqn.edu.example.server-01:fc29-<%= $username %>-lun-1 netroot=iscsi:<%= $username %>:<%= $chapscrt %>@192.0.2.158::::iqn.edu.example.server-01:fc29-<%= $username %> console=tty0 console=ttyS0,115200n8 audit=0 selinux=0 quiet
    initrd --name initrd.img ${prefix}/initramfs-4.19.5-300.fc29.x86_64.img
    boot || chain https://server-01.example.edu/menu

    NOTE: If you need to view the boot template after the variables have been interpolated, you can insert the “shell” command on its own line just before the “boot” command. Then, when you netboot your client, iPXE gives you an interactive shell where you can enter “imgstat” to view the parameters being passed to the kernel. If everything looks correct, you can type “exit” to leave the shell and continue the boot process.

    Now allow the bootmenu user to run the mktgt script (and only that script) as root via sudo:

    # echo "bootmenu ALL = NOPASSWD: $MY_MOJO/scripts/mktgt *" > /etc/sudoers.d/bootmenu

    The bootmenu user should not have write access to the mktgt script or any other files under its home directory. All the files under /opt/bootmenu should be owned by root, and should not be writable by any user other than root.

    Sudo does not work well with systemd’s DynamicUser option, so create a normal user account and set the systemd service to run as that user:

    # useradd -r -c 'iPXE Boot Menu Service' -d /opt/bootmenu -s /sbin/nologin bootmenu
    # sed -i 's/^DynamicUser=true$/User=bootmenu/' /etc/systemd/system/bootmenu.service
    # systemctl daemon-reload

    Finally, create a directory for the copy-on-write overlays and create the mktgt script that manages the iSCSI targets and their overlayed backing stores:

    # mkdir /$MY_FC.cow
    # mkdir $MY_MOJO/scripts
    # cat << 'END' > $MY_MOJO/scripts/mktgt
    #!/usr/bin/env perl
    
    # if another instance of this script is running, wait for it to finish
    "$ENV{FLOCKER}" eq 'MKTGT' or exec "env FLOCKER=MKTGT flock /tmp $0 @ARGV";
    
    # use "RETURN" to print to STDOUT; everything else goes to STDERR by default
    open(RETURN, '>&', STDOUT);
    open(STDOUT, '>&', STDERR);
    
    my $instance = shift or die "instance not provided";
    my $username = shift or die "username not provided";
    
    my $img = "/$instance.img";
    my $dir = "/$instance.cow";
    my $top = "$dir/$username";
    
    -f "$img" or die "'$img' is not a file"; 
    -d "$dir" or die "'$dir' is not a directory";
    
    my $base;
    die unless $base = `losetup --show --read-only --nooverlap --find $img`;
    chomp $base;
    
    my $size;
    die unless $size = `blockdev --getsz $base`;
    chomp $size;
    
    # create the per-user sparse file if it does not exist
    if (! -e "$top") {
       die unless system("dd if=/dev/zero of=$top status=none bs=512 count=0 seek=$size") == 0;
    }
    
    # create the copy-on-write overlay if it does not exist
    my $cow="$instance-$username";
    my $dev="/dev/mapper/$cow";
    if (! -e "$dev") {
       my $over;
       die unless $over = `losetup --show --nooverlap --find $top`;
       chomp $over;
       die unless system("echo 0 $size snapshot $base $over p 8 | dmsetup create $cow") == 0;
    }
    
    my $tgtadm = '/usr/sbin/tgtadm --lld iscsi';
    
    # get textual representations of the iscsi targets
    my $text = `$tgtadm --op show --mode target`;
    my @targets = $text =~ /(?:^T.*\n)(?:^ .*\n)*/mg;
    
    # convert the textual representations into a hash table
    my $targets = {};
    foreach (@targets) {
       my $tgt;
       my $sid;
    
       foreach (split /\n/) {
          /^Target (\d+)(?{ $tgt = $targets->{$^N} = [] })/;
          /I_T nexus: (\d+)(?{ $sid = $^N })/;
          /Connection: (\d+)(?{ push @{$tgt}, [ $sid, $^N ] })/;
       }
    }
    
    my $hostname;
    die unless $hostname = `hostname`;
    chomp $hostname;
    
    my $target = 'iqn.' . join('.', reverse split('\.', $hostname)) . ":$cow";
    
    # find the target id corresponding to the provided target name and
    # close any existing connections to it
    my $tid = 0;
    foreach (@targets) {
       next unless /^Target (\d+)(?{ $tid = $^N }): $target$/m;
       foreach (@{$targets->{$tid}}) {
          die unless system("$tgtadm --op delete --mode conn --tid $tid --sid $_->[0] --cid $_->[1]") == 0;
       }
    }
    
    # create a new target if an existing one was not found
    if ($tid == 0) {
       # find an available target id
       my @ids = (0, sort keys %{$targets});
       $tid = 1; while ($ids[$tid]==$tid) { $tid++ }
    
       # create the target
       die unless -e "$dev";
       die unless system("$tgtadm --op new --mode target --tid $tid --targetname $target") == 0;
       die unless system("$tgtadm --op new --mode logicalunit --tid $tid --lun 1 --backing-store $dev") == 0;
       die unless system("$tgtadm --op bind --mode target --tid $tid --initiator-address ALL") == 0;
    }
    
    # (re)set the provided target's chap password
    my $password = join('', map(chr(int(rand(26))+65), 1..8));
    my $accounts = `$tgtadm --op show --mode account`;
    if ($accounts =~ / $username$/m) {
       die unless system("$tgtadm --op delete --mode account --user $username") == 0;
    }
    die unless system("$tgtadm --op new --mode account --user $username --password $password") == 0;
    die unless system("$tgtadm --op bind --mode account --tid $tid --user $username") == 0;
    
    # return the new password to the iscsi target on stdout
    print RETURN $password;
    END
    # chmod +x $MY_MOJO/scripts/mktgt

    The above script does five things:

    1. It creates the /<instance>.cow/<username> sparse file if it does not already exist.
    2. It creates the /dev/mapper/<instance>-<username> device node that serves as the copy-on-write backing store for the iSCSI target if it does not already exist.
    3. It creates the iqn.<reverse-hostname>:<instance>-<username> iSCSI target if it does not exist. Or, if the target does exist, it closes any existing connections to it because the image can only be opened in read-write mode from one place at a time.
    4. It (re)sets the chap password on the iqn.<reverse-hostname>:<instance>-<username> iSCSI target to a new random value.
    5. It prints the new chap password on standard output if all of the previous tasks completed successfully.

    You should be able to test the mktgt script from the command line by running it with valid test parameters. For example:

    # echo `$MY_MOJO/scripts/mktgt fc29 jsmith`

    When run from the command line, the mktgt script should print out either the eight-character random password for the iSCSI target if it succeeded or the line number on which something went wrong if it failed.
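
    To double-check the artifacts the script created, you can inspect the loop devices, the device-mapper snapshot, and the iSCSI target it set up. A sketch, assuming the fc29 instance and the jsmith test user from the example above:

    # losetup --list | grep fc29
    # dmsetup status fc29-jsmith
    # tgtadm --lld iscsi --op show --mode target | grep -A 12 fc29-jsmith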

    On occasion, you may want to delete an iSCSI target without having to stop the entire service. For example, a user might inadvertently corrupt their personal image, in which case you would need to systematically undo everything that the above mktgt script does so that the next time they log in they will get a copy of the original image.

    Below is an rmtgt script that undoes, in reverse order, what the above mktgt script did:

    # mkdir $HOME/bin
    # cat << 'END' > $HOME/bin/rmtgt
    #!/usr/bin/env perl
    
    @ARGV >= 2 or die "usage: $0 <instance> <username> [+d|+f]\n";
    
    my $instance = shift;
    my $username = shift;
    
    my $rmd = ($ARGV[0] eq '+d'); #remove device node if +d flag is set
    my $rmf = ($ARGV[0] eq '+f'); #remove sparse file if +f flag is set
    my $cow = "$instance-$username";
    
    my $hostname;
    die unless $hostname = `hostname`;
    chomp $hostname;
    
    my $tgtadm = '/usr/sbin/tgtadm';
    my $target = 'iqn.' . join('.', reverse split('\.', $hostname)) . ":$cow";
    
    my $text = `$tgtadm --op show --mode target`;
    my @targets = $text =~ /(?:^T.*\n)(?:^ .*\n)*/mg;
    
    my $targets = {};
    foreach (@targets) {
       my $tgt;
       my $sid;
    
       foreach (split /\n/) {
          /^Target (\d+)(?{ $tgt = $targets->{$^N} = [] })/;
          /I_T nexus: (\d+)(?{ $sid = $^N })/;
          /Connection: (\d+)(?{ push @{$tgt}, [ $sid, $^N ] })/;
       }
    }
    
    my $tid = 0;
    foreach (@targets) {
       next unless /^Target (\d+)(?{ $tid = $^N }): $target$/m;
       foreach (@{$targets->{$tid}}) {
          die unless system("$tgtadm --op delete --mode conn --tid $tid --sid $_->[0] --cid $_->[1]") == 0;
       }
       die unless system("$tgtadm --op delete --mode target --tid $tid") == 0;
       print "target $tid deleted\n";
       sleep 1;
    }
    
    my $dev = "/dev/mapper/$cow";
    if ($rmd or ($rmf and -e $dev)) {
       die unless system("dmsetup remove $cow") == 0;
       print "device node $dev deleted\n";
    }
    
    if ($rmf) {
       my $sf = "/$instance.cow/$username";
       die "sparse file $sf not found" unless -e "$sf";
       die unless system("rm -f $sf") == 0;
       die unless not -e "$sf";
       print "sparse file $sf deleted\n";
    }
    END
    # chmod +x $HOME/bin/rmtgt

    For example, to use the above script to completely remove the fc29-jsmith target including its backing store device node and its sparse file, run the following:

    # rmtgt fc29 jsmith +f

    Once you’ve verified that the mktgt script is working properly, you can restart the bootmenu service. The next time someone netboots, they should receive a personal copy of the netboot image they can write to:

    # systemctl restart bootmenu.service

    Users should now be able to modify the root filesystem, as demonstrated in the screenshot accompanying the original post.

    Episode 129 - The EU bug bounty program

    Posted by Open Source Security Podcast on January 14, 2019 12:58 AM
    Josh and Kurt talk about the EU bug bounty program. There have been a fair number of people complaining it's solving the wrong problem, but it's the only way the EU has to spend money on open source today. If that doesn't change, this program will fail.




    Show Notes


      All systems go

      Posted by Fedora Infrastructure Status on January 13, 2019 10:04 PM
      Service 'The Koji Buildsystem' now has status: good: Everything seems to be working.

      NeuroFedora updated: 2019 week 2

      Posted by Ankur Sinha "FranciscoD" on January 13, 2019 08:55 PM

      We had our first meeting of the year. The full logs from our meeting are available here on the Fedora mote application. I have pasted the minutes of the meeting at the end for your convenience.

      The meeting was broadly for the team to come together and discuss a few things. We checked on the status of current tasks, and discussed our future steps. We've got to work on our documentation, for example. There's a lot to do, and a lot of cool new things to learn---in science, computing, and community development. If you'd like to get involved, please get in touch.

      We're continuing our work on including software in NeuroFedora, since that's the major chunk of our work load.


      Meeting summary

      Meeting started by FranciscoD at 14:00:15 UTC.

      Meeting ended at 15:10:43 UTC.


      NeuroFedora documentation is available on the Fedora documentation website. Feedback is always welcome. You can get in touch with us here.

      A call to join Borsalinux-fr

      Posted by Charles-Antoine Couret on January 13, 2019 03:44 PM

      The association


      Borsalinux-fr is the association that manages the promotion of Fedora in the French-speaking world. For several years now we have seen a steady decline in the number of members with up-to-date dues and of volunteers willing to take on the activities entrusted to the association.

      We are therefore issuing a call to join us and help out.

      The association owns the official website of the French-speaking Fedora community, regularly organizes promotional events such as the Rencontres Fedora, and takes part in most major free software events, mainly across France.

      Why are we issuing this call?

      Since 2012 or 2013 we have seen a steady decline in the number of members, and in particular of active members, within the association and even within the French-speaking community as a whole. We have now reached a critical point where the activity is carried essentially by a handful of people, and some of those who are active today want to slow down in order to get involved in other projects, within Fedora or elsewhere.

      It is therefore becoming difficult to keep our activities running under good conditions, which hurts both our visibility and the appeal of the project to French speakers.

      Possible activities

      Overall, the most urgent needs are at the association level, where the board of directors needs new members. Translation is also an area that is starting to stall. We would also like to broaden our local footprint: at the moment, events along the Brussels - Paris - Lyon - Nice axis are fairly well covered, but outside of it we have growing difficulty sending someone on site under good conditions, for example to Capitole du Libre in Toulouse or to the RMLL depending on its location.

      If you like Fedora and want our work to continue, you can:

      • Join the association: membership dues help us produce goodies, travel to events, and pay for equipment;
      • Apply for a seat on the board of directors, in particular for the roles of president, secretary, and treasurer;
      • Help with translation, the forum, the mailing lists, reworking the documentation, or representing the association at various French-speaking events;
      • Design goodies;
      • Organize events such as Rencontres Fedora in your city.

      We would be delighted to welcome you and help you get started. Every contribution, however small, is appreciated.

      If you would like to get a feel for our activity, you can join our weekly meetings every Monday evening at 8:30 PM (Paris time) on IRC (channel #fedora-meeting-1 on Freenode).

      Want to help us?

      Do not hesitate to contact us to share your ideas and what you would like to do.

      In addition, on Saturday, February 9, 2019 at 2 PM in Paris (at the offices of the Fondation des Droits de l'Homme), the Ordinary General Assembly will renew the association's board of directors and executive committee. It is a great opportunity to introduce yourself and learn how the association works, and the ideal moment to keep up with what is going on and to present your ideas. If you cannot attend in person, do not hesitate to contact us beforehand to share your ideas and your involvement in the French-speaking community.

      The AppImage tool and Krita Next.

      Posted by mythcat on January 13, 2019 02:13 AM
      The AppImage is a universal software package format.
      Packaging software as an AppImage means the developer provides a single storage file.
      This file is a compressed image with all the dependencies and libraries needed to run the desired software. An AppImage doesn’t really install the software; it just executes it, with no extraction and no installation.
      The most common features:
      • Can run on various different Linux distributions;
      • No need to install or compile software;
      • No need for root permissions, and system files are not touched;
      • Can be run anywhere, including live disks;
      • Applications are in read-only mode;
      • Software is removed just by deleting the AppImage file;
      • Applications packaged in AppImage are not sandboxed by default.
      More about this can be read at the official webpage.
      I tested Krita Next with this tool.
      The appimage file of Krita Next can be found here.
      Krita Next is a daily build that contains new features, but it could be unstable.
      After I downloaded the file, I made it executable and ran it with:
      [mythcat@desk Downloads]$ chmod +x krita-4.2.0-pre-alpha-95773b5-x86_64.appimage 
      [mythcat@desk Downloads]$ ./krita-4.2.0-pre-alpha-95773b5-x86_64.appimage

      There are scheduled downtimes in progress

      Posted by Fedora Infrastructure Status on January 11, 2019 09:58 PM
      Service 'The Koji Buildsystem' now has status: scheduled: Scheduled outage of s390x builders is in progress

      FPgM report: 2019-02

      Posted by Fedora Community Blog on January 11, 2019 09:55 PM
      Fedora Program Manager weekly report on Fedora Project development and progress

      Here’s your report of what has happened in Fedora Program Management this week.

      I’ve set up weekly office hours in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else.

      Announcements

      Upcoming meetings & test days

      Fedora 30 Status

      Fedora 30 Change Proposal deadlines are approaching

      • Change proposals for Self-Contained Changes are due 2019-01-29.

      Fedora 30 includes a Change that will cause ambiguous python shebangs to error.  A list of failing builds is available on Taskotron.
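
      For affected packages, the usual remedy is to make the interpreter explicit; a sketch (the script name is hypothetical):

      # point an ambiguous shebang at Python 3 explicitly (example-script.py is hypothetical)
      sed -i '1s|^#!/usr/bin/python$|#!/usr/bin/python3|' example-script.py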

      Fedora 30 includes a Change that will remove glibc langpacks from the buildroot. See the devel mailing list for more information and impacted packages.

      Changes

      Announced

      Submitted to FESCo

      Approved by FESCo

      Deferred

      The post FPgM report: 2019-02 appeared first on Fedora Community Blog.