Fedora People

Fedora 27: Testing the new Django web framework.

Posted by mythcat on March 17, 2018 09:34 PM
Today I tested the Django web framework version 2.0.3 with Python 3 on my Fedora 27 distro.
The main reason was to see whether Django and Fedora work well together.
I used pip to install Django inside a virtual environment for Python. First you need to create a folder for your project. I made one named django_001.
$ mkdir django_001
$ cd django_001
The next step is to create and activate a Python virtual environment for your project.
$ python3 -m venv django_001_venv
$ source django_001_venv/bin/activate
Inside this virtual environment, named django_001_venv, install the Django web framework:
$ pip install django
If pip reports that it is out of date, update it first. Now start a Django project named django_test.
$ django-admin startproject django_test
$ cd django_test
$ python3 manage.py runserver
Open the default URL, http://127.0.0.1:8000/, with your web browser.
The result is this:

If you try to log in to the admin page with a user and password at this point, you will see errors.
Many common Django problems come from the settings, so let's look at some files from the project:

  • manage.py - runs project-specific tasks (django-admin is used to execute system-wide Django tasks);
  • __init__.py - the file that allows a directory to be imported as a Python package; it is a generic file used in almost all Python applications;
  • settings.py - the configuration settings for the Django project;
  • urls.py - contains the URL patterns for the Django project;
  • wsgi.py - the WSGI configuration for the Django project (you don't need to set up WSGI to develop Django applications).
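As an illustration of urls.py, a minimal configuration for Django 2.0 might look like the sketch below. The django_blog.urls include is hypothetical (an app we only create later in the post); the default project only maps the admin site.

```python
# django_test/urls.py -- a minimal URL configuration sketch for Django 2.0.
# The 'django_blog.urls' include is hypothetical, not part of the default
# project; the default project only wires up the admin site.
from django.contrib import admin
from django.urls import include, path

urlpatterns = [
    path('admin/', admin.site.urls),
    path('blog/', include('django_blog.urls')),  # hypothetical app URLs
]
```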

The next step is to create the first Django application.
$ python manage.py startapp django_blog
This makes a folder named django_blog inside the main django_test folder. Inside the main django_test folder you also have another django_test folder containing the settings.py file. Add the django_blog application to the settings.py file.
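The registration itself is a one-line change to INSTALLED_APPS. A sketch of the relevant part of settings.py (the django.contrib entries are the Django 2.0 defaults and may differ slightly between versions):

```python
# django_test/settings.py (excerpt) -- register the new app so Django
# discovers its models, templates and migrations.  The important change
# is adding 'django_blog' at the end of the default list.
INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'django_blog',
]
```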
Let's fix some issues with the admin and the django_blog application.
From the main django_test folder (the one with the manage.py file), run:
$ python3 manage.py migrate
$ python3 manage.py createsuperuser

This applies the database migrations and creates a superuser you can use to log in to the admin page, see:

The next step is to create the website itself (see the django_blog folder) using the Django web framework.
I didn't hit any issues with django_blog, but you can follow this tutorial link.

Playing with PicoRV32 on the iCE40-HX8K FPGA Breakout Board (part 1)

Posted by Richard W.M. Jones on March 17, 2018 11:23 AM

It’s now possible to get a very small 32 bit RISC-V processor onto the reverse-engineered Lattice iCE40-HX8K FPGA using the completely free Project IceStorm toolchain. That’s what I’ll be looking at in this series of two articles.

I bought my development board from DigiKey for a very reasonable £41.81 (including tax and next day delivery). It comes with everything you need. This FPGA is very low-end [datasheet (PDF)], containing just 7680 LUTs, but it does have 128 kbits of static RAM, and the board has an oscillator+PLL that can generate 2-12 MHz, a few LEDs and a bunch of GPIO pins. The board is programmed over USB with the supplied cable. The important thing is the bitstream format and the probable chip layout have been reverse-engineered by Clifford Wolf and others. All the software to drive this board is available in Fedora:

dnf install icestorm arachne-pnr yosys emacs-verilog-mode

My first job was to write the equivalent of a “hello, world” program — flash the 8 LEDs in sequence. This is a good test because it makes sure that I’ve got all the tools installed and working and the Verilog program is not too complicated.

// -*- verilog -*-
// Flash the LEDs in sequence on the Lattice iCE40-HX8K.

// Note: leds must be a wire (not reg) because it is driven by
// continuous assignments below.
module flash (input clk, output [7:0] leds);
   // Counter which counts upwards continually (wrapping around).
   // We don't bother to initialize it because the initial value
   // doesn't matter.
   reg [18:0] counter;
   // This register counts from 0-7, incrementing when the
   // counter is 0.  The output is wired to the LEDs.
   reg [2:0] led_select;

   always @(posedge clk) begin
      counter <= counter + 1;

      if (counter[18:0] == 0) begin
         led_select <= led_select + 1;
      end
   end

   // Finally wire each LED so it signals the value of the led_select
   // register.
   genvar i;
   generate
      for (i = 0; i < 8; i = i + 1) begin : led
         assign leds[i] = (i == led_select);
      end
   endgenerate
endmodule // flash

<iframe allowfullscreen="true" class="youtube-player" height="312" src="https://www.youtube.com/embed/Q5pDgXuywHg?version=3&amp;rel=1&amp;fs=1&amp;autohide=2&amp;showsearch=0&amp;showinfo=1&amp;iv_load_policy=1&amp;wmode=transparent" style="border:0;" type="text/html" width="500"></iframe>

It looks like the base clock frequency is 2 MHz.
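That figure is easy to sanity-check against the Verilog above: led_select advances once per wrap of the 19-bit counter, so at 2 MHz the lit LED should move a few times per second. A quick back-of-the-envelope check:

```python
# Sanity-check the LED timing: led_select advances once per wrap of the
# 19-bit counter, i.e. every 2**19 clock cycles.
clock_hz = 2_000_000        # base clock frequency estimated above
counter_bits = 19           # reg [18:0] counter in the Verilog

steps_per_second = clock_hz / 2 ** counter_bits
print(round(steps_per_second, 2))  # -> 3.81, the lit LED moves ~4x per second
```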

The fully working example is in this repo: https://github.com/rwmjones/icestorm-flash-leds

In part 2 I’ll try to get PicoRV32 on this board.

Linux - Minimize Spotify to taskbar

Posted by Robbi Nespu on March 17, 2018 12:00 AM


Hey, the last time I used Spotify was around a year ago on my previous company's desktop workstation running Ubuntu 16.04 (as far as I remember). Today I installed Spotify on my Fedora 27. It seems the minimize-to-tray-on-close option is now completely missing from the Spotify settings (last time it was there, but needed configuring).

I don’t like how it works now.. messy. So as a workaround, let’s use kdocker to dock Spotify to the tray. It is available for most distros too (Fedora, Arch, Ubuntu, Debian).

Here is some information from the Fedora repo:

$ dnf info kdocker
Installed Packages
Name         : kdocker
Version      : 5.0
Release      : 8.fc27
Arch         : x86_64
Size         : 208 k
Source       : kdocker-5.0-8.fc27.src.rpm
Repo         : @System
From repo    : fedora
Summary      : Dock any application in the system tray
URL          : https://github.com/user-none/KDocker
License      : GPLv2+
Description  : KDocker will help you dock any application in the system tray. This means you
             : can dock OpenOffice, XMMS, Firefox, Thunderbolt, Eclipse, anything! Just point
             : and click. Works for LXQT/KDE and GTK/GNOME (In fact it should work for most
             : modern window managers that support NET WM Specification for instance).
             : All you need to do is start KDocker and select an application using the mouse
             : and lo! the application gets docked into the system tray. The application can
             : also be made to disappear from the task bar.

To install kdocker on Fedora, just run this command:

$ sudo dnf install kdocker -y

or, if you are using Ubuntu / Debian:

$ sudo apt-get install kdocker -y

Now edit the file /usr/share/applications/spotify.desktop and change the Exec line:

Exec=kdocker -q -o -l -i /usr/share/icons/Papirus/64x64/apps/spotify.svg spotify %U

Below is my spotify.desktop file; please note you need to replace /usr/share/icons/Papirus/64x64/apps/spotify.svg with your own path to the Spotify icon:

[Desktop Entry]
Type=Application
Name=Spotify
GenericName=Music Player
Comment=Spotify streaming music client
Exec=kdocker -q -o -l -i /usr/share/icons/Papirus/64x64/apps/spotify.svg spotify %U

Now close any running instance of Spotify and launch it again from the menu; you will notice Spotify automatically minimizes to the tray. Double-click the icon and kdocker will open the Spotify window for you, and it will minimize again when you are done or click minimize.

Nice! That’s all, see you next round ~

Fedora Atomic Workstation: Ruling the commandline

Posted by Matthias Clasen on March 16, 2018 06:49 PM

In my recent posts, I’ve mostly focused on finding my way around with GNOME Builder and using it to do development in Flatpak sandboxes. But I am not really the easiest target audience for an IDE like GNOME Builder, having spent most of my life on the commandline with tools like vim and make.

So, what about the commandline in an Atomic Workstation environment? There are many container tools, like buildah, atomic, oc, podman, and so on. I am not going to talk about these, since I don’t know them very well, and they are covered, e.g. on www.projectatomic.io.

But there are a few commands that are essential to life on the Atomic Workstation: rpm-ostree and flatpak.


First of all, there’s rpm-ostree, which is the commandline frontend to the rpm-ostreed daemon that manages the OS image(s) on the Atomic Workstation.

You can run

rpm-ostree status

to get some information about your OS image (and the other images that may be present on your system). And you can run

rpm-ostree upgrade

to get the latest update for your OS image (the terminology clash here is a bit unfortunate; rpm-ostree calls an upgrade what most Linux distros and packaging tools call an update).

You can run this command as normal user in a terminal, and rpm-ostreed will present you with a polkit dialog to do its privileged operations. Recently, rpm-ostreed has also gained the ability to check for and deploy upgrades automatically.

An important thing to keep in mind is that rpm-ostree never changes your running system. You have to reboot into the new image to see the changes, so

systemctl reboot

should be in your repertoire of commands as well. Alternatively, you can use the --reboot option to tell rpm-ostree to reboot when the upgrade command completes.


The other essential command is flatpak. Where rpm-ostree controls your OS image, flatpak rules the applications. flatpak has many commands that are worth exploring, I’ll only mention the most important ones here.

It is quite common to have more than one source for flatpaks enabled.

flatpak remotes

lists them all. If you want to find applications, then

flatpak search

will do that for you, and

flatpak install

will let you install what you found. An important detail to point out here is that applications can be installed system-wide (in /var) or per-user (in ~/.local/share). You can choose the location with the --user and --system options. If you choose to install system-wide, you will get a polkit prompt, since this is a privileged operation.

After installing applications, you should keep them up-to-date by installing updates. The most straightforward way to do so is to just run

flatpak update

which will install available updates for all applications. To just check if updates are available, you can use

flatpak remote-ls --updates

Launching applications

Probably the most important thing you will want to do with flatpak is to run applications. Unsurprisingly, the command to do so is called run, and it expects you to specify the unique application ID:

flatpak run org.gnome.gitg

This is certainly a departure from the traditional commandline, and could be considered cumbersome (even though it has bash completion for the application ID).

Thankfully, flatpak has recently gained a way to recover the familiar interface. It now installs shell wrappers for the flatpak run command in ~/.local/share/flatpak/bin. After adding that directory to your PATH, you can run gitg like this:

org.gnome.gitg
If (like me), you are still not satisfied with this, you can add a shell alias to get the traditional command name back:

alias gitg=org.gnome.gitg

Now gitg works again, as it used to. Nice!


Generating a list of URL patterns for OpenStack services.

Posted by Adam Young on March 16, 2018 05:35 PM

Last year at the Boston OpenStack summit, I presented on an idea of using URL patterns to enforce RBAC. While this idea is on hold for the time being, a related approach is moving forward, building on top of application credentials. In this approach, the set of acceptable URLs is added to the role, so it is an additional check. This is a lower-barrier-to-entry approach.

One thing I requested on the specification was to use the same mechanism as I had put forth on the RBAC in Middleware spec: the URL pattern. The set of acceptable URL patterns will be specified by an operator.

The user selects the URL pattern they want to add as a “white-list” to their application credential. A user could further specify a dictionary to fill in the segments of that URL pattern, to get a delegation down to an individual resource.

I wanted to see how easy it would be to generate a list of URL patterns. It turns out that, for the projects using the oslo-policy-in-code approach, it is pretty easy:

cd /opt/stack/nova
 . .tox/py35/bin/activate
(py35) [ayoung@ayoung541 nova]$ oslopolicy-sample-generator  --namespace nova | egrep "POST|GET|DELETE|PUT" | sed 's!#!!'
 POST  /servers/{server_id}/action (os-resetState)
 POST  /servers/{server_id}/action (injectNetworkInfo)
 POST  /servers/{server_id}/action (resetNetwork)
 POST  /servers/{server_id}/action (changePassword)
 GET  /os-agents
 POST  /os-agents
 PUT  /os-agents/{agent_build_id}
 DELETE  /os-agents/{agent_build_id}

Similar for Keystone

$ oslopolicy-sample-generator  --namespace keystone  | egrep "POST|GET|DELETE|PUT" | sed 's!# !!' | head -10
GET  /v3/users/{user_id}/application_credentials/{application_credential_id}
GET  /v3/users/{user_id}/application_credentials
POST  /v3/users/{user_id}/application_credentials
DELETE  /v3/users/{user_id}/application_credentials/{application_credential_id}
PUT  /v3/OS-OAUTH1/authorize/{request_token_id}
GET  /v3/users/{user_id}/OS-OAUTH1/access_tokens/{access_token_id}
GET  /v3/users/{user_id}/OS-OAUTH1/access_tokens/{access_token_id}/roles/{role_id}
GET  /v3/users/{user_id}/OS-OAUTH1/access_tokens
GET  /v3/users/{user_id}/OS-OAUTH1/access_tokens/{access_token_id}/roles
DELETE  /v3/users/{user_id}/OS-OAUTH1/access_tokens/{access_token_id}

The output of the tool is a little sub-optimal, as the oslo policy enforcement used to be done using only JSON, and JSON does not allow comments, so I had to scrape the comments out of the YAML format. Ideally, we could tweak the tool to output the URL patterns and the policy rules that enforce them in a clean format.
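In the meantime, the pairing can be scraped with a short script. A sketch (the regexes are mine, tuned to the generator's commented output; SAMPLE is the os-agents block quoted later in this post):

```python
import re

# Sketch: pair each commented URL pattern in oslopolicy-sample-generator
# output with the policy rule line that follows it.
SAMPLE = """\
# Create, list, update, and delete guest agent builds
# GET  /os-agents
# POST  /os-agents
# PUT  /os-agents/{agent_build_id}
# DELETE  /os-agents/{agent_build_id}
"os_compute_api:os-agents": "rule:admin_api"
"""

URL_RE = re.compile(r'#\s*(GET|POST|PUT|DELETE)\s+(\S.*)')
RULE_RE = re.compile(r'#?\s*"([^"]+)":\s*"([^"]+)"')

def map_urls_to_rules(text):
    """Return {(method, url): (policy_name, rule)} from generator output."""
    mapping, pending = {}, []
    for line in text.splitlines():
        m = URL_RE.match(line)
        if m:
            pending.append((m.group(1), m.group(2).strip()))
            continue
        r = RULE_RE.match(line)
        if r and pending:
            for method, url in pending:
                mapping[(method, url)] = (r.group(1), r.group(2))
            pending = []
    return mapping

print(map_urls_to_rules(SAMPLE)[("GET", "/os-agents")])
# -> ('os_compute_api:os-agents', 'rule:admin_api')
```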

What roles are used? Turns out, we can figure that out, too:

$ oslopolicy-sample-generator  --namespace keystone  |  grep \"role:
#"admin_required": "role:admin or is_admin:1"
#"service_role": "role:service"

So only admin or service are actually used. On Nova:

$ oslopolicy-sample-generator  --namespace nova  |  grep \"role:
#"context_is_admin": "role:admin"

Only admin.

How about matching the URL pattern to the policy rule?
If I run

oslopolicy-sample-generator  --namespace nova  |  less

In the middle I can see an example like this (# marks removed for syntax):

# Create, list, update, and delete guest agent builds

# This is XenAPI driver specific.
# It is used to force the upgrade of the XenAPI guest agent on
# instance boot.
 GET  /os-agents
 POST  /os-agents
 PUT  /os-agents/{agent_build_id}
 DELETE  /os-agents/{agent_build_id}
"os_compute_api:os-agents": "rule:admin_api"

This is not 100% deterministic, though, as some services, Nova in particular, enforce policy based on the payload.

For example, these operations can be done by the resource owner:

# Restore a soft deleted server or force delete a server before
# deferred cleanup
 POST  /servers/{server_id}/action (restore)
 POST  /servers/{server_id}/action (forceDelete)
"os_compute_api:os-deferred-delete": "rule:admin_or_owner"

Whereas these operations must be done by an admin operator:

# Evacuate a server from a failed host to a new host
 POST  /servers/{server_id}/action (evacuate)
"os_compute_api:os-evacuate": "rule:admin_api"

Both map to the same URL pattern. We tripped over this when working on RBAC in Middleware, and it is going to be an issue with the Whitelist as well.

Looking at the API docs, we can see that difference in the bodies of the operations. The Evacuate call has a body like this:

    "evacuate": {
        "host": "b419863b7d814906a68fb31703c0dbd6",
        "adminPass": "MySecretPass",
        "onSharedStorage": "False"

Whereas the forceDelete call has a body like this:

    "forceDelete": null

From these, it is pretty straightforward to figure out which policy to apply, but as of yet there is no programmatic way to access that.
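Until there is, a client has to hard-code the mapping itself. A sketch of such a dispatch (the lookup table is hand-built from the examples above and is an assumption, not an official mapping):

```python
# Sketch: the server "action" APIs share one URL pattern, so which policy
# applies depends on the single top-level key of the request body.  This
# lookup table is hand-built from the examples in the post, not derived
# from any official source.
ACTION_POLICY = {
    "restore": "os_compute_api:os-deferred-delete",
    "forceDelete": "os_compute_api:os-deferred-delete",
    "evacuate": "os_compute_api:os-evacuate",
}

def policy_for_action(body):
    """Given a decoded action body, return the policy rule that governs it."""
    (action,) = body.keys()   # action bodies carry exactly one top-level key
    return ACTION_POLICY[action]

print(policy_for_action({"forceDelete": None}))
# -> os_compute_api:os-deferred-delete
```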

It would take a little more scripting to try and identify the set of rules that mean a user should be able to perform those actions with a project-scoped token, versus the set of APIs that are reserved for cloud operations. However, just looking at the admin_or_owner rule is, for most, sufficient to indicate that the operation should be performed using a scoped token. Thus, an end user should be able to determine the set of operations that she can include in a white-list.

Ramblings about long ago and far away

Posted by Stephen Smoogen on March 16, 2018 03:28 PM
My first job outside of college in 1994 was working at Los Alamos National Labs as a Graduate Research Assistant. It was supposed to be a post where I would use my bachelor's degree in Physics for a year until I became a graduate student somewhere. The truth was that I was burnt out of University and had little urge to go back. I instead used my time to learn much more about Unix system administration. It turned out the group I worked in had a mixture of SGI Irix, Sun Sparcstations, HP, Convex, and I believe AIX. The systems had been run by graduate students for their professors and needed some central management. While I didn't work for the team that was doing that work, I spent more and more time working with them to get that in place. After a year, it was clear I was not going back to Physics, and my old job was ending. So the team I worked in gave me a reference to another place at the Lab where I began work.

This network had even more Unix systems, as they had NeXT cubes, old Sun boxes, Apollo, and some others I am sure to have forgotten. All of them needed a lot of love and care, as they had been built for various PhDs and postdocs for various needs and then forgotten. My favorite box was one where the owner required that nearly every file was set to 777. I had multiple emails which echoed every complaint people have come up with about SELinux in the last decade. If there was some problem on the system, it was because it had a permission set.. and until it was shown that it didn't work at 777, you could look at it being something else. [The owner was also unbelievably brilliant in other ways.. but hated arbitrary permission models.]

In any case, I got a lot of useful experience on all kinds of Unix systems, user needs, and user personalities. I also got to run Softlanding Linux System (SLS) on a 486 with 4 MB of RAM running the Linux kernel 0.99.4? and learn all kinds of things about PC hardware versus 'Real Computers'. The 486 was really an overclocked 386 with some added instructions: originally a Cyrix DX33 that had been relabeled with industrial whiteout as a 40MHz part. It sort of worked at 40MHz but was reliable only at 20MHz. Such were the issues with getting deals from computer magazines.. sure, the guy in the next apartment's worked great.. mine was a dud.

I had originally run MCC (Manchester Computing Centre Interim Linux) in college, but when I moved it was easier to find a box of floppies with SLS, so I had installed that on the 486. I would then download software source code from the internet and rebuild it for my own use, using all the extra flags I could find in GCC to make my 20MHz system seem faster. I instead learned that most of the options didn't do anything on i386 Linux at the time, and most of my reports about it were probably met with eye-rolls by the people at Cygnus. My supposed goal was to try and set up a MUD so I could code up a text-based virtual reality. Or to get a war game called Conquer working on Linux. Or maybe get xTrek working on my system. [I think I was mostly trying to become a game developer by just building stuff versus actually coding stuff. I cave-man debugged a lot of things using stuff I had learned in FORTRAN, but it wasn't actually making new things.]

For years, I looked back on that time and thought it was a complete waste of time as I should have been 'coding' something. However I have come to realize I learned a lot about the nitty-gritty of hardware limitations. A 9600 baud Modem is not going to keep up with people on Ethernet playing xTrek. Moving it to a 56k modem later isn't going to keep up with a 56k partial T1. The numbers are the same but they are counting different things. A 5400 RPM IDE hard-drive is never going to be as good as 5400 RPM SCSI disks even if it is larger. 8 MB on a Sparc was enough for a MUD but on a PC it ran into problems because the CPU and MMU were not as fast or 'large'. 

All of this became useful years later when I worked at Red Hat between 1997 and 2001. The customers at that time were people who had been using 'real Unix' hardware and were at times upset about how Linux didn't act the same way. In most cases it was the limitations of the hardware they had bought to put a system together, and by being able to debug that and recommend replacements, things improved. Being able to compare how a Convex used disks, or SGI graphics, to the limitations of the old ISA and related buses helped show that you could redesign a problem to meet the hardware. [In many cases, it was cheaper to use N PC systems to replicate the behaviour of 1 Unix box, but the problem needed to be broken up in a way that it worked on N systems versus 1 box.]

So what does this have to do with Linux today? Well mostly reminders to me to be less cranky with people who are 
  1. Having fun breaking things on their computers. People who want to tear apart their OS and rebuild it to something else are going to run into lots of hurdles. Don't tell them it was a stupid thing. The people at Cygnus may have rolled their eyes but they never told me to stop trying something else. Just read the documentation and see that it says 'undefined behavior' in a lot of places.
  2. Working with tiny computers to do stuff that you do on a bigger computer these days. It is very easy to think that, because it is 'easier' and currently more maintainable to do a calculation on 1 large Linux box, someone is wasting time using dozens of Raspberry Pis to do the same thing. But that is what the mainframers thought of the minicomputers, the minicomputer people thought of the Unix workstations, and the Unix people thought of Linux on a PC.
  3. Seeming to spin around, not knowing what they are doing. I spent a decade doing that.. and while I could have been more focused.. I would have missed a lot of things that happened otherwise. Sometimes you need to do that to actually understand who you are. 

Long live Release Engineering

Posted by Dennis Gilmore on March 16, 2018 12:49 PM

My involvement in Fedora goes back to late 2003 or early 2004 as a packager for fedora.us. I started by getting a few packages in to scratch some of my itches, and I saw it as a way to give back to the greater open source community. Around FC3 I stepped up to help in infrastructure to rebuild the builders in plague, the build system we used before koji, and that we used for EPEL (something that I helped form) for a while until we got external repo support in koji.

I was involved in the implementation of koji in Fedora. I joined OLPC as a build and release engineer, where I oversaw a move of the OS they shipped from FC6 to F8, and laid a foundation for the move to F9. I left OLPC when Red Hat open-sourced RHN Satellite as the Spacewalk project, joining Red Hat as the release engineer for both. After a brief period there was some reorganisation in engineering that resulted in me handing off the release engineering tasks to someone closer to the engineers working on the code. As a result I worked on Fedora full time, helping Jesse Keating. When he decided to work on the internal migration from CVS to git, I took over as the lead.

During Fedora 14 the transition was made from Jesse to me. For the following 10 Fedora releases I was the primary person doing the work to get Fedora out the door. During that time there has been tremendous change in Fedora, in how we do things and what we deliver. Fedora 14 shipped with 12905 packages, 1 install tree, and a handful of livecds across two architectures. Fedora 24 shipped with 19760 packages, 4 install trees, 10 livecds, Cloud images, Vagrant box images, Container base images, and Atomic Host (images and ostree) across 3 architectures. Along with all the primary deliverables we had a much more robust and functional Alternative Architecture program running. In Fedora 26 we added aarch64 and ppc64le to primary koji, and in Fedora 27 we added s390x.

During this time we added things like Fedora Editions, and support for many new technologies. The tooling we used to compose Fedora grew in complexity to meet the growing demands and reduce the need for people to do the work manually. Management of the development of the tooling we use was taken over by different teams inside of Red Hat working upstream in the community. Fedora Release Engineering gained a project manager who helped us to grow, become less of a black box, and deal with the growing pains we faced. Fedora Infrastructure provided people to help develop and deploy the ability to build layered images for containers. Pungi, the compose tool, got a major version bump and grew up a lot. We also worked with upstream koji to develop and land new features and functionality in the buildsystem. Release Engineering was one of the first adopters of pagure.

Mohan Boddu joined Red Hat to help with Fedora just before Fedora 25; we worked on Fedora 25 together, and Mohan has primarily been the person responsible for composing and making sure we ship Fedora since. During that time I have been the Team Lead for the platform team in release engineering inside of Red Hat, splitting my time between Fedora and RHEL and making sure that we bring together the way we build and compose both operating systems.

Recently I accepted a job offer to become the manager of a different team inside of Red Hat. My new role will be working on multi-arch support for internal products. As a result of my change in roles, I will be stepping down on Friday the 23rd of March as the lead for release engineering in Fedora so I can focus on my new role. Mohan will be taking over for me; he will be helped by the current project manager, Suzanne Yeghiayan, along with a handful of people who work tirelessly to ensure we can ship everything. All requests for projects should go through pagure or taiga for grooming and prioritisation.

Please give Mohan a big congratulations, and be sure to work with Suzanne to get your requests prioritised.

Add memtest86+ into your grub menu

Posted by Robbi Nespu on March 16, 2018 12:34 PM

I just bought an 8GB DDR3L SO-DIMM from Shopee. The seller said it is new, not refurbished. I need to check whether the memory has any corrupted or unusable bits.

By default, after installing Fedora on your machine, the grub menu does not include memtest86+. You need to add it manually:

$ sudo memtest-setup
$ sudo grub2-mkconfig -o /boot/grub2/grub.cfg

If you hitch a ride with a scorpion…

Posted by Joe Brockmeier on March 16, 2018 12:19 PM
I haven’t seen a blog post or notice about this, but according to the Twitters, Coverity has stopped supporting online scanning for open source projects. Is anybody shocked by this? Anybody? This comes the same week that Slack announces that they’re ending support for IRC/XMPP gateways — that is, the same tools that persuaded a […]

OSCAL'18, call for speakers, radio hams, hackers & sponsors reminder

Posted by Daniel Pocock on March 16, 2018 08:46 AM

The OSCAL organizers have given a reminder about their call for papers, booths and sponsors (ask questions here). The deadline is imminent but you may not be too late.

OSCAL is the Open Source Conference of Albania. OSCAL attracts visitors from far beyond Albania (OpenStreetMap): as the biggest Free Software conference in the Balkans, it draws people from many neighboring countries including Kosovo, Montenegro, Macedonia, Greece and Italy. OSCAL has a unique character unlike any other event I've visited in Europe, and many international guests keep returning every year.

A bigger ham radio presence in 2018?

My ham radio / SDR demo there in 2017 was very popular. This year I submitted a fresh proposal for a ham radio / SDR booth and sought out local radio hams in the region, with the aim of producing an even more elaborate demo for OSCAL'18.

If you are a ham and would like to participate please get in touch using this forum topic or email me personally.

Why go?

There are many reasons to go to OSCAL:

  • We can all learn from their success with diversity. One of the finalists for Red Hat's Women in Open Source Award, Jona Azizaj, is a key part of their team: if she is announced as the winner at Red Hat Summit the week before OSCAL, wouldn't you want to be in Tirana when she arrives back home for the party?
  • Warm weather to help people from northern Europe to thaw out.
  • For many young people in the region, their only opportunity to learn from people in the free software community is when we visit them. Many people from the region can't travel to major events like FOSDEM due to the ongoing outbreak of immigration bureaucracy and the travel costs. Many Balkan countries are not EU members and incomes are comparatively low.
  • Due to the low living costs in the region and the proximity to larger European countries, many companies are finding compelling opportunities to work with local developers there and OSCAL is a great place to make contacts informally.

Sponsors sought

Like many free software communities, Open Labs is a registered non-profit organization.

Anybody interested in helping can contact the team and ask them for whatever details you need. The Open Labs Manifesto expresses a strong commitment to transparency which hopefully makes it easy for other organizations to contribute and understand their impact.

Due to the low costs in Albania, even a small sponsorship or donation makes a big impact there.

If you can't make a direct payment to Open Labs, you could also potentially help them with benefits in kind or by contributing money to one of the larger organizations supporting OSCAL.

Getting there without direct service from Ryanair or Easyjet

These notes about budget airline routes might help you plan your journey. It is particularly easy to get there from major airports in Italy. If you will also have a vacation at another location in the region it may be easier and cheaper to fly to that location and then use a bus to Tirana.

Making it a vacation

For people who like to combine conferences with their vacations, the Balkans (WikiTravel) offer many opportunities, including beaches, mountains, cities and even a pyramid (in Tirana itself).

It is very easy to reach neighboring countries like Montenegro and Kosovo by coach in just 3-4 hours. For example, there is the historic city of Prizren in Kosovo and many beach resorts in Montenegro.

If you go to Kosovo, don't miss the Prishtina hackerspace.

Tirana Pyramid: a future hackerspace?

Fedora 27 release party: Managua, Nicaragua

Posted by Fedora Community Blog on March 16, 2018 08:15 AM

On February 27th, the Fedora community in Nicaragua ran a release party for the Fedora 27 release. The activity took place in a salon of Hotel Mansión Teodolinda in Managua. This was our first activity of the year. The event came late in the Fedora development schedule, with the Fedora 28 release coming soon, but we needed to keep the community active and keep promoting the Fedora Four Foundations in Nicaragua. The event schedule was…

  1. A few talks about the news in the Fedora 27 release
  2. Coffee break
  3. Questions and Answers
Fedora 27 release party in Managua, Nicaragua

Fedora 27 Release Party Managua

What is new in Fedora 27?

This was the first talk of the event: a fast overview of the features in Fedora 27 and of the upcoming features for Fedora 28.

News in Fedora 27

Constant innovation sometimes makes it hard to keep track of the features available in the latest Fedora release. With this talk, we gave a fast overview of the development tools that make Fedora an awesome operating system for developers.

We also reviewed upcoming features for the Fedora 28 release. Some people were not aware that .NET is available on Linux, or of the work done with Fedora developers to fully support the Rust programming language. It looks like we do not have desktop users of Fedora i686 in Nicaragua, but people were interested in 32-bit support for Fedora Server.

Introduction to Docker containers

Introduction to Docker Containers

Containers are a hot topic in the current Linux ecosystem, and Fedora offers first-class support for Docker containers with Atomic Host and the Cockpit Project. With an introduction to this technology by Omar Berróteran, we wanted to show how Docker can be integrated into the development and deployment of critical apps that take advantage of the latest optimizations in Fedora.

Python development with Fedora

Fedora Loves Python

Fedora Loves Python

Python loves Fedora! With this talk, Porfirio Páiz gave an introduction to the Python Classroom and a great overview of the Python stack available to developers.

Managua event in numbers

These are some statistics about this event:

  • Number of Attendees: 30
  • New FAS sign-ups: 1
  • Talks: 3
  • Fedora Contributors in the event: 8
  • Budget executed (snacks): USD 54.22

We also gave away many ISOs of the Workstation, Plasma, and Classroom installers. Many people asked how to join the local community, and we had a lot of interaction around the event on social networks.


This Fedora Release Party was a great event with a great response from the local community. It shows that people understand that the Fedora Project offers a rock-solid operating system, truly free and reliable, that is an amazing choice for people who need to get things done. After the event, many people showed interest in giving Fedora a try, and we shared the Fedora installation media with the attendees.

One point we want to improve is recording our talks on video, so that people who cannot attend the event can watch them later, creating a source of information for newcomers to the community.

The post Fedora 27 release party: Managua, Nicaragua appeared first on Fedora Community Blog.

PHP version 7.1.16RC1 and 7.2.4RC1

Posted by Remi Collet on March 16, 2018 06:14 AM

Release Candidate versions are available in the remi-test repository for Fedora and Enterprise Linux (RHEL / CentOS) to allow more people to test them. They are available as Software Collections, for parallel installation, the perfect solution for such tests (x86_64 only), and also as base packages.

RPMs of PHP version 7.2.4RC1 are available as SCL in the remi-test repository and as base packages in the remi-php72-test repository for Fedora 25-27 and Enterprise Linux.

RPMs of PHP version 7.1.16RC1 are available as SCL in the remi-test repository and as base packages in the remi-test repository for Fedora 26-27, or in the remi-php71-test repository for Fedora 25 and Enterprise Linux.

PHP version 7.0 is now in security-only mode, so no more RCs will be released.

Installation: read the Repository configuration and choose your version.

Parallel installation of version 7.2 as Software Collection:

yum --enablerepo=remi-test install php72

Parallel installation of version 7.1 as Software Collection:

yum --enablerepo=remi-test install php71

Update of system version 7.2:

yum --enablerepo=remi-php72,remi-php72-test update php\*

Update of system version 7.1:

yum --enablerepo=remi-php71,remi-php71-test update php\*

Notice: version 7.2.4RC1 is also available in Fedora rawhide and version 7.1.16RC1 in updates-testing for Fedora 27, for QA.

EL-7 packages are built using RHEL-7.4.

The RC version is usually the same as the final version (no changes accepted after RC, except for security fixes).

Software Collections (php71, php72)

Base packages (php)

Fedora Podcast 003 — Fedora Modularity

Posted by Fedora Magazine on March 16, 2018 05:42 AM

Langdon White

Episode 003 of the Fedora Podcast is now available. Episode 003 features developer and software architect Langdon White from the Fedora Modularity team. Langdon also leads the Fedora Modularity objective. He is a passionate technical leader with a proven record of architecting and implementing high-impact software systems. In the podcast, Langdon defines Modularity in the Fedora context, explains the issues that can be solved with it, and describes the process for helping with this important project. You can read more about the Fedora Modularity objective in their Pagure project.

One important thing to note: as of episode 003, the Fedora Podcast is now available on iTunes. The Fedora Podcast series features interviews and talks with people who make the Fedora community awesome. These folks work on new technologies found in Fedora. Or they produce the distribution itself. Some work on putting Fedora in the hands of users.

<iframe frameborder="no" height="300" scrolling="no" src="https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/tracks/414074076&amp;color=%23324c77&amp;auto_play=false&amp;hide_related=false&amp;show_comments=true&amp;show_user=true&amp;show_reposts=false&amp;show_teaser=true&amp;visual=true" width="100%"></iframe>

In addition to listening above, episode 003 is also available on Soundcloud, iTunes, Stitcher, Google Play, Simplecast, and x3mboy’s site. Full transcripts of Episode 003 are available here . Transcripts are also available for previous episodes!

Subscribe to the podcast

You can subscribe to the podcast in Simplecast, follow the Fedora Podcast on Soundcloud, on iTunes, Stitcher, Google Play, or periodically check the author’s site on fedorapeople.org.


This podcast is made with the following free software: Audacity, and espeak.

The following audio files are also used: Soft echo sweep by bay_area_bob and The Spirit of Nøkken by johnnyguitar01.

Fedora - Setting power management when low battery

Posted by Robbi Nespu on March 16, 2018 12:00 AM

Peace be upon you. I have an issue with Fedora power management: when my DELL Inspiron 14R 7420 SE battery is low, my laptop just shuts down and I lose whatever I am currently working on. It is bad when there is no autosave while editing a document or waiting for Android Studio to finish indexing or compiling.

A spinning disk can also get corrupted if this happens often. To be honest, I normally sleep late at night, mostly with my laptop on my desk (a bad habit, I know >_<).

We can actually control what action to take when the battery / UPS reaches a certain percentage or time remaining. There are three options:

  • PowerOff
  • Hibernate
  • HybridSleep

I prefer hibernate, so if you like, please read and follow my note on how to hibernate Fedora; that said, it is recommended to always use hybrid-sleep instead of suspend or hibernation.

You need to open and edit the /etc/UPower/UPower.conf file. Here is an example of mine: <script src="https://gist.github.com/RobbiNespu/ee3033954fc15653a800a4a5823c84ee.js"></script>

The file comes with comments, so just read them and you should understand what to modify. Have a look at UsePercentageForPolicy=true; you can set it to false if you prefer to act on battery time remaining before depletion instead of battery percentage.

Then look at Percentage (if you use UsePercentageForPolicy=true) or Time (if you use UsePercentageForPolicy=false) and set the values according to your needs.

Lastly, check CriticalPowerAction; here you should choose PowerOff, Hibernate or HybridSleep.
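As a minimal sketch of the relevant keys (the key names come from the stock UPower.conf; the threshold values here are illustrative, not recommendations):

```ini
[UPower]
# Act on battery percentage rather than time remaining
UsePercentageForPolicy=true
# Warning thresholds (illustrative values)
PercentageLow=15
PercentageCritical=5
# When 3% is reached, take the action below
PercentageAction=3
# One of: PowerOff, Hibernate, HybridSleep
CriticalPowerAction=Hibernate
```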

After you are done with the configuration, save the file (as root), then restart and check the status of the upower service:

$ sudo systemctl restart upower.service
$ sudo systemctl status upower.service

Now your power management should work as you want.

Gnome power management

You should also activate automatic suspend to save your battery if you leave the computer idle for too long.

That's all, thanks!

Please end Daylight Savings Time

Posted by Stephen Smoogen on March 15, 2018 10:37 PM
This was going to be my 3rd article this week about something EPEL related, but I am having a hard time stringing words together coherently. The following boiled out instead, and I believe I have removed the profanity that leaked in.

So I, like millions of other Americans (except those people blessed to be living in Arizona and Hawaii), am going through the week-long jetlag that comes when Daylight Savings Time starts. For the last 11 or so years, the US has had DST start two weeks earlier than the rest of the countries which observe this monstrosity, I think to show the rest of the world why it is a bad idea. No one seems to learn, and instead they try to make it longer and longer.

I understand why it was begun during World War I, to make electricity costs for factory lighting cheaper. It just isn't solving that problem anymore. Instead I spend a week not really awake during the day, and for some reason as I get older, not able to sleep at all during the night. And I get crankier and more sarcastic by the day. Finally, sometime next Monday, I will conk out for 12-14 hours and be alright again. I would like to say I am an anomaly, but this seems to happen to a lot of people around the world, with higher numbers of heart attacks, strokes, and accidents during the months of time changes.

So please, the next time this comes up with your government (be it the EU, US, Canadian Parliament, etc.), write to your representatives that this needs to end. [For a fun read, the Wikipedia articles on various forms of daylight savings cover the political philandering that pays for this.]

Thank you for your patience while I whine about a mere lack of sleep when there are a hell of a lot worse things going on in the world.

Using the Red Hat Developer Toolset (DTS) in EPEL-7

Posted by Stephen Smoogen on March 15, 2018 05:58 PM
One of the problems developers find in supporting packages for any long-lived Enterprise Linux is that it gets harder and harder to compile newer software. Packages may end up requiring newer compilers and other tools in order to be built. Back-porting fixes or updating software becomes harder and harder because the tools are no longer available to make the newer code work.

In the past, this has been a problem with EPEL packages as various software upstreams focus on newer toolkits to meet their development needs. This has led many packages either to be removed or left to mummify at some level. The problem also occurs outside of EPEL, which is why Red Hat has created a product called Developer Toolset (DTS) which contains newer gcc and other tools. This product uses Software Collections, which have had a mixed history with Fedora and EPEL, but were considered useful in this limited use.

How to Use DTS in spec files

In order to use DTS in a spec file you will need to do the following:
  1. If you are not using mock and fedpkg to build packages, you will need to add either the Red Hat DTS channel to your system or, if you are using CentOS/Scientific Linux, add the repository following these instructions.
  2. If you are using mock/fedpkg, the scl.org repository should be available in the epel mock configs.
  3. In the spec file add the following section to the top area:
    %if 0%{?rhel}
    BuildRequires: devtoolset-7-toolchain, devtoolset-7-libatomic-devel
    %endif

    Then in the build section add the following:

    %if 0%{?rhel}
    . /opt/rh/devtoolset-7/enable
    %endif

  4. Attempt to do a build using your favorite build tool (rpmbuild, mock -r , fedpkg mockbuild, etc).  
This should start flagging what things you might need to add to the BuildRequires, and similar problems. We in the EPEL Steering Committee would like to get feedback on this and work out what additions are needed to get this working for other developers.
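For a local test of such a spec change, a mock rebuild along these lines can be used (a sketch: epel-7-x86_64 is the stock EPEL-7 mock chroot name; the source RPM name is a placeholder):

```shell
mock -r epel-7-x86_64 --rebuild mypackage-1.0-1.el7.src.rpm
```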


There are several caveats to using the Developer ToolSet in EPEL.
  1. Packages may only have a BuildRequires: on the packages in the DTS. If your package needs to Require: something in the DTS or Software Collections, it can NOT be in EPEL at this time, as many users do not have those enabled or in use.
  2. This is only for EPEL-7. At the moment, I have not set up DTS for EL-6 because it was not asked for recently. The Steering Committee would like to hear from developers if they want it enabled in EL-6.
  3. The architectures where DTS exists are: x86_64, ppc64le, and aarch64. There is no DTS for ppc64, and we do not currently have an EPEL for s390x.


Our thanks to Tom Callaway and many other developers for their patience in getting this working.


  • Originally the article stated that the text %if 0%{?rhel} == 7 should be used. That fails. The correct code is %if 0%{?rhel}
  • If you build with mock, you are restricted to pulling in only the DTS packages. Currently koji does not have this limitation; it is being fixed.

How I accidentally wrote a Wikipedia page on a layover in Dublin

Posted by Justin W. Flory on March 15, 2018 08:15 AM

One of the most unusual but wonderful experiences happened to me on a return trip from Europe to the United States.

A series of heavy nor'easters hit the US east coast over the last couple of weeks, coinciding with my travel dates back to Rochester, NY. While we didn't have flooding, we had a lot of snow. A lot of snow means canceled flights.

As I made my way through border control in Dublin, Ireland on March 7, I discovered my connection to New York City would likely be canceled. A meander from baggage claim to the check-in desk confirmed this. Fortunately, Aer Lingus had no issue putting me up in a hotel overnight with dinner and breakfast to catch the next flight to New York the next day.

While waiting in airport queues, a friend happened to retweet a local event happening in Dublin the next day.

<script async="async" charset="utf-8" src="https://platform.twitter.com/widgets.js"></script>

The event was a local Wikimedia meet-up to celebrate International Women’s Day. Participants would create and edit Wikipedia pages for influential women in the history of the Royal College of Surgeons in Ireland. After digging deeper, I found out the event was 30 minutes away from my hotel from 09:30 to 12:30. My flight was at 16:10.

I put in my RSVP.

Meet the Wikimedia Ireland community

In an opportunistic stroke of fate, I would spend my extended layover, and my first time in Dublin, listening and learning about role model women in the Irish medical community. I didn't know it yet, but I would also take part in writing some of that history too!

Group photo of the participants and editors for the 2018 International Women's Day edit-a-thon

Group photo of the participants and editors for the 2018 International Women’s Day edit-a-thon. Source: Twitter, @RCSILibrary


The first part of the morning was an introduction to editing on Wikipedia and establishing the focus for edits.

Manuscript letters of support by men from the RCSI archive for women being admitted to medical schools and accepted into the British Medical Association. #HeForShe!

Manuscript letters of support by men from the RCSI archive for women being admitted to medical schools and accepted into the British Medical Association. #HeForShe! Source: Twitter, @RCSILibrary

The Royal College of Surgeons in Ireland (RCSI) started a new campaign to promote influential women in the history of the university. There is a historical board room in a prominent place on its campus. Inside the board room, there are portraits of influential people in the history of RCSI. But all of them are men. This makes it difficult for women to find role models, or inspiration from women like them who "made it" in science and medicine.

At the same time, there was no shortage of influential women in the history of RCSI. Part of the morning was an introduction to primary sources that explained the pivotal work of female Irish doctors and pediatricians throughout the 20th century. After hearing about these inspirational women, one had to wonder: why were none of them represented in the board room?

This was actually the focus for the edit-a-thon. Recently, RCSI commissioned new portraits for some of the influential women alumnae. Half of the portraits in the board room would be relocated and replaced by the new portraits. This was part of their #WomenOnWalls campaign.

Discovering Victoria Coffey

After an introduction to the sources available and how to edit on Wikipedia, we began the editing. Organizers encouraged participants to improve an existing page first, since most of the participants were first-time editors.

Since I had some experience with MediaWiki mark-up and do a lot of writing, I decided to write a new page. There was a list of suggested women alumnae to write about. After hearing about Victoria Coffey, I decided to focus my two hours of writing on her legacy.

Project coordinator for Wikimedia Ireland, Rebecca O'Neill, introduces Wikipedia to students, librarians, and faculty (and me!)

Project coordinator for Wikimedia Ireland, Rebecca O’Neill, introduces Wikipedia to students, librarians, and faculty (and me!). Source: Twitter, @DrConorMalone

Who is Victoria Coffey?

Victoria Coffey was an Irish pediatrician. She was an alumna of RCSI, and one of the first to research sudden infant death syndrome (SIDS). Coffey spent most of her time in medicine researching and studying congenital abnormalities in infants and pediatrics. Later in her life, she founded the Faculty of Paediatrics at the Royal College of Physicians of Ireland in 1981 and was the first female president of the Irish Paediatric Society.

Writing her Wikipedia page

With the help and guidance of the Wikimedia Ireland and RCSI staff, I found resources to research and learn more about Victoria Coffey. While some public sources were available, I was also provided with a primary source from a paid online Irish encyclopedia.

From there, I had the basis to begin writing a stub for her biography. I created an infobox to summarize some of her contributions, wrote a paragraph on her life, and left external links for someone to expand and write more in the future.

You can find her Wikipedia page online now. Since its creation, it has been viewed nearly 100 times and edited five times by three people.

Thank you RCSI and Wikimedia Ireland!

In a strange and opportunistic stroke of fate, I was lucky to meet this local community and work with a room of inspiring women in medicine (students, alumnae, and faculty) on lowering the wiki gap of women on Wikipedia. It was a privilege to take part and learn a unique kind of history for Ireland in my short stay in Dublin.

Thank you for this great experience, RCSI and Wikimedia Ireland!

I’m not sure if this will make me anticipate flight cancellations more or less from now on.

<script async="async" charset="utf-8" src="https://platform.twitter.com/widgets.js"></script>

The post How I accidentally wrote a Wikipedia page on a layover in Dublin appeared first on Justin W. Flory's Blog.

Day-to-day with Docker on AWS

Posted by kausdev on March 15, 2018 04:31 AM

Let's go: prerequisites and setup.

I recommend reading the Docker Enterprise Edition for AWS documentation.

I'm going from what I learned, and I will certainly forget it, which is why I'm writing it down.

How to do a deployment

There are deployment options, and two ways to deploy Docker for AWS:

with a pre-existing VPC,
or with a new VPC created by Docker.
To avoid trouble, I recommend letting Docker for AWS create your VPC, since that lets Docker optimize your environment. Installing into an existing VPC requires more work, so stay sharp.

Let's create a brand-new VPC
I will try to show how you can create a new VPC, subnets, gateways and everything else needed to run Docker for AWS. It is the easiest and most practical way, and you gain time (go grab a coffee): all you need to do is run the CloudFormation template, answer a few questions, and that's it. Go grab another coffee.

And now, installing using an EXISTING VPC
If you install Docker for AWS into an existing VPC, you need to do a few preliminary steps. I won't go into details here, since these notes are mostly for my own memory, but I suggest consulting the recommended VPC and subnet configuration for more details; I will add a link for that.

Choose a VPC in the region you want to use.

Make sure the selected VPC is configured with an Internet Gateway, subnets, and route tables. (Important: pay close attention with a free-tier VPC.)

You need three different subnets, ideally each in its own Availability Zone. If you are running in a region with only two Availability Zones, you need to add more than one subnet to one of the Availability Zones. For production deployments, we recommend only deploying to regions with three or more Availability Zones.

When you launch the Docker for AWS CloudFormation stack, be sure to use the template for existing VPCs. That template asks for the VPC and subnets you want to use for Docker for AWS.

Prerequisites for the divine miracle
Access to an AWS account with permissions to use CloudFormation and to create the following objects (the full set of required permissions):
EC2 instances + Auto Scaling groups
IAM profiles
DynamoDB tables
SQS queue
VPC + subnets and security groups
CloudWatch log group
An SSH key in AWS, in the region where you want to deploy (required to access the completed Docker installation)
An AWS account that supports EC2-VPC (see the FAQ for details on EC2-Classic)
For more information on adding an SSH key pair to your account, see the Amazon EC2 Key Pairs documentation. Don't lose the key, to avoid headaches.

I was reading that the AWS China and US Gov Cloud partitions are not currently supported, so that needs further study.

Now, the configuration
Docker for AWS comes with a CloudFormation template that sets up Docker in swarm mode, running on instances backed by custom AMIs. There are two ways to deploy Docker for AWS: you can use the AWS Management Console (browser-based) or the AWS CLI. Both have the following configuration options. (Attention!)

Choose the SSH key to be used when you SSH into the manager nodes.

Instance type
The EC2 instance type for your worker nodes.

The EC2 instance type for your manager nodes. The larger your swarm, the larger the instance size you should use.

The number of workers you want in your swarm (0-1000).

The number of managers in your swarm. In Docker CE, you can select 1, 3 or 5 managers. We recommend only 1 manager for testing and development setups. There are no failover guarantees with 1 manager: if the single manager fails, the swarm goes down as well. Also, upgrading single-manager swarms is not guaranteed to succeed.

In Docker EE, you can choose to run with 3 or 5 managers.

We recommend at least 3 managers, and if you have many workers you should use 5 managers.

Enable this if you want Docker for AWS to automatically remove unused space on your swarm nodes.

When enabled, docker system prune runs daily, starting at 1:42 AM UTC, on workers and managers. Prune times are staggered slightly so that not all nodes are pruned at the same time; this limits resource spikes in the swarm.

Pruning removes the following:

All stopped containers
All volumes not used by at least one container
All dangling images
All unused networks
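That daily cleanup corresponds to Docker's built-in prune; run by hand, a rough equivalent is (a sketch: `--volumes` includes unused volumes, `-f` skips the confirmation prompt):

```shell
docker system prune --volumes -f
```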
Enable this if you want Docker to send your container logs to CloudWatch. ("Yes", "no") By default, yes.

Worker ephemeral storage volume size in GiB (20 – 1024).

Worker ephemeral storage volume type ("standard", "gp2").

Manager ephemeral storage volume size in GiB (20 – 1024).

Manager ephemeral storage volume type ("standard", "gp2").




How to test an update for EPEL

Posted by Stephen Smoogen on March 15, 2018 12:58 AM
Earlier this week the maintainer of clamav came onto the Freenode #epel channel asking for testers on EL-6. There was a security fix needing to be pushed to stable, but no one had given the package any karma in bodhi.

EPEL tries to straddle the slow and steady world of Enterprise Linux and the fast and furious world of Fedora. This means that packages are usually held in epel-testing for at least 14 days, or until the package has been tested by at least 3 people who give a positive score in bodhi. Because EPEL is a 'Stone Soup' set of packages, it does not have a dedicated QA team testing every update; instead it relies on what people bring to the table, testing the things they need. This has its benefits, but it does lead to problems where someone who wants to get a CVE fix out right away has to find willing testers or wait 14 days for the package to auto-promote.

Since I had used clamav years ago, and I needed an article to publish on Wednesday, I decided I would give it a go. My first step was to find a system to test with. My main website still runs happily on CentOS-6, and I saw that while I had configured spamassassin with postfix, I had not done so with clamav. This would make a good test candidate because I could roll back to the older setup if the package did not work.

First step was to install the clamav updates. Unlike my desktop, where I have epel-testing always on, I keep the setup rather conservative on the web server. So to get the testing version of clamav I needed to do the following:

# yum list --enablerepo=epel-testing clamav*
Available Packages
clamav.i686 0.99.4-1.el6 epel-testing
clamav-db.i686 0.99.4-1.el6 epel-testing
clamav-devel.i686 0.99.4-1.el6 epel-testing
clamav-milter.i686 0.99.4-1.el6 epel-testing
I then realized I had only configured clamav with sendmail in the past (yes, it was a long time ago.. I watched the moon landings too.. and I can mostly remember what I had for breakfast). I googled through various documents and decided that a document at vpsget was a useful one to follow (thank you, vpsget). Next up was to see if the packages listed had changed, which they had not. So it was time to do an install:

# yum install --enablerepo=epel-testing clamav clamsmtp clamd
Loaded plugins: fastestmirror
Setting up Install Process
Loading mirror speeds from cached hostfile
* base: mirrordenver.fdcservers.net
* epel-testing: mirror.texas3006.com
* extras: repos-tx.psychz.net
* updates: centos.mirror.lstn.net
Resolving Dependencies
--> Running transaction check
Is this ok [y/N]:

I didn't use -y here because I wanted to confirm that no large number of dependencies or other things were pulled in. It all looked good, so I hit y and the install happened. I then went through the other steps and saw that there was a change in setup from when the document was written.

[root@linode01 smooge]# chkconfig --list | grep clam
clamd 0:off 1:off 2:off 3:off 4:off 5:off 6:off
clamsmtp-clamd 0:off 1:off 2:off 3:on 4:on 5:on 6:off
clamsmtpd 0:off 1:off 2:off 3:on 4:on 5:on 6:off

I turned on clamsmtp-clamd instead of clamd and continued through the configs. After this I emailed an EICAR test file to myself and saw it get blocked in the logs. I then looked at the CVE to see if I could trigger a test against it. It didn't look like I could, so I skipped that part. I then repeated the setup in an EL-6 VM I have at home to see that it worked there also. At this point it was time to report my findings in bodhi. I opened the report the owner had pointed me to and logged into the bodhi system. I added a general comment that I had tested it on two systems, and then +1'd the parts I had tested. Other people joined in, and this package was able to get pushed much earlier than it would have been otherwise.
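For reference, the email test relies on the standard EICAR pattern: a harmless 68-byte string that anti-virus engines are required to detect. A quick sketch to generate the test file (built from two halves so this page itself doesn't trip a scanner):

```shell
# Assemble the standard 68-byte EICAR anti-virus test string (harmless by design).
printf '%s%s' 'X5O!P%@AP[4\PZX54(P^)7CC)7}$' \
  'EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*' > eicar.txt
wc -c < eicar.txt
```

Attaching eicar.txt to a message sent through the clamsmtp path should then show up as a block in the logs.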

There are currently 326 packages in epel-testing for EL-6. Please take the time to test one or two packages if you can. [I did pax-utils while writing this because I wanted to replicate the steps I had done. It needs two more people to test it, and it is a security fix as well.]

Using Docker on AWS

Posted by kausdev on March 14, 2018 08:24 PM


Here I am giving a brief introduction to using Docker on Amazon; next we will get hands-on, using docker, docker-compose, ansible and kubernetes. Whatever comes up, I will test it.
I believe the main intention is to show how easy it is to use and interact with your Docker project on AWS, or at least to show the path to follow.

It is being actively developed to ensure that Docker users can enjoy a fresh service experience on AWS. You may be curious about what this project is and what it has to offer for managing your development and production workloads.

Native Docker on AWS
Yes, AWS tends to provide a native Docker solution that avoids some operational complexity and unnecessary additional APIs.

You can interact directly with Docker, including the orchestration of your services (containers), without having to navigate extra application layers, and concentrate on what matters most: running your workloads. This helps you and your team deliver more value to the business, with more agility in processes and deliveries.

The skills you and your team have already learned, and continue to learn, using Docker on the desktop or elsewhere transfer automatically to using Docker on AWS. The consistency added across cloud services also helps ensure that a migration, or a strategic move to other cloud services, stays within reach.

Avoid rework

You can use the recommended infrastructure bootstraps to get started automatically. You don't need to worry about rolling your own instances, security groups, or load balancers when using AWS. (Tip: this generates cost and should be studied so you don't get a shock when the bill comes!)

Likewise, configuring and using Docker swarm mode for container orchestration is managed throughout the cluster's whole lifecycle when you use AWS. Docker has already coordinated the various bits of automation that you would otherwise be stitching together yourself to bootstrap Docker swarm mode on these platforms. When the cluster finishes bootstrapping, you can jump right in and start running docker service commands.

A prescriptive upgrade path is also provided, which helps users upgrade between various Docker versions smoothly and automatically. Instead of experiencing the "maintenance problem" as you ponder your future responsibilities for upgrading the software you are using, you can easily upgrade to new versions as they are released.

Base minima 
A distribuição  Linux  é personalizada usada na AWS  e é cuidadosamente desenvolvida para ser executar Docker  Tudo, desde a configuração do kernel até a pilha de rede, é personalizado para torná-lo um local favorável para executar. Por exemplo,  Aws  diz que assegura as versões do kernel  e são compatíveis com as últimas e melhores funcionalidades do Docker, como o driver de armazenamento overlay2.

But nothing stops you from using the distro you prefer: CoreOS, Debian, Ubuntu, etc.

Self-cleaning and self-healing
Even the most conscientious administrator can be caught off guard by problems such as unexpectedly aggressive logging or the Linux kernel killing memory-hungry processes. In Docker for AWS, your cluster is resilient to a variety of such problems by default. (This is a serious class of problem, though I have never run into it myself.)

Host-native log rotation is configured for you automatically, so chatty logs won't use up all of your disk space. Likewise, the "system prune" option lets you ensure that unused Docker resources, such as old images, are cleaned up automatically. The life cycle of nodes is managed using auto-scaling groups or similar constructs, so if a node enters an unhealthy state for unforeseen reasons, it is taken out of load-balancer rotation and/or replaced automatically, and all of its container tasks are rescheduled.

These self-cleaning and self-healing properties are enabled by default and need no configuration, so you can breathe easier knowing that the risk of downtime is reduced.

Platform-native logging
Centralized logging is a critical component of many modern infrastructure stacks. Having these logs indexed and searchable is invaluable for debugging application and system problems as they come up. Out of the box, Docker for AWS forwards container logs to the cloud provider's native abstraction (CloudWatch).

Next-generation Docker error reporting tools
A common pain point in open source issue reporting is effectively communicating the current state of your infrastructure, and the problems you are seeing, to upstream. In Docker for AWS, you get new tools to communicate any problems you experience quickly and securely to Docker employees. The Docker for AWS shell includes a docker-diagnostic script that, at your request, transmits detailed diagnostic information to Docker's support team, reducing the traditional "please-post-the-output-of-this-command" back and forth frequently found in bug reports.

Part 1 of my article on RISC-V on LWN

Posted by Richard W.M. Jones on March 14, 2018 05:10 PM


I think part 2 will be next week.

LWN is a great publication, everyone should support it by subscribing.

Harden your JBoss EAP 7.1 Deployments with the Java Security Manager

Posted by Red Hat Security on March 14, 2018 01:30 PM


The Java Enterprise Edition (EE) 7 specification introduced a new feature which allows application developers to specify a Java Security Manager (JSM) policy for their Java EE applications, when deployed to a compliant Java EE Application Server such as JBoss Enterprise Application Platform (EAP) 7.1. Until now, writing JSM policies has been pretty tedious, and running with JSM was not recommended because it adversely affected performance. Now a new tool has been developed which allows the generation of a JSM policy for deployments running on JBoss EAP 7.1. It is possible that running with JSM enabled will still affect performance, but JEP 232 indicates the performance impact would be 10-15% (it is still recommended to test the impact per application).

Why Run with the Java Security Manager Enabled?

Running a JSM will not fully protect the server from malicious features of untrusted code. It does, however, offer another layer of protection which can help reduce the impact of serious security vulnerabilities, such as deserialization attacks. For example, most of the recent attacks against Jackson Databind rely on making a Socket connection to an attacker-controlled JNDI Server to load malicious code. This article provides information on how this issue potentially affects an application written for JBoss EAP 7.1. The Security Manager could block the socket creation, and potentially thwart the attack.

How to generate a Java Security Manager Policy

To get started, you'll need:
  • Java EE EAR or WAR file to add policies to;
  • Targeting JBoss EAP 7.1 or later;
  • Comprehensive test plan which exercises every "normal" function of the application.

If a comprehensive test plan isn't available, a policy could be generated in a production environment, as long as some extra disk space for logging is available and there is confidence the security of the application is not going to be compromised while generating policies.

Set up 'Log Only' mode for the Security Manager

JBoss EAP 7.1 added a new feature to its custom Security Manager that is enabled by setting the org.wildfly.security.manager.log-only System Property to true.

For example, if running in stand-alone mode on Linux, enable the Security Manager and set the system property in the bin/standalone.conf file using:

JAVA_OPTS="$JAVA_OPTS -Dorg.wildfly.security.manager.log-only=true"

We'll also need to add some additional logging for the log-only property to work, so go ahead and adjust the logging categories to set org.wildfly.security.access to DEBUG, as per the documentation, e.g.:
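The CLI command for that (a sketch; verify the exact resource path against the JBoss EAP 7.1 logging documentation for your version) would look something like:

```
/subsystem=logging/logger=org.wildfly.security.access:add(level=DEBUG)
```

Run it from a jboss-cli.sh --connect session against the running server.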


Test the application to generate policy violations

For this example we'll use the batch-processing quickstart. Follow the README to deploy the application and access it running on the application server at http://localhost:8080/batch-processing. Click the 'Generate a new file and start import job' button in the Web UI and notice some policy violations are logged to the $JBOSS_HOME/standalone/log/server.log file, for example:

DEBUG [org.wildfly.security.access] (Batch Thread - 1) Permission check failed (permission "("java.util.PropertyPermission" "java.io.tmpdir" "read")" in code source 
"(vfs:/content/batch-processing.war/WEB-INF/classes <no signer certificates>)" of "ModuleClassLoader for Module "deployment.batch-processing.war" from Service Module Loader")

Generate a policy file for the application

Check out the source code for the wildfly-policygen project written by Red Hat Product Security.

git clone git@github.com:jasinner/wildfly-policygen.git

Set the location of the server.log file which contains the generated security violations in the build.gradle script, i.e.:

task runScript (dependsOn: 'classes', type: JavaExec) {
    main = 'com.redhat.prodsec.eap.EntryPoint'
    classpath = sourceSets.main.runtimeClasspath
    args '/home/jshepher/products/eap/7.1.0/standalone/log/server.log'
}

Run wildfly-policygen using gradle, i.e.:

gradle runScript

A permissions.xml file should be generated in the current directory. Using the example application, the file is called batch-processing.war.permissions.xml. Copy that file to src/main/webapp/META-INF/permissions.xml, build, and redeploy the application, for example:

cp batch-processing.war.permissions.xml $APP_HOME/src/main/webapp/META-INF/permissions.xml

Where APP_HOME is an environment variable pointing to the batch-processing application's home directory.

Run with the security manager in enforcing mode

Recall that we set the org.wildfly.security.manager.log-only system property in order to log permission violations. Remove that system property or set it to false in order to enforce the JSM policy that's been added to the deployment. Once that line has been changed or removed from bin/standalone.conf, restart the application server, build, and redeploy the application.

JAVA_OPTS="$JAVA_OPTS -Dorg.wildfly.security.manager.log-only=false"

Also go ahead and remove the extra logging category that was added previously using the CLI, e.g.:
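The matching removal (again a sketch of the jboss-cli command, mirroring the logger added earlier):

```
/subsystem=logging/logger=org.wildfly.security.access:remove
```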


This time there shouldn't be any permission violations logged in the server.log file. To verify the Security Manager is still enabled look for this message in the server.log file:

INFO  [org.jboss.as] (MSC service thread 1-8) WFLYSRV0235: Security Manager is enabled


While the Java Security Manager will not prevent all security vulnerabilities possible against an application deployed to JBoss EAP 7.1, it adds another layer of protection, which could mitigate the impact of serious security vulnerabilities such as deserialization attacks against Jackson Databind. If running with the Security Manager enabled, be sure to check the impact on the performance of the application to make sure it's within acceptable limits. Finally, use of the wildfly-policygen tool is not officially supported by Red Hat; however, issues can be raised for the project on GitHub, or you can reach out to Red Hat Product Security for usage help by emailing secalert@redhat.com.








Chemnitzer Linux Tage 2018

Posted by Fabian Affolter on March 14, 2018 10:38 AM

As usual, the Fedora Project was present at the Chemnitzer Linux Tage 2018. For CLT, this was kind of their 20th birthday.

Instead of showing how you can use Fedora to do 3D printing, we went with two Fedora Demo stations. CLT is still attracting new Linux users and it’s nice to show them a running Fedora installation.

There are some stability issues when you run Fedora on a single-board computer. It was a bit annoying that we needed to power-cycle our two Raspberry Pis on a regular basis. I'm not sure whether the GUI was the cause or the hardware itself.

This year we had the new Workstation guides to give away. It’s a nice replacement for the media and probably more sustainable than an installation disc.

People are still looking for live media. I'm not sure if they are collectors or actually use it, but most visitors understood why we no longer have media. So they went with a sticker or a pen.

During the event I needed to switch hats, because CLT is also a little bit about Home Assistant. We got our second Thomas-Krenn award. This was very unexpected and a really nice surprise. I'm hoping that Fedora IoT will bring Fedora and Home Assistant closer together. It's a very long way to go and will require a huge amount of work.

If you are surrounded by other distributions, you will always hear the latest and greatest about their projects. At the end of the day it's a bit sad to see that, after almost 10 years (the first big attempt to join efforts between RPM-based distributions was made during LinuxTag in 2009), we are still spending time solving the same problems independently. I would really like to see the day when openSUSE, Mageia and Fedora use the same workflow for building packages and no longer need to maintain their own SPEC files.

Lucky me: at the end, all the caps were gone 😉

3 security videos from DevConf.cz 2018

Posted by Fedora Magazine on March 14, 2018 08:00 AM

The recent DevConf.cz conference in Brno, Czechia is an annual event run by and for open source developers and enthusiasts. Hundreds of speakers showed off countless technologies and features advancing the state of open source in Linux and far beyond. A perennially popular subject at open source conferences is security. Below is a selection of videos from the many outstanding sessions where presenters covered security topics.

Everyday security

Developers’ and administrators’ daily work can bring them into situations where mistakes can be costly. Miscreants can use numerous vectors to stage attacks or take advantage of software flaws. In this session, Christian Heimes shows how he has run into these issues in his work, and shares some thoughts on how to avoid common blunders. View the session here:

<iframe allow="autoplay; encrypted-media" allowfullscreen="allowfullscreen" frameborder="0" height="507" src="https://www.youtube.com/embed/HA932zMkLQc?feature=oembed" width="676"></iframe>


Autonomous security agents

Computer attacks are basically driven by scripts. In seconds, they can recon, exploit, and collect data of interest. DARPA’s Cyber Grand Challenge this year showed that computer security must match the speed of these attacks. In this session, Steve Grubb covers how autonomous security agents can deal with these threats. Watch the session here:

<iframe allow="autoplay; encrypted-media" allowfullscreen="allowfullscreen" frameborder="0" height="507" src="https://www.youtube.com/embed/omrByMoey2A?feature=oembed" width="676"></iframe>

SELinux loves Modularity

Currently, Fedora delivers the entire distribution SELinux policy in a single RPM package. This approach worked well when SELinux was first introduced. But as the legacy Fedora model starts to shift towards a decomposed, modular approach, so should the Fedora SELinux policy. In this session, Paul Moore talks about SELinux Modularity concepts, its advantages, and its necessity. Check out the talk here:

<iframe allow="autoplay; encrypted-media" allowfullscreen="allowfullscreen" frameborder="0" height="507" src="https://www.youtube.com/embed/7foVfBX0gH0?start=960&amp;feature=oembed" width="676"></iframe>

Nullcon 2018

Posted by Fabian Affolter on March 14, 2018 07:55 AM

Nullcon 2018 is a security conference which takes place in India. For me it was the second time attending, and it was again a very nice experience.

Jörg’s Audit +++ took place on Wednesday and Thursday, including the option to do the OPSE certification. The training session is not so much about technical skills but more about soft skills. It should help managers understand the work that security testers are doing, and help security testers do their work in a proper way.

Last year the infrastructure suffered a couple of power outages; at some point the previously used OpenStack setup was simply dead. To avoid this, I decided to go with actual hardware backed by a power bank. The three Orange Pi Zeros were running Armbian. Unfortunately, all attempts with Fedora failed.

The attendees are free to use whatever operating system or tools they want to perform the technical exercises. Last year we provided the Fedora Security Lab on USB keys, but we decided not to do that for this training. One reason was that most exercises could be performed without any help from a computer.

During the conference I orchestrated the crypto currencies village. It covered the basics of crypto currencies and included my “mini mining rig”: an old Orange Pi PC with an attached USB ASIC miner. Well, not much to see, but it gave the attendees a chance to see mining hardware. Fedora would have been my primary choice for the operating system, but again I needed to go with Armbian. It is way simpler and faster to get running.

As Nullcon is a security conference, you see a lot of Windows-related topics. But from my point of view it would be a perfect place to talk about the measures the Linux community is taking to make the world a more secure place. The exhibition area was always crowded. Even in 2018, the first question you get is “For what company do you work?”. It seems that it's still not common for Open Source contributors to attend these kinds of events. Sure, I work for a company, but there I'm not representing that company but the Fedora Project. I'm still hoping to see more Open Source projects at these kinds of events.

If you are attending a conference on a different continent, one big plus is that you can meet people in real life. Especially people you rarely meet online, because their timezone is so different from yours that it's almost impossible to chat on a regular basis.

Anyway, I would like to thank the people behind Nullcon for making it possible for me to be there.

Fedora 27 : 'No Space Left on Device' errors .

Posted by mythcat on March 13, 2018 11:08 PM
The 'No Space Left on Device' error is a common issue for Linux users.
It means you have run out of space on your drive, or at least that the system thinks you have.
The message itself is vague, and it can be the symptom of several different problems on a Linux system; the job of a Linux user is to track down the actual cause.
You can use your root account or a sudo user to run the commands below.
Here are some ways you can diagnose and solve the problem.

Use the du and df commands to see if your disk or inode table is full (check both the home and root areas):
# df -h
# df -i /
# du -sh /
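If df says the disk is full but the culprit isn't obvious, ranking the largest directories helps narrow it down. A sketch (assumes GNU du and sort; adjust the starting path as needed):

```shell
# Rank the largest first-level directories, staying on one filesystem (-x)
du -xh --max-depth=1 / 2>/dev/null | sort -rh | head -10
```

Re-run it with the biggest directory as the new starting path until you find the space hog.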

Check for deleted files still held open by a process (PID) with lsof:
# lsof / | grep deleted
Take a look at unlinked files:
# lsof -a +L1 *mountpoint*

Check for bad filesystem blocks with fsck (on an unmounted filesystem):
# fsck -vcck /dev/sda2

If you use a virtual machine (VM), you can also hit this error while emulating a Linux operating system for some PID.
It happened to me with Android Studio under Fedora 27; the error shows up as: No Space Left on Device.
I do not know why this happens, but I assume it relates to an emulation task in the memory area.

2.5 Year Warning: EPEL-6 will be archived in January 2021.

Posted by Stephen Smoogen on March 13, 2018 04:39 PM
EPEL builds packages against Red Hat Enterprise Linux versions which are in Production Phases 1, 2, and 3. RHEL-6 will leave these phases on November 30, 2020. At that point, EPEL will follow the same steps it took with RHEL-5 to end-of-life EPEL-6.
  1. New builds will be stopped in the koji builders.
  2. Branching into EL-6 will be stopped in the Fedora src mechanism.
  3. Packages in epel-6 testing will no longer be promoted to epel-6.
  4. After about 2 months, we will archive the packages to fedora archives and have the mirrors point to that. 
What does this mean for users of EPEL-6 currently? Nothing much, beyond the fact that you should start planning to move to newer versions of (RH)EL in the next 2.5 years. [This includes me, because my main website runs on CentOS-6.] If you plan to run EL-6 past December 1, 2020, then you need to look at getting extended software contracts from Red Hat (or some consultant who is mad enough to do so). [Red Hat Enterprise Linux 6 was initially released in 2010, so it will have had 10 years of support by then.]

What does this mean for the EPEL Steering Committee? We need to work out a better mechanism than we had in EL-5 for the various packages which were end-of-lifed. Currently the build system composes each EPEL tree as if it were a completely new distribution of packages. When a package is retired by its maintainer, the only way for a user to get a copy is to fetch the last released build from koji.fedoraproject.org rather than from a mirror. This puts a lot more load on koji, and also on users, who have to figure out how to keep an old box going.

Aukey HDMI-VGA converter

Posted by Alvaro Castillo on March 13, 2018 03:41 PM


We bought an HDMI-VGA converter through Amazon, because we have a laptop that has no VGA output, only one HDMI port and two micro-HDMI ports. We just wanted to share how well this device works for us, which is why we recommend buying it. It is 100% compatible with Linux. The converter comes in a thin cardboard box, with a small manual, a sticker, and the converter itself, with the HDMI port covered by a hard plastic protector to prevent damage.


[F28] Take part in the test day dedicated to internationalization

Posted by Charles-Antoine Couret on March 13, 2018 12:49 PM

Today, Tuesday March 13, is a day dedicated to a specific kind of testing: Fedora's internationalization. During the development cycle, the quality assurance team dedicates a few days to certain components or new features in order to uncover as many problems as possible on the subject.

It also provides a list of specific tests to run. You just have to follow them, compare your result with the expected result, and report it.

What does this test day cover?

As with every Fedora release, updating its tools often brings new strings to translate and new tools related to language support (particularly for Asian languages).

To encourage the use of Fedora in every country of the world, it is best to make sure that everything related to Fedora's internationalization is tested and works. Notably because part of it must already work on the installation LiveCD (that is, without updates).

Today's tests cover:

  • That ibus works correctly for keyboard input handling;
  • Font customization;
  • Automatic installation of language packs for installed software, based on the system language;
  • That applications are translated by default;
  • The new Chinese, Japanese and Korean Google Noto fonts (a Fedora 28 change);
  • Synchronization of glibc with the latest CLDR standards (a Fedora 28 change);
  • The IBus dialog for picking an emoji from its annotation and its Unicode description (a Fedora 28 change);
  • Testing Fontconfig 2.13 (a Fedora 28 change).

Of course, given these criteria, unless you know a Chinese language, not all of the tests can necessarily be carried out. But as French speakers, many of these issues concern us, and reporting problems is important. After all, the other language communities are not the ones who will identify integration problems with the French language.

How to take part?

You can go to the test day page to list the available tests and report your results. The wiki page summarizes how the day is organized.

If you need help while running the tests, feel free to drop by IRC for a hand on the #fedora-test-days and #fedora-fr channels (in English and French respectively) on the Freenode server.

If you find a bug, you need to report it on Bugzilla. If you don't know how, don't hesitate to consult the corresponding documentation.

Moreover, even though a single day is dedicated to these tests, you can still run them a few days later without any problem! The results will remain broadly relevant.

Mastering Squid: Installation

Posted by Alvaro Castillo on March 13, 2018 01:30 AM


Welcome to Mastering Squid! This is an extended course on how to use Squid, split across several posts to make it easier to read and more organized. Fair warning: it will be a fairly long course, and we hope you make the most of it. Enjoy the first post: the installation.

Squid logo

What is a proxy?

It is software that facilitates communication between a client and a server without modifying requests or responses. When we initiate a request to a...

Teaching an old dog new tricks

Posted by Dan Walsh on March 12, 2018 08:16 PM

I have been working on SELinux for over 15 years.  I switched my primary job to working on containers several years ago, but one of the first things I did with containers was to add SELinux support.  Now all of the container projects I work on including CRI-O, Podman, Buildah as well as Docker, Moby, Rocket, runc, systemd-nspawn, lxc ... all have SELinux support.  I also maintain the container-selinux policy package which all of these container runtimes rely on.

Anyway, container runtimes started adding the no-new-privileges capability a couple of years ago.


The no_new_privs kernel feature works as follows:

  • Processes set no_new_privs bit in kernel that persists across fork, clone, & exec.
  • no_new_privs bit ensures process/children processes do not gain any additional privileges.
  • Processes aren't allowed to unset the no_new_privs bit once it is set.
  • no_new_privs processes are not allowed to change uid/gid or gain any other capabilities, even if the process executes setuid binaries or executables with file capability bits set.
  • no_new_privs prevents Linux Security Modules (LSMs) like SELinux from transitioning to process labels that have access not allowed to the current process. This means an SELinux process is only allowed to transition to a process type with less privileges.
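The bit is visible in /proc, so it can be demonstrated from a shell. A small sketch (assumes setpriv from util-linux is installed):

```shell
# The NoNewPrivs field of /proc/<pid>/status reports the bit; 0 means not set
grep NoNewPrivs /proc/self/status

# Run a child with the bit set; it and anything it execs report NoNewPrivs: 1
setpriv --no-new-privs sh -c 'grep NoNewPrivs /proc/self/status'
```

Inside that child shell, setuid binaries no longer elevate privileges.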

Oops, that last item is a problem for containers and SELinux.  If I am running a command like

# podman run -ti --security-opt no-new-privileges fedora sh

On an SELinux system, the podman command would usually be running as unconfined_t, and podman usually asks for the container process to be launched as container_t.


docker run -ti --security-opt no-new-privileges fedora sh

In the case of Docker, the docker daemon is usually running as container_runtime_t, and it will attempt to launch the container as container_t.

But the user also asked for no-new-privileges. If both flags are set the kernel would not allow the process to transition from unconfined_t -> container_t. And in the Docker case the kernel would not allow a transition from container_runtime_t -> container_t.

Well, you may say that is pretty dumb.  no_new_privs is supposed to be a security measure that prevents a process from gaining further privileges, but in this case it is actually preventing us from lessening the process's SELinux access.

Well, the SELinux kernel and policy have long had the concept of "typebounds", where a policy writer can declare that one type bounds another.  For example:

typebounds container_runtime_t container_t;

With that rule, the kernel makes sure that container_t has no more allow rules than container_runtime_t.  This concept proved to be problematic for two reasons.


Writing policy for the typebounds was very difficult, and in some cases we would have to add additional access to the bounding type.  An example of this: SELinux can control the `entrypoint` of a process.  For example, we write policy that says httpd_t can only be entered via executables labeled with the entrypoint type httpd_exec_t.  We also had a rule that container_runtime_t can only be entered via the entrypoint type container_runtime_exec_t.  But since we wanted to allow any process to be run inside of a container, we wrote a rule that all executable types could be used as entrypoints to container_t.  With typebounds we would need to add all of these rules to container_runtime_t, meaning we would have to allow all executables to run as container_runtime_t.  Not ideal.

The second problem is that the kernel and policy only allow a single typebounds per type.  So if we wanted to allow unconfined_t processes to launch container_t processes, we would end up writing rules like

typebounds unconfined_t container_runtime_t
typebounds container_runtime_t container_t.

Now unconfined_t would need to grow all of the allow rules of container_runtime_t and container_t.



Well, I was complaining about this to Lucas Vrabec, the guy who took over selinux-policy from me, and he told me about a new allow rule called nnp_transition.  The policy writer can write a rule into policy saying that a process may nnp_transition from one domain to another.

allow container_runtime_t confined_t:process2 nnp_transition;
allow unconfined_t confined_t:process2 nnp_transition;

With a recent enough kernel, SELinux would allow the transition even if the no_new_privs kernel flag was set, and the typebounds rules were NOT in place.  

Boy, did I feel like an SELinux newbie.  I added the rules on Fedora 27 and suddenly everything started working.  As of RHEL 7.5, this feature will be backported into the RHEL 7.5 kernel.  Awesome.


While I was looking at the nnp_transition rules, I noticed that there was also a nosuid_transition permission.  The nosuid flag lets people mount a file system with nosuid; this tells the kernel that even if a setuid application exists on that file system, the kernel should ignore it and not allow a process to gain privilege via the file.  You always want untrusted file systems like USB sticks mounted with this flag.  SELinux systems similarly ignore transition rules for labels on a nosuid file system.  As with no_new_privs, this blocks a process from transitioning from a privileged domain to a less privileged domain.  But the nosuid_transition permission allows us to tell the kernel to allow transitions from one domain to another even if the file system is mounted nosuid.

allow container_runtime_t confined_t:process2 nosuid_transition;
allow unconfined_t container_t:process2 nosuid_transition;

This means that even if a user used podman to execute a file on a nosuid file system, the process would still be allowed to transition from unconfined_t to container_t.

Well, it is nice to know there are still new things I can learn about SELinux.

Fedora and childish ideas .

Posted by mythcat on March 12, 2018 07:28 PM
The marketing, design and promotion of any product is a key element of success.
I have to admit that although I am not an active member of the Fedora distribution teams, I am glad to be able to help where needed.
Lately I have spent my online time playing Roblox with my son (because he is away from me), trying to show him what the computer can do for people.
This game allows development with the Lua programming language and lets users create objects.
The idea is that programs and games are essential factors in our lives.
Also, since I have been using Fedora for a long time, I allowed myself to act on an idea to promote the Fedora distribution.
This is a shirt created with the Fedora logo that can be worn in the Roblox game. It can be found here.

Why Scaleway?

Posted by Guillaume Kulakowski on March 12, 2018 05:45 PM

Recently this blog had to find a new home. I chose Scaleway as the VPS provider. I won't go hunting for arguments in every direction; what clinched it was the price!

Here is a small report after one month of use.

Advantages:

  • The price: cheaper than anything else.
  • It's open: there are APIs and clients.
  • Billing is per minute, which is ideal for testing something.
  • The choice of products (x86_64, ARM, ARM64).
  • The choice of distros.
  • The web interface, which lets you do lots of things and connect directly via a virtual KVM.

Drawbacks:

  • Stock... You often have to wait for someone to free up a VPS before you can spin up a new one.
  • Performance: benchmarks show Scaleway's VPSes are less performant than OVH's.
  • The way hard drives are managed: they are added as stacked volumes.
  • Not having the distro's official kernel... But you get a mainline kernel, which is also nice ;-).

At first I went with a simple starter instance with 2 GB of RAM (VC1S), which handled my stack just fine: Apache 2.4 / PHP 7.2 / MariaDB 10.1 via Docker. Then I started my GitLab container... and I must admit I had a world of trouble keeping GitLab under 1.5 GB, but fortunately StackOverflow was there...

So I had to take a snapshot of my VM and boot that snapshot on a VPS with 4 GB of RAM (VC1M), and presto, I had a 4 GB server. With the optimizations, the GitLab container runs perfectly, and I will be able to evolve my architecture (Varnish for WordPress, for example).

A follow up on Fedora 28’s background art

Posted by Máirín Duffy on March 12, 2018 12:04 PM

A quick post – I have a 4k higher-quality render of one of Fedora 28 background candidates mentioned in a recent post about the Fedora 28 background design process. Click on the image below to grab it if you would like to try / test it and hopefully give some feedback on it:

3D render of the Fedora logo in blue fiber optic light strands against a black background. Image is angled with some blur and bokeh effects. the angling of this version is such that it comes from below and looks up.

One of the suggestions I’ve received from your feedback is to try to vary the height between the ‘f’ and the infinity symbol so they stand out. I’m hoping to find some time this week to figure out how exactly to do that (I’m a Blender newbie 😳), but if you want to try your hand, the Blender source file is available.

Continuous integration in Fedora

Posted by Fedora Magazine on March 12, 2018 08:00 AM

Continuous Integration (CI) is the process of running tests for every change made to a project, integrated as if this were the new deliverable. If done consistently, it means that software is always ready to be released. CI is a very well established process across the entire IT industry as well as free and open source projects. Fedora has been a little behind on this, but we’re catching up. Read below to find out how.

Why do we need this?

CI will improve Fedora all around. It provides a more stable and consistent operating system by revealing bugs as early as possible. It lets you add tests when you encounter an issue so it doesn’t happen again (avoid regressions). CI can run tests from the upstream project as well as Fedora-specific ones that test the integration of the application in the distribution.

Above all, consistent CI allows automation and reduced manual labor. It frees up our valuable volunteers and contributors to spend more time on new things for Fedora.

How will it look?

For starters, we’ll run tests for every commit to git repositories of Fedora’s packages (dist-git). These tests are independent of the tests each of these packages run when built. However, they test the functionality of the package in an environment as close as possible to what Fedora’s users run. In addition to package-specific tests, Fedora also runs some distribution-wide tests, such as upgrade testing from F27 to F28 or rawhide.

Packages are “gated” based on test results: test failures prevent an update being pushed to users. However, sometimes tests fail for various reasons. Perhaps the tests themselves are wrong, or not up to date with the software. Or perhaps an infrastructure issue occurred and prevented the tests from running correctly. Maintainers will be able to re-trigger the tests or waive their results until the tests are updated.

Eventually, Fedora’s CI will run tests when a new pull-request is opened or updated on https://src.fedoraproject.org. This will give maintainers information about the impact of the proposed change on the stability of the package, and help them decide how to proceed.

What do we have today?

Currently, a CI pipeline runs tests on packages that are part of Fedora Atomic Host. Other packages can have tests in dist-git, but they won’t be run automatically yet. Distribution-specific tests already run on all of our packages. These test results are used to gate packages with failures.

How do I get involved?

The best way to get started is to read the documentation about Continuous Integration in Fedora. You should get familiar with the Standard Test Interface, which describes a lot of the terminology as well as how to write tests and use existing ones.
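As a concrete illustration, a minimal Standard Test Interface test is an Ansible playbook stored at tests/tests.yml in the package's dist-git repository. The sketch below follows the STI conventions with the standard-test-basic role; the test name and script are illustrative placeholders, not from any particular package:

```yaml
# tests/tests.yml -- minimal Standard Test Interface playbook (sketch)
# The "smoke" test name and ./runtests.sh script are illustrative.
- hosts: localhost
  roles:
  - role: standard-test-basic
    tags:
    - classic
    tests:
    - smoke:
        dir: .
        run: ./runtests.sh
```

The "classic" tag marks the environment the test supports; the STI documentation describes the other tags (such as container and atomic) and how results are reported.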

With this knowledge, if you’re a package maintainer you can start adding tests to your packages. You can run them on your local machine or in a virtual machine. (The latter is advisable for destructive tests!)

The Standard Test Interface makes testing consistent. As a result, you can easily add any tests to a package you like, and submit them to the maintainers in a pull-request on its repository.

Reach out on #fedora-ci on irc.freenode.net with feedback, questions or for a general discussion on CI.

Photo by Samuel Zeller on Unsplash

Episode 87 - Chat with Let's Encrypt co-founder Josh Aas

Posted by Open Source Security Podcast on March 11, 2018 11:00 PM
Josh and Kurt talk about Let's Encrypt with co-founder Josh Aas. We discuss the past, present, and future of the project.
(Embedded audio player: https://html5-player.libsyn.com/embed/episode/id/6353715)

Show Notes

Screenshot, March 11, 2018

Posted by Petr Šabata on March 11, 2018 06:42 PM

I was in the mood to tweak my window manager config and features again. The colour scheme is still base16-chalk, but I switched to Source Code Pro as the main font. I also started using the tilegap and swallow dwm patches, the latter of which needed some extra patching.

The screenshot features dwm 6.1, xterm 330, mutt 1.9.2, weechat 2.0.1, google-chrome 65.0.3325.146, vim 8.0.1553, portage 2.3.19, screenfetch 3.7.0 & 3.8.0, and zsh 5.4.1.

And yes, I’m looking into Rust. Seems fun so far.

Fedora 27 Release Party - Coimbatore

Posted by Sachin Kamath on March 11, 2018 05:32 PM

As part of Anokha Tech Fest, a 2 day FOSS workshop was hosted in Amrita School of Engineering on 22nd and 23rd of February along with a Fedora Release party on the second day. Around 40 people from various years and disciplines attended the event to learn about Linux, Open …

Quick analysis of a Linux system

Posted by Luca Ciavatta on March 11, 2018 10:00 AM

Sometimes you need to take a look at an unknown Linux system, or quickly diagnose problems on a Linux server. Fortunately, Linux provides a set of commands to start an in-depth analysis and understand what’s happening behind the scenes.



Look, administer, analyze: going deep with the command line interface

Not all of the commands described here are installed by default in every Linux distribution, but many are part of the GNU Coreutils package, so a good part of this analysis toolkit is already on your system. The first piece of advice is to try running them on the systems (servers or desktop PCs) you normally administer and install any that are missing. It is also worth including these commands and their packages in your standard installations, because sooner or later they all come in handy.

If you want to go deeper into Linux system analysis, I recommend reading Linux Performance Analysis, an excellent article by Brendan D. Gregg that lists the commands useful for identifying possible problems. His page also includes tool maps and links to various Linux performance material he has created.


$ uptime
23:51:26 up 21:31, 1 user, load average: 30.02, 26.43, 19.02

The time the system has been up and running. This is also a quick way to view the load averages, which indicate the number of tasks (processes) wanting to run. The three numbers are exponentially damped moving averages with 1-minute, 5-minute, and 15-minute time constants.
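The same three numbers can also be read programmatically; for instance, a quick Python sketch using the standard library:

```python
import os

# os.getloadavg() returns the 1-, 5-, and 15-minute load averages,
# the same numbers shown by `uptime` (read from /proc/loadavg on Linux)
one, five, fifteen = os.getloadavg()
print(f"load averages: {one:.2f} {five:.2f} {fifteen:.2f}")
```

This is handy for monitoring scripts that want to react when the load climbs above the number of CPUs.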


$ date
So 11. Mär 11:19:02 CET 2018

A useful command to check whether the system has the correct time and which time zone is in use. This helps you avoid misinterpreting logs, and lets you check whether a problem is actually due to an incorrectly set system clock.


$ uname -a
Linux razen 4.15.0-10-generic #11-Ubuntu SMP Tue Feb 13 18:23:35 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

A quick look at the version of Linux that is running, the host name, and the processor family. If you want more information about the hardware of the system in use, you can use other, more specific commands. Take a look at this reference: What is the Linux command to find out hardware info?.


$ ps ax
1 ? Ss 0:03 /sbin/init splash
2 ? S 0:00 [kthreadd]
4 ? I< 0:00 [kworker/0:0H]
6 ? I< 0:00 [mm_percpu_wq]
7 ? S 0:00 [ksoftirqd/0]
8 ? I 0:02 [rcu_sched]
9 ? I 0:00 [rcu_bh]
10 ? S 0:00 [migration/0]

A command to get an idea of what is going on inside. With this command, you can catch simple problems early. The ‘a’ option tells ‘ps’ to list the processes of all users on the system rather than just those of the current user, and the ‘x’ option tells it to also include processes not attached to a terminal, such as daemons.
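Under the hood, ‘ps’ builds its listing from the /proc filesystem; a minimal Python sketch of the same idea (every numeric directory under /proc is a process ID):

```python
import os

# list the first few PIDs and their command names, the raw data `ps` uses
for pid in sorted(int(d) for d in os.listdir('/proc') if d.isdigit())[:5]:
    try:
        with open(f'/proc/{pid}/comm') as f:
            print(pid, f.read().strip())
    except FileNotFoundError:
        pass  # the process exited between listing and reading
```

The try/except matters: processes can vanish between the directory listing and the read.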


$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 3,9G 0 3,9G 0% /dev
tmpfs 788M 1,9M 787M 1% /run
/dev/sda2 234G 165G 57G 75% /
tmpfs 3,9G 91M 3,8G 3% /dev/shm
tmpfs 5,0M 4,0K 5,0M 1% /run/lock
tmpfs 3,9G 0 3,9G 0% /sys/fs/cgroup
/dev/sda1 511M 4,7M 507M 1% /boot/efi
tmpfs 788M 16K 788M 1% /run/user/121
tmpfs 788M 2,1M 786M 1% /run/user/1000
/dev/fuse 250G 0 250G 0% /run/user/1000/keybase/kbfs

The command ‘df’ displays statistics about the amount of free disk space on each specified filesystem, or on the filesystem containing each given file. So, how much free space do we have? Quite a few problems arise from (almost) full filesystems.


$ free -m
total used free shared buff/cache available
Mem: 7879 2829 2917 462 2132 5103
Swap: 10239 0 10239

The command ‘free’ displays the total amount of free and used physical and swap memory in the system, as well as the buffers and caches used by the kernel. The available value indicates the memory that can be used to start new programs without the swap intervening. If you prefer human-readable values (gigabytes where appropriate), use ‘free -h’.
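These numbers come straight from /proc/meminfo; a small Python sketch that extracts the two most interesting fields:

```python
# 'free' reads /proc/meminfo; MemAvailable is the kernel's estimate of
# memory available for new programs without swapping
info = {}
with open('/proc/meminfo') as f:
    for line in f:
        key, value = line.split(':')
        info[key] = int(value.split()[0])  # values are in KiB

print(f"total: {info['MemTotal'] // 1024} MiB, "
      f"available: {info['MemAvailable'] // 1024} MiB")
```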


$ top
Tasks: 244 total, 1 running, 191 sleeping, 0 stopped, 0 zombie
%Cpu(s): 6,0 us, 2,4 sy, 0,0 ni, 91,6 id, 0,0 wa, 0,0 hi, 0,0 si, 0,0 st
KiB Mem : 8068856 total, 2887832 free, 3017140 used, 2163884 buff/cache
KiB Swap: 10485756 total, 10485756 free, 0 used. 5158896 avail Mem

3344 cialu 20 0 2474952 611800 192800 S 15,2 7,6 11:45.61 Web Content
2344 cialu 20 0 3514540 225424 100184 S 4,6 2,8 5:16.10 gnome-shell
3618 cialu 20 0 2535564 432428 128776 S 4,6 5,4 3:40.96 Web Content
3102 cialu 20 0 3069176 670060 222340 S 3,3 8,3 9:40.47 firefox

The ‘top’ command includes many metrics and continuously checks the load, providing a dynamic real-time view of a running system. It can display system summary information as well as a list of the processes or threads currently being managed by the Linux kernel.


$ dmesg
[ 37.224664] Bluetooth: RFCOMM ver 1.11
[ 38.179168] rfkill: input handler disabled
[ 159.022215] show_signal_msg: 20 callbacks suppressed
[ 159.022216] deja-dup-monito[3555]: segfault at bbadbeef ip 00007f82cdbfe0b8 sp 00007ffe71ce1930 error 6 in libjavascriptcoregtk-4.0.so.18.7.7[7f82cce41000+fc9000]

Invoking ‘dmesg’ without any options makes it write out all the kernel-related messages. As that output doesn’t fit a single terminal page, you can pipe ‘dmesg’ into text-manipulation tools like ‘grep’ or pagers like ‘less’ or ‘more’.
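A quick way to keep only the interesting lines is a simple pattern filter; here is a Python sketch, with a few sample lines standing in for real ‘dmesg’ output (the patterns are just examples):

```python
import re

# sample kernel-log lines; in practice you would feed in `dmesg` output
log = """\
[   38.179168] rfkill: input handler disabled
[  159.022216] deja-dup-monito[3555]: segfault at bbadbeef
[  201.004511] EXT4-fs error (device sda2): bad block
"""

# keep only lines that look like problems
pattern = re.compile(r'error|segfault|warn', re.IGNORECASE)
for line in log.splitlines():
    if pattern.search(line):
        print(line)
```

On recent util-linux versions, ‘dmesg --level=err,warn’ does similar severity filtering natively, and ‘dmesg -T’ prints human-readable timestamps.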


$ w
11:59:20 up 1:19, 1 user, load average: 1,70, 1,42, 1,28
cialu tty2 tty2 10:40 1:19m 47:09 3:41 /usr/lib/firefox

It’s a bit redundant because the ‘w’ command is a combination of several other Unix programs: ‘who’, ‘uptime’ and ‘ps -a’. This command provides a quick summary of every user logged into the system, what each user is currently doing and what load all the activities are inflicting on the computer itself.

The post Quick analysis of a Linux system appeared first on cialu.net.

Seeking a board seat at OpenSource.org

Posted by Harish Pillay 9v1hp on March 11, 2018 09:07 AM

I’ve stepped up to be considered for a seat on the Board of the Open Source Initiative.

Why would I want to do this? Simple: most of my technology-based career has been made possible by the existence of FOSS technologies. It goes all the way back to graduate school (Oregon State University, 1988), where I was able to work on a technology called TCP/IP, which I built for the OS/2 operating system as part of my MSEE thesis. The existence of newsgroups such as comp.os.unix, comp.os.tcpip and many others on usenet gave me a chance to learn, craft and produce networking code that was globally useable. If I had not had access to the code available on the newsgroups, I would have been hard-pressed to complete my thesis work. The licensing of the code then was uncertain and arbitrary and, thinking back, there was not much evidence that one could actually repurpose the code for whatever one wanted.

My subsequent involvement in many things back in Singapore – the formation of the Linux Users’ Group (Singapore) in 1993 and many others since then – was only doable because source code was available for anyone to do with as they pleased and to contribute back to.

Suffice to say, when the Open Source Initiative was set up twenty years ago in 1998, it was a watershed event, as it meant the Free Software movement now had an accompanying, marketing-grade brand. This branding has helped spread the value and benefits of Free/Libre/Open Source Software for one and all.

Twenty years of OSI have helped spread the virtue of licensing code in a manner that benefits recipients, participants and developers alike – a win-win-win. This idea of openly licensing software inspired the formation of the Creative Commons movement, which provides Free Software-like rights, obligations and responsibilities for non-software creations.

I feel that we are now at a very critical time to increase awareness of open source, and we need to build partnerships with people and groups within Asia and Africa around FOSS licensing issues. Collectively, we need to ensure that up-and-coming societies and economies stand to gain from the benefits of collaborative creation, adoption and use of FOSS technologies for the betterment of all.

As an individual living in Singapore (and Asia by extension), being in the technology industry, and given the extensive engagement I have with various entities:

I feel that contributing to OSI would be the next logical step for me. I want to push for a wider adoption and use of critical technology for all to benefit from regardless of their economic standing. We have much more compelling things to consider: open algorithms, artificial intelligence, machine learning etc. These are going to be crucial for societies around the world and open source has to be the foundation that helps build them from an ethical, open and non-discriminatory angle.

With that, I seek your vote for this important role.  Voting ends 16th March 2018.

I’ll be happy to take questions and considerations via twitter or here.

Power.Fake.It: PowerFake + FakeIt

Posted by Hedayat Vatankhah on March 10, 2018 10:41 PM

As I said in the introduction, PowerFake lacked the features of a complete mocking framework, and I was hoping to be able to integrate it with one or more mocking frameworks. So, I decided to try integrating it with FakeIt as the first target.

Thanks to its flexible design using virtual functions and abstract classes, I was able to integrate PowerFake with it nicely, and the result is PowerFakeIt template class. Using it, you can use almost all of FakeIt tools with free functions and non-virtual member functions, effectively extending FakeIt for such use cases.

You still have to use WRAP_FUNCTION macros to mark the desired functions. But instead of using MakeFake() directly, you can use the PowerFakeIt<> class together with FakeIt’s utilities:

PowerFakeIt<> pfk;

When(Function(pfk, normal_func)).Do([](int ){ cout << "WOW :) " << endl; });


Verify(Function(pfk, normal_func).Using(100)).Exactly(1);

PowerFakeIt<SampleClass> pfk2;
When(Method(pfk2, CallThis)).Do([]() { cout << "WOW2" << endl; });
When(OverloadedMethod(pfk2, OverloadedCall, int())).Return(4);
When(ConstOverloadedMethod(pfk2, OverloadedCall, int(int))).Return(5);

SampleClass s;

Verify(Method(pfk2, CallThis)).Exactly(1);
Using(pfk2).Verify(Method(pfk2, CallThis)
+ OverloadedMethod(pfk2, OverloadedCall, int()));

VerifyNoOtherInvocations(Method(pfk2, CallThis));


Going for Virtual Functions

Initially, I didn’t intend to do anything for virtual functions in PowerFake, as I thought they were already covered well enough by mocking frameworks. However, I soon realized that this is not true: you still need to create a mock object and pass it to the code that calls the virtual functions... but it is not always possible to inject your mock object into the production code, because that code might use an internal object for its purpose. In some cases, the virtual function call might even get devirtualized by the compiler, resulting in a direct function call rather than an indirect, virtual one. And PowerFake already supports faking devirtualized calls.

Reading a bit, I figured that I can also cover GCC’s virtual function calls in PowerFake. Now that PowerFake can be used with the mocking utilities of FakeIt, my next priority is supporting virtual function calls. It will fake the virtual functions of the target class, so every object of that class will call the fake function, and there is no need to create and pass a mock object to the target function.

Finally, note that PowerFake is still in the early stages of development, so no backward compatibility is guaranteed and there might be many corner cases which need to be fixed.






Running `make` from anywhere

Posted by James Just James on March 10, 2018 07:00 AM
Sometimes while I’m deep inside mgmt’s project directory, I want to run an operation from the Makefile which lives in the root! Unfortunately, if you do so while nested, you’ll just get:

james@computer:~/code/mgmt/resources$ make build
make: *** No rule to make target 'build'. Stop.

The Ten Minute Solution: I figured I’d hack out a quick solution. What I came up with looks like this:

#!/bin/bash
# James Shubin, 2018
# run `make` in the first directory (or its parent recursively) that it works in
MF='Makefile' # looks for this file, could look for others, but that's silly
CWD=$(pwd) # starting here
while true; do
	if [ -e "$MF" ]; then
		make $@ # run make!

Flathub Experience: Adding an App

Posted by Jiri Eischmann on March 09, 2018 05:14 PM

Flathub is a new distribution channel for Linux desktop apps: truly distro-agnostic, unifying across the abundance of Linux distributions. I had been planning for a long time to add an application to Flathub and see what the experience is like, especially compared to traditional distro packaging (I’m a Fedora packager). And I finally got to it last week.


In Fedora I maintain PhotoQt, a very fast image viewer with a rather unorthodox UI. Its developer is very responsive and open to feedback. Back in 2016 I suggested he provide PhotoQt as a flatpak. He did so, and found making a flatpak really easy. However, that was before Flathub existed, so he had to host his own repo.

Last week I was notified about a new release of PhotoQt, so I prepared updates for Fedora and noticed that the Flatpak support had become “Coming soon” again. So I thought: “hey, let’s get it back, and onto Flathub”. I picked up the two-year-old flatpak manifest and started rewriting it to build successfully with the latest Flatpak and meet the Flathub requirements.

First I updated the dependencies. You add dependencies to the manifest in a pretty elegant way, but what’s really time-consuming is getting checksums of the official archives. Most projects don’t offer them at all, so you have to download the archive and generate the checksum yourself. And you have to do it again with every update of that dependency. I’d love to see a repository of modules. Many apps share the same dependencies, so why do the same work again and again with every manifest?

Need to bundle the latest LibRaw? Go to the repository and pick the module info for your manifest:

"name": "libraw",
"cmake": false,
"builddir": true,
"sources": [ { "type": "archive", "url": "https://www.libraw.org/data/LibRaw-0.18.8.tar.gz", "sha256":"56aca4fd97038923d57d2d17d90aa11d827f1f3d3f1d97e9f5a0d52ff87420e2" } ]

And on top of such a repo you could build really nice tooling. You could let app authors add dependencies simply by picking them from a list, and generate the starting manifest for them. You could also check for dependency updates for them: LibRaw has a new version, want to bundle it and see how your app builds with it? The LibRaw module section of your manifest would be replaced by the new one and a build triggered.

Of course such a repo of modules would have to be curated because one could easily sneak in a malicious module. But it would make writing manifests even easier.

Besides updating the dependencies, I also had to change the required runtime. Back in 2016 KDE only had a testing runtime without any versioning. Flathub now includes KDE runtime 5.10, so I used that. PhotoQt also uses “photoqt” in all of its file names, and Flatpak/Flathub now requires the reverse-DNS format: org.qt.photoqt. Fortunately, flatpak-builder can rename the files for you; you just need to state it in the manifest:

"rename-desktop-file": "photoqt.desktop",
"rename-appdata-file": "photoqt.appdata.xml",
"rename-icon": "photoqt",

Once I was done with the manifest, I looked at the appdata file. PhotoQt’s is in pretty good shape; it was submitted by me when I packaged it for Fedora. But a couple of things required by Flathub were still missing: OARS metadata and release info. So I added them.

I proposed all the changes upstream and at this point PhotoQt was pretty much ready for submitting to Flathub. I never intended to maintain PhotoQt in Flathub myself. There should be a direct line between the app author and users, so apps should be maintained by app authors if possible. I knew that upstream was interested in adding PhotoQt to Flathub, so I contacted the upstream maintainer and asked him whether he wanted to pick it up and go through the Flathub review process himself or whether I should do it and then hand over the maintainership to him. He preferred the former.

The review was pretty quick and it only took 2 days between submitting the app and accepting it to Flathub. There were three minor issues: 1. the reviewer asked if it’s really necessary to give the app access to the whole host, 2. app-id didn’t match the app name in the manifest (case sensitivity), 3. by copy-pasting I added some spaces which broke the appdata file and of course I was too lazy to run validation before submitting it.

And that was it. Now PhotoQt is available in Flathub. I don’t remember exactly how much time it took me to get PhotoQt into Fedora, but I think it was definitely more, and the spec file is more complex than the flatpak manifest, although I prefer the format of spec files to JSON.

Is your favorite app not available in Flathub? Just go ahead, flatpak it, talk to upstream, and try to hand the maintainership over to them.

Fedora IoT Edition is go!

Posted by Peter Robinson on March 09, 2018 02:16 PM

Tap tap tap… is this on?

So the Fedora Council has approved my proposal of IoT as a Council Objective. I did a presentation on my IoT proposal to the council a few weeks ago and we had an interesting and wide ranging discussion on IoT and what it means to Fedora. I was actually expecting IoT to be a Spin with a SIG to cover it but the Council decided it would be best to go the whole way and make it an Official Edition with a Working Group to back it! Amazing! One of the side effects of IoT being an accepted Objective is that the Objective Lead has a seat on the Council.

So I would say the real work starts now, but the reality is that no small amount of work has gone into getting to this point, and there is now a lot more to do to get us to a release. We’re aiming initially for Fedora 29, with the intention of having a lightweight, spin-style process to get things up to speed as quickly as possible between now and then.

So what will be happening over the coming weeks (and months)? We’ll get the working group in place and set up an initial monthly release process, so that people have something to kick the tires on, provide feedback, and drive discussion. With those two big pieces in place we can start to grow the Fedora IoT community and work out which bits work and which don’t. Iterate early and iterate often, as is often said!

So of course the big question is how do you get involved? We’ll be tracking all of the Working Group efforts in a number of places:

  • Fedora IoT Pagure Group: We’ll be using this for issue tracking, release milestones, and for git repositories to contain things like container recipes.
  • Fedora IoT mailing list: If you don’t have a FAS account you can subscribe by emailing (blank is fine) iot-subscribe AT lists.fedoraproject.org and the list server will reply with subscription options.
  • IRC on #fedora-iot
  • Fedora IoT Tracking bug: This will be primarily for tracking dependencies and component RFEs and issues.

The above list will change and evolve as we go, I expect the pagure group, mailing list and IRC to be the primary places of communication. There will of course be updates also on this blog, no doubt Fedora Magazine, FedoraIoT on twitter and elsewhere.

What will there be to do? Lots, and it’s still obviously in flux at the moment. The things that come to mind that we’ll definitely need to address include, but certainly won’t be limited to: awesome docs; the actual OSTree Atomic host image, which will be the key foundation; CI/CD pipelines to automate testing as much as possible; release processes, including landing features once they’re ready; containers and layers to add functionality; a selection of supported reference devices (see also CI/CD in this context); various IoT frameworks; hardware enablement such as wireless standards, distinct from the supported reference hardware; security (a single word can’t even begin to describe this iceberg!); and developer experience, to name but a few. And there’s so much more! Is everyone excited? Of course you are!

No podemos montar My Passport WD en Linux

Posted by Alvaro Castillo on March 09, 2018 01:25 PM


Someone left us a My Passport WD hard drive to copy some files onto, and we had the pleasant surprise of running into a mount failure: the system didn't recognize the device. Since we worked out how to fix it, we're publishing the workaround.

Why doesn't the system recognize the filesystem?

It turns out that Western Digital My Passport drives come with a format called exFAT, a format developed by Microsoft that improves the performance of flash drives (a sta...

Improve your Python projects with mypy

Posted by Fedora Magazine on March 09, 2018 08:00 AM

The mypy utility is a static type checker for Python. It combines the benefits of dynamic typing and static typing. As you may know, the Python programming language is dynamically typed. So what does static type checking for Python mean? Read below to find out.

What is a type?

To store the data used by a program, the system needs to know how much memory to allocate. To determine that, programming languages use types. A type corresponds to the amount of memory that must be allocated to store the data. Some of the most common types are integer, float, and string.

A dynamically typed language checks types while the program runs. Python is dynamically typed: the interpreter doesn’t check types ahead of time, so type errors only surface when the offending code executes.

A statically typed language checks types by analyzing the source code before the program runs. When a program passes a static type check, it is guaranteed to satisfy a set of type safety properties. Static type checking detects possible errors in the application code before run time. This is why mypy, which statically type checks Python applications, is useful.
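The difference is easy to see: with dynamic typing, a type mismatch only shows up when the faulty line actually runs. A small illustrative example:

```python
def double(x):
    return x * 2

print(double(21))      # works with an int
print(double([1, 2]))  # also works: lists support * too

try:
    double(21) + "!"   # only now does the int/str mismatch blow up
except TypeError as e:
    print("run-time TypeError:", e)
```

A static checker like mypy flags the last expression before the program ever runs.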

Installation and running mypy

Since mypy is packaged in Fedora, installation is easy:

$ dnf install python3-mypy

Now, create a simple Python application to test and understand how mypy works.

class Person():
    def __init__(self,surname,firstname,age,job):
        self.surname = surname
        self.firstname = firstname
        self.age = age
        self.job = job

def display_doctors(persons):
    for person in persons:
        if person.job.lower() in ['gp', 'dentist', 'cardiologist']:
            print(f'{person.surname} {person.name}')

mike = Person('Davis', 'Mike', '45', 'dentist')
john = Person('Roberts', 'John', 21, 'teacher')
lee = Person('Willams', 'Lee', 'gp', 56)

display_doctors([mike, john, 'lee'])

Save this code snippet into a file named testing_mypy.py. Next, run mypy against the test code:

$ mypy testing_mypy.py

Note that mypy is permissive by default: when you run it against the example, no errors are returned. This default is useful for apps with a large code base, since you can introduce mypy into your project gradually.

Adding type hints

Using the default Python 3.6 in Fedora 27 and up, you can use type hints to annotate your code. mypy can then check the application against these type hints. Next, use an editor to add type hints to the display_doctors function in the example program:

from typing import List

def display_doctors(persons: List[Person]) -> None:
    for person in persons:
        if person.job.lower() in ['gp', 'dentist', 'cardiologist']:
            print(f'{person.surname} {person.name}')

The example adds the following hints:

  • List[Person] – This syntax expresses that the display_doctors function expects a list of Person objects as its argument.
  • -> None – This syntax specifies that the function returns None.

Now, run mypy again:

$ mypy testing_mypy.py
testing_mypy.py:19: error: "Person" has no attribute "name"
testing_mypy.py:26: error: Argument 1 to "display_doctors" has incompatible type "Person"; expected "List[Person]"
testing_mypy.py:27: error: List item 2 has incompatible type "str"; expected "Person"

This results in some errors, which you can fix as follows:

  • First, in the print statement the program tries to access person.name, which does not exist. Instead, the program should use person.firstname.
  • The second error occurs when the program calls display_doctors for the first time. The expected argument is a list of Person, but the example only passes a Person.
  • Finally, the last error is due to a mistake in the list of Person. Instead of adding the Person object lee to the list, the example app has added the string ‘lee’.

Here are the relevant fixes for the example program:

def display_doctors(persons: List[Person]) -> None:
    for person in persons:
        if person.job.lower() in ['gp', 'dentist', 'cardiologist']:
            print(f'{person.surname} {person.firstname}')

display_doctors([mike, john, lee])

Edit the code as above, and run mypy again.

There are other errors in the program; finding them is left as an exercise for the reader. Look at using type hints for the Person class. Try it yourself first. If you want to check your work, the error-free program is available on GitHub.
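As a starting point, here is one way the Person class could be annotated (these are my own annotation choices, not necessarily the article's final version on GitHub):

```python
class Person:
    def __init__(self, surname: str, firstname: str, age: int, job: str) -> None:
        self.surname = surname
        self.firstname = firstname
        self.age = age
        self.job = job

# with these hints, mypy flags calls like Person('Davis', 'Mike', '45', 'dentist'),
# where the age is passed as a string instead of an int
mike = Person('Davis', 'Mike', 45, 'dentist')
print(mike.surname, mike.firstname)
```

Annotating the constructor is enough for mypy to infer the types of the instance attributes throughout the program.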


Hopefully this short introduction to mypy shows you some of its benefits for your code base. mypy makes your code easier to maintain and your application code easier to read, and you can catch typos and mistakes before your code runs. Note that mypy is also available for Python 2.7 applications; however, it uses a different, comment-based syntax.
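For reference, the Python 2.7 comment-based syntax looks like this: the annotations live in "# type:" comments, so the code itself runs unchanged on either Python version (a sketch reusing the article's example):

```python
from typing import List

class Person(object):
    def __init__(self, surname, firstname, age, job):
        # type: (str, str, int, str) -> None
        self.surname = surname
        self.firstname = firstname
        self.age = age
        self.job = job

def display_doctors(persons):
    # type: (List[Person]) -> None
    # mypy reads the "# type:" comments just like real annotations
    for person in persons:
        if person.job.lower() in ['gp', 'dentist', 'cardiologist']:
            print('%s %s' % (person.surname, person.firstname))

display_doctors([Person('Davis', 'Mike', 45, 'dentist')])
```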

Photo by Chris Ried on Unsplash

IBus 1.5.18 is released (Cont #2)

Posted by Takao Fujiwara on March 09, 2018 07:29 AM

I wrote “IBus 1.5.18 is released” last time; here is some additional information.

Previously I could not take screencasts longer than 30 seconds in GNOME; recently I found the GSettings key ‘max-screencast-length’ in ‘org.gnome.settings-daemon.plugins.media-keys’.

% gsettings get org.gnome.settings-daemon.plugins.media-keys max-screencast-length
uint32 30
% gsettings set org.gnome.settings-daemon.plugins.media-keys max-screencast-length 0

Now I can take the screencast of IBus unicode typing in GNOME Wayland without the 30 seconds limitation.
(Embedded video: https://www.youtube.com/embed/_Bov7tJFJOg)

To enable the global shortcut key Ctrl-Shift-e in GNOME (it can be changed with the ibus-setup utility), you have to add at least one IBus input source besides the XKB sources, using the gnome-control-center Region panel. E.g. if you have “English (US)” and “French”, you also need to add “English (English – US (Typing Booster))”, “Other (rawcode)”, “Chinese (Intelligent Pinyin)” or similar, so that IBus engines apply to the “English (US)” and “French” XKB sources.

Fedora ibus-1.5.18-2.fc28 was rebuilt for CLDR emoji 33 alpha and Unicode emoji 11.0 alpha.

# dnf update ibus

Bug Squashing and Diversity

Posted by Daniel Pocock on March 09, 2018 12:39 AM

Over the weekend, I was fortunate enough to visit Tirana again for their first Debian Bug Squashing Party.

Every time I go there, female developers (this is a hotspot of diversity) ask me if they can host the next Mini DebConf for Women. There have already been two of these very successful events, in Barcelona and Bucharest. It is not my decision to make though: anybody can host a MiniDebConf of any kind, anywhere, at any time. I've encouraged the women in Tirana to reach out to some of the previous speakers personally to scope potential dates and contact the DPL directly about funding for necessary expenses like travel.

The confession

If you have read Elena's blog post today, you might have seen my name and picture and assumed that I did a lot of the work. As it is International Women's Day, it seems like an opportune time to admit that isn't true and that as in many of the events in the Balkans, the bulk of the work was done by women. In fact, I only bought my ticket to go there at the last minute.

When I arrived, Izabela Bakollari and Anisa Kuci were already at the venue getting everything ready. They looked busy, so I asked them if they would like a bonus responsibility: presenting some slides about bug squashing that they had never seen before, while translating them into Albanian in real time. They delivered the presentation superbly; it was more entertaining than any TED talk I've ever seen.

The bugs that won't let you sleep

The event was boosted by a large contingent of Kosovans, including 15 more women. They had all pried themselves out of bed at 03:00 am to take the first bus to Tirana. It's rare to see such enthusiasm for bugs amongst developers anywhere but it was no surprise to me: most of them had been at the hackathon for girls in Prizren last year, where many of them encountered free software development processes for the first time, working long hours throughout the weekend in the summer heat.

And a celebrity guest

A major highlight of the event was the presence of Jona Azizaj, a Fedora contributor who is very proactive in supporting all the communities who engage with people in the Balkans, including all the recent Debian events there. Jona is one of the finalists for Red Hat's Women in Open Source Award. Jona was a virtual speaker at DebConf17 last year, helping me demonstrate a call from the Fedora community WebRTC service fedrtc.org to the Debian equivalent, rtc.debian.org. At Mini DebConf Prishtina, where fifty percent of talks were delivered by women, I invited Jona on stage and challenged her to contemplate being a speaker at Red Hat Summit. Giving a talk there seemed like little more than a pipe dream just a few months ago in Prishtina: as a finalist for this prestigious award, her odds have shortened dramatically. It is so inspiring that a collaboration between free software communities helps build such fantastic leaders.

With results like this in the Balkans, you may think the diversity problem has been solved there. In reality, while the ratio of female participants may be more natural, they still face problems that are familiar to women anywhere.

One of the greatest highlights of my own visits to the region has been listening to some of the challenges these women have faced, things that I never encountered or even imagined as the stereotypical privileged white male. Yet despite enormous social, cultural and economic differences, while I was sitting in the heat of the summer in Prizren last year, it was not unlike my own time as a student in Australia and the enthusiasm and motivation of these young women discovering new technologies was just as familiar to me as the climate.

Hopefully more people will be able to listen to what they have to say if Jona wins the Red Hat award or if a Mini DebConf for Women goes ahead in the Balkans (subscribe before posting).

Modularity features in Mock

Posted by Jakub Kadlčík on March 09, 2018 12:00 AM

In this article, we are going to talk about Mock and how to build packages on top of modules in it. While some cool use-cases come to mind, for the time being, let’s call it an experiment.


Mock uses configs for defining the build environment. We are going to take one of the preinstalled configurations and modify it. But which one to take? In the past, one of the modularity goals was to have modular-only buildroot, so in this case, the custom-1-x86_64 (or another architecture) was the best choice. The idea has changed and now we want to build modules (i.e. their packages) in standard Fedora buildroot, such as fedora-28-x86_64. Let’s start from there.

cp /etc/mock/fedora-28-x86_64.cfg ./modular-fedora-28-x86_64.cfg
vim ./modular-fedora-28-x86_64.cfg


Things are a little complicated because DNF with modularity features is not in Fedora yet, nor is it in upstream. We need to get it from a Copr repository - mhatina/dnf-modularity-stable. If you are brave enough, you can install it on your host system, but we are going to talk about a better option here.

Open the project in your browser and click the appropriate repofile for your buildroot (in our example, the fedora-28-x86_64 one). Now we are going to add it to our Mock config.

config_opts['yum.conf'] = """
...
<-------- We are going to add the repo here
"""

The repo entry will look like this (the section header follows the usual owner-project naming of Copr repofiles):

[mhatina-dnf-modularity-stable]
name = Copr repo for dnf-modularity-stable owned by mhatina
baseurl = https://copr-be.cloud.fedoraproject.org/results/mhatina/dnf-modularity-stable/fedora-$releasever-$basearch/
enabled = 1

I’ve removed some less necessary lines from the original repofile to keep the example short, but you can use it without any changes just fine.

As a consequence of the modularity code not being merged into DNF upstream, it can happen that the dnf package in Fedora has a greater version than the dnf package built in the Copr repo. Naturally, the greater version is going to be installed into the Mock buildroot, so we have to do a little trick. Find the updates repo and exclude the dnf package from it.
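The tweak itself is small: inside the `config_opts['yum.conf']` string, locate the `[updates]` section and add an `exclude` line (only the `exclude` line is the addition; the other repo options stay as they are):

```ini
[updates]
# ... original repo options unchanged ...
# prevent the newer Fedora dnf from shadowing the Copr build:
exclude=dnf
```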


Now we can be sure that the proper DNF will be installed. However, another trick is going to be needed. Consider this: the DNF is going to be installed, but will it be used to create the rest of the buildroot? Actually not; the DNF from your host is going to be used for that. We need to add the following line to the top of the config file:

config_opts['use_bootstrap_container'] = True

This way Mock creates a minimal buildroot containing the DNF with modularity features, and then operates within this buildroot to create the final buildroot in which the package is going to be built. That means the dnf module ... command will be available while creating the final buildroot. A bit complicated, right? Fortunately, we don’t have to worry about this too much, and can just configure the bootstrap container to be used.

Module in buildroot

First of all, we need to have some module built - either in Koji or Copr. Have you read the recent article about How to build modules in Copr? It’s pretty cool! Let’s assume that we have the httpd:master module from that article built in Copr. Now we can add its repository into the buildroot the same way as the DNF before.

name = Copr modules repo for frostyx/testmodule/httpd-master-20180128084132
baseurl = ...
enabled = 1

Alright, the module is available in the buildroot. It doesn’t mean that it is going to be installed, though. Scroll up to the top of the configuration file and add:

config_opts['module_install'] = ['httpd:master/default']

It means that Mock will execute dnf module install httpd:master/default while constructing the buildroot.
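Putting the pieces together, the top of the modified config might now contain both options (a sketch; Mock normally defines `config_opts` before loading the file, and it is initialized below only so the fragment stands alone):

```python
# Mock provides config_opts when it loads the config file;
# initialized here only to keep this sketch self-contained.
config_opts = {}

# use the minimal bootstrap buildroot so the modular DNF is the
# one that constructs the final buildroot:
config_opts['use_bootstrap_container'] = True

# run `dnf module install httpd:master/default` during buildroot setup:
config_opts['module_install'] = ['httpd:master/default']
```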

Build the package

The httpd:master module in its default profile provides the httpd package. Now we can have a package dependent on httpd

BuildRequires: httpd

and be able to build it.
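For example, a minimal spec file for a hypothetical package (the name hello-httpd is made up for illustration) only needs the usual boilerplate plus that BuildRequires:

```
Name:           hello-httpd
Version:        1.0
Release:        1%{?dist}
Summary:        Example package built on top of the httpd module
License:        MIT

# satisfied from the httpd:master module enabled above:
BuildRequires:  httpd

%description
Hypothetical package used to demonstrate building against a module.

%build
# verify at build time that httpd is really present in the buildroot
httpd -v

%files
```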

mock -r ./modular-fedora-28-x86_64.cfg /path/to/your/package.src.rpm

Starter Kit - the turn-key template for your own pages

Posted by Cockpit Project on March 09, 2018 12:00 AM

The bare minimum

Cockpit’s API makes it easy to create your own pages (or “extensions” if you will) that appear in Cockpit’s menu and interact with your system in any way you like. Our pet example is the Pinger, which is just the bare minimum: an HTML file with a form to enter an IP, a small piece of JavaScript that calls the ping Linux command through Cockpit spawn() and captures its output, and a manifest file which tells Cockpit how to add it to the menu and where the entry point is.
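For orientation, a manifest for a page like Pinger might look roughly like this (a sketch of the manifest format from memory; the label and the implied index.html entry point are illustrative):

```json
{
    "version": 0,
    "menu": {
        "index": {
            "label": "Pinger"
        }
    }
}
```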

There is a rather old blog post which explains the Pinger example in detail. Cockpit has changed its visual design quite dramatically since then, and Pinger’s JavaScript got split into a separate file and does not use jQuery any more, but aside from these details that post is still generally applicable.

Requirements for real projects

Pinger is great for explaining and understanding the gist of how Cockpit works. But an actual production-ready project requires a lot more:

  • Separation of HTML, CSS, and JavaScript code: This ensures that your code can use a safe Content-Security-Policy and does not have to use e. g. unsafe-inline. We strongly recommend this for third-party pages, and absolutely require this for Cockpit’s own pages.

  • Modern frameworks for creating page contents. For any non-trivial page you really don’t want to dabble with piecing together myelement.innerHTML = … strings, but use something like React to build page contents and PatternFly so that your page fits into Cockpit’s design.

  • Use Babel to write your code in modern ES6 JavaScript.

  • Use ESLint to spot functional and legibility errors in your code.

  • A build system like webpack to drive all of the above and build blobs (“minified Javascript in a single file”) that are efficiently consumable by browsers.

  • Building of release tarballs, source and binary RPMs for testing and distribution.

  • Tests to make sure your code keeps working, new features work on all supported operating systems, and changes (pull requests) get validated.

  • As a bonus, easy and safe testing of your page in a Vagrant virtual machine.

Sounds complex? It indeed is for someone who is not familiar with the ever-changing “modern” JavaScript world and doesn’t want to learn the details of all of these before they can even begin working on their code. This is where the Starter Kit comes in!

Bootstrapping your way from zero to “works!”

The Cockpit Starter Kit is an example project which provides all of the above requirements. It provides a simple React page that uses the cockpit.file() API to read /etc/hostname and show it. There is also an accompanying test that verifies this page. The other files are mostly build system boilerplate, i. e. the things you don’t want to worry about as the first thing when you start a project.

So, how to get this? Make sure you have the npm package installed. Then check out the repository and build it:

git clone https://github.com/cockpit-project/starter-kit.git
cd starter-kit
make

After that, install (or rather, symlink) the webpack-generated output page in dist/ to where cockpit can see it:

mkdir -p ~/.local/share/cockpit
ln -s `pwd`/dist ~/.local/share/cockpit/starter-kit

Now you should be able to log into https://localhost:9090 and see the “Starter Kit” menu entry:

starter kit

The symlink into your source code checkout is a very convenient and efficient way of development as you can just type make after changing code and directly see the effect in Cockpit after a page reload.

You should now play around with this a little by hacking src/starter-kit.jsx, running make, and reloading the page. For example, try to read and show another file, run a program and show its output, or use cockpit.file("/etc/hostname").watch(callback) to react to changes of /etc/hostname and immediately update the page.


Untested code is broken code. If not here and now, then in the future or on some other operating system. This is why Cockpit has a rather complex machinery of regularly building 26 (!) VM images ranging from RHEL-7 and Fedora 27 over various releases of Debian and Ubuntu to OpenShift and Windows 8, and running hundreds of integration tests on each of them for every PR in an OpenShift cluster.

Replicating this for other projects isn’t easy, and this has been one of the main reasons, if not the major one, why there aren’t many third-party Cockpit projects yet. So we have now made it possible for third-party GitHub projects to use Cockpit’s CI environment, test VM images, and (independently) Cockpit’s browser test abstraction API.

starter-kit uses all three of those: When you run make check, it will:

  • build an RPM out of your current code
  • check out cockpit’s bots/ directory that has the current image symlinks and tools to download, customize and run VM images
  • check out cockpit’s tests/common directory from a stable Cockpit release (as the API is not guaranteed to be stable) which provides a convenient Python API for the Chrome DevTools protocol
  • download Cockpit’s current CentOS-7 VM image; you can test on a different operating system by setting the environment variable TEST_OS=fedora-27 (or a different operating system - but note that starter-kit does not currently build debs)
  • create an overlay on that pristine centos-7 image with the operating system’s standard “cockpit” package and your locally built starter-kit RPM installed
  • run a VM with that overlay image with libvirt and QEMU
  • launch a chromium (or chromium-headless) browser
  • run the actual check-starter-kit test which instructs the web browser what to do and which assertions to make
[starter-kit] $ make check
rpmbuild -bb [...] cockpit-starter-kit.spec
git fetch --depth=1 https://github.com/cockpit-project/cockpit.git
From https://github.com/cockpit-project/cockpit
 * branch            HEAD       -> FETCH_HEAD
git checkout --force FETCH_HEAD -- bots/
bots/image-customize -v -r 'rpm -e cockpit-starter-kit || true' -i cockpit -i `pwd`/cockpit-starter-kit-*.noarch.rpm -s /home/martin/upstream/starter-kit/test/vm.install centos-7
TEST_AUDIT_NO_SELINUX=1 test/check-starter-kit
# ----------------------------------------------------------------------
# testBasic (__main__.TestStarterKit)

ok 1 testBasic (__main__.TestStarterKit) # duration: 21s
# TESTS PASSED [22s on donald]

Note that the first time you run this, it will take a long time due to the rather large VM image download, but the image will be reused for further tests.

For writing your own tests with the Cockpit Python API, have a look at the Browser and MachineCase classes in testlib.py. These provide both low-level (like click() or key_press()) and high-level (like login_and_go()) methods for writing test cases. And of course you have a wealth of Cockpit tests for getting inspiration.

starter-kit itself is also covered by Cockpit’s CI, i. e. pull requests will run tests on CentOS 7 and Fedora 27 (example, click on “View Details”). Please come and talk to us once your project is mature enough to do the same, then we can enable automatic pull request testing on your project as well.

Using different technologies

starter-kit makes opinionated choices like using React, webpack, and Cockpit’s testing framework. These are the technologies that we use for developing Cockpit itself, so if you use them you have the best chance that the Cockpit team can help you with problems. Of course you are free to replace any of these, especially if you have already existing code/tests or a build system.

For example, it is straightforward to just use Cockpit’s test images with the image-customize tool and run these as ephemeral VMs with testvm.py, without using Cockpit’s test/common. Tests can also be written with e. g. puppeteer or nightmare. I will write about that separately.


starter-kit is still fairly new, so there are for sure things that could work more robustly, easier, more flexibly, or just have better documentation. If you run into trouble, please don’t hesitate telling us about it, preferably by filing an issue.

Happy hacking!

The Cockpit Development Team