Fedora People

How to change the Plymouth bootup theme

Posted by Fedora Magazine on January 19, 2018 10:24 AM

When starting Fedora, users are greeted with a neat graphical bootup sequence. The underlying software that displays the bootup graphics is called Plymouth, and the great thing is that it can be customized with different themes. In Fedora, the default theme is called Charge, and most users will be familiar with it, as it has been the default theme for many releases.


Default plymouth boot theme, “Charge”

Other than Charge, Fedora Workstation ships with a handful of other basic themes, and more are available in the Fedora repositories. This article covers the basic procedure for changing your theme, as well as some of the additional themes available in the Fedora repositories.

Changing the Plymouth theme

plymouth-set-default-theme is the Fedora utility for changing the theme. However, before changing the theme, you need to know which themes are installed on the system. Get a list of the installed themes with the following command:

$ plymouth-set-default-theme --list
charge
details
text
tribar

You can also use the utility to check which Plymouth theme is currently active:

$ plymouth-set-default-theme
charge

To change your Plymouth theme to “tribar”, use the following command.

$ sudo plymouth-set-default-theme tribar -R

Note that the -R flag rebuilds your initrd; the next time you reboot your system, you will see the new theme in action.

More themes in the Fedora Repos

The official Fedora repositories contain a number of additional themes to try out. To use these, first install the package with dnf, then enable it using the instructions above.

spinner

screenshot of the spinner plymouth theme

Spinner is a minimal theme with, as the name suggests, just a spinner to show you the bootup is still underway.

Install and enable this theme with the commands:

$ sudo dnf install plymouth-theme-spinner
$ sudo plymouth-set-default-theme spinner -R

Spinfinity

screenshot of the spinfinity plymouth theme

Spinfinity is a Fedora branded theme. It has an infinity symbol shaped indicator, as well as a plain white progress bar at the bottom of the screen.

Install and enable this theme with the commands:

$ sudo dnf install plymouth-theme-spinfinity
$ sudo plymouth-set-default-theme spinfinity -R

hot-dog theme

screenshot of the hot-dog plymouth theme

Take your Fedora back to 2012, and try out the Beefy Miracle theme. In this one, The Mustard Indicates Progress!

Install and enable this theme with the commands:

$ sudo dnf install plymouth-theme-hot-dog
$ sudo plymouth-set-default-theme hot-dog -R

PHP version 7.1.14RC1 and 7.2.2RC1

Posted by Remi Collet on January 18, 2018 02:04 PM

Release Candidate versions are available in the remi-test repository for Fedora and Enterprise Linux (RHEL / CentOS) to allow more people to test them. They are available as Software Collections, for a parallel installation, which is the perfect solution for such tests (x86_64 only), and also as base packages.

RPMs of PHP version 7.2.2RC1 are available as an SCL in the remi-test repository and as base packages in the remi-php72-test repository for Fedora 25-27 and Enterprise Linux.

RPMs of PHP version 7.1.14RC1 are available as an SCL in the remi-test repository and as base packages in the remi-test repository for Fedora 26-27, or in the remi-php71-test repository for Fedora 24-25 and Enterprise Linux.

PHP version 7.0 is now in security-only mode, so no more RCs will be released.

Installation: read the Repository configuration and choose your version.

Parallel installation of version 7.2 as Software Collection:

yum --enablerepo=remi-test install php72

Parallel installation of version 7.1 as Software Collection:

yum --enablerepo=remi-test install php71

Update of system version 7.2:

yum --enablerepo=remi-php72,remi-php72-test update php\*

Update of system version 7.1:

yum --enablerepo=remi-php71,remi-php71-test update php\*

Notice: version 7.2.2RC1 is also available in Fedora rawhide for QA.

EL-7 packages are built using RHEL-7.4.

The RC version is usually the same as the final version (no changes are accepted after an RC, except for security fixes).

Software Collections (php71, php72)

Base packages (php)

[HowTo] Combine Python methods with Jinja filters in Ansible

Posted by Roland Wolters on January 18, 2018 09:14 AM

Ansible Logo

Ansible has a lot of ways to manipulate variables and their content. We shed some light on the different possibilities – and how to combine them.

Ansible inbuilt filters

One way to manipulate variables in Ansible is to use filters. Filters are connected to variables via pipes, |, and the result is the modified variable. Ansible offers a set of inbuilt filters. For example the ipaddr filter can be used to find IP addresses with certain properties in a list of given strings:

# Example list of values
test_list = ['192.24.2.1', 'host.fqdn', '::1', '192.168.32.0/24', 'fe80::100/10', True, '', '42540766412265424405338506004571095040/64']

# {{ test_list | ipaddr }}
['192.24.2.1', '::1', '192.168.32.0/24', 'fe80::100/10', '2001:db8:32c:faad::/64']
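The same kind of selection can be sketched in plain Python with the standard ipaddress module. This is only a simplified stand-in for the ipaddr filter (which is based on netaddr and also understands forms like the integer notation above, omitted here):

```python
import ipaddress

def looks_like_ip(value):
    """Rough stand-in for Ansible's ipaddr filter: keep values that
    parse as an IP address or network (host bits allowed)."""
    if not isinstance(value, str):
        return False
    try:
        ipaddress.ip_address(value)
        return True
    except ValueError:
        pass
    try:
        # strict=False accepts networks with host bits set, e.g. fe80::100/10
        ipaddress.ip_network(value, strict=False)
        return True
    except ValueError:
        return False

test_list = ['192.24.2.1', 'host.fqdn', '::1', '192.168.32.0/24',
             'fe80::100/10', True, '']
print([v for v in test_list if looks_like_ip(v)])
# ['192.24.2.1', '::1', '192.168.32.0/24', 'fe80::100/10']
```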

Jinja2 filters

Another set of filters which can be utilized in Ansible are the Jinja2 filters of the template engine Jinja2, which is the default templating engine in Ansible.

For example, the map filter can be used to pick certain values from a given dictionary. Note the following code snippet, where from a list of names only the first names are output as a list, due to the map filter (and the list filter for the output).

vars:
  names:
    - first: Foo
      last: Bar
    - first: John
      last: Doe

tasks:
- debug:
    msg: "{{ names | map(attribute='first') | list }}"
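For readers more at home in Python, the effect of map(attribute='first') followed by list is roughly a list comprehension over the dictionaries (a plain-Python sketch, not how Ansible evaluates it internally):

```python
# Plain-Python equivalent of: {{ names | map(attribute='first') | list }}
names = [
    {'first': 'Foo', 'last': 'Bar'},
    {'first': 'John', 'last': 'Doe'},
]
first_names = [n['first'] for n in names]
print(first_names)  # ['Foo', 'John']
```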

Python methods

Besides filters, variables can also be modified by Python's string methods: Python is the scripting language Ansible is written in, and it provides string manipulation methods Ansible can simply use. In contrast to filters, methods are attached to variables not with a pipe, but with dot notation:

vars:
  - mystring: foobar something

- name: endswith method
  debug:
    msg: "{{ mystring.endswith('thing') }}"

...

TASK [endswith method] *****************************************************************
ok: [localhost] => {
 "msg": true
}
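Since these are ordinary Python string methods, the same calls work in a plain Python interpreter:

```python
mystring = 'foobar something'
print(mystring.endswith('thing'))   # True

# A few other str methods usable the same way in a Jinja2 expression:
print(mystring.startswith('foo'))   # True
print(mystring.upper())             # FOOBAR SOMETHING
```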

Due to the close relation between Python and Jinja2, many of the above-mentioned Jinja2 filters are quite similar to Python's string methods. As a result, some capabilities, like capitalize, are available both as a filter and as a method:

vars:
  - mystring: foobar something

tasks:
- name: capitalize filter
  debug:
    msg: "{{ mystring|capitalize() }}"

- name: capitalize method
  debug:
    msg: "{{ mystring.capitalize() }}"

Connecting filters and methods

Due to the different ways of invoking filters and methods, it is sometimes difficult to bring the two together, and caution is needed when filters and methods are mixed.

For example, if a list of IP addresses is given and we want the last element of the included address in the range 10.0.0.0/8, we can first use the ipaddr filter to output only the IP within the appropriate range, and afterwards use the split method to break the address up into a list of four elements:

vars:
 - myaddresses: ['192.24.2.1', '10.0.3.5', '171.17.32.1']

tasks:
- name: get last element of 10* IP
  debug:
    msg: "{{ (myaddresses|ipaddr('10.0.0.0/8'))[0].split('.')[-1] }}"

...

TASK [get last element of 10* IP] **************************************************************
ok: [localhost] => {
 "msg": "5"
}

As can be seen above, to attach a method to a filtered object, another set of brackets, ( ), is needed. Also, since the result of this filter is a list, we need to take a list element; in this case that is easy, since there is only one result, so we take element 0. Afterwards, the split method is called on that result, giving back a list of elements, and we take the last one (-1, though element 3 would have worked here as well).
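The same chain is easier to read when reproduced in plain Python. Here a membership test against the network stands in for the ipaddr('10.0.0.0/8') filter (a sketch using the stdlib ipaddress module, not Ansible's actual implementation):

```python
import ipaddress

# Plain-Python view of the Ansible expression:
# (myaddresses|ipaddr('10.0.0.0/8'))[0].split('.')[-1]
myaddresses = ['192.24.2.1', '10.0.3.5', '171.17.32.1']
net = ipaddress.ip_network('10.0.0.0/8')

# step 1: filter to addresses inside 10.0.0.0/8
in_range = [a for a in myaddresses if ipaddress.ip_address(a) in net]

# step 2: take element 0, split on '.', take the last octet
last_octet = in_range[0].split('.')[-1]
print(last_octet)  # 5
```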

Conclusion

There are many ways in Ansible to manipulate strings; however, since they come from various sources, it is sometimes a little tricky to find what is actually needed.

Privacy expectations and the connected home

Posted by Matthew Garrett on January 17, 2018 09:45 PM
Traditionally, devices that were tied to logins tended to indicate that in some way - turn on someone's Xbox and it'll show you their account name, run Netflix and it'll ask which profile you want to use. The increasing prevalence of smart devices in the home changes that, in ways that may not be immediately obvious to the majority of people. You can configure a Philips Hue with wall-mounted dimmers, meaning that someone unfamiliar with the system may not recognise that it's a smart lighting system at all. Without any actively malicious intent, you end up with a situation where the account holder is able to infer whether someone is home without that person necessarily having any idea that that's possible. A visitor who uses an Amazon Echo is not necessarily going to know that it's tied to somebody's Amazon account, and even if they do they may not know that the log (and recorded audio!) of all interactions is available to the account holder. And someone grabbing an egg out of your fridge is almost certainly not going to think that your smart egg tray will trigger an immediate notification on the account owner's phone that they need to buy new eggs.

Things get even more complicated when there's multiple account support. Google Home supports multiple users on a single device, using voice recognition to determine which queries should be associated with which account. But the account that was used to initially configure the device remains the fallback, with unrecognised voices ending up logged to it. If a voice is misidentified, the query may end up being logged to an unexpected account.

There are some interesting questions about consent and expectations of privacy here. If someone sets up a smart device in their home then at some point they'll agree to the manufacturer's privacy policy. But if someone else makes use of the system (by pressing a lightswitch, making a spoken query or, uh, picking up an egg), have they consented? Who has the social obligation to explain to them that the information they're producing may be stored elsewhere and visible to someone else? If I use an Echo in a hotel room, who has access to the Amazon account it's associated with? How do you explain to a teenager that there's a chance that when they asked their Home for contact details for an abortion clinic, it ended up in their parent's activity log? Who's going to be the first person divorced for claiming that they were vegan but having been the only person home when an egg was taken out of the fridge?

To be clear, I'm not arguing against the design choices involved in the implementation of these devices. In many cases it's hard to see how the desired functionality could be implemented without this sort of issue arising. But we're gradually shifting to a place where the data we generate is not only available to corporations who probably don't care about us as individuals, it's also becoming available to people who own the more private spaces we inhabit. We have social norms against bugging our houseguests, but we have no social norms that require us to explain to them that there'll be a record of every light that they turn on or off. This feels like it's going to end badly.

(Thanks to Nikki Everett for conversations that inspired this post)

(Disclaimer: while I work for Google, I am not involved in any of the products or teams described in this post and my opinions are my own rather than those of my employer)

Intel NUC

Posted by Richard W.M. Jones on January 17, 2018 04:15 PM

I've been looking for replacements for my HP Microservers which, according to this blog, are now nearly 7 years old! They are still going (sort of) strong, although one of them failed completely, and another has developed a faulty cache that manifests as random 32-byte-wide web server corruption (yes, it's also my main web server …)

My virtualization cluster is also coming up to 4 years old, and while it works fine it turns out that running servers without cases isn’t such a good idea because they generate large amounts of RF interference.

So you can tell that my current computing setup is held together with string and sticky tape. Can I make a nicer system based on a pile of NUCs? I bought 1 NUC for testing:

(photo of the NUC)

The total cost (including tax and delivery) was £583.96 from scan.co.uk. I also specced up a similar system with an M.2 SSD which would have been about £670. (An ideal system would have both M.2 SSD and a hard disk but that gets even more expensive.) The NUC model is NUC7i5BNH and the Wikipedia page is absolutely essential for understanding the different models.

Enough talk, how well does it work? To start off with, really badly, with the NUC regularly hanging hard. This was because of a faulty RAM module, a problem I’ve had with the Gigabyte Brix before. Because of that, I’m only running with one 8 GB module:

(screenshot showing the installed memory)

It has two real cores with hyperthreading. The cores are Kaby Lake Intel(R) Core(TM) i5-7260U CPU @ 2.20GHz.

The compile performance is reasonable, not great, as you’d expect from an Intel i5 processor.

synergy-2.0.0 is in Fedora updates-testing

Posted by Ding-Yi Chen on January 17, 2018 02:03 PM

Synergy is software that allows you to use your favorite mouse and keyboard with multiple machines. It supports macOS, Windows, and Linux.

I have packaged the latest stable version, 2.0.0, for Fedora 27, Fedora 26, and EPEL 7. There is no EPEL 6 update this time, as it requires C++14, which EL6 does not provide.

F27-20180112 Updated Live Isos Released

Posted by Ben Williams on January 17, 2018 01:41 PM

The Fedora Respins SIG is pleased to announce the latest release of Updated 27 Live ISOs, carrying the 4.14.13-300 kernel.

This set of updated ISOs will save about 800 MB of updates after a new install.

Build Directions: https://fedoraproject.org/wiki/Livemedia-creator-_How_to_create_and_use_a_Live_CD

We would also like to thank Fedora QA for running the following tests on our ISOs: https://openqa.fedoraproject.org/tests/overview?distri=fedora&version=27&build=FedoraRespin-27-updates-20180112.0&groupid=1

These can be found at http://tinyurl.com/live-respins. We would also like to thank the following IRC nicks for helping test these ISOs: dowdle, Mac101, and Southern_Gentlem.

Configure software repositories in Fedora

Posted by Fedora Magazine on January 17, 2018 08:00 AM

Your Fedora system gets its software from repositories, or repos. Each of these repos can have any number of software apps available for you to install and use. The official Fedora repos contain thousands of free and open source apps. Other repos may carry free or proprietary apps, and some contain only a single app. At times you may want to configure your software repositories.

Fortunately, this is easy in Fedora. For instance, you may want to get a package to test and see if it fixes a bug. In that case, you’d want to turn on Fedora’s testing repo. You might want to leave it on to get more packages for testing. Or you might want to turn it off, to stop participating.

Configuring with the command line

To configure repos at the command line, use the dnf command. To see a list of all enabled repos, use this command:

sudo dnf repolist

You can use command options to change configuration for just one command. To enable or disable a repo just once, use a command option:

sudo dnf --enablerepo=<reponame>...
sudo dnf --disablerepo=<reponame>...

Follow the option with the actual dnf command. For instance, to install the latest kernel from Fedora’s test repo:

sudo dnf --enablerepo=updates-testing install kernel\*

You can combine several enable and disable options together. For example:

sudo dnf --enablerepo=repo1 --disablerepo=repo2,repo3 install <package>

If you want to change the defaults permanently, use these commands:

sudo dnf config-manager --set-enabled <reponame>
sudo dnf config-manager --set-disabled <reponame>

Backing out confusion

Perhaps you install, update, or remove a lot of software using different setups. In this case, things may get confusing. You might not know which software is installed from what repos. If that happens, try this.

First, disable extra repos, such as those ending in -testing. Ideally, leave only the fedora and updates repos enabled. Run this command for each unwanted repo:

sudo dnf config-manager --set-disabled <unwanted-repo>

Then run this command to synchronize your system with just stable, updated packages:

sudo dnf distro-sync

This ensures your Fedora system is only using the latest packages from specific repos.

For lots more detail on repositories, visit the Fedora documentation pages.

Official release of Dotclear 2.13

Posted by Casper on January 17, 2018 06:53 AM

The dev team of the Dotclear CMS has just unveiled the 2018 vintage. Version 2.13 is an important release: it contains all the changes from version 2.12.2, plus the new features of 2.13.

This modest weblog is itself currently running on 2.13.

Let's be honest: since 2012 I have tried a lot of blog engines: Hugo, Pelican, Mezzanine, WordPress... Only Dotclear has stayed. It must be said that it does the job with excellence. It is the only one to provide an admin interface that is complete without being complicated. No security flaws, impressive performance; in short, for me Dotclear is a safe bet for the years to come.

What if this version turned out to be a very good vintage?

There is one feature I had been waiting for impatiently! To give you some context, I had asked the dev team a rather simple question a while back, and the outcome of the discussion was a fix planned for this version.

(screenshot of the 2.13 changelog)

The changelog looks nice, doesn't it? And the feature I was hoping to find under the tree is: « Cope with MySQLi connection via socket ». I plan to write a short article on the subject.

Fun stuff

This version looks very promising, take my word for it...

How to configure Tor onion service on Fedora

Posted by Kushal Das on January 17, 2018 05:53 AM

You can set up a Tor onion service in a VM on your home desktop, or on a Raspberry Pi attached to your home network. You can serve any website or SSH service this way. For example, in India, most of the time when an engineering student has to demo a web application, she has to demo it on her laptop or on a college lab machine. If you set up your web application project as an onion service, you can actually make it available to all of your friends. You don’t need an external IP, a special kind of Internet connection, or to pay for a domain name. Of course, it may be slower than all the fancy websites out there, but you don’t have to spend any extra money for this.

In this post, I am going to talk about how you can set up your own service using a Fedora 26 VM. Similar steps apply on a Raspberry Pi or any other Linux distribution.

Install the required packages

I will be using Nginx as my web server. The first step is to get the required packages installed.

$ sudo dnf install nginx tor
Fedora 26 - x86_64 - Updates                     10 MB/s |  20 MB     00:01
google-chrome                                    17 kB/s | 3.7 kB     00:00
Qubes OS Repository for VM (updates)             98 kB/s |  48 kB     00:00
Last metadata expiration check: 0:00:00 ago on Wed Jan 17 08:30:23 2018.
Dependencies resolved.
================================================================================
 Package                Arch         Version                Repository     Size
================================================================================
Installing:
 nginx                  x86_64       1:1.12.1-1.fc26        updates       535 k
 tor                    x86_64       0.3.1.9-1.fc26         updates       2.6 M
Installing dependencies:
 gperftools-libs        x86_64       2.6.1-5.fc26           updates       281 k
 nginx-filesystem       noarch       1:1.12.1-1.fc26        updates        20 k
 nginx-mimetypes        noarch       2.1.48-1.fc26          fedora         26 k
 torsocks               x86_64       2.1.0-4.fc26           fedora         64 k

Transaction Summary
================================================================================
Install  6 Packages

Total download size: 3.6 M
Installed size: 15 M
Is this ok [y/N]:

Configuring Nginx

After installing the packages, the next step is to set up the web server. For a quick example, we will just serve the default Nginx index page over this onion service. We have to change the web server port to a different one in the /etc/nginx/nginx.conf file. Please read about Nginx to learn more about how to configure it with your web application.

listen 8090 default_server;

Here we have the web server running on port 8090.

Configuring Tor

Next, we will set up the Tor onion service. The configuration file is located at /etc/tor/torrc. We will add the following two lines.

HiddenServiceDir /var/lib/tor/hidden_service/
HiddenServicePort 80 127.0.0.1:8090

We are redirecting port 80 of the onion service to port 8090 on the same system.

Starting the services

Remember to open up port 80 in the firewall before starting the services. I am going to keep it an exercise for the reader to find out how :)

Next, we will start the nginx and tor services. You can also watch the system logs to find out the status of Tor.

$ sudo systemctl start nginx
$ sudo systemctl start tor
$ sudo journalctl -f -u tor
-- Logs begin at Thu 2017-12-07 07:13:58 IST. --
Jan 17 08:33:43 tortest Tor[2734]: Bootstrapped 0%: Starting
Jan 17 08:33:43 tortest Tor[2734]: Signaled readiness to systemd
Jan 17 08:33:43 tortest systemd[1]: Started Anonymizing overlay network for TCP.
Jan 17 08:33:43 tortest Tor[2734]: Starting with guard context "default"
Jan 17 08:33:43 tortest Tor[2734]: Opening Control listener on /run/tor/control
Jan 17 08:33:43 tortest Tor[2734]: Bootstrapped 5%: Connecting to directory server
Jan 17 08:33:44 tortest Tor[2734]: Bootstrapped 10%: Finishing handshake with directory server
Jan 17 08:33:44 tortest Tor[2734]: Bootstrapped 15%: Establishing an encrypted directory connection
Jan 17 08:33:45 tortest Tor[2734]: Bootstrapped 20%: Asking for networkstatus consensus
Jan 17 08:33:45 tortest Tor[2734]: Bootstrapped 25%: Loading networkstatus consensus
Jan 17 08:33:55 tortest Tor[2734]: I learned some more directory information, but not enough to build a circuit: We have no usable consensus.
Jan 17 08:33:55 tortest Tor[2734]: Bootstrapped 40%: Loading authority key certs
Jan 17 08:33:55 tortest Tor[2734]: Bootstrapped 45%: Asking for relay descriptors
Jan 17 08:33:55 tortest Tor[2734]: I learned some more directory information, but not enough to build a circuit: We need more microdescriptors: we have 0/6009, and can only build 0% of likely paths. (We have 0% of guards bw, 0% of midpoint bw, and 0% of exit bw = 0% of path bw.)
Jan 17 08:33:56 tortest Tor[2734]: Bootstrapped 50%: Loading relay descriptors
Jan 17 08:33:57 tortest Tor[2734]: Bootstrapped 56%: Loading relay descriptors
Jan 17 08:33:59 tortest Tor[2734]: Bootstrapped 65%: Loading relay descriptors
Jan 17 08:34:06 tortest Tor[2734]: Bootstrapped 72%: Loading relay descriptors
Jan 17 08:34:06 tortest Tor[2734]: Bootstrapped 80%: Connecting to the Tor network
Jan 17 08:34:07 tortest Tor[2734]: Bootstrapped 85%: Finishing handshake with first hop
Jan 17 08:34:07 tortest Tor[2734]: Bootstrapped 90%: Establishing a Tor circuit
Jan 17 08:34:08 tortest Tor[2734]: Tor has successfully opened a circuit. Looks like client functionality is working.
Jan 17 08:34:08 tortest Tor[2734]: Bootstrapped 100%: Done

There will be a private key and a hostname file for the onion service in the /var/lib/tor/hidden_service/ directory. Open up Tor Browser and visit the onion address; you should see the default Nginx index page.

Remember to backup the private key file if you want to keep using the same onion address for a longer time.

What all things can we do with this onion service?

That actually depends on your imagination. Feel free to research about what all different services can be provided over Tor. You can start with writing a small Python Flask web application, and create an onion service for the same. Share the address with your friends.

Ask your friends to use Tor Browser for daily web browsing. The more Tor traffic we can generate, the more difficult it becomes for nation-state actors to monitor traffic, and that in turn helps the whole community.

WARNING on security and anonymous service

Remember that this tutorial is only for quick demo purposes. It will not hide your web server details, IP, or operating system details. You will have to make sure to follow proper operational security practices along with good system administration. Riseup has a page describing best practices. Please make sure that you do enough study and research before you start providing long-term services over Tor.

Also, please remember that Tor is developed and run by people all over the world, and the project needs donations. Every little bit of help counts.

Mindshare Elections: Interview with Gabriele Trombini (mailga)

Posted by Fedora Community Blog on January 16, 2018 11:59 PM

This is a part of the Mindshare Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Wednesday, January 17th and closes promptly at 23:59:59 UTC on Wednesday, January 24th, 2018.

Interview with Gabriele Trombini (mailga)

  • Fedora Account: mailga
  • IRC: mailga My channel list is composed of more than 40 channels but I’m mainly using #fedora-ambassadors #fedora-join and #fedora-mktg
  • Fedora User Wiki Page

Questions

Is there a specific task or issue you think that Mindshare should address this term?

There are many. Mainly, Mindshare should set out all the tasks it has to take care of, and consequently define the boundaries within which it can move. At the same time, it must work to guarantee continuity through the move from the former body (FAmSCo) to the new one.

Please elaborate on the personal “Why” which motivates you to be a candidate for Mindshare.

This is quite simple. I followed the Mindshare project (formerly FOSCo) from the beginning, and I really worked hard to find a solution for the issues that I’m hoping Mindshare will fix.

What are your thoughts on the impact (as an individual and then as a Mindshare group) that the group will have on the Fedora Mission?

Individually, I think I can bring the experience gained over several years of working for the project, and the vision I have of the whole outreach sector.
I’m looking at Mindshare as a kind of revolution. At last, if everything works well, we will have outreach groups working together toward the same target, and it will also allow fair sharing of the available resources.

The post Mindshare Elections: Interview with Gabriele Trombini (mailga) appeared first on Fedora Community Blog.

Mindshare Elections: Interview with Nick Bebout (nb)

Posted by Fedora Community Blog on January 16, 2018 11:59 PM

This is a part of the Mindshare Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Wednesday, January 17th and closes promptly at 23:59:59 UTC on Wednesday, January 24th, 2018.

Interview with Nick Bebout (nb)

  • Fedora Account: nb
  • IRC: nb #fedora-admin, #fedora-noc, #fedora-ambassadors, #fedora-devel, #fedora-ops, etc.
  • Fedora User Wiki Page

Questions

Is there a specific task or issue you think that Mindshare should address this term?

I think Mindshare’s first objective is to define its goals, since it is a new group. I think then we should work on defining our plans, given the blog post from Matthew about aligning our efforts with the objectives set forth by the Fedora Council. See question #3 for more thoughts.

Please elaborate on the personal “Why” which motivates you to be a candidate for Mindshare.

I have been involved in Fedora Ambassadors for many years, and I have also been involved with other projects that are part of Mindshare, such as Docs, Design, etc. I believe that outreach is a critical component of the success of Fedora. If we make an awesome distribution but no one knows about it, then we won’t have users.

What are your thoughts on the impact (as an individual and then as a Mindshare group) that the group will have on the Fedora Mission?

I think Mindshare will be a great thing for Fedora, since this will allow us to better coordinate our “outreach” related teams.

The post Mindshare Elections: Interview with Nick Bebout (nb) appeared first on Fedora Community Blog.

Council Elections: Interview with Dennis Gilmore (ausil)

Posted by Fedora Community Blog on January 16, 2018 11:59 PM
Fedora Ambassador Mentor badge used for Council

Fedora Council Elections

This is a part of the Council Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Wednesday, January 17th and closes promptly at 23:59:59 UTC on Wednesday, January 24th, 2018.

Interview with Dennis Gilmore (ausil)

  • Fedora Account: ausil
  • IRC: dgilmore (found in #fedora-3dprinting #fedora-meeting-3 #fedora-meeting-2 #fedora-websites #fedora-cloud #proyecto-fedora #fedora-apps #fedora-qa #fedora-fedmsg #fedora-buildsys #fedora-meeting-1 #fedora-releng #fedora-ambassadors #fedora-kernel #fedora #fedora-latam #fedora-br #fedora-java #fedora-mips #fedora-s390x #fedora-ppc @#fedora-arm #epel #fedora-meeting #fedora-security #fedora-devel #fedora-noc #fedora-admin
  • Fedora User Wiki Page

Questions

What’s your background in Fedora? What expertise do you bring based on past experience, and what projects are you actively involved in now?

I started as a packager for fedora.us in 2003 and continued on with Fedora Extras. Getting involved with infrastructure, I took over managing the build system in use at the time, plague. Because of that infrastructure work, Mike McGrath and I started EPEL, as we needed extra software to run on RHEL. As a result of my work building and shipping Fedora and EPEL, I ended up being heavily involved in the work to set up and move to Koji when Core and Extras merged. Shortly after, I was the release engineer for OLPC, helping them move to a newer Fedora and get their changes upstream into Fedora. Working at OLPC led to me joining Red Hat as a release engineer, where I took over Fedora Release Engineering and have led getting Fedora out the door ever since. I have been on the old Fedora Board, and have been a FESCo member at various times over the years.

I have a lot of experience in figuring out how to integrate and deliver new artefacts and deliverables, as well as a deep understanding of many pieces of Fedora and its history.

I am actively involved in Release Engineering and infrastructure, working on how we build and ship Fedora to enable us to work smarter. I have a deep involvement in multiarch work as well.

What do you plan to accomplish on the Council? What are the most pressing issues facing Fedora today? What should we do about them?

Fedora is growing at a rapid pace, and some of the older solutions no longer scale well. I would like to look at how we can improve automation to replace and remove manual processes such as package reviews and updates, in order to free contributors to work on more interesting problems. I would like to see us push for the use of machine learning and automated systems to automate as much of the review, build, and delivery pipeline as possible.

What are your interests and accomplishments outside of Fedora? What of those things will help you in this role?

I am currently working on an MBA and hope to bring the new skills I am learning to help Fedora grow and be stronger. I have also been involved in many team sports over the years, which has helped me develop the skills to work well as part of a team.

The post Council Elections: Interview with Dennis Gilmore (ausil) appeared first on Fedora Community Blog.

FESCo Elections: Interview with Jared K. Smith (jsmith)

Posted by Fedora Community Blog on January 16, 2018 11:59 PM
Fedora Engineering Steering Council badge, awarded after Fedora Elections - read the Interviews to learn more about candidates

Fedora Engineering Steering Committee

This is a part of the FESCo Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Wednesday, January 17th and closes promptly at 23:59:59 UTC on Wednesday, January 24th, 2018.

Interview with Jared K. Smith (jsmith)

Questions

Describe some of the important technical issues you foresee affecting the Fedora community. What insight do you bring to these issues?

I think the biggest challenge FESCo will have in the coming year is the increasing complexity of the operating system as it gets stretched in different directions (desktop/server/cloud/IoT). I bring a lot of industry experience to the role, and I hope to improve communication between the various people working on Fedora.

What objectives or goals should FESCo focus on to help keep Fedora on the cutting edge of open source development?

The first objective needs to be usability. My opinion is that it’s great to be cutting-edge, but if you lose focus on making the technology usable, then you’re going to be cutting-edge for a smaller and smaller target audience. In short, FESCo needs to continue to find the right balance in that regard, and continue to help ensure new features play well within the greater Fedora ecosystem.

What are the areas of the distribution and our processes that, in your opinion, need improvement the most? Do you have any ideas how FESCo would be able to help in those “trouble spots”?

I still feel like there’s a lot of upheaval in the tooling used by Release Engineering to build Fedora. It’s not really FESCo’s job to mandate or operate those tools, but I do think there’s an opportunity for FESCo to help define what it would like to see (from a developer perspective) in those tools. A relevant example came up in last week’s FESCo meeting, related to batching in Bodhi. FESCo doesn’t run Bodhi, but it does have a role to play in helping define how it would like Bodhi to work with regard to batching of updates.

The post FESCo Elections: Interview with Jared K. Smith (jsmith) appeared first on Fedora Community Blog.

Council Elections: Interview with Langdon White (langdon)

Posted by Fedora Community Blog on January 16, 2018 11:59 PM
Fedora Ambassador Mentor badge used for Council

Fedora Council Elections

This is a part of the Council Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Wednesday, January 17th and closes promptly at 23:59:59 UTC on Wednesday, January 24th, 2018.

Interview with Langdon White (langdon)

  • Fedora Account: langdon
  • IRC: langdon #fedora-meeting-*; #fedora-devel; #fedora-modularity; #fedora; #fedora-releng; #fedora-server; #fedora-websites; #fedora-magazine; many others
  • Fedora User Wiki Page

Questions

What’s your background in Fedora? What expertise do you bring based on past experience, and what projects are you actively involved in now?

I have been an active Fedora user and contributor for about 7 years, and a Linux user for roughly 20 years. For most of that time I have been a developer targeting Linux or Windows, but I have also been a sysadmin for short stints. My most active project in Fedora is the Modularity Objective, which touches almost every part of Fedora.

I have more than 15 years of experience with large, complex, multi-faceted software projects. I have worked with Fortune 50 CEOs and 10-person startups to help them realize their business goals through software. I have also worked as a developer advocate for RHEL (and, by extension, Fedora). I also spent about 6 months migrating an existing co-lo-based system to Linode using dynamic allocation and configuration management.

I believe my experience as a developer on Linux gives me a somewhat unique perspective on how a distribution can and should work. However, that experience is tempered by my time working as, and with, sysadmins, which taught me how important stability is to production environments.

What do you plan to accomplish on the Council? What are the most pressing issues facing Fedora today? What should we do about them?

I advocate for web application developers on Fedora as I think they are under-represented as a community in the “machinery” of Fedora. I also am deeply interested in ensuring that Fedora continues to be “First” by supporting innovative choices even when they introduce some risk.

Increasingly, developers and sysadmins who use distributions assume a distro’s stability and usability come with little to no effort. Fedora, like many distros, has a hard time showing users how valuable the effort of its contributing community is. Many users simply assume “someone” will keep doing the work. We need to address this perception, or lower-quality distributions will drive the energy out of the higher-quality ones.

What are your interests and accomplishments outside of Fedora? What of those things will help you in this role?

I volunteer, at present, on a Finance Committee board. I have also volunteered for several other boards in the past including my kids’ school board and the Fedora Council. While not strictly “interests” the role I have played as either board member or chair has taught me how to help steer volunteer organizations toward the organization’s goals. I also have helped to identify or clarify those goals when the organization was fuzzy on its direction. I believe that these skills are very useful to the Council in ensuring a light touch across the community that still supports Fedora’s mission.

The post Council Elections: Interview with Langdon White (langdon) appeared first on Fedora Community Blog.

FESCo Elections: Interview with Josh Boyer (jwboyer)

Posted by Fedora Community Blog on January 16, 2018 11:59 PM
Fedora Engineering Steering Council badge, awarded after Fedora Elections - read the Interviews to learn more about candidates

Fedora Engineering Steering Committee

This is a part of the FESCo Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Wednesday, January 17th and closes promptly at 23:59:59 UTC on Wednesday, January 24th, 2018.

Interview with Josh Boyer (jwboyer)

  • Fedora Account: jwboyer
  • IRC: jwboyer/jwb #fedora-devel, #fedora-kernel, #fedora-council, #fedora-admin
  • Fedora User Wiki Page

Questions

Describe some of the important technical issues you foresee affecting the Fedora community. What insight do you bring to these issues?

I continue to think that the rapid adoption of containers in the tech world is increasingly important. The work the Fedora community has put into FLIBS and other container efforts has been very valuable in pushing Fedora into that space. Combined with the Fedora Atomic work to provide a small and stable OS install that is easily consumed for running containers, I think Fedora is well positioned to continue in that space. I would like to see us improve on this work and expand the offerings to meet more users’ needs.

However, containers are not and likely will never be the solution for ALL users. I have found the on-going Modularity work to be extremely interesting to help round out some of the gaps that containers don’t fill for whatever reason. Being able to build software decoupled from the underlying OS and offer multiple versions simultaneously expands the potential user base beyond what it is today. It would be interesting to see how far we can go with this concept.

What objectives or goals should FESCo focus on to help keep Fedora on the cutting edge of open source development?

As mentioned above, I think Modularity is one. Outside of that specific technology, I would like to see us focus on the general concept of producing an OS and community that is able to handle different release cadences for various Editions. This will be difficult and require a lot of collaboration, but that is where the fun is had.

What are the areas of the distribution and our processes that, in your opinion, need improvement the most? Do you have any ideas how FESCo would be able to help in those “trouble spots”?

I think we still struggle with cross-team collaboration and communication in many areas. We continue to improve on it, but I think FESCo can help specifically by working more actively with Marketing, Docs, QA, and Websites. Perhaps my perception is wrong, but I still feel we tend to focus too narrowly on technical issues and assume others can determine what is relevant for end users from our devel list and meeting logs. Working with those teams to highlight changes would benefit the entire distro and our end users.

By the same token, I would like to see some increased focus on what purpose our Spins serve, how much usage they get, and how the spin maintainers view the consumption of the OS bits we produce.

The post FESCo Elections: Interview with Josh Boyer (jwboyer) appeared first on Fedora Community Blog.

Council Elections: Interview with Nick Bebout (nb)

Posted by Fedora Community Blog on January 16, 2018 11:59 PM
Fedora Ambassador Mentor badge used for Council

Fedora Council Elections

This is a part of the Council Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Wednesday, January 17th and closes promptly at 23:59:59 UTC on Wednesday, January 24th, 2018.

Interview with Nick Bebout (nb)

  • Fedora Account: nb
  • IRC: nb #fedora-admin, #fedora-noc, #fedora-ambassadors, #fedora-devel, #fedora-ops
  • Fedora User Wiki Page

Questions

What’s your background in Fedora? What expertise do you bring based on past experience, and what projects are you actively involved in now?

I have been involved in Fedora for many years (FAS says my account was created in 2007). I’m involved in several different projects in Fedora, primarily Ambassadors, Packaging, and Infrastructure. I also occasionally work on Design Team tasks and in the past have also helped with docs and websites. I’m also a provenpackager and an Ambassadors Mentor.

What do you plan to accomplish on the Council? What are the most pressing issues facing Fedora today? What should we do about them?

I think we need to keep working on building market share. There is an opportunity now to promote the fact that we have MP3 encoding in Fedora and that openh264 is easily installable. I’ve talked to people in the past about why they choose other distributions, and one of the main things they mention is that other distributions ship MP3 and other codecs by default while we do not. Also, I think Modularity will be a big benefit to Fedora once it gets fully up and running.

What are your interests and accomplishments outside of Fedora? What of those things will help you in this role?

I work as a Network Technician at the University of Southern Indiana. I am also taking on more of a Systems Administrator role. In my work at the University I have seen how a few of our classes use Linux and I have seen opportunities that we could take to promote Fedora (and Linux in general) in the higher education market. I recently earned my LPIC-1 certification and am working on preparing for the LPIC-2. I also have a Bachelor of Science in Business Administration and Computer Information Systems from USI and am currently a student working towards my Master of Business Administration degree here at USI.

The post Council Elections: Interview with Nick Bebout (nb) appeared first on Fedora Community Blog.

FESCo Elections: Interview with Dominik Mierzejewski (rathann)

Posted by Fedora Community Blog on January 16, 2018 11:59 PM
Fedora Engineering Steering Council badge, awarded after Fedora Elections - read the Interviews to learn more about candidates

Fedora Engineering Steering Committee

This is a part of the FESCo Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Wednesday, January 17th and closes promptly at 23:59:59 UTC on Wednesday, January 24th, 2018.

Interview with Dominik Mierzejewski (rathann)

  • Fedora Account: rathann
  • IRC: rathann #fedora-devel, #fedora-pl, #fedora-science, #ffmpeg-devel, #mplayerdev, #rpmfusion
  • Fedora User Wiki Page

Questions

Describe some of the important technical issues you foresee affecting the Fedora community. What insight do you bring to these issues?

Fedora is a member of one of the fastest changing industries. Our community members are mostly doing an excellent job staying on top of the latest and greatest technologies and integrating them into the distribution. However, I think the role of a Linux distribution is to provide an integrated experience, so introducing new technologies or making disruptive changes to existing ones should still take that into account. Sometimes, that might mean delaying a change or putting in additional work with upstreams. Some of the recent changes were not done as smoothly as they could have been and resulted in some avoidable pain for the users. The Firefox 57 update was one such case. I’d like to see FESCo put greater focus on avoiding breaking the user experience, as is done in the Linux kernel project, for example.

What objectives or goals should FESCo focus on to help keep Fedora on the cutting edge of open source development?

Being on the cutting edge is a laudable goal, but that’s what upstream projects are for. Fedora, as a distribution, should provide a smooth, integrated user experience while including the latest usable versions of those upstream projects as the primary deliverable. Providing convenient playgrounds to aid in development, testing and integration of new technologies is a goal we fulfill quite well already, with COPR giving packagers access to Fedora infrastructure to experiment freely without disturbing main repositories. There are also initiatives like Modularity happening in parallel to releasing traditional Fedora editions. FESCo should focus on ensuring that changes do not break things for Fedora users and developers.

What are the areas of the distribution and our processes that, in your opinion, need improvement the most? Do you have any ideas how FESCo would be able to help in those “trouble spots”?

Speaking with my packager hat on, package reviews are still a chore. However, relaxing the packaging guidelines is not the answer, as that would lower the overall packaging quality. More automation in package reviews would certainly help here, and I’m disappointed that projects like Fresque can’t seem to make it. More automated testing is also needed, especially for updated packages, which add dependencies surreptitiously or break ABI often enough to cause issues for users. Such things can and should be caught before an update reaches the stable repository.

The post FESCo Elections: Interview with Dominik Mierzejewski (rathann) appeared first on Fedora Community Blog.

Council Elections: Interview with Jona Azizaj (jonatoni)

Posted by Fedora Community Blog on January 16, 2018 11:59 PM
Fedora Ambassador Mentor badge used for Council

Fedora Council Elections

This is a part of the Council Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Wednesday, January 17th and closes promptly at 23:59:59 UTC on Wednesday, January 24th, 2018.

Interview with Jona Azizaj (jonatoni)

  • Fedora Account: jonatoni
  • IRC: jonatoni #fedora-ambassadors #fedora-diversity #fedora-commops #fedora-g11n #fedora-sq etc
  • Fedora User Wiki Page

Questions

What’s your background in Fedora? What expertise do you bring based on past experience, and what projects are you actively involved in now?

I have been part of the Fedora Project as a contributor for almost four years now. My first contributions consisted of promoting Fedora in my country, Albania, which is why the first team I joined was Ambassadors. I’m also a core member of the Diversity team, and I’m very excited about what we have achieved in one year of trying to foster diversity and inclusion in the Fedora community. Marketing and CommOps are connected with Ambassadors: they relate to what we do at different events as we spread the word about Fedora, so we need to document it on our personal blogs and the Fedora CommBlog, and “measure” how successful each event we participated in or organized was. L10n is another team I’m part of, because for me it’s really important to have Fedora in Albanian, to bring everything about Fedora closer to our community, and to help and motivate other countries to do the same.

What do you plan to accomplish on the Council? What are the most pressing issues facing Fedora today? What should we do about them?

Our community is growing every day, so more people are joining us, but we still face two problems. The first is that some contributors stay only for a period of time and then disappear; the second is that not many people from under-represented groups are part of our community. This doesn’t mean we don’t have a friendly community: I love the spirit of the Fedora community, people are very friendly and helpful, and everyone is welcome regardless of their technical skill level. I know that our outreach teams, like Ambassadors, the Diversity team, and CommOps, are working to help solve this and make things better.

What are your interests and accomplishments outside of Fedora? What of those things will help you in this role?

Outside of Fedora I’m finishing my studies in Business Informatics, working as a Marketing Specialist for Collabora Productivity, and co-organizing events and conferences together with other members at Open Labs Hackerspace. I’m also part of other open source communities, such as Nextcloud, LibreOffice, and RGSoC. Lately I’ve been very engaged with FLOSS technologies and am trying to combine my technical and non-technical background to contribute to open source communities. Being part of different open source projects has helped me a lot in getting different points of view, and the experience I have gained would help me in this role.

The post Council Elections: Interview with Jona Azizaj (jonatoni) appeared first on Fedora Community Blog.

FESCo Elections: Interview with Kevin Fenzi (kevin)

Posted by Fedora Community Blog on January 16, 2018 11:59 PM
Fedora Engineering Steering Council badge, awarded after Fedora Elections - read the Interviews to learn more about candidates

Fedora Engineering Steering Committee

This is a part of the FESCo Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Wednesday, January 17th and closes promptly at 23:59:59 UTC on Wednesday, January 24th, 2018.

Interview with Kevin Fenzi (kevin)

Questions

Describe some of the important technical issues you foresee affecting the Fedora community. What insight do you bring to these issues?

I think the coming year will continue to see technical issues around how we make Fedora. There is going to be a lot more Modularity work, which should make it much easier to maintain packages, along with continuing work on containers and other ways to consume Fedora. I think I bring a great deal of history to this discussion, so I can make sure we learn from our mistakes.

What objectives or goals should FESCo focus on to help keep Fedora on the cutting edge of open source development?

Modules and Automation. We should free our maintainers to do the hard interaction work (fixing bugs, helping users communicate with upstream, making things better) instead of grunt work. More automated testing and module building should help this a great deal.

What are the areas of the distribution and our processes that, in your opinion, need improvement the most? Do you have any ideas how FESCo would be able to help in those “trouble spots”?

We have a number of areas for improvement, but I think FESCo’s job is to coordinate and help the people working on those spots so they can make things better.

The post FESCo Elections: Interview with Kevin Fenzi (kevin) appeared first on Fedora Community Blog.

Mindshare Elections: Interview with Jared K. Smith (jsmith)

Posted by Fedora Community Blog on January 16, 2018 11:59 PM

This is a part of the Mindshare Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Wednesday, January 17th and closes promptly at 23:59:59 UTC on Wednesday, January 24th, 2018.

Interview with Jared K. Smith (jsmith)

Questions

Is there a specific task or issue you think that Mindshare should address this term?

I think our priority should be to revisit what has and hasn’t worked well in the past, and try to come up with concrete actions to help increase Fedora’s mindshare.

Please elaborate on the personal “Why” which motivates you to be a candidate for Mindshare.

Due to constraints at work, I haven’t been able to participate in Fedora as much as I would like over the last 18 months. I’ve recently changed jobs and am more easily able to contribute again, and I thought this would be a good way to give back to the Fedora community.

What are your thoughts on the impact (as an individual and then as a Mindshare group) that the group will have on the Fedora Mission?

I think this group has a strong chance of streamlining some processes and clarifying some policies that have been too complicated (and perhaps too nebulous) in the past.

The post Mindshare Elections: Interview with Jared K. Smith (jsmith) appeared first on Fedora Community Blog.

Mindshare Elections: Interview with Jona Azizaj (jonatoni)

Posted by Fedora Community Blog on January 16, 2018 11:59 PM

This is a part of the Mindshare Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Wednesday, January 17th and closes promptly at 23:59:59 UTC on Wednesday, January 24th, 2018.

Interview with Jona Azizaj (jonatoni)

  • Fedora Account: jonatoni
  • IRC: jonatoni #fedora-ambassadors #fedora-diversity #fedora-commops #fedora-g11n #fedora-sq etc
  • Fedora User Wiki Page

Questions

Is there a specific task or issue you think that Mindshare should address this term?

A task/issue I think is very important is communication between teams: not only between outreach teams like Ambassadors, Marketing, CommOps, etc., but also with technical teams (FESCo). That way every team will be informed, up to date with the latest news, and on the same page, because their activities are closely connected with each other. I’m pretty sure that Mindshare will do its best to work on this issue and avoid a lack of communication between different teams.

Please elaborate on the personal “Why” which motivates you to be a candidate for Mindshare.

Since the aim of Mindshare is to help outreach teams, and my involvement in Fedora is mostly related to those teams, that is why I wanted to join Mindshare. The responsibilities Mindshare takes on will be very helpful for the outreach teams; here I’d like to mention the template that will be created for ambassadors so they can work effectively, and a short survey to see how many active ambassadors we have, which was also one of the reasons I joined FAmSCo and work I’d like to continue. I would also give a voice to the Diversity team and help solve issues if we have any.

What are your thoughts on the impact (as an individual and then as a Mindshare group) that the group will have on the Fedora Mission?

I believe that my experience, as an individual and as part of Mindshare, will help the outreach teams communicate better with each other and with the technical teams, making our work more effective in driving the Fedora mission forward. With the whole team working together, combining and channeling our efforts on the mission and the difficulties that arise, I think we can emphasize and highlight the innovative platform we create for community members.

The post Mindshare Elections: Interview with Jona Azizaj (jonatoni) appeared first on Fedora Community Blog.

Mindshare Elections: Interview with Radka Janeková (rhea)

Posted by Fedora Community Blog on January 16, 2018 11:59 PM

This is a part of the Mindshare Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Wednesday, January 17th and closes promptly at 23:59:59 UTC on Wednesday, January 24th, 2018.

Interview with Radka Janeková (rhea)

  • Fedora Account: rhea
  • IRC: Rhea #fedora-ambassadors, #fedora-commops, #fedora-diversity, #fedora-dotnet
  • Fedora User Wiki Page

Questions

Is there a specific task or issue you think that Mindshare should address this term?

Among other things (such as improving the election process), I would like to address issues with inclusion and event organization within the regional ambassador groups.

Please elaborate on the personal “Why” which motivates you to be a candidate for Mindshare.

I have a unique point of view on various issues and aspects of organizing events that involve other contributors, and I have experience with problems that other people often can’t even see. I feel I should do something about that.

What are your thoughts on the impact (as an individual and then as a Mindshare group) that the group will have on the Fedora Mission?

I hope that we can make our community a better place *for everyone*, and I believe that the Mindshare group, being more diverse, can succeed in solving problems that were previously overlooked.

The post Mindshare Elections: Interview with Radka Janeková (rhea) appeared first on Fedora Community Blog.

Episode 78 - Risk lessons from Hawaii

Posted by Open Source Security Podcast on January 16, 2018 10:40 PM
Josh and Kurt talk about the accidental missile warning in Hawaii. We also discuss general preparedness and risk.

Smart card forwarding with Fedora

Posted by Red Hat Security on January 16, 2018 02:30 PM

Smart cards and hardware security modules (HSM) are technologies used to keep private keys secure on devices physically isolated from other devices while allowing access only to an authorized user. That way only the intended user can use that device to authenticate, authorize, or perform other functions that involve the private keys while others are prevented from gaining access. These devices usually come in the form of a USB device or token which is plugged into the local computer.

In modern "cloud" computing, it is often desirable to use such a device like a smart card on remote servers. For example, one can sign software or documents on a remote server, use the local smart card to authenticate to Kerberos, or other possible uses.

There are various approaches to tackle the problem of using a local smart card on a remote system, and on different levels of the smart card application stack. It is possible to forward the USB device holding the smart card, or forward the lower-level PC/SC protocol which some smart cards talk, or forward the high-level interface used to communicate with smart cards, the PKCS#11 interface. It is also possible to forward between systems one’s OpenPGP keys via GnuPG by using gpg-agent, or one’s SSH keys via ssh-agent. While these are very useful approaches when we are restricted to one particular set of keys, or a single application, they fail to provide a generic smart card or forwarding mechanism.

Hence, in Fedora, we followed the approach of forwarding the higher level smart card interface, PKCS#11, as it provides the following advantages:

  • Unlike USB forwarding it does not require administrator access on the remote system, nor any special interaction with the remote system’s kernel.
  • It can be used to forward more than just smart cards, that is, a Trusted Platform Module (TPM) chip or any HSM can also be forwarded over the PKCS#11 interface.
  • Unlike any application-specific key forwarding mechanism, it forwards the whole feature set of the card, allowing it to access items like X.509 certificates, secret keys, and others.

In the following sections we describe the approach and tools needed to perform that forwarding over SSH secure communication channels.

Scenario

We assume having a local workstation, and a remote server. On the local computer we have inserted a smart card (in our examples we will use a Nitrokey card, which works very well with the OpenSC drivers). We will forward the card from the workstation to the remote server and demonstrate various operations with the private key on the card.

Installing required packages

Fedora, by default, includes smart card support; the additional components required to forward the card are available as part of the p11-kit-server package, which should be installed on both client and server. For the following examples we will also use some tools from gnutls-utils; these tools can be installed with DNF as follows:

 $ sudo dnf install p11-kit p11-kit-server gnutls-utils libp11

The following sections assume both local and remote computers are running Fedora and the above packages are installed.

Setting up the PKCS#11 forwarding server on a local client

To forward a smart card to a remote server, you first need to identify which smart cards are available. To list the smart cards currently attached to the local computer, use the p11tool command from the gnutls-utils package. For example:

 $ p11tool --list-tokens
 ...
 Token 6:
         URL: pkcs11:model=PKCS%2315%20emulated;manufacturer=www.CardContact.de;serial=DENK0000000;token=UserPIN%20%28Daiki%27s%20token%29
         Label: UserPIN (Daiki's token)
         Type: Hardware token
         Manufacturer: www.CardContact.de
         Model: PKCS#15 emulated
         Serial: DENK0000000
         Module: opensc-pkcs11.so
 ...

This is the entry for the card I’d like to forward to remote system. The important pieces are the ‘pkcs11:’ URL listed above, and the module name. Once we determine which smart card to forward, we expose it to a local Unix domain socket, with the following p11-kit server command:

 $ p11-kit server --provider /usr/lib64/pkcs11/opensc-pkcs11.so "pkcs11:model=PKCS%2315%20emulated;manufacturer=www.CardContact.de;serial=DENK0000000;token=UserPIN%20%28Daiki%27s%20token%29"

Here we provide, to the server, the module location (optional) with the --provider option, as well as the URL of the card. We used the values from the Module and URL lines of the p11tool output above. When the p11-kit server command starts, it will print the address of the PKCS#11 unix domain socket and the process ID of the server:

P11_KIT_SERVER_ADDRESS=unix:path=/run/user/12345/p11-kit/pkcs11-12345
P11_KIT_SERVER_PID=12345

For later use, set the variables output by the tool on your shell prompt (e.g., copy and paste them or call the above p11-kit server command line with eval $(p11-kit server ...)).
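
To illustrate the eval form, the server prints plain shell variable assignments, so evaluating its output sets both variables in the current shell. A minimal sketch, with the server output mocked using the example values above (a real invocation would be `eval $(p11-kit server ...)` with your card’s URL):

```shell
# Mocked `p11-kit server` output (illustrative values only):
server_output='P11_KIT_SERVER_ADDRESS=unix:path=/run/user/12345/p11-kit/pkcs11-12345
P11_KIT_SERVER_PID=12345'

# eval executes the printed assignments, making the socket address and
# the server PID available to this shell for the ssh forwarding step
# and for `kill $P11_KIT_SERVER_PID` when done:
eval "$server_output"
echo "$P11_KIT_SERVER_ADDRESS"
echo "$P11_KIT_SERVER_PID"
```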

Forwarding and using the PKCS#11 Unix socket on the remote server

On the remote server, we will first forward the previously created PKCS#11 Unix socket, and then access the smart card through it. To access the forwarded socket as if it were a smart card, a dedicated PKCS#11 module, p11-kit-client.so, is provided as part of the p11-kit-server package.

Preparing the remote system for PKCS#11 socket forwarding

One important detail you should be aware of is the file system location of the forwarded socket. By convention, the p11-kit-client.so module uses the "user runtime directory" managed by systemd: the directory is created when a user logs in and removed upon logout, so the user doesn't need to clean up the socket file manually.

To locate your user runtime directory, do:

 $ systemd-path user-runtime
 /run/user/1000

The p11-kit-client.so module looks for the socket file under a subdirectory (/run/user/1000/p11-kit in this example). To enable auto-creation of the directory, do:

 $ systemctl --user enable p11-kit-client.service
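
Putting these pieces together, the full socket path the client module will use can be sketched as follows (an illustration; XDG_RUNTIME_DIR normally points at the same directory that systemd-path user-runtime prints, and the fallback below assumes the usual /run/user/<uid> convention):

```shell
# Path where p11-kit-client.so expects the forwarded socket.
socket_path="${XDG_RUNTIME_DIR:-/run/user/$(id -u)}/p11-kit/pkcs11"
echo "$socket_path"
```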

Forwarding the PKCS#11 socket

We will use ssh to forward the local PKCS#11 Unix socket to the remote server. Following the p11-kit-client convention, we will forward the socket to the remote user runtime path, so that no cleanup is required on disconnect. The remote runtime path can be obtained as follows:

$ ssh <user>@<remotehost> systemd-path user-runtime
/run/user/1000

The number at the end of the path above is your user ID on that system (and thus will vary from user to user). You can now forward the Unix domain socket with the -R option of the ssh command (after replacing the example path with the actual runtime path):

 $ ssh -R /run/user/<userID>/p11-kit/pkcs11:${P11_KIT_SERVER_ADDRESS#*=} <user>@<remotehost>
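
The ${P11_KIT_SERVER_ADDRESS#*=} expansion used above strips everything up to and including the first '=' (i.e., the unix:path= prefix), because ssh -R expects a bare filesystem path rather than the full socket address:

```shell
# Strip the shortest prefix matching '*=' to get the plain socket path.
P11_KIT_SERVER_ADDRESS=unix:path=/run/user/12345/p11-kit/pkcs11-12345
socket_path="${P11_KIT_SERVER_ADDRESS#*=}"
echo "$socket_path"
# → /run/user/12345/p11-kit/pkcs11-12345
```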

After successfully logging in to the remote host, you can use the forwarded smart card as if it were directly connected to the server. Note that if any error occurs in setting up the forwarding, you will see something like this on your terminal:

Warning: remote port forwarding failed for listen path /run/user/...

Using the forwarded PKCS#11 socket

Let’s first make sure it works by listing the forwarded smart card:

 $ ls -l /run/user/1000/p11-kit/pkcs11
 $ p11tool --provider /usr/lib64/pkcs11/p11-kit-client.so --list-tokens
 ...
 Token 0:
         URL: pkcs11:model=PKCS%2315%20emulated;manufacturer=www.CardContact.de;serial=DENK0000000;token=UserPIN%20%28Daiki%27s%20token%29
         Label: UserPIN (Daiki's token)
         Type: Hardware token
         Manufacturer: www.CardContact.de
         Model: PKCS#15 emulated
         Serial: DENK0000000
         Module: (null)
 ...

We can similarly generate or copy objects, or test certificates on the card, using the same tool. Any application that supports PKCS#11 can perform cryptographic operations through the client module.

Registering the client module for use with OpenSSL and GnuTLS apps

To use the p11-kit-client module with OpenSSL (via engine_pkcs11, provided by the libp11 package) and GnuTLS applications in Fedora, you have to register it with p11-kit. To do so for the current user, use the following commands:

$ mkdir -p ~/.config/pkcs11/modules/
$ echo "module: /usr/lib64/pkcs11/p11-kit-client.so" > ~/.config/pkcs11/modules/p11-kit-client.module

Once this is done, both OpenSSL and GnuTLS applications should work. For example:

$ URL="pkcs11:model=PKCS%2315%20emulated;manufacturer=www.CardContact.de;serial=DENK0000000;token=UserPIN%20%28Daiki%27s%20token%29"

# Generate a key using gnutls' p11tool
$ p11tool --generate-ecc --login --label test-key "$URL"

# Generate a certificate request with the previous key using openssl
$ openssl req -engine pkcs11 -new -key "$URL;object=test-key;type=private;pin-value=XXXX" \
         -keyform engine -out req.pem -text -subj "/CN=Test user"

Note that the token URL remains the same in the forwarded system as in the original one.

Using the client module with OpenSSH

To re-use the already forwarded smart card for authentication with another remote host, run ssh with the -I option pointing at p11-kit-client.so. For example:

 $ ssh -I /usr/lib64/pkcs11/p11-kit-client.so <user>@<anotherhost>
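
The same can be made persistent per host in ~/.ssh/config, using OpenSSH's PKCS11Provider option (a sketch; "anotherhost" is a placeholder for the real host name):

```
Host anotherhost
    PKCS11Provider /usr/lib64/pkcs11/p11-kit-client.so
```

With that in place, a plain `ssh anotherhost` will use the forwarded card.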

Using the forwarded socket with NSS applications

To register the forwarded smart card in NSS applications, you can set it up with the modutil command:

 $ sudo modutil -dbdir /etc/pki/nssdb -add p11-kit-client -libfile /usr/lib64/pkcs11/p11-kit-client.so
 $ modutil -dbdir /etc/pki/nssdb -list
 ...
   3. p11-kit-client
     library name: /usr/lib64/pkcs11/p11-kit-client.so
        uri: pkcs11:library-manufacturer=OpenSC%20Project;library-description=OpenSC%20smartcard%20framework;library-version=0.17
      slots: 1 slot attached
     status: loaded

      slot: Nitrokey Nitrokey HSM (010000000000000000000000) 00 00
     token: UserPIN (Daiki's token)
       uri: pkcs11:token=UserPIN%20(Daiki's%20token);manufacturer=www.CardContact.de;serial=DENK0000000;model=PKCS%2315%20emulated

Conclusion

The smart card forwarding described here makes it easy to forward your smart card, or any device accessible via PKCS#11, to the "cloud". The forwarded device can then be used by OpenSSL, GnuTLS, and NSS applications as if it were a local card, enabling a variety of applications that were not previously possible.


Fixing flatpak startup times

Posted by Alexander Larsson on January 16, 2018 09:26 AM

A lot of people have noticed that flatpak apps sometimes start very slowly. Upon closer inspection you notice this only happens the first time you run the application. Still, it gives a very poor first time impression.

So, what is causing this, and can we fix it?

The short answer to this is font-cache generation, and yes, I landed a fix today. For the longer version we have to take a detour into how flatpak and fontconfig works.

All flatpak applications use something called a runtime, which supplies the /usr that the application sees. This contains a copy of the fontconfig library, but also some basic fonts in /usr/share/fonts. However, since the /usr from the host is covered, the app cannot see the fonts installed on the host, which is not great.

To allow flatpak applications to use system fonts flatpak exposes a read-only copy of the host fonts in /run/host/fonts. The copy of fontconfig shipped in the runtime is configured to look for fonts in this location as well as /usr.

Loading font files scattered in a directory like this is very slow. You have to open each font file to read its details, like the name, so you can properly select fonts. To avoid this, fontconfig keeps a font cache, which is regenerated each time a font is installed (using the fc-cache tool). Flatpak exposes these caches to the application too.

Unfortunately the fontconfig cache format uses the absolute filename as the cache key, which breaks when we relocate the font files from /usr/share/fonts to /run/host/fonts. This means that the first time an application starts it has to open all the files and generate its own cache (which is later reused, so only the first launch is affected).

It would be much better if the existing cache on the host could be re-used, even when the directory name is changed. I’ve been working on a fix for this which has recently landed in fontconfig (and is scheduled to be released as 2.13.0 soon).

Today I landed the fix for this in the standard flatpak runtimes, which, coupled with using the new fontconfig version on the host, dramatically reduce initial launch times. For example, launching gedit the first time on my machine goes from 2 seconds to 0.5 seconds.

All users should automatically get the runtime part of the fix, but it will unfortunately take some time until all distributions have moved to the new fontconfig version. But once it's there, this problem will be fixed for good.

Do not limit yourself

Posted by Kushal Das on January 16, 2018 04:17 AM

This post is all about my personal experience in life. The random things I am going to write in this post, I’ve talked about in many 1x1 talks or chats. But, as many people asked for my view, or suggestions on the related topics, I feel I can just write all them down in one single place. If you already get the feeling that this post will be a boring one, please feel free to skip. There is no tl;dr version of it from me.

Why the title?

To explain the title of the post, I will go back a few years in my life. I grew up in a coal mine area of West Bengal, studied in the village’s Bengali medium school. During school days, I was very much interested in learning about Science, and kept doing random experiments in real life to learn things. They were fun. And I learned life lessons from those. Most of my friends, school teachers or folks I knew, kept telling me that those experiments were impossible, or they were beyond my reach. I was never a class topper, but once upon a time I wanted to participate in a science exam, but the school teacher in charge told me that I was not good enough for it. After I kept asking for hours, he finally said he will allow me, but I will have to get the fees within the next hour. Both of my parents were working, so no chance of getting any money from them at that moment. An uncle who used to run one of the local book stores then lent me the money so that I could pay the fees. The amount was very small, but the teacher knew that I didn’t get any pocket money. So, asking for even that much money within an hour was a difficult task. I didn’t get a high score in that examination, but I really enjoyed the process of going to a school far away and taking the exam (I generally don’t like taking written exams).

College days

During college days I spent most of my time in front of my computer at the hostel, or in the college computer labs. People kept laughing at me for it: batchmates, juniors, seniors, and sometimes even professors. But at the same time I found a few seniors, friends, and professors who kept encouraging whatever I did. The number of people laughing at me was always higher. Because of the experience during school days, I managed to ignore them.

Coming to the recent years

The trend continued throughout my working life. There were always more people laughing at everything I do. They kept telling me that the things I try to do have no value and are beyond my limits. I don't see myself as one of those bright developers I meet out in the world. I kept trying to do things I love, and tried to help the community in whichever way possible. Whatever I know, I learned because someone else took the time to teach me, took the time to explain it to me. Now, I keep hearing similar stories from many young contributors, my friends, from India. Many times I saw people laughing at my friends the same way they do at me, telling my friends that the things they are trying to achieve are beyond their limits. I somehow managed to meet many positive forces in my life, and I keep meeting new ones. This helped me understand that we generally bind ourselves within artificial limits. Most of the folks laughing at us never tried anything in life. It is okay if we cannot write or speak perfect English like them; English is not our primary language anyway. We can communicate as required. The community out there welcomes everyone as they are. We don't have to invent the next best programming language, or be the super rich startup person, to have good friends in life. One can always push at a personal level to learn new things, to do things which make sense to each of us. That may look totally crazy from other people's view, but it is okay to try things as you like. Once upon a time, during a 1x1 with my then manager (and lifelong mentor) Sankarshan Mukhopadhyay, he told me something which has stayed with me very strongly to this day. We were talking about things I can do, or rather try to do. Taking the example of one of my good friends from Red Hat, he explained that I may think my level is nowhere near this friend's, but if I try to learn and do things like him, I may reach 70% of his level, or 5%, or 50%. Who knows, unless I try doing those new things? While talking about hiring for the team, he also told me that we should always try to get people who are better than us; that way, we will always be in a position to learn from each other. I guess those words together changed many things in my life. The world is too large, and we all can do things in our life at a certain level. But what we can do depends on where we draw those non-existent limits in our lives.

The Python community is one such example. When I went to PyCon US for the first time in 2013, the community welcomed me the way I am. Even though almost no one knew me, I never felt that while meeting and talking to my lifetime heroes. Funnily, at the same conference, a certain senior person from India tried to explain that I should start behaving like a senior software engineer: I should stand in the corner with all the world's ego, and not talk to everyone the way I do. Later in life, the same person tried to convince me that I should stop doing anything related to community, as that would not help me make any money.

Sorry, but they are wrong on that point. I never saw any of my favorite human beings behave that way. No matter how senior people are, age- or experience-wise, they always listen to others and talk nicely with everyone. Money is not everything in life. I kept jumping around at PyCon every year, kept clicking photos or talking with complete strangers about their favorite subjects. Those little conversations later became much stronger bonds; I made new friends whom I generally meet only once a year. But the community is still welcoming. No one cared to judge me based on how much money I make. We try to follow the same in dgplug. The IRC channel #dgplug on Freenode is always filled with folks from all across the world. Some are very experienced contributors, some are just starting. But it is a friendly place, and we try to help each other. The motto of Learn yourself, teach others is still very strong among us. We try to break any such stupid limits others try to force on our lives. We dream, and we enjoy talking about that book someone just finished. We discuss our favorite food. I will end this post saying one thing again: do not bind yourself within non-existent limits. Always remember, What a great teacher, failure is (I hope I quoted Master Yoda properly). Not everything we try in life will be a super successful thing, but we can always try to learn from those incidents. You don't have to bow down in front of anyone; you can do things you love in your life without asking for others' permission.

Command Line Heroes

Posted by Maxim Burgerhout on January 16, 2018 12:00 AM

I’ve been looking forward to this for quite a while, ever since it was announced: today, the first two episodes of Command Line Heroes were published. Command Line Heroes, or CLH for short, is a series of podcasts that tells the stories of open source. It’s hosted by Saron Yitbarek, of CodeNewbie fame, and sponsored by Red Hat.

The podcast is not about how to create an alias in Bash, or how to generate a CSR with openssl. It is about the history of Linux, open source, devops and cloud.

If you’re a geek, a hacker, a developer, a programmer, or any other sort of passionate open source builder, maker, or user, you’ll definitely want to look into this.

The first episode is about the OS wars of the 80s and how Linux came to be. I’m listening to it right now, and it’s worth every second :)

You can sign up for the newsletter around Command Line Heroes here, or just pull the RSS feed into your favorite podcast client.

Photo galleries of the Almería retrocomputing museum

Posted by Ismael Olea on January 15, 2018 11:00 PM

warehouse of our retrocomputing collection

calculator

As part of recovering past content and references, I am gathering these links related to the Museo almeriense de retroinformática association, a project that three lifelong friends founded on January 6, 2004, and which has since remained in an almost catatonic state, but which has at least kept serving its purpose: preserving computing and electronic equipment (and lately, by extension, electrical equipment, calculating machines and other devices) that we value somewhat sentimentally, but with an objective eye for significance, technological impact, industrial design and historiographic value. And yes, the first thing museography experts will criticize is the name: indeed, we now know that what we maintain is only a collection, not a museum. The museum remains an aspiration, but consolidating the collection is already a serious and sufficiently complicated goal. I hope we give it more love in the future.

oscilloscope; virtual reality headset, 1990s

Exhibition at the Jornadas SLCENT

An exhibition organized at the XI Jornadas SLCENT de Informática y Electrónica (November 2014), which the I.E.S. Al-Ándalus organizes every year.

Photo gallery by Ana Mora:


Photo gallery by Paco Cantón:

Photo gallery by Paco Cantón

Appearance on Canal Sur Noticias

Posted by Ismael Olea on January 15, 2018 11:00 PM

A very brief appearance on the Almería edition of Canal Sur Noticias, explaining the context of current cybersecurity threats.

Video: https://www.youtube.com/embed/sgndWStzYY0

Thanks to the Canal Sur newsroom for their trust.

Container testing in OpenShift with the Meta-Test-Family

Posted by Petr Hracek on January 15, 2018 04:23 PM

In my last article, I mentioned that we can use the Meta-Test-Family (MTF) to validate “standalone” containers. We should not ship any container without proper testing; we should guarantee that the service in the container works properly.

Another possible way to test containers is in an OpenShift environment.

This article describes how to test containers in the “orchestrated” OpenShift world.

Fedora alone ships a huge number of containers.

MTF installation

Before we can start with the testing itself, we have to install MTF.

To install MTF from the official Fedora repositories, type:


sudo dnf install -y meta-test-family



To install MTF from the COPR repository, which contains the development version and should not be used in a production environment, type:

sudo dnf copr enable phracek/meta-test-family
sudo dnf install -y meta-test-family



To install MTF directly from GitHub, type:

git clone git@github.com:fedora-modularity/meta-test-family.git
cd meta-test-family
sudo python setup.py install



Now we can start testing containers in the OpenShift environment.

Preparing a test for OpenShift

Running your containers locally is dead-simple — just do `docker run`. But that’s not how you run your application in production — that’s OpenShift’s business. To make sure your containers are orchestrated well, you should test them in such an environment. Bear in mind that standalone and orchestrated environments are different.

What is the difference between “standalone” and “orchestrated” containers?

Standalone containers can be executed easily with a single command. Managing such containers is not so easy: you need to figure out persistent storage, backups, updates, routing, scaling — all the things you get for free with orchestrators.

The OpenShift environment has security restrictions and different persistent-storage logic; it expects pods to be stateless, and it brings support for updates, multi-node deployments, native source-to-image builds and much, much more. Deploying an orchestrator is not an easy task. This is the reason why we decided to add OpenShift support to MTF, so you can easily test your containerized application in an orchestrated environment. I’ll show how.

Before preparing and running the OpenShift environment, you have to create a test and a configuration file for MTF, in YAML format. These two files have to be in the same directory, and the tests are executed from that directory.

Configuration file for MTF


document: modularity-testing
version: 1
name: memcached
service:
    port: 11211
module:
    openshift:
        container: docker.io/modularitycontainers/memcached



The fields in the MTF YAML config file have the following meaning:

  • service.port – the port where the service is available.

  • module.openshift – the configuration part relevant only to the OpenShift environment.

  • module.openshift.container – the reference to the container that will be used for testing in OpenShift.
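
As a quick illustration of these fields, you can pull the port out of such a config with plain shell tools (a sketch only; MTF parses the YAML itself, and the awk match below works just for this simple, single-port layout):

```shell
# Extract service.port from the example MTF config.
port=$(awk '$1 == "port:" {print $2}' <<'EOF'
document: modularity-testing
version: 1
name: memcached
service:
    port: 11211
module:
    openshift:
        container: docker.io/modularitycontainers/memcached
EOF
)
echo "$port"
# → 11211
```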

Test for the memcached container

The test for the memcached container looks like this:


$ cat sanity1.py

import pexpect
from avocado import main
from avocado.core import exceptions
from moduleframework import module_framework
from moduleframework import common

class SanityCheck1(module_framework.AvocadoTest):

    """
    :avocado: enable
    """

    def test_smoke(self):
        self.start()
        session = pexpect.spawn("telnet %s %s " % (self.ip_address, self.getConfig()['service']['port']))
        session.sendline('set Test 0 100 4\r\n\n')
        session.sendline('JournalDev\r\n\n')
        common.print_info("Expecting STORED")
        session.expect('STORED')
        common.print_info("STORED was catched")
        session.close()

if __name__ == '__main__':
    main()




This test connects to memcached via telnet on the given ip_address and port. The port is specified in the MTF configuration file; I will talk about ip_address in the following sections.

How to prepare the OpenShift environment for container testing

Let’s say we don’t have the OpenShift environment installed on our laptop or PC. MTF can set it up for us.

MTF provides the command mtf-env-set.


$ sudo MODULE=openshift OPENSHIFT_LOCAL=yes mtf-env-set
Setting environment for module: openshift
Preparing environment ...
Loaded config for name: memcached
Starting OpenShift
Starting OpenShift using openshift/origin:v3.6.0 ...
OpenShift server started.

The server is accessible via web console at:
https://127.0.0.1:8443

You are logged in as:
User: developer
Password: <any value>

To login as administrator:
oc login -u system:admin



What does the command do? If the parameter OPENSHIFT_LOCAL is specified, it checks whether the packages origin and origin-clients are installed. If not, it installs them.

In this case, container testing is performed on the local machine. If we would like to test containers on a remote OpenShift instance, this step can be skipped. If the parameter is omitted, tests are executed on the remote OpenShift instance specified by the parameter OPENSHIFT_IP; more about that later on.

With this step, the environment for container testing is ready.

Running the container tests

Now let’s test the container, either on the local instance or on a remote instance.

To run the tests we use the command mtf.

The two cases differ only in the parameters passed to the mtf command.

Testing on local OpenShift instance


$ sudo MODULE=openshift OPENSHIFT_LOCAL=yes mtf sanity1.py



In this case, sanity1.py uses 127.0.0.1 as self.ip_address.

Testing on remote OpenShift instance


$ sudo OPENSHIFT_IP=<ip_address> OPENSHIFT_USER=<username> OPENSHIFT_PASSWD=<passwd> mtf sanity1.py



In this case, sanity1.py uses OPENSHIFT_IP as self.ip_address.

All other parameters stay the same.

Tests are executed from the environment where the configuration file and tests are stored (in our case, a notebook or PC) against the given OpenShift instance.

Test output


$ sudo MODULE=openshift OPENSHIFT_LOCAL=yes mtf sanity1.py
JOB ID : c2b0877ca52a14c6c740582c76f60d4f19eb2d4d
JOB LOG : /root/avocado/job-results/job-2017-12-18T12.32-c2b0877/job.log
(1/1) sanity1.py:SanityCheck1.test_smoke: PASS (13.19 s)
RESULTS : PASS 1 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0 | CANCEL 0
JOB TIME : 13.74 s
JOB HTML : /root/avocado/job-results/job-2017-12-18T12.32-c2b0877/results.html



Let’s open the job log and have a look at what it contains.


[...snip...]['/var/log/messages', '/var/log/syslog', '/var/log/system.log'])
2017-12-18 14:29:36,208 job L0321 INFO | Command line: /bin/avocado run --json /tmp/tmppfZpNe sanity1.py
2017-12-18 14:29:36,208 job L0322 INFO |
2017-12-18 14:29:36,208 job L0326 INFO | Avocado version: 55.0
2017-12-18 14:29:36,208 job L0342 INFO |
2017-12-18 14:29:36,208 job L0346 INFO | Config files read (in order):
2017-12-18 14:29:36,208 job L0348 INFO | /etc/avocado/avocado.conf
2017-12-18 14:29:36,208 job L0348 INFO | /etc/avocado/conf.d/gdb.conf
2017-12-18 14:29:36,208 job L0348 INFO | /root/.config/avocado/avocado.conf
2017-12-18 14:29:36,208 job L0353 INFO |
2017-12-18 14:29:36,208 job L0355 INFO | Avocado config:
2017-12-18 14:29:36,209 job L0364 INFO | Section.Key [...snip...]

:::::::::::::::::::::::: SETUP ::::::::::::::::::::::::

2017-12-18 14:29:36,629 avocado_test L0069 DEBUG|

:::::::::::::::::::::::: START MODULE ::::::::::::::::::::::::




First, MTF verifies that the application does not already exist in the OpenShift environment:


2017-12-18 14:29:36,629 process L0389 INFO | Running 'oc get dc memcached -o json'
2017-12-18 14:29:36,842 process L0479 DEBUG| [stderr] Error from server (NotFound): deploymentconfigs.apps.openshift.io "memcached" not found
2017-12-18 14:29:36,846 process L0499 INFO | Command 'oc get dc memcached -o json' finished with 1 after 0.213222980499s



In the next step, MTF verifies that no pod exists in OpenShift:


2017-12-18 14:29:36,847 process L0389 INFO | Running 'oc get pods -o json'
2017-12-18 14:29:37,058 process L0479 DEBUG| [stdout] {
2017-12-18 14:29:37,059 process L0479 DEBUG| [stdout] "apiVersion": "v1",
2017-12-18 14:29:37,059 process L0479 DEBUG| [stdout] "items": [],
2017-12-18 14:29:37,059 process L0479 DEBUG| [stdout] "kind": "List",
2017-12-18 14:29:37,059 process L0479 DEBUG| [stdout] "metadata": {},
2017-12-18 14:29:37,059 process L0479 DEBUG| [stdout] "resourceVersion": "",
2017-12-18 14:29:37,059 process L0479 DEBUG| [stdout] "selfLink": ""
2017-12-18 14:29:37,060 process L0479 DEBUG| [stdout] }
2017-12-18 14:29:37,064 process L0499 INFO | Command 'oc get pods -o json' finished with 0 after 0.211796045303s



The following step creates an application with the label mtf_testing and with the name taken from the container tag in the config.yaml file:


2017-12-18 14:29:37,064 process L0389 INFO | Running 'oc new-app -l mtf_testing=true docker.io/modularitycontainers/memcached --name=memcached'
2017-12-18 14:29:39,022 process L0479 DEBUG| [stdout] --> Found Docker image bbc8bba (5 weeks old) from docker.io for "docker.io/modularitycontainers/memcached"
2017-12-18 14:29:39,022 process L0479 DEBUG| [stdout]
2017-12-18 14:29:39,022 process L0479 DEBUG| [stdout] memcached is a high-performance, distributed memory object caching system, generic in nature, but intended for use in speeding up dynamic web applications by alleviating database load.
2017-12-18 14:29:39,022 process L0479 DEBUG| [stdout]
2017-12-18 14:29:39,022 process L0479 DEBUG| [stdout] Tags: memcached
2017-12-18 14:29:39,023 process L0479 DEBUG| [stdout]
2017-12-18 14:29:39,023 process L0479 DEBUG| [stdout] * An image stream will be created as "memcached:latest" that will track this image
2017-12-18 14:29:39,023 process L0479 DEBUG| [stdout] * This image will be deployed in deployment config "memcached"
2017-12-18 14:29:39,023 process L0479 DEBUG| [stdout] * Port 11211/tcp will be load balanced by service "memcached"
2017-12-18 14:29:39,023 process L0479 DEBUG| [stdout] * Other containers can access this service through the hostname "memcached"
2017-12-18 14:29:39,023 process L0479 DEBUG| [stdout]
2017-12-18 14:29:39,023 process L0479 DEBUG| [stdout] --> Creating resources with label mtf_testing=true ...
2017-12-18 14:29:39,032 process L0479 DEBUG| [stdout] imagestream "memcached" created
2017-12-18 14:29:39,043 process L0479 DEBUG| [stdout] deploymentconfig "memcached" created
2017-12-18 14:29:39,063 process L0479 DEBUG| [stdout] service "memcached" created
2017-12-18 14:29:39,064 process L0479 DEBUG| [stdout] --> Success
2017-12-18 14:29:39,064 process L0479 DEBUG| [stdout] Run 'oc status' to view your app.
2017-12-18 14:29:39,069 process L0499 INFO | Command 'oc new-app -l mtf_testing=true docker.io/modularitycontainers/memcached --name=memcached' finished with 0 after 2.00025391579s



The next step verifies that the application is really Running, and on which IP address it is reachable:


2017-12-18 14:29:46,201 process L0389 INFO | Running 'oc get service -o json'
2017-12-18 14:29:46,416 process L0479 DEBUG| [stdout] {
2017-12-18 14:29:46,417 process L0479 DEBUG| [stdout] "apiVersion": "v1",
2017-12-18 14:29:46,417 process L0479 DEBUG| [stdout] "items": [
2017-12-18 14:29:46,417 process L0479 DEBUG| [stdout] {
2017-12-18 14:29:46,417 process L0479 DEBUG| [stdout] "apiVersion": "v1",
2017-12-18 14:29:46,417 process L0479 DEBUG| [stdout] "kind": "Service",
2017-12-18 14:29:46,417 process L0479 DEBUG| [stdout] "metadata": {
2017-12-18 14:29:46,417 process L0479 DEBUG| [stdout] "annotations": {
2017-12-18 14:29:46,417 process L0479 DEBUG| [stdout] "openshift.io/generated-by": "OpenShiftNewApp"
2017-12-18 14:29:46,417 process L0479 DEBUG| [stdout] },
2017-12-18 14:29:46,418 process L0479 DEBUG| [stdout] "creationTimestamp": "2017-12-18T13:29:39Z",
2017-12-18 14:29:46,418 process L0479 DEBUG| [stdout] "labels": {
2017-12-18 14:29:46,418 process L0479 DEBUG| [stdout] "app": "memcached",
2017-12-18 14:29:46,418 process L0479 DEBUG| [stdout] "mtf_testing": "true"
2017-12-18 14:29:46,418 process L0479 DEBUG| [stdout] },
2017-12-18 14:29:46,418 process L0479 DEBUG| [stdout] "name": "memcached",
2017-12-18 14:29:46,418 process L0479 DEBUG| [stdout] "namespace": "myproject",
2017-12-18 14:29:46,418 process L0479 DEBUG| [stdout] "resourceVersion": "2121",
2017-12-18 14:29:46,419 process L0479 DEBUG| [stdout] "selfLink": "/api/v1/namespaces/myproject/services/memcached",
2017-12-18 14:29:46,419 process L0479 DEBUG| [stdout] "uid": "7f50823d-e3f7-11e7-be28-507b9d4150cb"
2017-12-18 14:29:46,419 process L0479 DEBUG| [stdout] },
2017-12-18 14:29:46,419 process L0479 DEBUG| [stdout] "spec": {
2017-12-18 14:29:46,419 process L0479 DEBUG| [stdout] "clusterIP": "172.30.255.42",
2017-12-18 14:29:46,419 process L0479 DEBUG| [stdout] "ports": [
2017-12-18 14:29:46,419 process L0479 DEBUG| [stdout] {
2017-12-18 14:29:46,419 process L0479 DEBUG| [stdout] "name": "11211-tcp",
2017-12-18 14:29:46,419 process L0479 DEBUG| [stdout] "port": 11211,
2017-12-18 14:29:46,420 process L0479 DEBUG| [stdout] "protocol": "TCP",
2017-12-18 14:29:46,420 process L0479 DEBUG| [stdout] "targetPort": 11211
2017-12-18 14:29:46,420 process L0499 INFO | Command 'oc get service -o json' finished with 0 after 0.213701963425s
2017-12-18 14:29:46,420 process L0479 DEBUG| [stdout] }
2017-12-18 14:29:46,420 process L0479 DEBUG| [stdout] ],
2017-12-18 14:29:46,420 process L0479 DEBUG| [stdout] "selector": {
2017-12-18 14:29:46,420 process L0479 DEBUG| [stdout] "app": "memcached",
2017-12-18 14:29:46,421 process L0479 DEBUG| [stdout] "deploymentconfig": "memcached",
2017-12-18 14:29:46,421 process L0479 DEBUG| [stdout] "mtf_testing": "true"
2017-12-18 14:29:46,421 process L0479 DEBUG| [stdout] },
2017-12-18 14:29:46,421 process L0479 DEBUG| [stdout] "sessionAffinity": "None",
2017-12-18 14:29:46,421 process L0479 DEBUG| [stdout] "type": "ClusterIP"
2017-12-18 14:29:46,421 process L0479 DEBUG| [stdout] },
2017-12-18 14:29:46,421 process L0479 DEBUG| [stdout] "status": {
2017-12-18 14:29:46,421 process L0479 DEBUG| [stdout] "loadBalancer": {}
2017-12-18 14:29:46,421 process L0479 DEBUG| [stdout] }
2017-12-18 14:29:46,422 process L0479 DEBUG| [stdout] }
2017-12-18 14:29:46,422 process L0479 DEBUG| [stdout] ],
2017-12-18 14:29:46,422 process L0479 DEBUG| [stdout] "kind": "List",
2017-12-18 14:29:46,422 process L0479 DEBUG| [stdout] "metadata": {},
2017-12-18 14:29:46,422 process L0479 DEBUG| [stdout] "resourceVersion": "",
2017-12-18 14:29:46,422 process L0479 DEBUG| [stdout] "selfLink": ""
2017-12-18 14:29:46,422 process L0479 DEBUG| [stdout] }



In the last phase, standalone tests are executed.


2017-12-18 14:29:46,530 output L0655 DEBUG| Expecting STORED
2017-12-18 14:29:46,531 output L0655 DEBUG| STORED was catched
2017-12-18 14:29:46,632 avocado_test L0069 DEBUG|

:::::::::::::::::::::::: TEARDOWN ::::::::::::::::::::::::

2017-12-18 14:29:46,632 process L0389 INFO | Running 'oc get dc memcached -o json'
2017-12-18 14:29:46,841 process L0479 DEBUG| [stdout] {
2017-12-18 14:29:46,841 process L0479 DEBUG| [stdout] "apiVersion": "v1",
2017-12-18 14:29:46,841 process L0479 DEBUG| [stdout] "kind": "DeploymentConfig",
2017-12-18 14:29:46,841 process L0479 DEBUG| [stdout] "metadata": {



At the end of the tests, let’s verify that the service is really no longer running in the OpenShift environment. This can be checked with the oc status command:


$ sudo oc status
In project My Project (myproject) on server https://127.0.0.1:8443

You have no services, deployment configs, or build configs.
Run 'oc new-app' to create an application.



As we can see, we are able to test an arbitrary container, and at the end the OpenShift environment is clean.

Summary

As this article shows, writing tests for containers is really easy, and we can guarantee that a container really works properly, just as we do for packages in the RPM world.

In the near future, we would like to extend MTF with S2I testing and with testing containers using OpenShift templates.

MTF documentation is available here.

Simpleline – new way how to write your Text UI

Posted by rhinstaller on January 15, 2018 03:37 PM

Simpleline is a text UI framework, originally part of the Anaconda installer project. It is designed for line-based machines and tools (e.g. a serial console), so every new line is appended to the bottom of the screen. Printed lines are never rewritten!

It is written completely in Python 3, with the possibility of using non-Python event loops. By default you can use the GLib event loop or Simpleline’s own event loop. With the exception of these optional event loops, Simpleline has almost no dependencies on external libraries.

I know the ncurses library is widely used for text user interfaces, but we (the Anaconda developers) needed something easy to display on line-based devices and services such as a serial console. Thanks to this library, you can even create a nice, simple UI that could be printed by a fax machine if you wanted to :).

How to use

The best learning sources can be found in the examples directory in the GitHub repository, and you can read the Guide to Simpleline section of the documentation. However, some basic usage of Simpleline is shown here too, to give an idea of how Simpleline works:

from simpleline import App
from simpleline.render.screen import UIScreen
from simpleline.render.screen_handler import ScreenHandler
from simpleline.render.widgets import TextWidget


# UIScreen is the main building block of Simpleline. Every screen
# the user will see should inherit from UIScreen.
class HelloWorld(UIScreen):

    def __init__(self):
        # Set title of the screen.
        super().__init__(title=u"Hello World")

    def refresh(self, args=None):
        # Fill the self.window attribute with a WindowContainer and set the screen title as its header.
        super().refresh()
        widget = TextWidget("Body text")
        self.window.add_with_separator(widget)


if __name__ == "__main__":
    # Initialize application (create scheduler and event loop).
    App.initialize()

    # Create our screen.
    screen = HelloWorld()

    # Schedule screen to the screen scheduler.
    # This can be called only after App.initialize().
    ScreenHandler.schedule_screen(screen)

    # Run the application. You must have some screen scheduled
    # otherwise it will end in an infinite loop.
    App.run()

The output from the simple Hello World example above:

$ ./run_example.sh 00_basic
================================================================================
================================================================================
Hello World

Body text

Please make a selection from the above ['c' to continue, 'q' to quit, 'r' to
refresh]:

If a user presses r and then Enter to refresh, the same screen is printed again. This is what will be printed to the terminal:

$ ./run_example.sh 00_basic
================================================================================
================================================================================
Hello World

Body text

Please make a selection from the above ['c' to continue, 'q' to quit, 'r' to
refresh]: r
================================================================================
================================================================================
Hello World

Body text

Please make a selection from the above ['c' to continue, 'q' to quit, 'r' to
refresh]:

As you can see, the screen is not rewritten, only printed again at the bottom. This is the expected behavior: the current screen is always at the bottom, but you can still see the whole history. This behavior makes working with line-based machines and tools much easier.

Improve Simpleline

This library is still young. It is mature enough to use, but the goal is to polish a few more things before releasing version 1.0. If you have any interesting ideas, or if you want to help with development, please go to the GitHub page and create an issue or open a pull request. I will gladly discuss your ideas in the #anaconda channel on the Freenode IRC network.

Upgrade Fedora 26 ke Fedora 27

Posted by Fedora Indonesia on January 15, 2018 03:01 PM
Now that Fedora 27 has been officially released, if you want to upgrade your system, it is recommended that you back up your data and system before performing the upgrade. Update your software from the command line with sudo dnf upgrade --refresh Next, install the DNF plugin by typing the following command: sudo dnf install dnf-plugin-system-upgrade After running the commands above … Continue reading "Upgrade Fedora 26 ke Fedora 27"
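The excerpt above is truncated by the aggregator. For context, here is a sketch of the whole upgrade sequence; the last two commands are an assumption based on the usual dnf-plugin-system-upgrade workflow, not quoted from the post:

```shell
$ sudo dnf upgrade --refresh
$ sudo dnf install dnf-plugin-system-upgrade
# The remaining steps are assumed from the standard workflow:
$ sudo dnf system-upgrade download --releasever=27
$ sudo dnf system-upgrade reboot
```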

Editing XFA PDF forms on Linux/Fedora

Posted by Josef Strzibny on January 15, 2018 01:33 PM

PDF should be this nice universal format that many government institutions now work with. But what if they require you to fill in XFA forms inside their template PDFs? The ones I need to fill in for my insurance company are certainly not supported by the standard Document Viewer shipped with Fedora. This weekend I tried installing Master PDF Editor and guess what? It works like a charm for me. They even offer various RPM builds for CentOS 6/7, and they work on my Fedora out of the box. Lucky me. Thanks a lot for this!

Building An API With Django 2.0: Part I

Posted by Jeff Sheltren on January 15, 2018 09:18 AM
We’ve helped build many interesting websites at Tag1. Historically, we started as a Drupal shop in 2007, heavily involved in the ongoing development of that popular PHP-based CMS. We also design and maintain the infrastructures on which many of these websites run. That said, we’ve long enjoyed applying our knowledge and skills for building sustainable and high-performing systems to other technologies as well.

In this blog series, we’re going to build a backend API server for managing users on a high-traffic website using the Python-based Django framework. We’re going to assume you’re generally comfortable with Python, but new to Django. In this first blog of the series, we’ll build a simple registration and login system which can be used by a single page app or a mobile app. Coming from a Drupal CMS background, it can initially be surprising to learn that such a simple task requires additional libraries and custom code. This is because Django is a framework, not a CMS. As you read through this first blog, you’ll gain a general understanding of how Django works, and how to add and use libraries. We’ll create a stateless REST API, using JSON Web Tokens for authentication. And we’ll tie it all together with consistent paths. You can follow along and write out the code yourself, or view it online on GitHub. Future blogs in this series will add support for two-factor authentication and unit testing, allowing us to automatically verify that all our functionality is working as designed. Read more

RHL'18 in Saint-Cergue, Switzerland

Posted by Daniel Pocock on January 15, 2018 08:02 AM

RHL'18 was held at the centre du Vallon à St-Cergue, the building in the very center of this photo, at the bottom of the piste:

People from various free software communities in the region attended for a series of presentations, demonstrations, socializing and ski. This event is a lot of fun and I would highly recommend that people look out for the next edition. (subscribe to rhl-annonces on lists.swisslinux.org for a reminder email)

Ham radio demonstration

I previously wrote about building a simple antenna for shortwave (HF) reception with software defined radio. That article includes links to purchase all the necessary parts from various sources. Everything described in that article, together with some USB sticks running Debian Hams Live (bootable ham radio operating system), some rolls of string and my FT-60 transceiver, fits comfortably into an OSCAL tote bag like this:

It is really easy to take this kit to an event anywhere, set it up in 10 minutes and begin exploring the radio spectrum. Whether it is a technical event or a village fair, radio awakens curiosity in people of all ages and provides a starting point for many other discussions about technological freedom, distributing stickers and inviting people to future events. My previous blog contains photos of what is in the bag and a video demo.

Open Agriculture Food Computer discussion

We had a discussion about progress building an Open Agriculture (OpenAg) food computer in Switzerland. The next meeting in Zurich will be held on 30 January 2018, please subscribe to the forum topic to receive further details.

Preparing for Google Summer of Code 2018

In between eating fondue and skiing, I found time to resurrect some of my previous project ideas for Google Summer of Code. Most of them are not specific to Debian, several of them need co-mentors, please contact me if you are interested.

An update on ongoing Meltdown and Spectre work

Posted by Fedora Magazine on January 15, 2018 05:17 AM

Last week, a series of critical vulnerabilities called Spectre and Meltdown were announced. Because of the nature of these issues, the solutions are complex and require fixing delicate code. The fixes for Meltdown are mostly underway. The Meltdown fix for x86 is KPTI, which has been merged into the mainline Linux tree and many stable trees, including the ones Fedora uses. Fixes for other arches are close to being done and should be available soon. Fixing Spectre is more difficult and requires fixes across multiple areas.

Similarly to Meltdown, Spectre takes advantage of speculation done by CPUs. Part of the fix for Spectre is preventing the CPU from speculating in particular vulnerable sequences. One solution developed by Google and others is to introduce “retpolines”, which do not allow speculation. A sequence of code that might allow dangerous speculation is replaced with a retpoline, which will not speculate. The difficult part of this solution is that the compiler needs to know where to place a retpoline, so a complete solution involves the compiler as well.

The first part of the work necessary for retpoline is now done. It should be completely merged in the next few days and available in Fedora stable releases shortly. These patches by themselves provide a degree of protection against Spectre attacks, but more work is needed for a complete solution. The compiler support that provides further protection is still under review by upstream developers. Support for other arches is ongoing.

An alternative to the retpoline patches involves exposing some hardware features to more tightly control speculation. Some CPUs have a feature called Indirect Branch Restricted Speculation (IBRS). When this feature is enabled, userspace programs are further restricted in how they are able to speculatively execute instructions. Fully supporting this feature requires microcode updates, some of which are available now with others available shortly. IBRS provides a more complete solution without the need for compiler support but at a higher performance cost. The IBRS patches are still under review and should be merged eventually but will not be available in time for 4.15. When the IBRS patches are available, we will be backporting them to Fedora stable branches.

Both IBRS and retpoline cover the “variant 2” version of Spectre. The “variant 1” version of Spectre doesn’t have a solution with a quick and catchy name. The solution for variant 1 involves scanning the code for sequences that may be problematic. The method for scanning the code tends to produce many false positives (sequences that are not actually vulnerable) so upstream developers are trying to narrow down which parts of the code actually need fixing. Fixes for sequences which are known to be vulnerable have been merged.

Although Spectre is an important security issue, just as important is careful review of fixes to make sure the solution is maintainable. Rushing a fix could cause more problems in the future. The Fedora team is continually monitoring Spectre fixes to bring them to you when they are ready.
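As these fixes land, you can see what mitigation status your kernel reports. A small sketch follows; the sysfs interface it reads is an assumption for your system, since it shipped alongside these fixes (kernel 4.15 and later) and is not described in the article itself:

```shell
# Print the kernel's reported status for each known CPU vulnerability.
# The vulnerabilities directory (assumed here) exists only on kernels
# that already carry these fixes; on older kernels we note its absence.
grep -r . /sys/devices/system/cpu/vulnerabilities/ 2>/dev/null \
    || echo "vulnerabilities interface not present (kernel < 4.15)"
```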

All systems go

Posted by Fedora Infrastructure Status on January 12, 2018 08:19 PM
New status good: Everything seems to be working. for services: Ask Fedora, Ipsilon, Badges, Blockerbugs, Package Updates Manager, Fedora Infrastructure Cloud, COPR Build System, Darkserver, Documentation website, Fedora elections, Account System, Fedora Messaging Bus, Fedora Calendar, FedoraHosted.org Services, Fedora pastebin service, FreeMedia, The Koji Buildsystem, Koschei Continuous Integration, Kerberos, Mailing Lists, Mirror List, Mirror Manager, Fedora Packages App, Fedora People, Package Database, Package maintainers git repositories, Tagger, Fedora websites, Fedora Wiki, Zodbot IRC bot

There are scheduled downtimes in progress

Posted by Fedora Infrastructure Status on January 12, 2018 07:37 PM
New status scheduled: Reboots in progress, services should be down soon for services: Ask Fedora, Ipsilon, Badges, Blockerbugs, Package Updates Manager, Fedora Infrastructure Cloud, COPR Build System, Darkserver, Documentation website, Fedora elections, Account System, Fedora Messaging Bus, Fedora Calendar, FedoraHosted.org Services, Fedora pastebin service, FreeMedia, The Koji Buildsystem, Koschei Continuous Integration, Kerberos, Mailing Lists, Mirror List, Mirror Manager, Fedora Packages App, Fedora People, Package Database, Package maintainers git repositories, Tagger, Fedora websites, Fedora Wiki, Zodbot IRC bot

Slice of Cake #22

Posted by Brian "bex" Exelbierd on January 12, 2018 02:20 PM

A slice of cake

Last week as the FCAIC I:

  • Lots of expense processing. Really, the start of the year should be lighter :D
  • Talked with another community about AsciiBinder, our documentation engine, as they may adopt it as well
  • A bunch of meetings that got compressed into the smaller week, including one about Modular Documentation

À la mode

  • I was away from 6-9 January so I got less done than normal.

Cake Around the World

I’ll be traveling some and hope you’ll ping me for coffee if you’re nearby.

  • DevConf.cz, Brno, Czech Republic - 27-28 January
  • Fedora CommOps FAD, Brno, Czech Republic - 29 January - 1 February
  • Grimoire/CHAOSS Con, Brussels, Belgium - 2 February
  • FOSDEM, Brussels, Belgium - 3-4 February

Note: Posting this again on Friday. This makes Monday a lot nicer :)

10 Fedora Women Days across the world

Posted by Fedora Community Blog on January 12, 2018 08:30 AM

The Diversity Team encouraged local communities to gather, present the accomplishments of women in the Fedora Project, and thank them. We are happy to see that 10 Fedora Women Days happened in different regions to promote the participation of more women and to raise awareness about the gender gap in tech communities.

Different topics were covered during the events, not only for people already familiar with our community but especially for newcomers intrigued by the open source world and willing to join the Fedora Project. This year we presented in Guwahati, Bangalore, Tirana, Managua, Cusco, Puno, Pune, Lima, Brno and Prishtina, spreading the word about Fedora and saying thank you to all the women contributors to our project.

Even though the events were dedicated to women, people of all identities were welcome to participate or give a talk. We are glad to see how much interest there was in these events in different local communities and how successful they were, which makes the decision to organize them again next year an easy one.

Read the event reports

Find more details about each FWD and check their group photos:

  1. FWD in Guwahati
  2. FWD in Bangalore
  3. FWD in Tirana
  4. FWD in Managua
  5. FWD in Cusco
  6. FWD in Puno
  7. FWD in Pune
  8. FWD in Lima
  9. FWD in Brno
  10. FWD in Prishtina

What we accomplished

  • 10 Fedora Women Days in 10 different cities
  • approximately 200 attendees (~40 speakers)
  • 900 FWD stickers
  • awarded the FWD badge to 32 people who already had a FAS account

Special thanks go to the event organizers for the amazing Fedora Women Days we had this year, and to the speakers for sharing their knowledge and experience with other people.

Until next year, keep contributing to open source! 😉

The post 10 Fedora Women Days across the world appeared first on Fedora Community Blog.

Submit Wallpaper for Fedora 28 Supplemental Wallpaper!

Posted by Fedora Magazine on January 12, 2018 01:20 AM

Each release, the Fedora Design team works with the community on a set of 16 additional wallpapers. Users can install and use these to supplement the standard wallpaper. Submissions are now open for the Fedora 28 Supplemental Wallpapers, and will remain open until February 13, 2018.

Have you always wanted to start contributing to Fedora, but didn’t know how? Contributing a supplemental wallpaper is one of the easiest ways to get started as a Fedora contributor.

What exactly are the supplemental wallpapers?

Supplemental wallpapers are the non-default wallpapers provided with Fedora. Each release, the Fedora Design team works with the community on a set of 16 additional wallpapers. Users can install and use these to supplement the standard wallpaper.

If you are looking for some inspiration when submitting, here are the winners from the last Supplemental wallpapers package:

Fedora 26 wallpapers: Bluebird, Bluerose, and Alternative Blue

Dates and deadlines

The submission phase opens January 2, 2018 and ends February 12 at 23:59 UTC.

Important note: submissions made during the last hours may, in certain circumstances, not make it into the vote if there is no time to do the legal research.

The legal research is done by hand and is very time-consuming, so please help by following the guidelines correctly and submitting only work that has a correct license.

Voting will open automatically on February 13, 2018 and remain open until February 25, 2018 at 23:59 UTC.

How to contribute

Fedora uses the Nuancier application to manage submissions and the voting process. Nuancier has the full list of rules and guidelines for submitting a wallpaper. The recommended license for submissions is CC-BY-SA. Note that we can not accept NC (no commercial use) or ND (no derivatives) submissions.

To make a submission you need a Fedora account; if you don’t have one, create it here first. To be allowed to vote, you must be a member of at least one group other than cla_done or cla_fpca.

The number of submissions a contributor can make is limited: a participant can only upload two submissions to Nuancier. If you submit multiple versions of the same image, the team will choose one version and count it as one submission.

Submissions from previous supplemental wallpaper contests will not be selected. Creations that do not meet the required minimum height are rejected. Denied submissions also count, so if you make two submissions and both are rejected, you cannot submit more. Use your best judgment for your submissions.


Setting up OpenVPN client with systemd template unit files

Posted by Amit Saha on January 12, 2018 01:00 AM

First, I installed openvpn:

$ sudo dnf -y install openvpn

Then, I used the following systemd unit file from here to create a systemd service for creating a new VPN connection on Fedora 27:

$ cat /etc/systemd/system/openvpn@.service 

[Unit]
Description=OpenVPN service for %I
After=syslog.target network-online.target
Wants=network-online.target
Documentation=man:openvpn(8)
Documentation=https://community.openvpn.net/openvpn/wiki/Openvpn24ManPage
Documentation=https://community.openvpn.net/openvpn/wiki/HOWTO

[Service]
Type=notify
PrivateTmp=true
WorkingDirectory=/etc/openvpn/client/%i/
ExecStart=/usr/sbin/openvpn --status %t/openvpn-server/status-%i.log --status-version 2 --suppress-timestamps --cipher AES-256-GCM --ncp-ciphers AES-256-GCM:AES-128-GCM:AES-256-CBC:AES-128-CBC:BF-CBC --config /etc/openvpn/client/%i/%i.conf
CapabilityBoundingSet=CAP_IPC_LOCK CAP_NET_ADMIN CAP_NET_BIND_SERVICE CAP_NET_RAW CAP_SETGID CAP_SETUID CAP_SYS_CHROOT CAP_DAC_OVERRIDE
LimitNPROC=10
DeviceAllow=/dev/null rw
DeviceAllow=/dev/net/tun rw
ProtectSystem=true
ProtectHome=true
KillMode=process
RestartSec=5s
Restart=on-failure

[Install]
WantedBy=multi-user.target

The WorkingDirectory, set to /etc/openvpn/client/%i, holds the client configuration and everything else I needed. If you needed support for two VPN connections, you would have two directories here, one for each. In my case, the files in my client/fln directory are: vpn.key, vpn.crt, ca.crt, fln.conf and tls-auth.key.
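For the fln instance described above, the layout the unit expects would look something like this (a hypothetical listing built from the file names mentioned; your connection name and files will differ):

```shell
$ ls /etc/openvpn/client/fln/
ca.crt  fln.conf  tls-auth.key  vpn.crt  vpn.key
```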

Once I created the unit file, I enabled and started it as follows:

$ sudo systemctl enable openvpn@fln.service
$ sudo systemctl start openvpn@fln.service

If I had a second configuration, I would do something like:

$ sudo systemctl enable openvpn@fln2.service
$ sudo systemctl start openvpn@fln2.service

Troubleshooting

If something goes wrong, you can see the logs via journalctl:

$ sudo journalctl -u openvpn@fln
..

References

Remove audio noise from video using audacity and avidemux

Posted by Robbi Nespu on January 12, 2018 12:00 AM

Audio noise

I always forget how to set up the best mixer settings for sound card input and how to remove noise from a video screen capture. So I wrote this post as a personal note on how to remove audio noise from video. Please note that I am using Fedora 27.

First of all, I need to set up the sound card via alsamixer. Press F6 to select the sound card (HDA Intel PCH) and press F4 to show only the recording mixer controls. Set them as in the image below:

HDA Intel PCH

This is the most optimized mixer setting for capturing audio, but we still get a little noise.

Record some audio or video using whatever application you like (SimpleScreenRecorder is my favourite for recording my desktop), save it, and load it into Audacity. If you don’t have Audacity yet, install it from the RPM Fusion repository:

$ sudo dnf install audacity-freeworld

Audacity: selecting an audio sample for the noise reduction profile

The view above shows an example of selecting a sample of noisy audio to use as the profile for the noise reduction effect.

After that, select the whole audio track and apply the noise reduction effect. You can then use the amplify effect and apply noise reduction again until you are satisfied with the audio. Normally, a single pass should be enough.

When you are done, export the audio as an MP3 file and save it.

Open Avidemux and load the original video with the noisy audio. Then click Audio > Select Track and choose the MP3 file we exported from Audacity.

Avidemux: loading the audio file exported from Audacity

We are almost finished: just click Save Video and Avidemux will render a new video for you without the audio noise. Mission complete!
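If you prefer the command line for this final muxing step, the same result can be sketched with ffmpeg (not part of the original workflow; the file names are hypothetical):

```shell
# Copy the video stream from the noisy recording and take the audio
# from the cleaned-up MP3 exported by Audacity; file names are examples.
$ ffmpeg -i noisy-video.mp4 -i clean-audio.mp3 \
      -map 0:v:0 -map 1:a:0 -c:v copy -c:a copy output.mp4
```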

Copr Modularity in retrospect

Posted by Jakub Kadlčík on January 12, 2018 12:00 AM

This article is about the journey we have made since the Fedora modularity project started and we decided to get involved and provide modularity features in Copr. It has been a long and difficult road, and we are not at its end yet, because the whole modularity project is a living organism that is still evolving and changing. Still, we are happy to be part of it.

First demo

More than a year ago, in let’s say the dark ages, nothing existed. Just the void, … well ok, enough metaphors. At that time, the whole of modularity was just an idea; no real implementation existed. Then we came up with possibly one of the first prototypes. It was a simple feature that allowed a user to assign a modulemd yaml file to a chroot in the project settings. When packages were built in such a project/chroot and the repodata was created, we added the modulemd to it. This was the first piece of modularity code in Copr and the beginning of our journey.

The “webform”

A decision needed to be made: which audience did we want to target? We realized that the modularity team (later known as Factory 2.0) would focus at first on experienced users, i.e. themselves. For us, this meant that the space targeting complete beginners would not be covered soon, so we decided to focus on that area.

Ask yourself a question: if your entire community consists of people with diverse but at least fundamental knowledge of RPM packaging, what is the most challenging task they must do to distribute their software as a module? They all need to learn how to write a modulemd yaml file. It isn’t rocket science, but it is time-consuming, and at that time the only guide was a documented template file. That could be enough to discourage a lot of people. We saw a chance to attract people by simplifying things.

The idea of “the webform” was born. We implemented a form for creating modules without any knowledge requirements. A user could just specify the module name, version, and release (currently name, stream, and version) and select which of the packages built in the particular project should be part of the module. Copr then parsed the form, constructed a yaml file via the python-modulemd library, and continued the pipeline so the module was built (according to what was considered “to be built” at that time). This was a killer feature for Copr. You can see a demo here.

DNF 1.x

Copr was able to build modules and generate repofiles for them. However, nobody could be sure whether it was done correctly and whether the output was in the expected format, because there was no official way to install modules on a client machine. To be precise, some prototypes existed: modularity features for DNF were implemented as the fm-dnf-plugin and fm-modulemd-resolver. There was a little problem, though: they didn’t work with the most recent version of the modulemd format. We decided to update the code so we could use the DNF plugin in our tests, to ensure that we produce the module repository the way we are supposed to. The funny thing is that meanwhile the DNF plugin was declared obsolete in favor of DNF 2.0 with built-in support for modularity. Hence, my pull requests PR#5 and PR#1 were destined to remain forever unmerged.

Module Build Service

aka MBS, aka fm-orchestrator, formerly known as Říďa (from the Czech comic book Čtyřlístek), is a service for orchestrating a module build in a configured buildsystem, such as Koji, Copr or Mock.

As the complexity of building modules grew, it made sense not to reinvent the wheel and instead use and improve the tools that Factory 2.0 had already created. The most important for us was MBS, as it was supposed to resolve module build dependencies, configure the buildroot, and schedule the builds in the right order. Moreover, the logic was abstracted into general code common to all builders, so from our perspective it was a handy little black box managed and internally reworked by the people who defined Modularity.

It was on us to implement the Copr builder and deploy our own instance of this service. As we quickly found out, we were the first ones to do so. There was no Ansible playbook for deployment, and moreover it wasn’t even packaged for Fedora. Packaging was eventually done by upstream, and we came up with the playbook ourselves. You can see the demo of our first builds via MBS here.

Buildroot from modules

As you can see from the previous demo, we used to build modules in the Fedora rawhide chroot. This contradicted one of the original modularity goals: a fully modular system built in a fully modular buildroot. We solved it by building modules in the custom-1-x86_64 chroot with an enabled external repository pointing to the base-runtime (later platform) module. You can see the demo here.

We took it even further. Although the MBS constructs a buildroot from modules, it actually kind of cheats. It determines which modules should be installed into the buildroot and then finds out which packages are provided by those modules; those packages are installed into the buildroot. This may make sense as a workaround, but I consider it wrong from a design perspective. It led us to implement support in Mock for installing modules directly into the buildroot, and to update MBS so it specifies dependencies to Copr as modules instead of packages.

Modules with Copr packages

How to build modules from Copr packages? Wait for an upcoming article; it will be very short, with a lot of images :-).

MBS Plugins

It turned out that keeping the code of third-party builders such as Copr in the fm-orchestrator repository was quite ineffective. The Copr release process became dependent on the MBS release, and at the same time the MBS upstream didn’t know the implementation details of the Copr side. We mutually agreed that custom builders should be moved out to their own repositories in the form of plugins. That day, module-build-service-copr was born.

Hybrid Modularity

Just before the Christmas break, the highly awaited article Modularity is Dead, Long Live Modularity! was published. If you haven’t read it yet, I recommend you do so. It explains how the Modularity design is going to change in the foreseeable future. From the Copr perspective, the most important revelation is that the minimal buildroot is not going to be composed of modules anymore. Instead, the standard Fedora buildroot will be used.

Conclusion

What should you take from this already too long article? We very much appreciate your feedback. Things in modularity change very fast and very often, and it is sometimes difficult for us to find the right way to do things and to keep up with the news. Your opinions and ideas help us prioritize the agenda, speed up the process of figuring out the best solutions for us, and ultimately determine the direction we should take in the future. Thank you for that.

In the following article, we are going to talk about how you can currently build modules in Copr and what the current user interface looks like.

InfoSec Basketball Rebounds

Posted by Susan Lauber on January 11, 2018 07:55 PM
I was reading a post about red team vs. blue team and all the support for purple teams, and was struck with inspiration by one of the comments:

"(offense wins games, defense wins championships)"

They do not appear to be implying basketball, and given the timing of the comment, they may have been thinking more about football. For me though, as a huge fan of women’s basketball, I recognized it as a variation on a quote from the great Pat Summitt:

"Offense sells tickets, defense wins games, rebounding wins championships."

And suddenly a hobby and work collide. My brilliant inspiration comes from how this really does apply to information security as well.

Everyone loves good offense. For some it is a high-flying dunk or a buzzer beater from half court. For others it is a successful, innovative attack as part of a red team. We attend “ethical hacking” courses because breaking into things is fun. Big exploits make the news and get cute little logos. Other crackers just keep working, getting two points here and there until it adds up on the scoreboard.

Defense is what is needed to win the game, though. It doesn’t matter how many points you put up if the other team is allowed to put up more. Can you prevent the problems in the first place? Have you done the basics of password security and patch management? Are you monitoring the logs? And even when defense is done well, it might not make a stat line. Sure, a couple of blocks and a few steals here and there, but the standard box score doesn’t list shot clock violations, and a zone defense rarely makes the SportsCenter top ten. The blue teams that make attacks expensive do not get badges (logos).

Shutouts are very rare and never happen in championship matches. Defense will not stop everything, and even the best offense misses a lot of shots. Rebounds are how we react to a miss. Defensive rebounds end an opportunity for more points. The offense got a shot off, but if they miss, did you stand and watch, or did you go after the ball and box out the opponent? If you are on offense and your shot bounces back, your goal is to secure the ball and try again: attack another way, find another opening, maybe even the same opening if you didn’t get boxed out. In information security, are you monitoring logs, are your alerts set up correctly, are you reacting to even the missed attempt, or are you just waiting and letting them take another shot? Are you boxing out?

Offense often comes down to skill, and on the court, natural ability plays a big part. Defense can be taught with basic, repetitive drills. Rebounding is about heart. You don’t have to be the tallest or biggest or strongest. Who wants it more? Who will go after the ball? Who can read the play and be in the correct position to respond?

And no one wins anything without contributions in all areas and a whole lot of teamwork.

When it comes to programming and coming up with creative attacks, I do not have the natural abilities to make a good red team member. I am much more comfortable practicing defense and jumping into position to grab a rebound. Along with some post-game analysis and armchair coaching!

-SML

Tell us your Fedora 2017 Year in Review

Posted by Justin W. Flory on January 11, 2018 08:45 AM

The past year was a busy one for Fedora. The community released Fedora 26 and 27. Different sub-projects of Fedora gave their share of time toward the overall success of Fedora. But in a project as big as Fedora, it’s hard to keep track of what everyone is doing! If you’re a developer, you likely know more about what’s happening inside the code of Fedora, but you may not know what’s happening with the Fedora Ambassadors. Or maybe you’re involved with Globalization (G11n) and translation and know what’s happening there, but you’re not as familiar with what the Fedora Design team is working on.

Share your 2017 “Year in Review”

To communicate to the rest of the Fedora community what we worked on in 2017, the Fedora Community Operations team (CommOps) encourages every sub-project of Fedora to put together its own “Year in Review” article on the Fedora Community Blog. The CommOps team has created an easy-to-use template to document your top three highlights of 2017 and one goal for 2018.

Read the original announcement of the 2017 “Year in Review” on the Fedora Community Blog. Contributors are encouraged to work with their sub-projects to come up with three 2017 highlights and one 2018 goal. These are only a minimum. If your sub-project has a lot to say or has many big tasks for 2018, include more highlights or more goals! The only requirement is to meet the minimum; there is no limit on what you can include.

Share your Fedora 2017 Year in Review

<iframe class="wp-embedded-content" data-secret="6VkCLZyBio" frameborder="0" height="338" marginheight="0" marginwidth="0" sandbox="allow-scripts" scrolling="no" security="restricted" src="https://communityblog.fedoraproject.org/fedora-2017-year-in-review/embed/#?secret=6VkCLZyBio" title="“Share your Fedora 2017 Year in Review” — Fedora Community Blog" width="600"></iframe>

Where to find “Year in Review” posts

All “Year in Review” articles end up on the Fedora Community Blog. See the examples from 2015 for some inspiration. New posts can be found under the “Year in Review 2017” tag.

Start discussing this now and craft your own “Year in Review” post for 2017! Sub-projects are encouraged to have a draft on the Community Blog before the end of February.

The post Tell us your Fedora 2017 Year in Review appeared first on Justin W. Flory's Blog.

Episode 77 - npm and the supply chain

Posted by Open Source Security Podcast on January 11, 2018 02:42 AM
Josh and Kurt talk about the recent npm happenings and what they mean for the supply chain, and end with some thoughts on how maybe none of this matters.

<iframe allowfullscreen="" height="90" mozallowfullscreen="" msallowfullscreen="" oallowfullscreen="" scrolling="no" src="http://html5-player.libsyn.com/embed/episode/id/6134997/height/90/theme/custom/autoplay/no/autonext/no/thumbnail/yes/preload/no/no_addthis/no/direction/backward/render-playlist/no/custom-color/6e6a6a/" style="border: none;" webkitallowfullscreen="" width="100%"></iframe>

Show Notes


Watching the meltdown.

Posted by Susan Lauber on January 11, 2018 01:45 AM
I have been watching Meltdown and Spectre unfold from the sidelines. Other than applying available updates, I’m just watching and absorbing the process of the disclosure. This one appears to be midway along a long road.

I teach mostly administrators. I teach some developers. I teach those in, or desiring to be in, infosec. I like teaching security topics. I think securing systems requires more people thinking about security from the beginning of design, and as an everyday, no-big-deal part of life. A question I ask with these newsworthy issues is: what normal practices can mitigate even part of the problem? There are two big basics to always keep in mind: least privilege and patch management. Issues like Shellshock and VENOM were mostly mitigated from the beginning with SELinux enabled (least privilege), and WannaCry had little impact on systems patched long ago when the SMB bug was first found and fixed.

However, in some cases, both exploits and accidents come from doing something that no one else thought of trying. This is why I like open source. There is the option (not always used) for more people trying different things and finding better uses as well as potential flaws. Any type of cooperation and collaboration can be the source of some of these findings including pull requests, conference talks, or corporations working with academic research projects.

Spectre and Meltdown are not the first bugs of their kind, nor the last. Anything that grabs or holds more information than is requested - such as a cache or speculation - is bound to eventually grab and expose something it shouldn’t, or allow some type of injection. I gave some kudos to the team getting credit for this discovery and got some pushback from a friend defending another friend who gave a related talk at a conference in 2016. Maybe not enough credit is given to those who speculated (pun intended) on this type of problem in the past. This timeline lists several, and some retweets from people I trust to be smarter than me on this topic point to ideas even older.


The Google Project Zero team is getting the recognition because of a variety of pieces in a big puzzle: right place, right time; the privilege of backing from a large company; their use of the embargo and disclosure process across the industry; a new proof of concept and a published paper; indications of ways to exploit it at scale; a mitigation. It all comes together, and suddenly more than just the researchers realize the scope of the risk that has been taken. Intel is getting more than its share of the blame, too, since people recognize a company name faster than a general concept or a part of a computer. And, yes, in some cases there is also too much fluff and fear in the reporting.

The embargo and disclosure process is pretty interesting too. I sat in a talk a couple of years ago about how a large company deals with this in the open source world, and Mike Bursell has a post with thoughts about it again in reference to this case. I actually had an idea that something big was coming from the combination of noise and speculation about patches being submitted, and from who was NOT talking about them.

We are still discovering the full impact of the CPU design decisions that were made. Sure, they are serious, especially as more people become able to automate attacks against the vulnerabilities, but they are also nothing to panic about. This is not just an Intel problem. It is a market-driven quest for more power with less money, despite various risks. We are all to blame. Apply the patches, monitor the impact, invest in the next generation of inventors and inventions. In other words, business as usual.
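On the "apply the patches, monitor the impact" point: Linux kernels from 4.15 onward export per-vulnerability mitigation status under sysfs, which makes the monitoring part a one-liner. A small sketch (the directory only exists on new enough kernels, so older systems fall through to the else branch):

```shell
# Report Meltdown/Spectre mitigation status, if the kernel exposes it.
vulndir=/sys/devices/system/cpu/vulnerabilities
if [ -d "$vulndir" ]; then
  # Each file is named for a vulnerability (e.g. meltdown, spectre_v1)
  # and contains a one-line status such as "Mitigation: PTI".
  for f in "$vulndir"/*; do
    printf '%s: %s\n' "$(basename "$f")" "$(cat "$f")"
  done
else
  echo "no sysfs vulnerability reporting (kernel older than 4.15?)"
fi
```

Either branch prints something, so the script is safe to drop into a monitoring check on any kernel version.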

The choices were made in favor of optimization, so will things be a little slower now? Probably for many people, but not everyone. Will we get over it? I would think so.

What will happen in the long run with the latest news? I predict many people will choose performance over security. I predict that a few years from now, when someone finds a scalable way to exploit one or more of the variations, people will have forgotten that they should have updated BIOS, firmware, and kernels today. If we are lucky, they will already have the latest patches deployed and will just need to make some configuration changes. But when has luck worked out as the best security practice?

Links I have collected that helped me understand:

SANS Institute webcast.


Fedora Magazine KTPI overview.

OpenStack, What you need to know.

Project Zero technical overview.

xkcd

My favorite analogy thread - the library comparison - (more were rounded up here).





-SML