Fedora People

Cockpit 153

Posted by Cockpit Project on October 17, 2017 10:00 AM

Cockpit is the modern Linux admin interface. We release regularly. Here are the release notes from version 153.

Add oVirt package

This version introduces the “oVirt Machines” page on Fedora for controlling oVirt virtual machine clusters. This code was moved into Cockpit as it shares a lot of code with the existing “Machines” page, which manages virtual machines through libvirt.

This feature is packaged as cockpit-ovirt; when installed, it replaces the “Machines” page.
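If you want to try it, a minimal sketch of the installation on Fedora (assuming the package is available in your configured repositories) looks like this:

sudo dnf install cockpit-ovirt

After reloading Cockpit in your browser, the “oVirt Machines” page should appear in place of the plain “Machines” page.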

oVirt overview

Packaging cleanup

This release fixes a lot of small packaging issues that were spotted by rpmlint/lintian.

Try it out

Cockpit 153 is available now:

Protect your wifi on Fedora against KRACK

Posted by Fedora Magazine on October 17, 2017 12:23 AM

You may have heard about KRACK (for “Key Reinstallation Attack”), a vulnerability in WPA2-protected Wi-Fi. This attack could let attackers decrypt, forge, or steal data, despite WPA2’s improved encryption capabilities. Fear not — fixes for Fedora packages are on their way to stable.

Guarding against KRACK

New wpa_supplicant packages contain the fix for Fedora 25, 26, and 27, as well as Rawhide. The maintainers have submitted them to the stable repos. They should show up within a day or so for most users.

To update your Fedora system, use this command once you configure sudo. Type your password at the prompt, if necessary.

sudo dnf update wpa_supplicant

Fedora provides worldwide mirrors at many download sites to better serve users. Some sites refresh their mirrors at different rates. If you don’t get an update right away, wait until later in the day.

Updating immediately

If you’re worried about waiting until stable updates show up, use this process to get the packages. First, install the bodhi-client package:

sudo dnf install bodhi-client

Then note the build ID for your Fedora system:

  • Fedora 27 prerelease: wpa_supplicant-2.6-11.fc27
  • Fedora 26: wpa_supplicant-2.6-11.fc26
  • Fedora 25: wpa_supplicant-2.6-3.fc25.1

Now download the packages for your system and update them. This example is for Fedora 26:

mkdir ~/krack-update && cd ~/krack-update
bodhi updates download wpa_supplicant-2.6-11.fc26
dnf update ./wpa_supplicant*.rpm

If your system is on Rawhide, run sudo dnf update to get the update.

Copr stack dockerized!

Posted by Jakub Kadlčík on October 17, 2017 12:00 AM

Lately, I decided to dockerize the whole Copr stack and utilize it for development. It is quite nifty and just ridiculously easy to use. In this article, I want to show you how to run it, describe what is inside the containers and explain my personal workflow.

There are no special prerequisites; you only need a properly configured docker and the docker-compose command installed.

Usage

Have I already said that it is ridiculously easy to use? Just run the following command in the copr root directory.

docker-compose up -d

It builds images for all Copr services and runs containers from them. Once it is done, you should be able to open http://127.0.0.1:5000 and successfully build a package in it.

How so?

There is a docker-compose.yaml file in the copr root directory, which describes all the Copr services and ties them together. At this point, we have a frontend, distgit, backend and database. This may change in the future by splitting the functionality across more containers.

The copr repository also contains a directory called docker with the corresponding Dockerfile for each service.

All the images are built in the same way. First, the whole copr repository is copied in. Then tito is used to build the appropriate package for the service, which is installed, configured and started. The only exception is the database, which just sets up a simple PostgreSQL server.

The parent process for the services running in the containers is supervisord, so they can be controlled via the supervisorctl command.

A live version of the copr repository is also bind-mounted into the containers at /opt/copr, as sketched below.
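As a rough illustration of how the compose file might tie these pieces together, here is a simplified sketch of a single service definition; the service name, port mapping and Dockerfile path are only assumptions, not a copy of the real docker-compose.yaml:

version: '3'
services:
  frontend:
    build:
      context: .
      dockerfile: docker/frontend/Dockerfile
    ports:
      - "5000:80"        # the frontend is reachable on http://127.0.0.1:5000
    volumes:
      - .:/opt/copr      # live copr checkout bind-mounted into the container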

Cheat sheet

How can I see running containers?

docker-compose ps

Why doesn’t a container start as expected?

docker-compose logs --follow

How can I open a shell in the container?

docker exec -it <name> bash

How can I see running services in the container?

supervisorctl status

How can I control services in the container?

supervisorctl start/stop/restart all/<name>

My personal workflow

Are you familiar with using containers for development? Then you can stop reading here. This section describes my personal preferences and you might not endorse them. That is fine, I am not trying to force you to do it my way. However, I think that it is a good idea to describe them, so new team members (or even the current ones) can take inspiration from them. Also, if everyone described their setup, we would be clear on what we need to support.

In case you haven’t read the post about my Vagrant setup, you should do so. The workflow remains exactly the same; only the tools have changed. Let’s take the frontend as an example.

Once we have a running container for the frontend, we can open a shell in it and do

supervisorctl stop httpd
python /opt/copr/frontend/coprs_frontend/manage.py runserver -p 80 -h 0.0.0.0

to stop the service from the pre-installed package and run a built-in server from the live code. This allows us to try uncommitted changes (duh) or use tools like ipdb.

Alternatively, for distgit, we can use

supervisorctl stop copr-dist-git
PYTHONPATH=/opt/copr/dist-git /opt/copr/dist-git/run/importer_runner.py

Resources

  1. https://developer.fedoraproject.org/tools/docker/about.html
  2. https://docs.docker.com/compose/overview/
  3. https://devcenter.heroku.com/articles/local-development-with-docker-compose

What I have found interesting in Fedora during the week 41 of 2017

Posted by Fedora Community Blog on October 16, 2017 06:49 PM

After another week, I would like to share some activities which happened during the past week:

Fedora 27 Server Beta is No-Go

On Thursday, 2017-Oct-12, we had the Go/No-Go meeting for the delayed F27 Beta release of the Server (modular) edition. The result of the meeting was No-Go due to a missing Release Candidate compose. We are going to run another round of the Go/No-Go meeting on Thursday, 2017-Oct-19 at 17:00 UTC, where we will determine the readiness of the F27 Server edition for the Beta release. On Friday 2017-Oct-13 FESCo allowed use of the Rain/Target date scheduling for the F27 Server Beta, so even if we slip the F27 Server Beta by one week, the Final F27 Server release is not affected, for now.

New OpenStack SIG

Haïkel has announced a new SIG focused on OpenStack.

Firefox 57 update

The planned update of the Firefox browser to version 57 seems to have provoked interesting discussions on the devel@ mailing list (“Why is Fx 57 in Updates Testing?“, “Call for testing – Firefox 57“) and was even brought to FESCo. Reading the whole discussion reminds me how difficult it is to balance on the edge between the latest updates and stability.

And of course, the list above is not exhaustive and there is much more going on in the Fedora community. It just summarizes some tasks which have drawn my attention.

 

The post What I have found interesting in Fedora during the week 41 of 2017 appeared first on Fedora Community Blog.

Episode 66 - Objects in mirror are less terrible than they appear

Posted by Open Source Security Podcast on October 16, 2017 03:59 PM
Josh and Kurt talk about Equifax again, Kaspersky, TLS CAs, coming change, social security numbers, and Minecraft.



Show Notes



Shaking the tin for LVFS: Asking for donations!

Posted by Richard Hughes on October 16, 2017 03:50 PM

tl;dr: If you feel like you want to donate to the LVFS, you can now do so here.

Nearly 100 million files are downloaded from the LVFS every month, the majority being metadata to know what updates are available. Although each metadata file is very small it still adds up to over 1TB in transferred bytes per month. Amazon has kindly given the LVFS a 2000 USD per year open source grant which more than covers the hosting costs and any test EC2 instances. I really appreciate the donation from Amazon as it allows us to continue to grow, both with the number of Linux clients connecting every hour, and with the number of firmware files hosted. Before the grant sometimes Red Hat would pay the bandwidth bill, and other times it was just paid out of my own pocket, so the grant does mean a lot to me. Amazon seemed very friendly towards this kind of open source shared infrastructure, so kudos to them for that.

At the moment the secure part of the LVFS is hosted in a dedicated Scaleway instance, so any additional donations would be spent on paying this small bill and perhaps more importantly buying some (2nd hand?) hardware to include as part of our release-time QA checks.

I already test fwupd with about a dozen pieces of hardware, but I’d feel a lot more comfortable testing different classes of device with updates on the LVFS.

One thing I’ve found that also works well is taking a chance and buying a popular device we know is upgradable and adding support for the specific quirks it has to fwupd. This is an easy way to get karma from a previously Linux-unfriendly vendor before we start discussing uploading firmware updates to the LVFS. Hardware on my wanting-to-buy list includes a wireless network card, a fingerprint scanner and SSDs from a couple of different vendors.

If you’d like to donate towards hardware, please donate via LiberaPay or ask me for PayPal/BACS details. Even if you donate €0.01 per week it would make a difference. Thanks!

Upgrade Fedora Workstation to Fedora 27 Beta

Posted by Fedora Magazine on October 16, 2017 08:00 AM

In case you missed the news, Fedora 27 Beta was released last week. If you’re running Fedora Workstation, it’s easy to upgrade to the Beta release. Then you can try out some of the new features early. This article explains how.

Some helpful advice

The Fedora 27 Beta is still just what it says it is: a beta. That means some features are still being tuned up before the final release. However, it works well for many users, especially those who are technically skilled. You might be one of them. Before you upgrade, here are some things to keep in mind.

First, back up your user data. While there are no problems currently known that would risk your data, it’s a good idea to have a recent backup for safety.

Second, remember this process downloads all the update data over your internet connection. It will take some time, based on your connection speed. Upgrading the system also requires a reboot, and takes some time to install the updated packages. Don’t perform this operation unless you have time to wait for it to finish.

If you move to the Beta, you’ll receive updates for testing during the prerelease period. When the Beta goes to Final, you’ll receive an update to the fedora-release package. This will shut off the updates-testing stream. Your system will then automatically follow the Fedora 27 stable release. You don’t need to do anything to make this happen.

Upgrading your system

Open a Terminal and type the following command:

gsettings set org.gnome.software show-upgrade-prerelease true

This setting lets the Software application detect the availability of a prerelease, in this case Fedora 27 Beta.

Normally you have to wait for the Software service to refresh its information. However, you can force it to do this in several ways. One is to kill the service and restart it manually:

pkill gnome-software

Now open the Software app. Visit the Updates tab. After a short time, the Software app retrieves fresh information about the prerelease and advertises it to you.

Use the Download Now button to download the upgrade data for Fedora 27 Beta. Follow the prompts to reboot and install the upgrade, which will take some time. When your system restarts after the upgrade, you’ll be running the Fedora 27 Beta.

Copr - Vagrant development

Posted by Jakub Kadlčík on October 16, 2017 12:00 AM

[OUTDATED] This article explains my local setup and personal workflow for developing Copr. This doesn’t necessarily mean that it is the best way to do it, but it is the way that best suits my personal preferences. Other team members probably approach this differently.

Theory

Developing in Vagrant has a lot of advantages, but it also brings a few unpleasant things. You can basically set up your whole environment just by running vagrant up, which allows you to test your code on a production-like machine. This is absolutely awesome. The bad thing (while developing) is that on such a machine you can’t do things like “I am gonna change this line and see what happens” or interactive debugging via ipdb.

What you actually have to do is commit the change first, build a package from your commit, install it and then restart your server. Or, if you are lazy, commit the change and reload the whole virtual machine. Either way, it will be slow and painful. In this article I am going to explain how you can benefit from Vagrant features but still develop comfortably and “interactively”.

Prerequisites

# You should definitely not turn off your firewall
# I am too lazy to configure it, though
$ sudo systemctl stop firewalld
$ sudo dnf install vagrant

Example workflow

Let’s imagine that we want to make some change in frontend code. First of all, we have to setup and start our dev environment. The following command will run virtual machines for frontend and distgit.

$ vagrant up

Then we will connect to the machine that we want to modify - in this case, the frontend.

$ vagrant ssh frontend

Now, as described in the Frontend section below, we will stop the production server and run a development one from the /vagrant folder, which is synchronized with our host machine. This means that every change from your IDE is immediately reflected on your web server. For instance, try putting import ipdb; ipdb.set_trace() somewhere in the code and reload copr-frontend in the browser. You will see the debugger in your terminal.

ipdb>

Similarly, you can use the same workflow for distgit.

Frontend

# [frontend]
sudo systemctl stop httpd
sudo python /vagrant/frontend/coprs_frontend/manage.py runserver -p 80 -h 0.0.0.0

Dist-git

# [dist-git]
sudo systemctl stop copr-dist-git
sudo su copr-service
cd
PYTHONPATH=/vagrant/dist-git /vagrant/dist-git/run/importer_runner.py

Backend

There is no Vagrant support for the backend. We use a docker image for it instead. Let’s leave this topic for another post.

Follow up

I’ve been using this setup for over a year now and it has served me quite well. Right until I wanted to run several machines at once, plus an IDE and a browser, on a laptop with limited RAM. That is one of the reasons why I decided to dockerize the whole Copr stack and move away from Vagrant. See my current workflow in a newer post - The whole Copr stack dockerized!

FAF URL setup of ABRT client Ansible role

Posted by ABRT team on October 15, 2017 09:31 AM

We recently added a new option to the ABRT client Ansible role to set up the URL of the FAF server, i.e., where crash reports from the ABRT client are reported.

This small improvement will be appreciated mostly by people using, or willing to use, their own installation of FAF for gathering crash reports from the ABRT client for custom analysis, whether that FAF installation runs on their own server (easy to do using the FAF Ansible role) or in a docker container.

Usage

To use the ABRT client Ansible role, declare it in your playbook:

   ...
   roles:
     - ansible-abrt-client-role
   ...

By default, the FAF URL is set to https://retrace.fedoraproject.org/faf, which is the main FAF installation for Fedora. To adjust it, just put this in your playbook:

   ...
   roles:
     - { role: ansible-abrt-client-role, faf_url: 'your.faf.url' }
   ...

Or, using the newer syntax:

   ...
   tasks:
   - include_role:
       name: ansible-abrt-client-role
     vars:
       faf_url: 'your.faf.url'
   ...

tmux config

Posted by Paul Mellors [MooDoo] on October 15, 2017 09:02 AM

I’ve just started playing with tmux in the i3 window manager, just to try something new. For my benefit in case I need to reinstall, this is what I have in it so far. It’s nowhere near complete, and just a start, but it works for me 🙂

Feel free to comment with additions or what you use in yours.

#change binding key
unbind-key C-b
set-option -g prefix C-a

bind-key C-a send-prefix

bind-key v split-window -v
bind-key h split-window -h

set -g mouse on

bind -n M-Left select-pane -L
bind -n M-Right select-pane -R
bind -n M-Up select-pane -U
bind -n M-Down select-pane -D

set -g status off
 

 


Reproducible Copr builds

Posted by Jakub Kadlčík on October 14, 2017 12:00 AM

Well, sort of. Has your package failed to build in Copr? We are introducing a new tool called copr-rpmbuild, which allows you to reproduce the build locally and makes the debugging process much easier.

Behold copr-rpmbuild

copr-rpmbuild is a simple tool for reproducing Copr builds. Depending on your needs, it can produce an SRPM or RPM package. The best thing is that we use this tool internally within the Copr infrastructure, so you can be sure that it reproduces the build under exactly the same conditions.

The basic usage is straightforward:

copr-rpmbuild --build-id <id> --chroot <name>

This will obtain a task definition from Copr and attempt to build the RPM package into the /var/lib/copr-rpmbuild/results/ directory. Besides the binary package itself, the generated mock configs and logs are stored there as well.

If you are interested only in the SRPM package, use:

copr-rpmbuild --srpm --build-id <id>

Disclaimer

Did I get you with the buzzword reproducible builds? Well, let me clarify what it means in this context. Copr stores a definition of every build. We call such a definition a build task, and it contains the information needed to create the desired buildroot and produce a package in it. For instance, there is the name of the mock chroot that should be used, what repositories should be enabled there, what packages should be installed, … and of course information about what is going to be built in it.

Reproducing a build means creating a local build from the same task as the original one. It is not guaranteed that the output will always be 100% the same. It may vary when using a different mock version or a non-standard configuration on the client side, and in situations when the package embeds its own build timestamp.

Configuration

When no other config is specified, the pre-installed /etc/copr-rpmbuild/main.ini is used. This is also the configuration file used in the Copr stack. You can specify a different config file with the --config <path> parameter. Such a config doesn’t necessarily have to contain all the possible options, just the ones that you want to change. Let me suggest two alternative configurations.

User-friendly paths

Do not touch system directories.

[main]
resultdir = ~/copr-rpmbuild/results
lockfile = ~/copr-rpmbuild/lockfile
logfile = ~/copr-rpmbuild/main.log
pidfile = ~/copr-rpmbuild/pid

Different Copr instance

Use Copr staging instance as an example.

[main]
frontend_url = http://copr-fe-dev.cloud.fedoraproject.org
distgit_lookaside_url = http://copr-dist-git-dev.fedorainfracloud.org/repo/pkgs
distgit_clone_url = http://copr-dist-git-dev.fedorainfracloud.org/git

Examples

# Default usage
copr-rpmbuild --build-id 123456 --chroot fedora-27-x86_64

# Build only SRPM package
copr-rpmbuild --srpm --build-id 123456

# Use different config
copr-rpmbuild -c ~/my-copr-rpmbuild.ini --build-id 123456 --chroot fedora-27-x86_64

Policy hacking

Posted by Allan Day on October 13, 2017 02:48 PM

Last week I attended the first ever GNOME Foundation hackfest in Berlin, Germany. The hackfest was part of an effort to redefine how the GNOME Foundation operates and is perceived. There are a number of aspects to this process:

  1. Giving the Board of Directors a higher-level strategic oversight role.
  2. Empowering our staff to take more executive action.
  3. Decentralising the Foundation, so that authority and power is pushed out to the community.
  4. Engaging in strategic initiatives that benefit the GNOME project.

Until now, the board has largely operated in an executive mode: each meeting we decide on funding requests, trademark questions and whatever other miscellaneous issues come our way. While some of this decision-making responsibility is to be expected, it is also fair to say that the board spends too much time on small questions and not enough on bigger ones.

One of the reasons for last week’s hackfest was to try and shift the board from its executive role to a more legislative one. To do this, we wrote and approved spending policies, so that expenditure decisions don’t have to be made on a case-by-case basis. We also approved a budget for the financial year and specified budget holders for some lines of expenditure.

With these in place the board is now in a position to relinquish control over trivial spending decisions and to take up a high-level budget oversight role. Going forward the board will have its eye on the big budget picture and not on the detail. Smaller spending decisions will be pushed out to our staff, to individual budget holders from the community and to committees.

It is hoped that these changes will allow us to play a more strategic role in the future. This transition will probably take some time yet, and there are some other areas that still need to be addressed. However, with the Berlin hackfest we have made a major step forward.

Huge thanks to the good people at Kinvolk for providing the venue for the event, and to the GNOME Foundation for sponsoring me to attend.

PHP version 7.0.25RC1 and 7.1.11RC1

Posted by Remi Collet on October 13, 2017 08:20 AM

Release Candidate versions are available in the remi-test repository for Fedora and Enterprise Linux (RHEL / CentOS) to allow more people to test them. They are available as Software Collections, for a parallel installation, a perfect solution for such tests (for x86_64 only), and also as base packages.

RPMs of PHP version 7.1.11RC1 are available as an SCL in the remi-test repository and as base packages in the remi-test repository for Fedora 26-27 or the remi-php71-test repository for Fedora 24-25 and Enterprise Linux.

RPMs of PHP version 7.0.25RC1 are available as an SCL in the remi-test repository and as base packages in the remi-test repository for Fedora 25 or the remi-php70-test repository for Fedora 24 and Enterprise Linux.

PHP version 5.6 is now in security mode only, so no more RC will be released.

PHP version 7.2 is in development phase, version 7.2.0RC2 is also available.

Installation: read the Repository configuration and choose your version.

Parallel installation of version 7.0 as Software Collection:

yum --enablerepo=remi-test install php70

Parallel installation of version 7.1 as Software Collection:

yum --enablerepo=remi-test install php71

Update of system version 7.0:

yum --enablerepo=remi-php70,remi-php70-test update php\*

Update of system version 7.1:

yum --enablerepo=remi-php71,remi-php71-test update php\*

Notice: version 7.1.11RC1 is also available in Fedora 27 (updates-testing) and version 7.2.0RC4 in Fedora rawhide for QA.

EL-7 packages are built using RHEL-7.4.

The RC version is usually the same as the final version (no change accepted after RC, except for security fixes).

Software Collections (php70, php71)

Base packages (php)

Check disk usage at the command line with du

Posted by Fedora Magazine on October 13, 2017 08:00 AM

End users and system administrators sometimes struggle to get exact disk usage numbers by folder (directory) or file. The du command can help. It stands for disk usage, and is one of the most useful commands to report disk usage. This utility ships in the coreutils package included by default in Fedora.

You can list the size of a file:

$ du anaconda-ks.cfg
4 anaconda-ks.cfg

The -h switch changes the output to use human readable numbers:

$ du -h anaconda-ks.cfg
4.0K anaconda-ks.cfg

In most cases, your goal is to find disk usage in and under a folder, or its contents. Keep in mind this command is subject to the file and folder permissions that apply to those contents. So if you’re working with system folders, you should probably use the sudo command to avoid running into permission errors.

This example prints a list of contents and their sizes under the root (/) folder:

sudo du -shxc /*

Here’s what the options represent:

  • -s = summarize
  • -h = human readable
  • -x = one file system — don’t look at directories not on the same partition. For example, on most systems this command will mainly ignore the contents of /dev, /proc, and /sys.
  • -c = grand total

You can also use the --exclude option to ignore a particular directory’s disk usage:

sudo du -shxc /* --exclude=proc

You can provide file extensions to exclude, like .iso, .txt, or *.pdf. You can also exclude entire folders and their contents:

sudo du -sh --exclude=*.iso

You can also limit the depth to walk the directory structure using --max-depth. You can print the total for a directory (or file, with --all) only if it is N or fewer levels below the command line argument. If you use --max-depth=0, you’ll get the same result as with the -s option.

sudo du /home/ -hc --max-depth=2

Basic permissions in GNU/Linux with chmod

Posted by Fernando Espinoza on October 12, 2017 07:30 PM

Basic permission structure for files. There are 3 basic attributes for simple files: read, write and execute. >> Read permission (read): if you have read permission on a file, you can view its contents. >> Write permission (write): if you have write permission on a file, you can modify the file. You can add, overwrite or... Continue reading →


Fedora 27 gets support for AAC

Posted by Fedora-Blog.de on October 12, 2017 05:59 PM

As Christian Schaller writes in his blog, Fedora 27 (Workstation) will be able to play AAC audio files without packages from third-party repositories.

For this, a version of the AAC implementation modified by Google, along with the corresponding GStreamer plugins, is being integrated into Fedora 27.

However, he has not yet said when the packages will be available or what they will be called.

Taking Stock, Making Plans.

Posted by Susan Lauber on October 12, 2017 05:02 PM
My company has a couple of projects that are about to wrap up. In both cases the client has hired a full time employee to pick up the work. This is great for them and normal for my business but it does mean finding "the next big thing" around the holidays.

My company is small. Really small. OK, it is just me. So I have the flexibility to take my time finding the next big project. I still have smaller, recurring contracts to carry through.

Before I get into what kind of excitement I want from my next big thing, I am looking forward to taking a few weeks off and maybe getting to a few of the many "if only I had the time" projects that are on my list. At least spending *some* time on wish items in between searching for the next big thing.

I often wish I could be more diligent about writing and presenting. Writing here and even contributing to opensource.com. Presenting at conferences, which I have done in the past, but also at local meetups. The small groups are a lot more fun! I had a couple of conference proposals that did not make the cut recently but that I think are still valuable. One I even planned to write an article on and still just have not gotten it done. It is on the list.

As I have watched my Goddaughter grow up, I have meant to get more involved in sharing my knowledge with kids. I took her to a Kid's Day event before a Red Hat Summit one year and we had a blast. Since I first explored the CISSP certification I have had the interest to go through the Safe and Secure Online training so I can look for volunteer opportunities. I also think the Techgirlz program is awesome (I might be a bit biased since a fellow instructor went to work there) and they have a local chapter. It is on the list.

When I got started contributing to open source communities it was with the Fedora Project and specifically the Docs team. I have not been anywhere near as active with Fedora lately and I miss it. I still consider myself an active Ambassador with each class I teach but I have not really contributed through content or formal activities lately. I am actually looking for a new challenge though, rather than returning to an old stomping ground, and probably with a smaller project. I dabbled in an Apache Hadoop ecosystem project for a bit and I still follow that mailing list but I never really got into that community. Melding open source and security is ideal, though I have really enjoyed the past year where I jumped into automation with Ansible and containers with OpenShift. The search continues.

Then of course there is the true time off - something that never really happens when you own your own business - where I can get things done around the house. The builtin bookcase that is already planned, the office cleaned out with all the old equipment donated, the yard spruced up, some light reading, etc.  Also all on the list.

-SML

AAC support will be available in Fedora Workstation 27!

Posted by Christian F.K. Schaller on October 12, 2017 04:34 PM

So I am really happy to announce another major codec addition to Fedora Workstation 27, namely the codec called AAC. As you might have seen from Tom Callaway’s announcement, this has just been cleared for inclusion in Fedora.

For those not well versed in the arcane lore of audio codecs AAC is the codec used for things like iTunes and is found in a lot of general media files online. AAC stands for Advanced Audio Coding and was created by the MPEG working group as the successor to mp3. Especially due to Apple embracing the format there is a lot of files out there using it and thus we wanted to support it in Fedora too.

What we will be shipping in Fedora is a modified version of the AAC implementation released by Google, which was originally written by Fraunhofer. On top of that we will of course be providing GStreamer plugins to enable full support for playing and creating AAC files for GStreamer applications.

Be aware though that AAC is a bit of an umbrella term for a lot of different technologies, and thus you might come across files that claim to use AAC but which we can not play back. The most likely reason for that would be that they require an AAC profile we do not support. The version of AAC that we will be shipping has also been carefully created to fit within the requirements for software in Fedora, so if you are a packager be aware that unlike with, for instance, mp3, this change does not mean you can package and ship any AAC implementation you want in Fedora.

I am expecting to have more major codec announcements soon, so stay tuned :)

Getting Started with Flatpak

Posted by Fedora Magazine on October 12, 2017 05:42 AM

Fedora is a distribution that does not shy away from emerging technology (after all, one of its founding principles is First). So it comes as no surprise that Fedora is on the leading edge of a revolutionary new software package system. This is not the first time Flatpak has been mentioned on the Magazine, but since Flatpak is so new, a lot has changed since last time.

What is Flatpak?

Flatpak can roughly be described as a modern replacement for RPMs, but its impact is far more significant than simply offering a new packaging format. Before we go into what Flatpak offers, first consider the current software delivery flow present in Fedora today.

  1. Upstream authors work away at a new version, ultimately producing an archive containing source code (frobnicate-0.4.2.tar.gz)
  2. Distribution packagers receive a notice that their upstream team has released a new version
  3. Distribution packagers download the source archive, and build the new version for all supported distributions, producing a binary RPM (frobnicate-0.4.2-1.fc26.x86_64.rpm) for each distribution
  4. Distribution packagers submit the binary RPMs to the appropriate update system, pushing the RPMs through the workflow so that…
  5. Finally a user can download the new version of frobnicate (dnf upgrade frobnicate)

This process from start to finish can take anywhere from a few days to months or more. Flatpak provides tooling to both the upstream developer and the end user that dramatically shortens the time between an upstream release and a binary arriving in a user’s dnf update. Now let’s revisit frobnicate, but with Flatpak in place both with the upstream developer and the end user.

  1. Upstream authors work away at a new version, ultimately producing an archive containing source code (frobnicate-0.4.2.tar.gz)
  2. Upstream authors build a Flatpak repository using flatpak-builder
  3. Upstream authors push the new repository to a URL already known to their users and/or advertised on the project website
  4. End user receives the new version of frobnicate during flatpak update

Flatpak directly connects the upstream author  with the end user; there are no distribution intermediaries involved. Flatpak uses OSTree to build a file-system containing all the dependent libraries and files necessary to run the desired program. This means a single Flatpak repository can run on all Linux distributions capable of running the flatpak program. In addition, because OSTree repositories can be branched, different versions of the same program can be installed at the same time (Imagine having both a stable, released version installed as well as a nightly developer version!). Finally, Flatpak runs each program inside a sandbox environment, requesting permission from the user before accessing hardware devices or files.
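As a small illustration of the branching idea, here is a hedged sketch using GNOME Dictionary (assuming the repositories described in the next section are already configured and that the app is published in both of them). A stable and a nightly build can be installed side by side and run by branch:

flatpak install flathub org.gnome.Dictionary
flatpak install gnome-apps-nightly org.gnome.Dictionary
flatpak run --branch=stable org.gnome.Dictionary
flatpak run --branch=master org.gnome.Dictionary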

Set up repositories

Anyone can host a Flatpak repository, but it requires a server and some tooling to maintain. As a result, a few software teams have coalesced around a couple different major repositories.

GNOME

The GNOME development team hosts a repository containing nightly builds of all the core GNOME apps, as well as many additional applications. To add the gnome-nightly repository, open Terminal and run:

flatpak remote-add --if-not-exists gnome-nightly https://sdk.gnome.org/gnome-nightly.flatpakrepo
flatpak remote-add --if-not-exists gnome-apps-nightly https://sdk.gnome.org/gnome-apps-nightly.flatpakrepo

Flathub

A team of Flatpak developers have started a project known as Flathub. Flathub aims to provide a centralized repository for making Flatpak applications available to users. Flathub covers more than the GNOME application suite, and is regularly adding new applications. To add the Flathub repository, open Terminal and run:

flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo

Applications

GNOME Software already supports Flatpak repositories, so applications can be installed either with GNOME Software or with the flatpak command. Launch Software from the Overview, click the Search button and search for the desired application. If it is available in the traditional Fedora repositories as an RPM, there are two results.

  1. The entry labeled Source: fedoraproject.org is the RPM.
  2. The entry labeled Source: sdk.gnome.org is the Flatpak.

Select the Flatpak entry and click Install.

Once installed, an application such as Polari can be launched from the Overview like any other application; the GNOME shell already supports Flatpak applications.

The flatpak command also lists and installs apps and runtimes. To list all apps available in a specific repository, run the remote-ls command:

flatpak remote-ls flathub --app

Install an app with the install command:

flatpak install flathub com.valvesoftware.Steam

Once installed, the run command will run the application:

flatpak run com.valvesoftware.Steam

Build your own

Can’t find your favorite applications on Flathub or elsewhere as a Flatpak? Building your own is actually fairly straightforward. If you are comfortable compiling software “by hand”, creating a Flatpak repository will seem quite familiar. Flatpak repositories can be built a couple of different ways, but the simplest method is to create a JSON formatted file called a “manifest”. For example, take the GNOME Dictionary:

{
  "app-id": "org.gnome.Dictionary",
  "runtime": "org.gnome.Platform",
  "runtime-version": "3.22",
  "sdk": "org.gnome.Sdk",
  "command": "gnome-dictionary",
  "finish-args": [
     "--socket=x11",
     "--share=network"
  ],
  "modules": [
    {
      "name": "gnome-dictionary",
      "sources": [
        {
          "type": "archive",
          "url": "https://download.gnome.org/sources/gnome-dictionary/3.20/gnome-dictionary-3.20.0.tar.xz",
          "sha256": "efb36377d46eff9291d3b8fec37baab2355f9dc8bc7edb791b6a625574716121"
        }
      ]
    }
  ]
}

Save this to a file, and run flatpak-builder to create a repository.

$ flatpak-builder gnome-dictionary-app-dir org.gnome.Dictionary.json
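As a follow-up, flatpak-builder can also install the build result straight into your user installation for quick testing (a sketch of one common invocation; adjust the flags to your needs):

$ flatpak-builder --user --install --force-clean gnome-dictionary-app-dir org.gnome.Dictionary.json
$ flatpak run org.gnome.Dictionary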

Resources

There are a growing number of useful resources for building and using Flatpaks.

“The Designership”

Posted by Suzanne Hillman (Outreachy) on October 12, 2017 12:06 AM

“The Designership”

Will check that out!

I’ve also been fond of UX Mastery, and the Junior UX Community slack.

I’m in the middle of a career change, and turned 40 a week ago.

Posted by Suzanne Hillman (Outreachy) on October 11, 2017 11:55 PM

I’m in the middle of a career change, and turned 40 a week ago. I’ve been working on UX projects in my own time, and trying to get paid work in UX has been quite difficult. I think part of the problem is that there are a huge number of new UXers in the area I’m in, which makes it harder to stand out as worth someone’s time.

Finding a mentor is _hard_!

Posted by Suzanne Hillman (Outreachy) on October 11, 2017 11:50 PM

Finding a mentor is _hard_!

Then again, I’m not entirely clear on what a mentor is supposed to offer a mentee. I’m currently working on a simple project with someone currently in school for technical communications, which feels like mentoring even though I’ve not yet managed to get a paid UX position (I’m a career changer).

Just a strange position to be in!

Bodhi 2.12.2 released

Posted by Bodhi on October 11, 2017 11:47 PM

Bugs

  • Positive karma on stable updates no longer sends them back to batched (#1881).
  • Push to batched buttons now appear on pushed updates when appropriate (#1875).

Release contributors

The following developers contributed to Bodhi 2.12.2:

  • Randy Barlow

How to Send Messages in Private

Posted by Fernando Espinoza on October 11, 2017 05:25 PM

There are many times when you want to have a chat in private, such as when you have something confidential to say or you simply don't want the world listening in. Hey, that's what social media's for! Imagine you have a personal butler, eager to run errands on your behalf. You want to send a... Continue reading →


Bodhi 2.12.1 released

Posted by Bodhi on October 11, 2017 01:38 PM

Bugs

  • Use separate directories to clone the comps repositories (#1885).

Release contributors

The following developers contributed to Bodhi 2.12.1:

  • Patrick Uiterwijk
  • Randy Barlow

Unattended Upgrades

Posted by Paul Mellors [MooDoo] on October 11, 2017 08:01 AM

** Update 11/10/17 [Thanks James] **

It looks like there is an issue if you had dnf-automatic in Fedora 25 or earlier as Fedora 26 jumps from DNF 1.x to 2.x, so please read THIS post before you install it in 25 and upgrade.

Blatantly stolen from here for my records – Thanks Iain R. Learmonth

Debian

Make sure that unattended-upgrades is installed and then enable the installation of updates (as root):

apt install unattended-upgrades
dpkg-reconfigure -plow unattended-upgrades

Fedora 22 or later

Beginning with Fedora 22, you can enable automatic updates via:

dnf install dnf-automatic

In /etc/dnf/automatic.conf set:

apply_updates = yes

Now enable and start automatic updates via:

systemctl enable dnf-automatic.timer
systemctl start dnf-automatic.timer

(Thanks to Enrico Zini I know all about these timer units in systemd now.)
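If you want to confirm that the timer is active and see when it will next fire, these standard systemctl commands will show it (just a verification step, not part of the original instructions):

systemctl status dnf-automatic.timer
systemctl list-timers 'dnf-*'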

RHEL or CentOS

For CentOS, RHEL, and older versions of Fedora, the yum-cron package is the preferred approach:

yum install yum-cron

In /etc/yum/yum-cron.conf set:

apply_updates = yes

Enable and start automatic updates via:

systemctl enable yum-cron.service
systemctl start yum-cron.service

 


Take part in the test day dedicated to the upgrade to F27

Posted by Charles-Antoine Couret on October 11, 2017 06:08 AM

Today, Wednesday October 11, is a day dedicated to a specific test: upgrading Fedora to the upcoming Fedora 27. During the development cycle, the quality assurance team dedicates a few days to certain components or new features in order to gather as many reported problems on the subject as possible.

It also provides a list of specific tests to perform. You just have to follow them, compare your result with the expected one and report it.

What does this test consist of?

We are close to the release of the final Fedora 27 (planned for early November). For this launch to be a success, it is necessary to make sure that the upgrade mechanism works correctly. That is, that your Fedora 25 or 26 becomes Fedora 27 without reinstallation, while keeping your documents, settings and programs. A very big update, in short.

Today's tests cover:

  • Upgrading from Fedora 25 or 26, with an encrypted system or not;
  • The same as above but with KDE as the environment;
  • Likewise with the Server or Minimal edition instead of Workstation;
  • Using GNOME Software rather than dnf.

Indeed, for some time now Fedora has offered the possibility of upgrading graphically with GNOME Software or on the command line with dnf. In both cases the download happens while you use your computer normally; once it is ready, the installation takes place during the reboot.

For those who want to enjoy F27 before its official release, take the opportunity to run this test, so that the experience benefits everyone. :-)

Personally, I already tested a bit ahead of time on my work machine and reported my first release-blocking bug for Fedora. Indeed, dnf crashed during the reboot that performs the installation of the new packages. But since it happened at the beginning of the procedure, it had no negative effect. Apparently a very common bug at the moment, but a fix is already in testing.

How can you participate?

You can go to the test day page to list the available tests and report your results. The wiki page summarizes how the day is organized.

If you need help while running the tests, do not hesitate to drop by IRC to get a hand on the #fedora-test-day and #fedora-fr channels (in English and French respectively) on the Freenode server.

In case of a bug, you need to report it on Bugzilla. If you do not know how, feel free to consult the corresponding documentation.

Moreover, even though a single day is dedicated to these tests, it is still possible to run them a few days later without any problem! The results will still be largely relevant.

Announce: Entangle “Lithium“ release 1.0 – an app for tethered camera control & capture

Posted by Daniel Berrange on October 10, 2017 09:32 PM

I am pleased to announce a new release 1.0 of Entangle is available for download from the usual location:

  https://entangle-photo.org/download/

This release brings some significant changes to the application build system and user interface

  • Requires Meson + Ninja build system instead of make
  • Switch to 2-digit version numbering
  • Fix corruption of display when drawing session browser
  • Register application actions for main operations
  • Compile UI files into binary
  • Add a custom application menu
  • Switch over to using header bar, instead of menu bar and tool bar.
  • Enable close button for about dialog
  • Ensure plugin panel fills preferences dialog
  • Tweak UI spacing in supported cameras dialog
  • Add keyboard shortcuts overlay

What I have found interesting in Fedora during the week 40 of 2017

Posted by Fedora Community Blog on October 10, 2017 08:42 PM

Interesting events or issues I was involved in or I noticed in the Fedora project and community which happened during the last week:

Fedora 27 Beta Release

On Tuesday October 3rd, we released the F27 Beta. For more information please check the Announcement. To download the Beta please go to the GetFedora page.

Fedora 27 Beta Freeze is over

As a subsequent step after the F27 Beta release, the freeze applied to updates has been lifted. The Final freeze is planned for October 17th. For more information please check the F27 Schedule.

A new “batched” state in Bodhi

Randy Barlow has announced a new state implemented in Bodhi called “batched”. The purpose of this state is to hold packages waiting for the weekly batched update push. This will help with gating of updates if needed, so we have more choice in planning updates.

“What can I do for Fedora” is back

Thanks to Ralph and Patrick, the “What can I do for Fedora” application is back. Anyone interested in helping with Fedora can now find some work more easily.

And of course, the list above is not exhaustive and there is much more going on in the Fedora community. It just summarizes some tasks which have drawn my attention.

The post What I have found interesting in Fedora during the week 40 of 2017 appeared first on Fedora Community Blog.

Bodhi 2.12.0 released

Posted by Bodhi on October 10, 2017 07:42 PM

Features

  • Bodhi now asks Pagure to expand group membership when Pagure is used for ACLs (#1810).
  • Bodhi now displays Atomic CI pipeline results (#1847).

Bugs

  • Use generic superclass models where possible (#1793).

Release contributors

The following developers contributed to Bodhi 2.12.0:

  • Pierre-Yves Chibon
  • Randy Barlow

Qt2 ported for modern systems with cmake

Posted by Helio Chissini de Castro on October 10, 2017 04:27 PM

So, to continue my archeology process to revive old software, again i´m preparing my next step to revive KDE 2, on the so indirect baptized KDE restoration project.

Despite KDE 1 last year, KDE 2 is a complete different beast and will take me some time to made it ready.

The very base foundation, though is Qt2, the this time i decided do a better treatment to Qt to easier my further work. I based my work on clang compiler.

Result is far from perfect, i decided publish on the very first stage of usage, but some strategies on the port still not here yet. but is perfectly usable, all examples compiles and runs.

Qt designer has some funny bugs though, and i decided not investigate it yet. New ported png code is not 100% reliable ( png pure documentation is horrible )

So, the F.A.Q. for the curious

  • Why????
    • Because I was motivated and I really believe we need to restore our memory, code-wise.
  • Don’t you have better things to do?
    • Yes, so what?
  • Can I compile it on Windows?
    • Well, yes, but not yet. I focused only on *nix platforms for now, mostly Linux.
  • Can I use it with Wayland?
    • Nope, and I doubt that will change in the future.
  • Can I compile applications with Qt2?
    • Yes, perfectly plausible.
  • Do you accept patches?
    • It depends. If it is to fix or improve the build system or to fix a bug in the code, yes. Otherwise I want to keep the code as close to the original as possible. Remember, the intention is archaeological. And I will be happy if anyone tackles the crazy designer (or the themes example) before me 🙂
  • Are you joking with us?

The mandatory screenshot !!

Stable GNOME Photos Flatpaks moved to Flathub

Posted by Debarshi Ray on October 10, 2017 02:20 PM

Starting from version 3.26, the stable GNOME Photos Flatpaks have been moved to Flathub. They are no longer available from GNOME’s Flatpak repository.

To migrate, first delete the old build:

$ flatpak uninstall org.gnome.Photos/x86_64/stable

Then install it from Flathub:

$ flatpak remote-add --from flathub https://flathub.org/repo/flathub.flatpakrepo
$ flatpak install flathub org.gnome.Photos

Note that this is only about the stable build. The nightly continues to be available from its existing location in GNOME’s repository. You can keep updating it with:

$ flatpak update --user org.gnome.Photos/x86_64/master


Protected: Fleet Commander: production ready!

Posted by Alberto Ruiz on October 10, 2017 02:03 PM

This post is password protected. You must visit the website and enter the password to continue reading.


DevConf.CZ 2018 CfP is OPEN

Posted by Radek Vokál on October 10, 2017 09:13 AM

Hey everyone, the annual developer conference in Brno is looking for great speakers again! For those who don't know DevConf - it is an annual, free, Red Hat-sponsored community conference for developers, admins, DevOps engineers, testers, documentation writers, project leaders and other contributors to Open Source Linux, middleware, virtualization, storage, cloud and mobile technologies. It is a conference where FLOSS communities sync, share, and hack on upstream projects together, and it is all hosted at the beautiful venue of Brno University of Technology in Brno, Czech Republic. Last year the conference had around 1500 attendees and more than 200 talks and sessions, and most of these great talks can be found on our YouTube channel.

At this point we're looking for content contributors. Whether you want to talk about your cool new project, dive deep into the technology aspects of your favorite thing, or run a community meetup and show cool stuff to everyone else - we want you. The Call for Participation is now open, just head over to devconf.cz/cfp and hit Submit! :-)

Important dates
- CfP closes: *November 17th, 2017*
- Accepted speakers confirmation: *December 1, 2017*
- Event dates: Friday January 26 to Sunday January 28, 2018

Using Octave on Fedora 26

Posted by Fedora Magazine on October 10, 2017 08:00 AM

Octave is a free alternative to Matlab. It processes numerical computation and offers built-in plotting and visualization tools to evaluate the behavior of formulas and powerful equations. Octave is a multi-platform tool that also contains many scripts compatible with Matlab. These features make it useful to students, teachers and researchers, who can demonstrate and make interpretations of their calculations graphically. This article shows basic usage of Octave, using an example graph of two trigonometric functions in one figure.

Installing and starting Octave

On Fedora Workstation, open the Software tool and type octave to find the application. Or you can use a terminal. Type octave and if it’s not installed, a message appears to ask you if it’s OK to install.

To check what version of Octave you have installed, read the command window. You can also open a terminal any time, and run this command:

octave --version

Due to an unaddressed bug in Octave, modern Linux systems using Wayland, like Fedora, may not show you graphs. To deal with this, first set Octave to use a different toolkit for graphing. Open a terminal and type this command:

echo 'graphics_toolkit ("gnuplot")' >> ~/.octaverc

This writes a one-line startup file telling Octave to start with a safe toolkit each time it runs.

If you’re using an Xorg session, you may not have to run the above command. The figure windows on your screen may vary depending on whether you’re running Wayland or Xorg.
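If you want to check which toolkits your Octave build provides, or switch toolkits only for the current session instead of via the startup file, you can run the following in the Octave command window (the available toolkits depend on your installation):

available_graphics_toolkits ()
graphics_toolkit ("gnuplot")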

Graphing a simple trigonometric function

To graph the basic sine function, type the following in the command window:

x = -10:0.1:10;
plot (x, sin (x));

Press Enter and a figure of the sine function appears with the range and parameters previously defined (-10,10). Try fine tuning the graphic by changing the 0.1 parameter.
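For example, a smaller step produces a smoother curve (at the cost of computing more points):

x = -10:0.01:10;
plot (x, sin (x));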

Adding axis and text labels

Axes in the graph help explain its content. So, next set the labels of both axes by defining the range of X and the value of Y:

xlabel ("-10 ≤ x ≤ 10");
ylabel ("Y = sin (x)");

Afterward, click on the Refresh button to update the graphic of the sine formula:

In addition, you might want to draw the viewer’s attention to specific points on the graph. The commands below label points at specific x and y coordinates:

text(1,sin(1),'This is my Point')
text(6,sin(6),'This is another Point')

Finally, add a title on this graphic with the title function:

title('Sine Wave')

Adding a second function

Start by creating a new figure.

figure

Now use cos to plot a graph of the trigonometric cosine function:

z = cos(x);
plot(x,z);

You can join the function with the previous one, in order to present them together. To illustrate this, use the hold on command and then plot the sine function as you did previously:

hold on
plot(x,sin(x));

Additional formatting

Colors allow readers to more easily follow the visual path of a line. Changing the width and adding a legend can help present the work for publication. For example, run the following commands to better format the graph:

plot (x,sin(x),'b', 'LineWidth',4); 
plot (x,z,'r', 'LineWidth',4); 
legend('cosine','sine') 
title('Sine and Cosine Wave on Fedora 26')

Final thoughts

Octave also lets you define math elements as matrices, equations, or vectors, and you can use other values and equations to plot them. You can plot graphics in a 2D or 3D dimensional space. These and other powerful features make Octave a free alternative that can successfully support the work of students, educators, and other professionals.

Do not do the math with expr

Posted by Ding-Yi Chen on October 10, 2017 06:04 AM

expr EXPRESSION returns exit status 1 when the EXPRESSION evaluates to empty or 0. Thus, you cannot really rely on its exit status when doing arithmetic.

For example, you may find that your script running  expr 1 - 1  reports an error.

To be fair, the expr man page actually mentions this. However, instead of a dedicated EXIT STATUS section, it is hidden in the second-to-last paragraph of DESCRIPTION.

So do yourself a favour and use Bash arithmetic like $((var+1)).
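A quick demonstration in a shell shows the difference (the exit statuses below are what GNU expr and Bash produce for a zero result):

$ expr 1 - 1
0
$ echo $?
1
$ result=$(( 1 - 1 )); echo $? $result
0 0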


Definitive solution to libvirt guest naming

Posted by Lukas "lzap" Zapletal on October 10, 2017 12:00 AM

Definitive solution to libvirt guest naming

The answer is libvirt NSS. This is Fedora 26:

yum install libvirt-nss

And enable the NSS module with two “libvirt” keywords:

# egrep ^host /etc/nsswitch.conf
hosts: files libvirt libvirt_guest dns myhostname

DNS resolution just works for all my libvirt guests now. NSS will figure it out according to the dnsmasq DHCP records (hostname entry). If that is not advertised by a guest, it will use the VM name. For FQDNs you need to rename your VMs to include the domain too, e.g. vm1.home.lan.
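For example, to rename an existing guest so its name is fully qualified and then verify that the name resolves through the NSS module (vm1 and home.lan are placeholder names here):

virsh domrename vm1 vm1.home.lan   # the guest must be shut off for the rename
getent hosts vm1.home.lan          # should now resolve via the libvirt NSS module
ping -c1 vm1.home.lan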

For more, visit the documentation.

Hurray! No more fiddling with /etc/hosts, no more dnsmasq split setups or hacks via virsh. This is an elegant solution.

Via Kamil Páral’s blog.

New Adventures: Ansible Edition

Posted by Adam Miller on October 09, 2017 08:24 PM
Ansible logo

I am honored, excited, nervous, humbled, and over all elated to announce that starting December 1, 2017 I will be a member of the Ansible Core Development Team at Red Hat where I will work primarily on Upstream Ansible (what most people would likely know as just 'Ansible', 'Ansible Core', or 'The Ansible Project' depending on who you talk to).

This was without a doubt the most difficult decision I've ever made in my professional career to date. Working on the Fedora Engineering Team had been a dream of mine since I was in college and I finally achieved it a few years ago. I love the team, I love the work we do, and I genuinely believe we're making a positive impact on the greater open source community. I take a lot of pride in that. I never foresaw a future in which I'd find something that drew my interests in another direction, but then Ansible came into existence. I've been a big fan of Ansible since the project's beginnings and have upstream contributions dating back to its early versions. It's become a newfound driving passion of mine, and when I was approached with the opportunity to work on it full time, it was something I was extremely excited about and I simply couldn't pass it up.

I want to be clear: I will not be abandoning the Fedora Project by any means. I've been a community contributor to Fedora since 2008 and have done so while employed by my three previous employers, as well as on two teams since joining Red Hat. I have no plans of changing that now. I'm simply going to have to scale back my daily responsibilities due to new priorities. I have every intention of continuing to serve as an elected member of the Fedora Engineering Steering Committee and will seek re-election when the time comes. I will also remain active in the Fedora Atomic Working Group as much as possible.

While this was extremely bittersweet because of my history with Fedora and my earnest enjoyment of working on the Fedora Engineering Team, I'm extremely excited and am really looking forward to joining the ranks of the Ansible Team.

A big thank you to the entire Fedora Engineering Team, they are an absolutely stellar group of people and I will always look back on our time together fondly.

Until next time...

How To Protect Your Privacy On Linux

Posted by Fernando Espinoza on October 09, 2017 07:40 PM

Don’t be complacent because you’re running Linux! It’s easy to have a false sense of security, thinking that other operating systems might be more targeted than Linux, but there are plenty of risks and vulnerabilities for all types of Linux devices. Keep your guard up regardless of your OS. 2. Ensure you use a password... Continue reading →


A few fall conferences

Posted by Laura Abbott on October 09, 2017 06:00 PM

At some point in time, I decided it was a good idea to attend three conferences in September. This was a busy but fairly productive month.

Early September was Open Source Summit and Linux Plumbers Conference. Open Source Summit is the renamed LinuxCon. Matthew Garrett gave a talk about signing binaries with the Integrity Measurement Architecture (IMA) subsystem. This is a subsystem that ties hashes to the TPM module available on many modern systems. Two of the kernel Outreachy interns gave a talk on their projects involving radix trees. I always enjoy listening to Outreachy interns talk about their projects, and it sounded like they made good progress on improving the radix tree. Dawn Foster gave a talk on collaboration in kernel mailing lists, focusing on how kernel developers collaborate via e-mail. There were graphs of the e-mail collaboration network, which mostly showed that there are a handful of people who tend to e-mail each other frequently. This work was part of a PhD program, so I look forward to seeing future work. Sarah Sharp gave a talk on codes of conduct and enforcement. My favorite takeaway from this talk was thinking of code of conduct violations in terms of a threat model and what they mean for the well-being of your participants. I was also on the kernel panel with several other developers. This is usually a chance for people to come and ask questions about whatever kernel stuff. I didn't put my foot in my mouth so I consider it a success.

I was on the planning committee for Linux Plumbers so I ended up doing a bunch of behind-the-scenes work in addition to the hallway track and going to the occasional session. Jon Corbet gave a talk about the kernel's limits to growth. He gave a version of this talk at Linaro Connect and I gave some thoughts about this previously. Most of my opinions there still stand. Grant Likely held a BoF about the upcoming ARM EBBR. If you've been following the arm64 server space, you may have heard of SBBR, which is a boot specification for arm64 servers. The EBBR is something similar for embedded devices. As a Fedora maintainer, I'm happy to see this moving forward to make booting arm64 SBCs easier. There was a discussion about contiguous memory allocation for DMA. Some hardware vendors have discovered they get better performance if they use a contiguous block of memory despite possibly having an IOMMU. The proposal was to use MAP_CONTIG with mmap to get appropriate memory. There wasn't a conclusion and discussion is ongoing on the mailing list.

The next week was XDC. This is nominally a graphics conference and was out of my typical scope. My primary purpose was to present on Ion. This was a one-track conference so it was nice to not have to make hard choices. Since I'm not a graphics developer, many of the details of some talks went over my head. The discussion about the Intel Graphics CI was useful to hear. Intel has put a lot of effort, in terms of machines and tests, into graphics testing. A big takeaway for me was the need to add tests slowly to make sure that bugs can actually be fixed when they are found. This sounds obvious, but without doing this, CI becomes noisy and worthless.

The final week was Linaro Connect. My primary reason for attending was (again) to talk about Ion but as always I attended some talks in between doing my regular work. I went to a session about Secure Boot on ARM with an update of "not much has changed". There was a session about the Linaro common kernels and how those are maintained. The Linaro kernels once contained a large number of backports but there's been a lot of progress made towards getting code upstream. These days, there's more focus on testing and validating stable kernels upstream. Linaro should be providing automated testing of stable updates in the near future. There was a cross distro BoF which mostly touched on some discussion with toolchains because everything is pretty okay in distro land! Illyan Malchev from Google gave a keynote about Project Treble and announced that LTS kernels are now supported for 6 years instead of 2. I talked recently about some of these Android announcements and the move to a 6 year cycle is a very good thing. This will make it easier to give updates without having to rebase to a brand new kernel.

During these several weeks, I talked a lot about Ion. There's been some good progress this past year towards moving Ion out of staging. I removed a bunch of code and greatly simplified the ABI. The biggest issue keeping Ion in staging is making sure the ABI is stable. There was agreement that we could look to move to a split ion (/dev/ion0, /dev/ion1 etc.) to better restrict heap access. I got some feedback that the existing 32-bit flags field may not be enough for expanding use cases so I'm going to look to utilize some of the existing fields we have for padding. There was a session on Ion integration challenges which was very useful to me. One of the hardest parts of doing Ion work has been finding a direction. Hearing what problems people are actually having makes this easier. Some of the problems are because I took the obvious (read: slow) approach to things like cache maintenance. I tried to encourage people to submit patches for the features that are missing, so hopefully I'll see those coming in the near future.

This past weekend was SeaGL, a local Seattle conference. Because I'm local, it was easy for me to attend. Unfortunately three weeks of conferencing caught up with me so I didn't attend as many parts of it as I would have liked. I attended the keynotes Friday and Saturday. Both Nithya Ruff and Rikki Endsley gave fantastic talks which were recorded. I gave a talk on Saturday afternoon called "Creating Fresh Kernels". This was a variation of the talk I gave at DevConf last year, focusing more on the choices distributions might make with their kernels instead of just why Fedora is awesome. I was fairly happy with how it went; the room was full and everyone asked good questions. I'd like to present again next year if I can come up with a good topic.

I have a little bit of downtime before Open Source Summit Europe/Kernel Summit in a few weeks. We'll see if I can follow up on some of the discussions in the meantime.

fwupd hits 1.0.0

Posted by Richard Hughes on October 09, 2017 12:57 PM

Today I released fwupd version 1.0.0, a version number most Open Source projects seldom reach. Unusually it bumps the soname so any applications that link against libfwupd will need to be rebuilt. The reason for bumping is that we removed a lot of the cruft we’ve picked up over the couple of years since we started the project, and also took the opportunity to rename some public interfaces that are now used differently to how they were envisaged. Since we started the project, we’ve basically re-architected the way the daemon works, re-imagined how the metadata is downloaded and managed, and changed core ways we’ve done the upgrades themselves. It’s no surprise that removing all that crufty code makes the core easier to understand and maintain. I’m intending to support the 0_9_X branch for a long time, as that’s what’s going to stay in Fedora 26 and the upcoming Fedora 27.

Since we’ve started we now support 72 different kinds of hardware, with support for another dozen-or-so currently being worked on. Lots of vendors are now either using the LVFS to distribute firmware, or are testing with one or two devices in secret. Although we have 10 (!) different ways of applying firmware already, vendors are slowly either switching to a more standard mechanism for new products (UpdateCapsule/DFU/Redfish) or building custom plugins for fwupd to update existing hardware.

Every month 165,000+ devices get updated with fwupd using firmware from the LVFS; possibly more, as people behind corporate mirrors and caching servers don’t show up in the stats. Since we started this project, at least 600,000 pieces of hardware have received new firmware. Many people have updated firmware, fixing bugs and solving security issues without having to understand all the horrible details involved.

I guess I should say thanks: to all the people uploading firmware, and to the people using, testing, and reporting bugs. Dell have been a huge supporter since the very early days, and now smaller companies and giants like Logitech are also supporting the project. Red Hat have given me the time and resources that I need to build something as complicated and political as shared infrastructure like this. There is literally no other company on the planet that I would rather work for.

So, go build fwupd 1.0.0 in your distro development branch and report any problems. 1.0.1 will follow soon with fixes I’m sure, and hopefully we can make some more vendor announcements in the near future. There are a few big vendors working on things in secret that I’m sure you’ll all know :)

Running Sapphire Radeon RX 560 on Fedora 27 beta (follow up)

Posted by Luya Tshimbalanga on October 09, 2017 05:45 AM
Following the previous blog post and some investigation, it turned out that the kernel package from the Mystro256 COPR repository, based on the agd5f kernel branch (maintained by one of the AMD developers), resolves the blank screen issue. The issue could affect users with new AMD graphics hardware, so perhaps a warning should be included in the release notes. It would also help to have one of the contributors work with the kernel team to carry these improvements until the patches land in the mainline kernel, for a better user experience.

With that issue out of the way, the desktop experience with the Radeon RX 560 was tremendously improved compared to the retired GTX 460 v2. GNOME on Wayland on Fedora runs smoothly, showing how far the open source amdgpu driver has come compared to previous years. It was also an opportunity to run a Vulkan-based smoke test demo on RADV, a counterpart of glxgears.

Overall the card is excellent once the missing software is installed.

Episode 65 - Will aliens overthrow us before AI?

Posted by Open Source Security Podcast on October 09, 2017 12:47 AM
Josh and Kurt talk about Apple, Equifax, passwords, AI, and aliens.



Show Notes



IP Accounting and Access Lists with systemd

Posted by Lennart Poettering on October 08, 2017 10:00 PM

TL;DR: systemd now can do per-service IP traffic accounting, as well as access control for IP address ranges.

Last Friday we released systemd 235. I already blogged about its Dynamic User feature in detail, but there's one more piece of new functionality that I think deserves special attention: IP accounting and access control.

Before v235 systemd already provided per-unit resource management hooks for a number of different kinds of resources: consumed CPU time, disk I/O, memory usage and number of tasks. With v235 another kind of resource can be controlled per-unit with systemd: network traffic (specifically IP).

Three new unit file settings have been added in this context:

  1. IPAccounting= is a boolean setting. If enabled for a unit, all IP traffic sent and received by processes associated with it is counted both in terms of bytes and of packets.

  2. IPAddressDeny= takes an IP address prefix (that means: an IP address with a network mask). All traffic from and to this address will be prohibited for processes of the service.

  3. IPAddressAllow= is the matching positive counterpart to IPAddressDeny=. All traffic matching this IP address/network mask combination will be allowed, even if otherwise listed in IPAddressDeny=.

The three options are thin wrappers around kernel functionality introduced with Linux 4.11: the control group eBPF hooks. The actual work is done by the kernel, systemd just provides a number of new settings to configure this facet of it. Note that cgroup/eBPF is unrelated to classic Linux firewalling, i.e. NetFilter/iptables. It's up to you whether you use one or the other, or both in combination (or of course neither).

IP Accounting

Let's have a closer look at the IP accounting logic mentioned above. Let's write a simple unit /etc/systemd/system/ip-accounting-test.service:

[Service]
ExecStart=/usr/bin/ping 8.8.8.8
IPAccounting=yes

This simple unit invokes the ping(8) command to send a series of ICMP/IP ping packets to the IP address 8.8.8.8 (which is the Google DNS server IP; we use it for testing here, since it's easy to remember, reachable everywhere and known to react to ICMP pings; any other IP address responding to pings would be fine to use, too). The IPAccounting= option is used to turn on IP accounting for the unit.

Let's start this service after writing the file. Let's then have a look at the status output of systemctl:

# systemctl daemon-reload
# systemctl start ip-accounting-test
# systemctl status ip-accounting-test
● ip-accounting-test.service
   Loaded: loaded (/etc/systemd/system/ip-accounting-test.service; static; vendor preset: disabled)
   Active: active (running) since Mon 2017-10-09 18:05:47 CEST; 1s ago
 Main PID: 32152 (ping)
       IP: 168B in, 168B out
    Tasks: 1 (limit: 4915)
   CGroup: /system.slice/ip-accounting-test.service
           └─32152 /usr/bin/ping 8.8.8.8

Okt 09 18:05:47 sigma systemd[1]: Started ip-accounting-test.service.
Okt 09 18:05:47 sigma ping[32152]: PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
Okt 09 18:05:47 sigma ping[32152]: 64 bytes from 8.8.8.8: icmp_seq=1 ttl=59 time=29.2 ms
Okt 09 18:05:48 sigma ping[32152]: 64 bytes from 8.8.8.8: icmp_seq=2 ttl=59 time=28.0 ms

This shows the ping command running — it's currently at its second ping cycle as we can see in the logs at the end of the output. More interesting however is the IP: line further up showing the current IP byte counters. It currently shows 168 bytes have been received, and 168 bytes have been sent. That the two counters are at the same value is not surprising: ICMP ping requests and responses are supposed to have the same size. Note that this line is shown only if IPAccounting= is turned on for the service, as only then this data is collected.

Let's wait a bit, and invoke systemctl status again:

# systemctl status ip-accounting-test
● ip-accounting-test.service
   Loaded: loaded (/etc/systemd/system/ip-accounting-test.service; static; vendor preset: disabled)
   Active: active (running) since Mon 2017-10-09 18:05:47 CEST; 4min 28s ago
 Main PID: 32152 (ping)
       IP: 22.2K in, 22.2K out
    Tasks: 1 (limit: 4915)
   CGroup: /system.slice/ip-accounting-test.service
           └─32152 /usr/bin/ping 8.8.8.8

Okt 09 18:10:07 sigma ping[32152]: 64 bytes from 8.8.8.8: icmp_seq=260 ttl=59 time=27.7 ms
Okt 09 18:10:08 sigma ping[32152]: 64 bytes from 8.8.8.8: icmp_seq=261 ttl=59 time=28.0 ms
Okt 09 18:10:09 sigma ping[32152]: 64 bytes from 8.8.8.8: icmp_seq=262 ttl=59 time=33.8 ms
Okt 09 18:10:10 sigma ping[32152]: 64 bytes from 8.8.8.8: icmp_seq=263 ttl=59 time=48.9 ms
Okt 09 18:10:11 sigma ping[32152]: 64 bytes from 8.8.8.8: icmp_seq=264 ttl=59 time=27.2 ms
Okt 09 18:10:12 sigma ping[32152]: 64 bytes from 8.8.8.8: icmp_seq=265 ttl=59 time=27.0 ms
Okt 09 18:10:13 sigma ping[32152]: 64 bytes from 8.8.8.8: icmp_seq=266 ttl=59 time=26.8 ms
Okt 09 18:10:14 sigma ping[32152]: 64 bytes from 8.8.8.8: icmp_seq=267 ttl=59 time=27.4 ms
Okt 09 18:10:15 sigma ping[32152]: 64 bytes from 8.8.8.8: icmp_seq=268 ttl=59 time=29.7 ms
Okt 09 18:10:16 sigma ping[32152]: 64 bytes from 8.8.8.8: icmp_seq=269 ttl=59 time=27.6 ms

As we can see, after 269 pings the counters are much higher: at 22K.

Note that while systemctl status shows only the byte counters, packet counters are kept as well. Use the low-level systemctl show command to query the current raw values of the in and out packet and byte counters:

# systemctl show ip-accounting-test -p IPIngressBytes -p IPIngressPackets -p IPEgressBytes -p IPEgressPackets
IPIngressBytes=37776
IPIngressPackets=449
IPEgressBytes=37776
IPEgressPackets=449

Of course, the same information is also available via the D-Bus APIs. If you want to process this data further consider talking proper D-Bus, rather than scraping the output of systemctl show.

Now, let's stop the service again:

# systemctl stop ip-accounting-test

When a service with such accounting turned on terminates, a log line about all its consumed resources is written to the logs. Let's check with journalctl:

# journalctl -u ip-accounting-test -n 5
-- Logs begin at Thu 2016-08-18 23:09:37 CEST, end at Mon 2017-10-09 18:17:02 CEST. --
Okt 09 18:15:50 sigma ping[32152]: 64 bytes from 8.8.8.8: icmp_seq=603 ttl=59 time=26.9 ms
Okt 09 18:15:51 sigma ping[32152]: 64 bytes from 8.8.8.8: icmp_seq=604 ttl=59 time=27.2 ms
Okt 09 18:15:52 sigma systemd[1]: Stopping ip-accounting-test.service...
Okt 09 18:15:52 sigma systemd[1]: Stopped ip-accounting-test.service.
Okt 09 18:15:52 sigma systemd[1]: ip-accounting-test.service: Received 49.5K IP traffic, sent 49.5K IP traffic

The last line shown is the interesting one, that shows the accounting data. It's actually a structured log message, and among its metadata fields it contains the more comprehensive raw data:

# journalctl -u ip-accounting-test -n 1 -o verbose
-- Logs begin at Thu 2016-08-18 23:09:37 CEST, end at Mon 2017-10-09 18:18:50 CEST. --
Mon 2017-10-09 18:15:52.649028 CEST [s=89a2cc877fdf4dafb2269a7631afedad;i=14d7;b=4c7e7adcba0c45b69d612857270716d3;m=137592e75e;t=55b1f81298605;x=c3c9b57b28c9490e]
    PRIORITY=6
    _BOOT_ID=4c7e7adcba0c45b69d612857270716d3
    _MACHINE_ID=e87bfd866aea4ae4b761aff06c9c3cb3
    _HOSTNAME=sigma
    SYSLOG_FACILITY=3
    SYSLOG_IDENTIFIER=systemd
    _UID=0
    _GID=0
    _TRANSPORT=journal
    _PID=1
    _COMM=systemd
    _EXE=/usr/lib/systemd/systemd
    _CAP_EFFECTIVE=3fffffffff
    _SYSTEMD_CGROUP=/init.scope
    _SYSTEMD_UNIT=init.scope
    _SYSTEMD_SLICE=-.slice
    CODE_FILE=../src/core/unit.c
    _CMDLINE=/usr/lib/systemd/systemd --switched-root --system --deserialize 25
    _SELINUX_CONTEXT=system_u:system_r:init_t:s0
    UNIT=ip-accounting-test.service
    CODE_LINE=2115
    CODE_FUNC=unit_log_resources
    MESSAGE_ID=ae8f7b866b0347b9af31fe1c80b127c0
    INVOCATION_ID=98a6e756fa9d421d8dfc82b6df06a9c3
    IP_METRIC_INGRESS_BYTES=50880
    IP_METRIC_INGRESS_PACKETS=605
    IP_METRIC_EGRESS_BYTES=50880
    IP_METRIC_EGRESS_PACKETS=605
    MESSAGE=ip-accounting-test.service: Received 49.6K IP traffic, sent 49.6K IP traffic
    _SOURCE_REALTIME_TIMESTAMP=1507565752649028

The interesting fields of this log message are of course IP_METRIC_INGRESS_BYTES=, IP_METRIC_INGRESS_PACKETS=, IP_METRIC_EGRESS_BYTES=, IP_METRIC_EGRESS_PACKETS= that show the consumed data.

The log message carries a message ID that may be used to quickly search for all such resource log messages (ae8f7b866b0347b9af31fe1c80b127c0). We can combine a search term for messages of this ID with journalctl's -u switch to quickly find out about the resource usage of any invocation of a specific service. Let's try:

# journalctl -u ip-accounting-test MESSAGE_ID=ae8f7b866b0347b9af31fe1c80b127c0
-- Logs begin at Thu 2016-08-18 23:09:37 CEST, end at Mon 2017-10-09 18:25:27 CEST. --
Okt 09 18:15:52 sigma systemd[1]: ip-accounting-test.service: Received 49.6K IP traffic, sent 49.6K IP traffic

Of course, the output above shows only one message at the moment, since we started the service only once, but a new one will appear every time you start and stop it again.

The IP accounting logic is also hooked up with systemd-run, which is useful for transiently running a command as systemd service with IP accounting turned on. Let's try it:

# systemd-run -p IPAccounting=yes --wait wget https://cfp.all-systems-go.io/en/ASG2017/public/schedule/2.pdf
Running as unit: run-u2761.service
Finished with result: success
Main processes terminated with: code=exited/status=0
Service runtime: 878ms
IP traffic received: 231.0K
IP traffic sent: 3.7K

This uses wget to download the PDF version of the 2nd day schedule of everybody's favorite Linux user-space conference All Systems Go! 2017 (BTW, have you already booked your ticket? We are very close to selling out, be quick!). The IP traffic this command generated was 231K ingress and 4K egress. In the systemd-run command line two parameters are important. First of all, we use -p IPAccounting=yes to turn on IP accounting for the transient service (as above). And secondly we use --wait to tell systemd-run to wait for the service to exit. If --wait is used, systemd-run will also show you various statistics about the service that just ran and terminated, including the IP statistics you are seeing if IP accounting has been turned on.

It's fun to combine this sort of IP accounting with interactive transient units. Let's try that:

# systemd-run -p IPAccounting=1 -t /bin/sh
Running as unit: run-u2779.service
Press ^] three times within 1s to disconnect TTY.
sh-4.4# dnf update
…
sh-4.4# dnf install firefox
…
sh-4.4# exit
Finished with result: success
Main processes terminated with: code=exited/status=0
Service runtime: 5.297s
IP traffic received: …B
IP traffic sent: …B

This uses systemd-run's --pty switch (or short: -t), which opens an interactive pseudo-TTY connection to the invoked service process, which is a bourne shell in this case. Doing this means we have a full, comprehensive shell with job control and everything. Since the shell is running as part of a service with IP accounting turned on, all IP traffic we generate or receive will be accounted for. And as soon as we exit the shell, we'll see what it consumed. (For the sake of brevity I actually didn't paste the whole output above, but truncated core parts. Try it out for yourself, if you want to see the output in full.)

Sometimes it might make sense to turn on IP accounting for a unit that is already running. For that, use systemctl set-property foobar.service IPAccounting=yes, which will instantly turn on accounting for it. Note that it won't count retroactively though: only the traffic sent/received after the point in time you turned it on will be collected. You may turn off accounting for the unit with the same command.

Of course, sometimes it's interesting to collect IP accounting data for all services, and turning on IPAccounting=yes in every single unit is cumbersome. To deal with that there's a global option DefaultIPAccounting= available which can be set in /etc/systemd/system.conf.
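As a sketch, enabling it globally is a single line under the [Manager] section of that file:

[Manager]
DefaultIPAccounting=yes

After editing, a systemctl daemon-reload should make the manager pick up the new default for units started from then on.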

IP Access Lists

So much about IP accounting. Let's now have a look at IP access control with systemd 235. As mentioned above, the two new unit file settings, IPAddressAllow= and IPAddressDeny=, may be used for that. They operate in the following way:

  1. If the source address of an incoming packet or the destination address of an outgoing packet matches one of the IP addresses/network masks in the relevant unit's IPAddressAllow= setting then it will be allowed to go through.

  2. Otherwise, if a packet matches an IPAddressDeny= entry configured for the service it is dropped.

  3. If the packet matches neither of the above it is allowed to go through.

Or in other words, IPAddressDeny= implements a blacklist, but IPAddressAllow= takes precedence.

Let's try that out. Let's modify our last example above in order to get a transient service running an interactive shell which has such an access list set:

# systemd-run -p IPAddressDeny=any -p IPAddressAllow=8.8.8.8 -p IPAddressAllow=127.0.0.0/8 -t /bin/sh
Running as unit: run-u2850.service
Press ^] three times within 1s to disconnect TTY.
sh-4.4# ping 8.8.8.8 -c1
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=59 time=27.9 ms

--- 8.8.8.8 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 27.957/27.957/27.957/0.000 ms
sh-4.4# ping 8.8.4.4 -c1
PING 8.8.4.4 (8.8.4.4) 56(84) bytes of data.
ping: sendmsg: Operation not permitted
^C
--- 8.8.4.4 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms
sh-4.4# ping 127.0.0.2 -c1
PING 127.0.0.1 (127.0.0.2) 56(84) bytes of data.
64 bytes from 127.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms

--- 127.0.0.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms
sh-4.4# exit

The access list we set up uses IPAddressDeny=any in order to define an IP white-list: all traffic will be prohibited for the session, except for what is explicitly white-listed. In this command line, we white-listed two address prefixes: 8.8.8.8 (with no explicit network mask, which means the mask with all bits turned on is implied, i.e. /32), and 127.0.0.0/8. Thus, the service can communicate with Google's DNS server and everything on the local loop-back, but nothing else. The commands run in this interactive shell show this: First we try pinging 8.8.8.8 which happily responds. Then, we try to ping 8.8.4.4 (that's Google's other DNS server, but excluded from this white-list), and as we see it is immediately refused with an Operation not permitted error. As last step we ping 127.0.0.2 (which is on the local loop-back), and we see it works fine again, as expected.

In the example above we used IPAddressDeny=any. The any identifier is a shortcut for writing 0.0.0.0/0 ::/0, i.e. it's a shortcut for everything, on both IPv4 and IPv6. A number of other such shortcuts exist. For example, instead of spelling out 127.0.0.0/8 we could also have used the more descriptive shortcut localhost which is expanded to 127.0.0.0/8 ::1/128, i.e. everything on the local loopback device, on both IPv4 and IPv6.

Being able to configure IP access lists individually for each unit is pretty nice already. However, typically one wants to configure this comprehensively, not just for individual units, but for a set of units in one go or even the system as a whole. In systemd, that's possible by making use of .slice units (for those who don't know systemd that well, slice units are a concept for organizing services in hierarchical tree for the purpose of resource management): the IP access list in effect for a unit is the combination of the individual IP access lists configured for the unit itself and those of all slice units it is contained in.

By default, system services are assigned to system.slice, which in turn is a child of the root slice -.slice. Either of these two slice units are hence suitable for locking down all system services at once. If an access list is configured on system.slice it will only apply to system services, however, if configured on -.slice it will apply to all user processes of the system, including all user session processes (i.e. which are by default assigned to user.slice which is a child of -.slice) in addition to the system services.

Let's make use of this:

# systemctl set-property system.slice IPAddressDeny=any IPAddressAllow=localhost
# systemctl set-property apache.service IPAddressAllow=10.0.0.0/8

The two commands above are a very powerful way to first turn off all IP communication for all system services (with the exception of loop-back traffic), followed by an explicit white-listing of 10.0.0.0/8 (which could refer to the local company network, you get the idea) but only for the Apache service.

Use-cases

After playing around a bit with this, let's talk about use-cases. Here are a few ideas:

  1. The IP access list logic can in many ways provide a more modern replacement for the venerable TCP Wrapper, but unlike it it applies to all IP sockets of a service unconditionally, and requires no explicit support in any way in the service's code: no patching required. On the other hand, TCP wrappers have a number of features this scheme cannot cover, most importantly systemd's IP access lists operate solely on the level of IP addresses and network masks, there is no way to configure access by DNS name (though quite frankly, that is a very dubious feature anyway, as doing networking — unsecured networking even – in order to restrict networking sounds quite questionable, at least to me).

  2. It can also replace (or augment) some facets of IP firewalling, i.e. Linux NetFilter/iptables. Right now, systemd's access lists are of course a lot more minimal than NetFilter, but they have one major benefit: they understand the service concept, and thus are a lot more context-aware than NetFilter. Classic firewalls, such as NetFilter, derive most service context from the IP port number alone, but we live in a world where IP port numbers are a lot more dynamic than they used to be. As one example, a BitTorrent client or server may use any IP port it likes for its file transfer, and writing IP firewalling rules matching that precisely is hence hard. With the systemd IP access list implementing this is easy: just set the list for your BitTorrent service unit, and all is good.

    Let me stress though that you should be careful when comparing NetFilter with systemd's IP address list logic, it's really like comparing apples and oranges: to start with, the IP address list logic has a clearly local focus, it only knows what a local service is and manages access of it. NetFilter on the other hand may run on border gateways, at a point where the traffic flowing through is pure IP, carrying no information about a systemd unit concept or anything like that.

  3. It's a simple way to lock down distribution/vendor supplied system services by default. For example, if you ship a service that you know never needs to access the network, then simply set IPAddressDeny=any (possibly combined with IPAddressAllow=localhost) for it, and it will live in a very tight networking sand-box it cannot escape from. systemd itself makes use of this for a number of its services by default now. For example, the logging service systemd-journald.service, the login manager systemd-logind or the core-dump processing unit systemd-coredump@.service all have such a rule set out-of-the-box, because we know that neither of these services should be able to access the network, under any circumstances.

  4. Because the IP access list logic can be combined with transient units, it can be used to quickly and effectively sandbox arbitrary commands, and even include them in shell pipelines and such. For example, let's say we don't trust our curl implementation (maybe it got modified locally by a hacker, and phones home?), but want to use it anyway to download the slides of my most recent casync talk in order to print it, but want to make sure it doesn't connect anywhere except where we tell it to (and to make this even more fun, let's minimize privileges further, by setting DynamicUser=yes):

    # systemd-resolve 0pointer.de
    0pointer.de: 85.214.157.71
                 2a01:238:43ed:c300:10c3:bcf3:3266:da74
    -- Information acquired via protocol DNS in 2.8ms.
    -- Data is authenticated: no
    # systemd-run --pipe -p IPAddressDeny=any \
                         -p IPAddressAllow=85.214.157.71 \
                         -p IPAddressAllow=2a01:238:43ed:c300:10c3:bcf3:3266:da74 \
                         -p DynamicUser=yes \
                         curl http://0pointer.de/public/casync-kinvolk2017.pdf | lp
    

So much about use-cases. This is by no means a comprehensive list of what you can do with it, after all both IP accounting and IP access lists are very generic concepts. But I do hope the above inspires your fantasy.

What does that mean for packagers?

IP accounting and IP access control are primarily concepts for the local administrator. However, as suggested above, it's a very good idea to ship services that by design have no network-facing functionality with an access list of IPAddressDeny=any (and possibly IPAddressAllow=localhost), in order to improve the out-of-the-box security of our systems.
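As a sketch, that means the shipped unit file simply carries these two lines in its [Service] section (a fragment, not a complete unit):

[Service]
IPAddressDeny=any
IPAddressAllow=localhost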

An option for security-minded distributions might be a more radical approach: ship the system with -.slice or system.slice configured to IPAddressDeny=any by default, and ask the administrator to punch holes into that for each network facing service with systemctl set-property … IPAddressAllow=…. But of course, that's only an option for distributions willing to break compatibility with what was before.

Notes

A couple of additional notes:

  1. IP accounting and access lists may be mixed with socket activation. In this case, it's a good idea to configure access lists and accounting for both the socket unit that activates and the service unit that is activated, as both units maintain fully separate settings. Note that IP accounting and access lists configured on the socket unit applies to all sockets created on behalf of that unit, and even if these sockets are passed on to the activated services, they will still remain in effect and belong to the socket unit. This also means that IP traffic done on such sockets will be accounted to the socket unit, not the service unit. The fact that IP access lists are maintained separately for the kernel sockets created on behalf of the socket unit and for the kernel sockets created by the service code itself enables some interesting uses. For example, it's possible to set a relatively open access list on the socket unit, but a very restrictive access list on the service unit, thus making the sockets configured through the socket unit the only way in and out of the service.

  2. systemd's IP accounting and access lists apply to IP sockets only, not to sockets of any other address families. That also means that AF_PACKET (i.e. raw) sockets are not covered. This means it's a good idea to combine IP access lists with RestrictAddressFamilies=AF_UNIX AF_INET AF_INET6 in order to lock this down.

  3. You may wonder if the per-unit resource log message and systemd-run --wait may also show you details about other types or resources consumed by a service. The answer is yes: if you turn on CPUAccounting= for a service, you'll also see a summary of consumed CPU time in the log message and the command output. And we are planning to hook-up IOAccounting= the same way too, soon.

  4. Note that IP accounting and access lists aren't entirely free. systemd inserts an eBPF program into the IP pipeline to make this functionality work. However, eBPF execution has been optimized for speed in the last kernel versions already, and given that it currently is in the focus of interest to many I'd expect to be optimized even further, so that the cost for enabling these features will be negligible, if it isn't already.

  5. IP accounting is currently not recursive. That means you cannot use a slice unit to join the accounting of multiple units into one. This is something we definitely want to add, but requires some more kernel work first.

  6. You might wonder how the PrivateNetwork= setting relates to IPAddressDeny=any. Superficially they have similar effects: they make the network unavailable to services. However, looking more closely there are a number of differences. PrivateNetwork= is implemented using Linux network name-spaces. As such it entirely detaches all networking of a service from the host, including non-IP networking. It does so by creating a private little environment the service lives in where communication with itself is still allowed though. In addition using the JoinsNamespaceOf= dependency additional services may be added to the same environment, thus permitting communication with each other but not with anything outside of this group. IPAddressAllow= and IPAddressDeny= are much less invasive. First of all they apply to IP networking only, and can match against specific IP addresses. A service running with PrivateNetwork= turned off but IPAddressDeny=any turned on may enumerate the network interfaces and their IP configuration even though it cannot actually do any IP communication. On the other hand if you turn on PrivateNetwork= all network interfaces besides lo disappear. Long story short: depending on your use-case one, the other, both or neither might be suitable for sand-boxing of your service. If possible I'd always turn on both, for best security, and that's what we do for all of systemd's own long-running services.

And that's all for now. Have fun with per-unit IP accounting and access lists!

Why I still choose Ruby

Posted by Mo Morsi on October 08, 2017 07:45 PM

With the plethora of languages available to developers, I wanted to do a quick follow-up post as to why, given my experience in many different environments, Ruby is still the go-to language for all my computational needs!


While different languages offer different solutions in terms of syntax support, memory management, runtime guarantees, and execution flows, the underlying arithmetic, logical, and I/O hardware being controlled is the same. Thus in theory, given enough time and optimization, the performance differences between languages should go to 0 as computational power and capacity increases / goes to infinity (yes, yes, Moore's law and such, but let's ignore that for now).

Of course different classes of problem domains impose their own requirements,

  • real-time processing depends on low-level optimizations that can only be done in assembly and C,
  • data crunching and process parallelization often need minimal latency and optimized runtimes, something which you only get with compiled, statically-typed languages such as C++ and Java,
  • and higher level languages such as Ruby, Python, Perl, and PHP are great for rapid development cycles and providing high level constructs where complicated algorithms can be invoked via elegant / terse means.

But given the rapid improvement in hardware performance in recent years, whole classes of problems which were previously limited to 'lower-level' languages such as C and C++ can now feasibly be implemented in higher level languages.

(Chart: growth in computing power over time; source)

Thus we see high performance financial applications being implemented in Python, major websites with millions of users a day being implemented in Ruby and Javascript, massive data sets being crunched in R, and much more.

So putting the performance aspect of these environments aside we need to look at the syntactic nature of these languages as well as the features and tools they offer for developers. The last is the easiest to tackle as these days most notable languages come with compilers/interpreters, debuggers, task systems, test suites, documentation engines, and much more. This was not always the case though as Ruby was one of the first languages that pioneered builtin package management through rubygems, and integrated dependency solutions via gemspecs, bundler, etc. CPAN and a few other language-specific online repositories existed before, but with Ruby you got integration that was a core part of the runtime environment and community support. Ruby is still known to be on the leading front of integrated and end-to-end solutions.

Syntax differences are a much more difficult subject to discuss objectively, as much of it comes down to programmer preference, but it would be hard to object to the statement that Ruby is one of the most Object Oriented languages out there. It's not often that you can call the string conversion or type identification methods on ALL constructs, variables, constants, types, literals, primitives, etc:

  > 1.to_s
  => "1"
  > 1.class
  => Integer

Ruby also provides logical flow control constructs not seen in many other languages. For example, in addition to the standard if condition then dosomething paradigm, Ruby allows the user to specify the result after the predicate, e.g. dosomething if condition. This simple change allows developers to express concepts in a natural manner, akin to how they would often be described between humans. In addition to this, other simple syntax conveniences include:

  • The unless keyword, simply evaluating to if not
          file = File.open("/tmp/foobar", "w")
          file.write("Hello World") unless file.exist?
    
  • Methods are allowed to end with ? and !, which is great for distinguishing predicate methods (eg. Socket.open?) from mutating methods and/or methods that can throw an exception (eg. DBRecord.save!)
  • Inclusive and exclusive ranges can be specified via parentheses and two or three dots. For example:
          > (1..4).include?(4)
          => true
          > (1...4).include?(4)
          => false
    
  • The yield keyword makes it trivial for any method to accept and invoke a callback during the course of its lifetime
  • And much more

Expanding upon the last, blocks are a core concept in Ruby, one which the language nails right on the head. Not only can any function accept an anonymous callback block, blocks can be bound to parameters and operated on like any other data. You can check the number of parameters the callbacks accept by invoking block.arity, dynamically dispatch blocks, save them for later invocation, and much more.
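A minimal sketch of treating blocks and procs as data (the method and variable names here are just illustrative):

  greet = ->(name){ puts "Hello, #{name}!" }  # a lambda bound to a variable
  puts greet.arity                            # => 1
  greet.call("Ruby")                          # invoked later, like any other object

  def run_later(&callback)
    callback   # the block arrives as a Proc and can be stored or passed around
  end

  saved = run_later { |a, b| a + b }
  puts saved.arity      # => 2
  puts saved.call(1, 2) # => 3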

Due to the asynchronous nature of many software solutions (many problems can be modeled as asynchronous tasks) blocks fit into many Ruby paradigms, if not as the primary invocation mechanism, then as an optional mechanism so as to enforce various runtime guarantees:

  File.open("/tmp/foobar"){ |file|
    # do whatever with file here
  }

  # File is guaranteed to be closed here, we didn't have to close it ourselves!

By binding block contexts, Ruby facilitates implementing tightly tailored solutions for many problem domains via DSLs. Ruby DSLs exist for web development, system orchestration, workflow management, and much more. This of course is not to mention the other frameworks, such as the massively popular Rails, as well as other widely-used technologies such as Metasploit.

Finally, programming in Ruby is just fun. The language is conducive to expressing complex concepts elegantly, jives with many different programming paradigms and styles, and offers a quick prototype-to-production workflow that is intuitive for both novice and seasoned developers. Nothing quite scratches that itch like Ruby!


Running Sapphire Pulse Radeon RX 560 4GB on Fedora 27 Beta

Posted by Luya Tshimbalanga on October 08, 2017 07:08 PM
I bought a Sapphire Pulse Radeon RX 560 4GB to replace the broken Nvidia GTX 460 v2 after long years of service. It is my first ever dedicated AMD-based video card for a desktop.

The boot sequence on Fedora 27 hit a problem: a plain blank screen suggesting the card is not yet supported. Looking at the Phoronix website revealed one possible missing requirement: LLVM 5.0, which is currently not available in the Fedora repository save for a failed build. I filed a bug report to address the issue. Hopefully that will land in time for the official release of Fedora 27.

A step change in managing your calendar, without social media

Posted by Daniel Pocock on October 08, 2017 05:36 PM

Have you been to an event recently involving free software or a related topic? How did you find it? Are you organizing an event and don't want to fall into the trap of using Facebook or Meetup or other services that compete for a share of your community's attention?

Are you keen to find events in foreign destinations related to your interest areas to coincide with other travel intentions?

Have you been concerned when your GSoC or Outreachy interns lost a week of their project going through the bureaucracy to get a visa for your community's event? Would you like to make it easier for them to find the best events in the countries that welcome and respect visitors?

In many recent discussions about free software activism, people have struggled to break out of the illusion that social media is the way to cultivate new contacts. Wouldn't it be great to make more meaningful contacts by attending a more diverse range of events rather than losing time on social media?

Making it happen

There are already a number of tools (for example, Drupal plugins and Wordpress plugins) for promoting your events on the web and in iCalendar format. There are also a number of sites like Agenda du Libre and GriCal who aggregate events from multiple communities where people can browse them.

How can we take these concepts further and make a convenient, compelling and global solution?

Can we harvest event data from a wide range of sources and compile it into a large database using something like PostgreSQL or a NoSQL solution or even a distributed solution like OpenDHT?

Can we use big data techniques to mine these datasources and help match people to events without compromising on privacy?

Why not build an automated iCalendar "to-do" list of deadlines for events you want to be reminded about, so you never miss the deadlines for travel sponsorship or submitting a talk proposal?

I've started documenting an architecture for this on the Debian wiki and proposed it as an Outreachy project. It will also be offered as part of GSoC in 2018.

Ways to get involved

If you would like to help this project, please consider introducing yourself on the debian-outreach mailing list and helping to mentor or refer interns for the project. You can also help contribute ideas for the specification through the mailing list or wiki.

Mini DebConf Prishtina 2017

This weekend I've been at the MiniDebConf in Prishtina, Kosovo. It has been hosted by the amazing Prishtina hackerspace community.

Watch out for future events in Prishtina, the pizzas are huge, but that didn't stop them disappearing before we finished the photos:

PHPUnit 6.4

Posted by Remi Collet on October 08, 2017 03:47 PM

RPMs of PHPUnit version 6.4 are available in the remi repository for Fedora ≥ 24 and for Enterprise Linux (CentOS, RHEL...).

Documentation:

This new major version requires PHP ≥ 7.0 and is not backward compatible with previous versions, so the package is designed to be installed beside version 5.

Installation, Fedora:

dnf --enablerepo=remi install phpunit6

Installation, Enterprise Linux:

yum --enablerepo=remi install phpunit6 php-phpunit-dbunit3

Notice: this tool is an essential component of PHP QA in Fedora. I plan a quick update in Fedora 26, as soon as php-sebastian-diff2 is approved (review #1478358).

KDE connect makes your mobile life easier

Posted by Ding-Yi Chen on October 08, 2017 01:16 PM

KDE Connect connects your mobile and your Linux machine, wirelessly.

You can copy photos, videos, or other files from your mobile to Linux, or vice versa.

You can use your mobile as a remote control for a Linux media player, or even as a wireless mouse and keyboard. On the other hand, the clipboards of mobile and Linux are shared, so you can use your favorite desktop keyboard and input methods with mobile applications, like Clash of Clans and WeChat. The mobile and Linux machine should be on the same subnet, though.

Another interesting way to use KDE Connect is to replace a Yubikey. I use the Yubikey almost every day. Consequently, its contact has become loose, so I need to wiggle it to get a connection. KDE Connect with Google Authenticator or FreeOTP Authenticator might do the trick.

AirDroid is richer in features; however, you need to register an account to use AirDroid, while with KDE Connect you just need to pair the devices, and the connection is encrypted. Most importantly, AirDroid is not open source.

So far, KDE Connect is available in Google Play and most major Linux distributions, including Fedora. An iOS version is not yet in the App Store, though.


Data & Market Analysis in C++, R, and Python

Posted by Mo Morsi on October 08, 2017 01:25 AM

In recent years, since efforts on The Omega Project and The Guild sort of fizzled out, I've been exploring various areas of interest with no particular intent other than to play around with some ideas. Data & Financial Engineering was one of those domains and having spent some time diving into the subject (before once again moving on to something else altogether) I'm sharing a few findings here.

My journey down this path started not too long after the Bitcoin Barber Shop Pole was completed, and I was looking for a new project to occupy my free time (the little of it that I have). Having long since stepped down from the SIG315 board, but still renting a private office at the space, I was looking for some way to incorporate that into my next project (besides just using it as the occasional place to work). Brainstorming a bit, I settled on a data visualization idea, where data relating to any number of categories would be aggregated, geotagged, and then projected onto a virtual globe. I decided to use the Marble widget library, built on top of the Qt Framework, and had great success:

(Screenshot: DataChoppa)

The architecture behind the DataChoppa project was simple: a generic 'Data' class was implemented using smart pointers, on top of which the Facet Pattern was incorporated, allowing data to be recorded from any number of sources in a generic manner and represented via convenient high-level accessors. The data was collected via synchronization and generation plugins implementing a standardized interface; their output was fed onto a queue on which processing plugins were listening, each selecting the data it was interested in operating on. The processors themselves could put more data onto the queue, after which the whole process repeated ad infinitum, allowing each plugin to satisfy one bit of data-related functionality.

(Diagram: DataChoppa architecture)

Core Generic & Data Classes

namespace DataChoppa{
  // Generic value container
  class Generic{
      Map<std::string, boost::any> values;
      Map<std::string, std::string> value_strings;
  };

  namespace Data{
    /// Data representation using generic values
    class Data : public Generic{
      public:
        Data() = default;
        Data(const Data& data) = default;

        Data(const Generic& generic, TYPES _types, const Source* _source) :
          Generic(generic), types(_types), source(_source) {}

        bool of_type(TYPE type) const;

        Vector to_vector() const;

      private:
        TYPES types;

        const Source* source;
    }; // class Data
  }; // namespace Data
}; // namespace DataChoppa

The Process Loop

  namespace DataChoppa {
    namespace Framework{
      void Processor::process_next(){
        if(to_process.empty()) return;
  
        Data::Data data = to_process.first();
        to_process.pop_front();
  
        Plugins::Processors::iterator plugin = plugins.begin();
  
        while(plugin != plugins.end()) {
          Plugins::Meta* meta = dynamic_cast<Plugins::Meta*>(*plugin);
          //LOG(debug) << "Processing " << meta->id;
  
          try{
            queue((*plugin)->process(data));
  
          }catch(const Exceptions::Exception& e){
            LOG(warning) << "Error when processing: " << e.what()
                         << " via " << meta->id;
          }
  
          plugin++;
        }
      }
    }; /// namespace Framework
  }; /// namespace DataChoppa

The HTTP Plugin (abridged)

namespace DataChoppa {
  namespace Plugins{
    class HTTP : public Framework::Plugins::Syncer,
                 public Framework::Plugins::Job,
                 public Framework::Plugins::Meta {
      public:
        /// ...

        /// sync - always return data to be added to queue, even on error
        Data::Vector sync(){
          String _url = url();
          Network::HTTP::SyncRequest request(_url, request_timeout);

          for(const Network::HTTP::Header& header : headers())
            request.header(header);

          int attempted = 0;
          Network::HTTP::Response response(request);

          while(attempts == -1 || attempted < attempts){
            ++attempted;

            try{
              response.update_from(request.request(payload()));

            }catch(Exceptions::Timeout){
              if(attempted == attempts){
                Data::Data result = response.to_error_data();
                result.source = &source;
                return result.to_vector();
              }
            }

            if(response.has_error()){
              if(attempted == attempts){
                Data::Data result = response.to_error_data();
                result.source = &source;
                return result.to_vector();
              }

            }else{
              Data::Data result = response.to_data();
              result.source = &source;
              return result.to_vector();
            }
          }

          /// we should never get here
          return Data::Vector();
        }
    };
  }; // namespace Plugins
}; // namespace DataChoppa

Overall I was pleased with the result (and perhaps I should have stopped there...). The application collected and aggregated data from many sources including RSS feeds (google news, reddit, etc), weather sources (yahoo weather, weather.com), social networks (facebook, twitter, meetup, linkedin), chat protocols (IRC, slack), financial sources, and much more. While exploring the last, I discovered the world of technical analysis and began incorporating various market indicators into a financial analysis plugin for the project.

The Market Analysis Architecture

(Diagrams: DataChoppa market extractors and annotators)

Aroon Indicator (for example)

namespace DataChoppa{
  namespace Market {
    namespace Annotators {
      class Aroon : public Annotator {
        public:
          /// the scalar and vector helpers are static so they can be called
          /// from the lambdas below and from the const annotate() method
          static double aroon_up(const Quote& quote, int high_offset, double range){
            return ((range-1) - high_offset) / (range-1) * 100;
          }

          static DoubleVector aroon_up(const Quotes& quotes, const Annotations::Extrema* extrema, int range){
            return quotes.collect<DoubleVector>([extrema, range](const Quote& q, int i){
                     return aroon_up(q, extrema->high_offsets[i], range);
                   });
          }

          static double aroon_down(const Quote& quote, int low_offset, double range){
            return ((range-1) - low_offset) / (range-1) * 100;
          }

          static DoubleVector aroon_down(const Quotes& quotes, const Annotations::Extrema* extrema, int range){
            return quotes.collect<DoubleVector>([extrema, range](const Quote& q, int i){
                     return aroon_down(q, extrema->low_offsets[i], range);
                   });
          }

          AnnotationList annotate() const{
            const Quotes& quotes = market->quotes;
            if(quotes.size() < range) return AnnotationList();

            const Annotations::Extrema* extrema = aroon_extrema(market, range);
            Annotations::Aroon* aroon = new Annotations::Aroon(range);
            aroon->upper = aroon_up(quotes, extrema, range);
            aroon->lower = aroon_down(quotes, extrema, range);
            return aroon->to_list();
          }
      }; /// class Aroon
    }; /// namespace Annotators
  }; /// namespace Market
}; // namespace DataChoppa

The whole thing worked great: both real-time and historical data was pulled from Yahoo Finance (until they discontinued it... from then on it was Google Finance), the indicators were run, and the results were output. Of course, making $$$ is not as simple as just crunching numbers, and being rather naive I just tossed the results of the indicators into weighted "buckets" and backtested using simple boolean flags, computed from the signals, against threshold values. Thankfully I backtested, though, as the performance was horrible: losses greatly exceeded profits :-(
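
To make that naive approach a bit more concrete, here is a minimal sketch of the weighted-bucket idea, written in Python for brevity even though the implementation at this stage was C++; the indicator names, weights, and threshold below are made up for illustration and are not DataChoppa's actual API:

  # purely illustrative -- indicator names, weights and threshold are invented
  SIGNAL_WEIGHTS = {"aroon": 0.4, "macd": 0.35, "rsi": 0.25}   # weight per "bucket"
  THRESHOLD      = 0.6

  def combined_score(flags):
      """flags: dict of indicator name -> boolean signal for a single bar."""
      return sum(w for name, w in SIGNAL_WEIGHTS.items() if flags.get(name))

  def backtest(bars):
      """bars: list of (price, flags) tuples; go long while the score is above THRESHOLD."""
      pnl, entry = 0.0, None
      for price, flags in bars:
          score = combined_score(flags)
          if entry is None and score >= THRESHOLD:
              entry = price                  # enter position
          elif entry is not None and score < THRESHOLD:
              pnl += price - entry           # exit position
              entry = None
      return pnl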

At this point I should take a step back and note that my progress so far was the result of the availability of a lot of great resources (we really live in the age of accelerated learning). Specifically, the following are indispensable books & sites for those interested in this subject:

  • stockcharts.com - Information on any indicator can be found on this site, with details on how it is computed and how it can be used.
  • investopedia - Sort of the Wikipedia of investment knowledge; offers great high-level insights into how the market works and the financial world as it stands.
  • Beyond Candlesticks - Though candlestick patterns have limited use, this is a great introduction to the subject and to reading charts in general.
  • Nerds on Wall Street - A great book detailing the history of computational finance. Definitely a must-read if you are new to the domain, as it provides a concise, high-level history of how markets have worked over the last few centuries and the various computational techniques employed to Seek Alpha.
  • High Probability Trading - Provides insight into the mentality required and the common pitfalls encountered when trading.
Beyond Candlesticks / Nerds on Wall Street / High Probability Trading

The last book is an excellent resource which conveys the importance of money and risk management, as well as the necessity of factoring in all considerations, or as many as you can, when making financial decisions. In the end, I feel this is the gist of it: it's not solely a matter of luck (though there is an aspect of that), but rather patience, discipline, balance, and most importantly focus (similar to Aikido, but that's a topic for another time). There is no shorting it (unless you're talking about the assets themselves!), and if one does not take the necessary time to research, plan out, and execute strategies properly, they will most likely fail (as most do, according to the numbers).

It was at this point that I decided to take a step back and restrategize. Having reflected and discussed it with some acquaintances, I hedged my bets, cut my losses (tech-wise) and switched from C++ to another platform which would allow me to prototype and execute ideas quicker. A good amount of time had gone into the C++ project and it worked great, but it did not make sense to continue with a slower development cycle when faster options were available (and after all, every engineer knows time is our most precious resource).

Python and R are the natural choices for this project domain, as there is extensive support in both languages for market analysis, backtesting, and execution. I have used Python at various points in the past, so it was easy to hit the ground running; R was new, but by this time no language really poses a serious surprise. The best way I can describe it is spreadsheets on steroids (not exactly: rather than spreadsheets, data frames and matrices are the core components, but one can imagine R as being similar to the central execution environment behind Excel, Matlab, or other statistical software).

I quickly picked up quantmod and prototyped some volatility, trend-following, momentum, and other analysis signal generators in R, plotting them using the provided charting interface. R is a great language for this sort of data manipulation: one can quickly load up structured data from CSV files or online resources, splice it and dice it, chunk it and dunk it, organize it and prioritize it, according to any arithmetic, statistical, or linear/non-linear means one desires. Loading a new 'view' on the data is as simple as a line of code, and operations can quickly be chained together at high performance.

Volatility indicator in R (consolidated)

# quantmod (which also attaches TTR) provides ATR, Cl, chartSeries, addLines, addTA
library(quantmod)

# load_default_symbol and the ATR_RANGE / HIGH_LEVEL / POLY_ORDER constants
# are helpers defined elsewhere in the full script
quotes <- load_default_symbol("volatility")

quotes.atr <- ATR(quotes, n=ATR_RANGE)

quotes.atr$tr_atr_ratio <- quotes.atr$tr / quotes.atr$atr
quotes.atr$is_high      <- ifelse(quotes.atr$tr_atr_ratio > HIGH_LEVEL, TRUE, FALSE)

# Also Generate ratio of atr to close price
quotes.atr$atr_close_ratio <- quotes.atr$atr / Cl(quotes)

# Generate rising, falling, sideways indicators by calculating slope of ATR regression line
atr_lm       <- list()
atr_lm$df    <- data.frame(quotes.atr$atr, Time = index(quotes.atr))
atr_lm$model <- lm(atr ~ poly(Time, POLY_ORDER), data = atr_lm$df) # polynomial linear model

atr_lm$fit   <- fitted(atr_lm$model)
atr_lm$diff  <- diff(atr_lm$fit)
atr_lm$diff  <- as.xts(atr_lm$diff)

# Current ATR / Close Ratio
quotes.atr.abs_per <- median(quotes.atr$atr_close_ratio[!is.na(quotes.atr$atr_close_ratio)])

# plots
chartSeries(quotes.atr$atr)
addLines(predict(atr_lm$model))
addTA(quotes.atr$tr, type="h")
addTA(as.xts(as.logical(quotes.atr$is_high), index(quotes.atr)), col=col1, on=1)

While it all works great, the R language itself offers very little syntactic sugar for operations not related to data processing. While there are libraries for most of the common functionality found in other execution environments, languages such as Ruby and Python offer a "friendlier" experience to novice and seasoned developers alike. Furthermore, the process of data synchronization was a tedious step; I was looking for something that offered the flexibility of DataChoppa to pull in and process live and historical data from a wide variety of sources, caching results on the fly and using those results and analysis for subsequent operations.

This all led me to developing a series of Python libraries targeted towards providing a configurable, high-level view of the market. Intelligence Amplification (IA) as opposed to Artificial Intelligence (AI), if you will (see Nerds on Wall Street).

marketquery.py is a high-level market querying library, which implements plugins used to resolve generic market queries for time-based ticker data. One can use the interface to query for the latest quotes or a specific range of them from a particular source, or allow the framework to select a source for you.

Retrieve the first 3 months of each of the last 5 years of GBPUSD data

  from marketquery.querier        import Querier
  from marketbase.query.builder   import QueryBuilder
  
  sym = "GBPUSD"
  
  first_3mo_of_last_5yr = (QueryBuilder().symbol(sym)
                                         .first("3months_of_year")
                                         .last("5years")
                                         .query)
  
  querier = Querier()
  res     = querier.run(first_3mo_of_last_5yr)
  
  for query, dat in res.items():
      print(query)
      print(dat.raw[:1000] + (dat.raw[1000:] and '...'))

Retrieve the last two months of hourly EURJPY data

  from marketquery.querier        import Querier
  from marketbase.query.builder   import QueryBuilder
  
  sym = "EURJPY"
  
  two_months_of_hourly = (QueryBuilder().symbol(sym)
                                        .last("2months")
                                        .interval("hourly")
                                        .query)
  
  querier = Querier()
  res     = querier.run(two_months_of_hourly).raw()
  print(res[:1000] + (res[1000:] and '...'))

This provides a quick way both to look up market data according to specific criteria and to cache it so that network resources are used effectively. All caching is configurable, and the user can define timeouts based on the target query, source, and/or data retrieved.
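
To illustrate the caching idea (this is only a hypothetical sketch; the names below are invented and do not reflect marketquery.py's real configuration API), per-source and per-interval timeouts might be expressed roughly like this:

  # hypothetical sketch -- not marketquery.py's actual API
  import time

  CACHE_TIMEOUTS = {                 # seconds before a cached result goes stale
      ("yahoo",  "daily"):  4 * 60 * 60,
      ("yahoo",  "hourly"): 15 * 60,
      ("google", "daily"):  4 * 60 * 60,
  }

  _cache = {}                        # (source, interval, symbol) -> (timestamp, data)

  def cached_fetch(source, interval, symbol, fetch):
      """Return cached data while it is fresh, otherwise call fetch() and cache the result."""
      key     = (source, interval, symbol)
      timeout = CACHE_TIMEOUTS.get((source, interval), 60 * 60)
      hit     = _cache.get(key)
      if hit and time.time() - hit[0] < timeout:
          return hit[1]
      data = fetch()
      _cache[key] = (time.time(), data)
      return data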

From there, the next level up is the technical analysis. It was trivial to whip up the tacache.py module, which uses the marketquery.py interface to retrieve raw data before feeding it into TA-Lib and caching the results. The same caching mechanism, offering the same flexibility, is employed: if one needs to process a large data set and/or subsets of it multiple times in a given period, computational resources are not wasted (important when running on a metered cloud).

Computing various technical indicators

  from marketquery.querier       import Querier
  from marketbase.query.builder  import QueryBuilder
  
  from tacache.runner            import TARunner
  from tacache.source            import Source
  from tacache.indicator         import Indicator
  from talib                     import SMA
  from talib                     import MACD
  
  ###
  
  res = Querier().run(QueryBuilder().symbol("AUDUSD")
                                    .query)
  
  ###
  
  ta_runner = TARunner()
  analysis  = ta_runner.run(Indicator(SMA),
                            query_result=res)
  print(analysis.raw)
  
  analysis  = ta_runner.run(Indicator(MACD),
                            query_result=res)
  macd, sig, hist = analysis.raw
  print(macd)

Finally, on top of all this I wrote a2m.py, a high-level querying interface consisting of modules reporting on market volatility, trends, and other metrics: Python scripts which I could quickly execute to report the current and historical market state, making use of the underlying cached query and technical analysis data, periodically invalidated to pull in recent live data.

Example using a2m to compute volatility

  sym = "EURUSD"
  resolver  = Resolver()
  ta_runner = TARunner()

  daily = (QueryBuilder().symbol(sym)
                         .interval("daily")
                         .last("year")
                         .query)

  hourly = (QueryBuilder().symbol(sym)
                          .interval("hourly")
                          .last("3months")
                          .latest()
                          .query)

  current = (QueryBuilder().symbol(sym)
                           .latest()
                           .data_dict()
                           .query)

  daily_quotes   = resolver.run(daily)
  hourly_quotes  = resolver.run(hourly)
  current_quotes = resolver.run(current)

  daily_avg  = ta_runner.run(Indicator(talib.SMA, timeperiod=120),  query_result=daily_quotes).raw[-1]
  hourly_avg = ta_runner.run(Indicator(talib.SMA, timeperiod=30),  query_result=hourly_quotes).raw[-1]

  current_val    = current_quotes.raw()[-1]['Close']
  daily_percent  = current_val / daily_avg  if current_val < daily_avg  else daily_avg  / current_val
  hourly_percent = current_val / hourly_avg if current_val < hourly_avg else hourly_avg / current_val
Awesome to the max

I would go on to use this to execute some Forex trades, again not in an algorithmic/automated manner, but rather based on combined knowledge from fundamentals research as well as the high-level technical data. And what was the result...

Poor squidward

I jest; though I did lose a little $$$, it wasn't that much, and to be honest I feel this was due to a lack of patience/discipline and other "novice" mistakes as discussed above. I did make about half of it back, and then lost interest. This all requires a lot of focus and time, and I had already spent 2+ years' worth of free time on this. With many other interests pulling my strings, I decided to sideline the project(s) altogether and focus on my next crazy venture.

TLDR;

After some consideration, I decided to release the R code I wrote under the MIT license. They are rather simple experiments, though they could be useful as a starting point for others new to the subject. As for the Python modules and DataChoppa, I intend to eventually release them, but aim to take a break first to focus on other efforts and then go back to the war room to figure out the next stage of the strategy.

And that's that! Enough number crunching, time to go out for a hike!

Hiking meme