Fedora People

News: Linux kernel 4.9 LTS.

Posted by mythcat on January 20, 2017 11:09 PM
According to this email, Linux kernel 4.9 will be the next longterm supported kernel version.
From: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>

Might as well just mark it as such now, to head off the constant
questions. Yes, 4.9 is the next longterm supported kernel version.

Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>

News: PulseAudio 10.0 released.

Posted by mythcat on January 20, 2017 10:33 PM
Read about this news here.
  • Automatically switch Bluetooth profile when using VoIP applications
  • New module for prioritizing passthrough streams (module-allow-passthrough)
  • Fixed hotplugging support for USB surround sound cards
  • Separate volumes for Bluetooth A2DP and HSP profiles
  • memfd-based shared memory mechanism enabled by default
  • Removed module-xenpv-sink
  • Dropped dependency on json-c
  • When using systemd to start PulseAudio, pulseaudio.socket is always started first
  • Compatibility with OpenSSL 1.1.0
  • Clarified qpaeq licence

GitHub + Gmail — Filtering for Review Requests and Mentions

Posted by Tim Bielawa on January 20, 2017 07:43 PM

The Problem

I’ve been looking for a way to filter my GitHub Pull Request lists under the condition that a review is requested of me. The online docs didn’t show any filter options for this, so I checked out the @GitHubHelp twitter account. The answer was there on the front page — they don’t support filtering PRs by review-requested-by:me yet:


So what is one to do? I’m using Gmail so I began considering what filter options were available to me there. My objectives were to clearly label and highlight:

  • PRs where review has been requested
  • Comments where I am @mention'd

Interested in knowing more? Read on after the break for all the setup details.

Review Requested

Applying labels for PRs where a review is requested of me is a little hacky, but the solution I came up with works well enough. When your review is requested, you should receive an email from GitHub with a predictable message in it:

@kwoodson requested your review on: openshift/openshift-ansible#3130 Adding oc_version to lib_openshift..

That highlighted part there, requested your review on:, is the key.

In Gmail we’re going to add a new filter. You can reach the new filter menu through the settings interface or by hitting the subtle little down-triangle (▾) left of the magnifying glass (🔍) button in the search bar.

  • In the “Has the words” input box put (in quotes!): "requested your review on:" (You can pick a specific repo if you wish by including it in the search terms)
  • Press the Create filter with this search » link

  • Use the “Apply the label” option to create a new label, for example, “Review Requested”
  • You might want to check the “Also apply filter to X matching conversations” box
  • Create the new filter

Mentions

Labeling @mention's in Gmail is a little easier and less error-prone than the review-request filter. It also follows a similar process.

  1. Create a new filter
  2. In the “To” input box put: Mention <mention@noreply.github.com>
  3. Press the Create filter with this search » link
  4. Continue from step 4 in the previous example (equivalent search queries for both filters are sketched below)
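
Incidentally, you can test both filters in Gmail’s search bar first. The equivalent queries look like this (the from: address on the first line is my assumption about GitHub’s notification sender, so check it against one of your own review-request emails):

"requested your review on:" from:notifications@github.com
to:mention@noreply.github.com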

 

 

A response to ‘Strong Encryption and Death’

Posted by Eric "Sparks" Christensen on January 20, 2017 07:10 PM

I recently read an article on the TriLUG blog mirror discussing access to data after the death of the owner.  I’ve given this a lot of thought as well, and had previously come to the same conclusion as the original author of the article:

“I created a file called “deathnote.txt” which I then encrypted using GPG.  This will encrypt the file so that both Bob and Alice can read it (and I can too). I then sent it to several friends unrelated to them with instructions that, upon my death (but not before), please send this file to Bob and Alice.”

–Tarus

To be honest, I didn’t actually go through with this project as there were just too many variables that I hadn’t figured out.  There is a lot of trust involved in this that potentially requires a very small number of people (2) to really hose things up.  It’s not that I wouldn’t trust my “trusted friends” with the responsibility but it potentially makes them targets and two is just a really low threshold for an adversary to recover this information.

What really threw me was that the author also included a copy of his private key in case they couldn’t locate it on his computer to, I’m assuming here, access other data.  I have one word for this: NOPE!

Okay, short of the private key thing, what was proposed was quite logical.  Like I said above, I had a very similar idea a while back.  Springboarding from that idea, I’d like to propose another layer of security into this whole process.

Splitting up the data

So you have your encrypted blob of information that goes to person A when you kick off, but you don’t want person A to have it before.  Bring in some trusted friends and you have a means of providing the information to person A upon your demise.  But letting a single person, or even two people, control this information is dangerous.  What if you could split that data into further encrypted parts and hand those parts out to several friends?  Then no single person would hold all the information.  You’d likely want some overlap so that you wouldn’t need ALL the friends to present the information (maybe it got lost, maybe the friend got hit by the same bus that you did, etc.), so we’d want to build in a little redundancy.

ssss

Shamir’s Secret Sharing Scheme (ssss) is a neat piece of software that takes some information, encrypts it, and then breaks it into pieces.  Redundancy can be added so that not all parts are required to reassemble the data (think RAID 5).

“In cryptography, a secret sharing scheme is a method for distributing a secret amongst a group of participants, each of which is allocated a share of the secret. The secret can only be reconstructed when the shares are combined together; individual shares are of no use on their own.”

–From the SSSS website

Implementing the solution

Because ssss can only share relatively small strings (less than 1024 bits), my “death” instructions would likely need to be stored whole as a ciphertext, with the (symmetric) key being the shared object.
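
As a rough sketch, the whole flow could look like this with GnuPG and the ssss command line tools (the filenames and passphrase are placeholders):

# Encrypt the instructions symmetrically; gpg prompts for a passphrase
gpg --symmetric --cipher-algo AES256 deathnote.txt
# Split the passphrase into 5 shares, any 3 of which can reconstruct it
echo "correct-horse-battery-staple" | ssss-split -t 3 -n 5
# Later, any 3 shareholders combine their shares to recover the passphrase...
ssss-combine -t 3
# ...which decrypts the ciphertext held by person A
gpg --decrypt deathnote.txt.gpg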

The other piece of this solution would be whom to get to hold the shared bits of keys.  It would likely be best if the individuals were not only trusted but also didn’t know the others involved in the share.  That way there is a smaller chance that these individuals could get together to put the key back together.

Also, if person A is the one holding the cipher text, even if the individuals did find each other they would only have a key and not be able to decode the actual texts.

Conclusion

I’m quite happy that I read the original article and I hope to do the same thing that the author did before I kick the bucket.  I’m quite sure that there are other ways to do what Tarus and I wrote about and actual implementation will vary depending upon the individual, their technical level, and their personal privacy requirements.  This problem, though, is one that deserves to be solved as more and more of our information is kept digitally.


Locations are hard

Posted by Suzanne Hillman (Outreachy) on January 20, 2017 05:17 PM

Turns out that figuring out people’s locations is hard, especially if you want to try to reduce the amount of work someone has to do or if they are likely to be using a mobile phone.

For some reason, I’d thought that this was already a solved problem, so was somewhat surprised when feedback on a mockup made me question that assumption. After pinging Máirín Duffy to find out if we had access to a database of countries, and how they break down into cities/states/provinces/etc, she realized that we needed a longer discussion.

One thing she wondered was whether we actually need street addresses. Given that the goal is to help people find each other nearby, city is almost certainly sufficient for that purpose.

In many cases, especially on mobile, we would have access to GPS information. So, in that case, we can just show them where we think they are — with a map on which we overlay the city and country information and in which we will make sure that that information is appropriately accessible — and they can adjust as necessary.


On computers, we may have access to the information provided by the web browser, and from that we can similarly show them a map with their city and country information. In this case, we may end up being wildly inaccurate due to people using VPN connections.

In both mobile and computer cases, people may not want to share that level of detail. So, for this case, we would use their IP address information to guess, and display the same thing.


Finally, it is entirely possible that a connection error would prevent us from actually having location information. In that case, we would show a zoomed-out map on a computer, with empty city and country fields. On mobile, or if we cannot tell where they are at all, we would show a blank map with empty city and country fields.


In all cases, people can edit the country and city information to make it more accurate. The ‘city’ field will offer type-ahead suggestions which will include any division between city and country that is relevant. For example, if someone is detected as being in Buffalo, NY, but is actually in Boston, MA, we would offer them Boston, NY first, due to proximity, but also show Boston, MA. And anyone can continue typing to get more specificity, or select from a list of visible options. If, however, the country field is incorrect, they will need to change that before the city suggestions will be correct. As with the map location information, type-ahead suggestions need to be appropriately accessible to people who cannot use a mouse or cannot see the suggestions.

The problem with the type-ahead suggestions is that we still need access to a database which contains that information for each country. There are a couple of options, but that problem remains to be solved, and is a large part of making location information actually workable.

This was an unexpectedly complicated discussion, but I’m very glad we had it. For more information, please see issue #286.

Fedora badges: how to

Posted by Maria Leonova on January 20, 2017 04:47 PM

Fedora badges is a perfect place to start if you want to help out the Fedora Design Team. ‘I’m not a designer!’ ‘I can’t draw!’ ‘I’ve never opened Inkscape’ – you might say. And that is totally fine! Everybody can help out, and none of those reasons will stop you from designing your first badge (and getting badges for designing badges ;)).

So let’s look at how to get started! (all of these can be found in our presentation here)

  1. Badges resources

    Inkscape Download: https://inkscape.org/en/download/

    Fedora Badges: https://badges.fedoraproject.org/

    Fedora Badges Trac: https://fedorahosted.org/fedora-badges/

    Fedora Badges Design Resources Zip: https://fedorahosted.org/fedora-badges/attachment/wiki/DesignResources/FedoraBadgesResources.zip

  2. Anatomy of a badge

[badge anatomy diagram]

As you can see, a badge consists of several elements, all of which will differ from badge to badge based on how you categorize it.  More on those as we look at Resources.

3. Resources

So now go ahead and download the Fedora Badges design resources.

ATTENTION! VERY IMPORTANT! Prior to designing, check out the Style Guidelines!  A couple of things to keep in mind here:

  • background and ring colors: it is important to keep badges consistent – please categorize your badge and, based on that, choose colors from the palette. If you need help categorizing, ask on IRC in #fedora-design or during our bi-weekly badges meetings every other Wednesday, 7-8 US Eastern, on fedora-meeting-1@irc.freenode.net.
  • palette (pp 12-13): if you need some other color, pick one from the palette. You can even download and install it on your computer to use straight from Inkscape. To import them, save the .gpl files to the ~/.config/inkscape/palettes/ directory (see the command sketch after this list).
  • fonts (pp 17-18): use Comfortaa and pay attention to do’s and don’ts listed there.
  • do’s and don’ts: it is very important to keep those in mind while designing, so all our badges are consistent and beautiful.
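
Importing the palettes can be as simple as the following sketch, assuming the .gpl files ship inside the design resources zip (the exact paths inside the archive may differ):

unzip FedoraBadgesResources.zip -d badges-resources
mkdir -p ~/.config/inkscape/palettes
find badges-resources -name '*.gpl' -exec cp {} ~/.config/inkscape/palettes/ \;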

Another tip for consistency: once you’ve picked a badge, go look at ALL the badges here: https://badges.fedoraproject.org/explore/badges. If you are just starting, it’s a great place for inspiration; you can see how similar badges have been categorized, and what imagery and patterns have been used. Download one of these badge artwork files and use it as a template or starting point for your badge design. To do that, simply click on a badge and go to its ticket. Usually the .svg can be downloaded from there.


4. Design

  • Look at similar badges on badges index.
  • Choose a concept for your badge. Look at similar elements, consider suggested concepts from the ticket, or come up with something yourself if you feel like it!
  • The easiest badges are conference and event badges. They all use the same colors: purple ring, grey background for conferences and dark blue for presenters. Use the template or even re-use last year’s badge and put your conference logo / year on it – Congratulations! You’re done!
  • Gather inspiration & resources. This means going on the internet and researching images and concepts. For example, if you want to draw a badger on a bike, you might want to search for a photo or an illustration of a person on a bike to use as a reference. No need to reinvent. This may not be necessary for the simpler badges.
  • Categorize your badge using the Style Guide or ask one of us for help.
  • Open the corresponding template, Save as… your filename and get designing! Here’s a link to some nice Inkscape tuts: Fedora and Inkscape. Keep it simple and pay extra attention to resizing stuff. You don’t want to change background size and positioning, so don’t move it around. That way all the badges look the same. When resizing other elements always hold CTRL to maintain proportions. Also don’t worry too much, we’ll review your badge and help if necessary.
  • Feel free to reuse and remix other badges elements. Also remember to SAVE! Save all the time 🙂
  • Once you’re done with the first draft, go to Export PNG image, select where to export, name your file and choose Export area – Page. Check that your badge is 256×256 and there! All done! Congratulations!
  • Upload png to the ticket and ask one of us to review your design.
  • Now work with a mentor to finish it and with a developer to push it.

Debugging a Flatpak application

Posted by Matthias Clasen on January 20, 2017 04:45 PM

Since I’ve been asking people to try the recipes app with Flatpak, I can’t complain too much if I get bug reports back. But how does one create a useful bug report when something goes wrong in a Flatpak sandbox? Some of the stacktraces I’ve seen have not been very useful, since they are lacking symbols.

This post is a quick attempt to spread some basics about Flatpak debugging.

Normally, you run your Flatpak app like this:

flatpak run org.gnome.Recipes

Well, that’s not quite true; the “normal” way to launch the Flatpak is just the same as launching a non-Flatpak app: click on the icon, or hit the Super key, type recipes, hit Enter. But let’s assume you’re launching flatpak from the commandline.

What happens behind the scenes here is that flatpak finds the metadata for org.gnome.Recipes, determines which runtime it needs, sets up the sandbox by mounting the app in /app and the runtime in /usr, does some more sandboxy stuff, and eventually launches the app.

First problem for bug reporting: we want to run the app under gdb to get a stacktrace when it crashes.  Here is how you do that:

flatpak run --command=sh org.gnome.Recipes

Running this command, you’ll end up with a shell prompt “inside” the recipes sandbox.  This is great, because we can now launch our app under gdb (note that the application gets installed in the /app prefix):

$ gdb /app/bin/recipes

Except… this fails because there is no gdb. Remember that we are inside the sandbox, so we can only run what is shipped either with the app in /app/bin or with the runtime in /usr/bin.  And gdb is in neither.

Thankfully, for each runtime, there is a corresponding sdk, which is just like the runtime, except it includes the stuff you need to develop and debug: headers, compilers, debuggers and other useful tools. And flatpak has a handy commandline option to use the sdk instead of the regular runtime:

flatpak run --devel --command=sh org.gnome.Recipes

The --devel option tells flatpak to use the sdk instead of the runtime and do some other things that make debugging in the sandbox work.

Now for the last trick: I was complaining about stacktraces without symbols at the beginning. In rpm-based distributions, the debug symbols are split off into debuginfo packages. Flatpak does something similar and splits all the debug information of runtimes and apps into separate “runtime extensions”, which by convention have .Debug appended to their name. So the debug info for org.gnome.Recipes is in the org.gnome.Recipes.Debug extension.

When you use the --devel option, flatpak automatically includes the Debug extensions for the application and runtime, if they are available. So, for the most useful stacktraces, make sure that you have the Debug extensions for the apps and runtimes in question installed.
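
Putting it all together, a full debugging session might look like this (the remote name in the install command is an assumption; use whichever remote you installed the app from):

flatpak install gnome org.gnome.Recipes.Debug
flatpak run --devel --command=sh org.gnome.Recipes
# now inside the sandbox:
gdb /app/bin/recipes
(gdb) run
# ...reproduce the crash, then:
(gdb) bt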

Hope this helps!

Most of this information was taken from the Flatpak wiki.

Desktop environments in my computer

Posted by Kushal Das on January 20, 2017 01:05 PM

I started my Linux journey with Gnome, as it was the default desktop environment in RHL. It took me some time to find out about KDE; I guess I found it accidentally during a re-installation. It used to be fun to have a desktop that looked different and behaved differently from the usual. During my earlier years in college, while I was trying to find out more about Linux, using KDE marked me as a Linux expert. I was armed with the right syntax of the mount command to mount the Windows partitions, and with the xmms-mp3 rpm. I spent most of my time in the terminal.

Initial KDE days for me

I started my FOSS contribution as a KDE translator, and it was also my primary desktop environment. Though I have to admit, I had never heard the term “DE” or “desktop environment” till 2005. Slowly, I started learning about the various differences, and also the history behind KDE and Gnome. I also felt that the KDE UI looked more polished. But I had one major issue. Sometimes, by mistake, I used to change something in the UI, a wrong click or a wrong drag and drop. I never managed to recover from those stupid mistakes. There was no way for me to go back to the default look and feel without deleting the whole configuration. You may find this really stupid, but my desktop usage knowledge was limited (and still is), due to my usage of terminal based applications. I am not sure about the exact date, but sometime during 2010, I became a full-time Gnome user. Not being able to mess around with my settings actually helped me in this case.

The days of Gnome

There aren’t many things to write about my usage of Gnome. I kept using whatever came through as default Fedora Gnome theme. As I spend a lot of time in terminals, it was never a big deal. I was not sure if I liked Gnome Shell, but I kept using it. Meanwhile, I tried LXDE/XFCE for a few days but went back to the default Fedora UI of Gnome every time. This was the story till the beginning of June 2016.

Introduction of i3wm

After PyCon 2016, I had another two-day meet in Raleigh, the Fedora Cloud FAD. Adam Miller was my roommate during the four-day stay there. As he sat beside me in the meeting, I saw that his desktop looked different. When asked, Adam gave a small demo of i3wm. Later that night, he pointed me to his dotfiles, and I started my journey with a tiling window manager for the first time. I have made a few minor changes to the configuration over time. I also use a .Xmodmap file to make sure that my configuration stays sane even with my Kinesis Advantage keyboard.

The power of using the keyboard for most tasks is what pulled me towards i3wm. It is always faster than moving my hand to the trackball mouse I use. I currently use a few different applications on different workspaces, and I kept opening the same application in the same workspace every time; it hence became muscle memory to switch to any application as required. Till now, except for a few conference projectors, I never had to move back to Gnome for anything. The RAM usage is also very low, as expected.

Though a few of my friends told me i3wm is difficult, I got a completely different reaction when I demoed it to Anwesha. She liked it immediately and started using it as her primary desktop. She finds it much easier to move between workspaces while working. I know she has already demoed it to many others at conferences. :)

The thing which has stayed the same over the years is my usage of the terminal. Learning about many more command line tools has meant more tabs in my terminal, and more tmux sessions on the servers.

The flatpak security model – part 2: Who needs sandboxing anyway?

Posted by Alexander Larsson on January 20, 2017 11:43 AM

The ability to run an application sandboxed is a very important feature of flatpak. However, it is not the only reason you might want to use flatpak. In fact, since currently very few applications work in a fully sandboxed environment, most of the apps you’d run are not sandboxed.

In the previous part we learned that by default the application sandbox is very limiting. If we want to run a normal application we need to open things up a bit.

Every flatpak application contains a manifest, called metadata. This file describes the details of the application, like its identity (app-id) and what runtime it uses. It also lists the permissions that the application requires.

By default, once installed, an application gets all the permissions that it requested. However, you can override the permissions each time you call flatpak run, or globally on a per-application basis by using flatpak override (see the manpages for flatpak-run and flatpak-override for details). The handling of application permissions is currently somewhat hidden in the interface, but the long term plan is to show permissions during installation and make it easier to override them.
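
For example, using an app-id borrowed from elsewhere on this page purely for illustration, you can tighten permissions like this:

# one-off: run this instance without network access
flatpak run --unshare=network org.gnome.Recipes
# persistent: deny the app access to your home directory
flatpak override --nofilesystem=home org.gnome.Recipes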

So, what kind of permissions are there?

First, apps need to be able to produce output and get input. To do this we have permissions that allow access to PulseAudio for sound, and X11 and/or Wayland for graphical output and input. The way this works is that we just mount the unix domain socket for the corresponding service into the sandbox.

It should be noted that X11 is not very safe when used like this; you can easily use the X11 protocol to do lots of malicious things. PulseAudio is also not very secure, but work is in progress on making it better. Wayland, however, was designed from the start to isolate clients from each other, so it is pretty secure in a sandbox.

But, secure or not, almost all Linux desktop applications currently in existence use X11, so it is important that we are able to use it.

Another way for applications to integrate with the system is to use DBus. Flatpak has a filtering dbus proxy, which lets it define rules for what the application is allowed to do on the bus. By default an application is allowed to own its app-id and subnames of it (i.e. org.gnome.gedit and org.gnome.gedit.*) on the session bus. This means other clients can talk to the application, but it can only talk to the bus itself, not to any other clients.

It’s interesting to note this connection between the app-id and the dbus name. In fact, valid flatpak app-ids are defined to have the same form as valid dbus names, and when applications export files to the host (such as desktop files, icons and dbus service files), we only allow exporting files that start with the app-id. This ties very neatly into modern desktop app activation, where the desktop and dbus service files also have to be named by the dbus name. This rule ensures that applications can’t accidentally conflict with each other, but also that applications can’t attack the system by exporting a file that would be triggered by the user outside the sandbox.

There are also permissions for filesystem access. Flatpak always uses a filesystem namespace, because /usr and /app are never from the host, but other directories from the host can be exposed to the sandbox. The permissions here are quite fine-grained, ranging from access to all host files, to your home directory only, or to individual directories. The directories can also be exposed read-only.

The default sandbox only has a loopback network interface and thus has no connection to the network, but if you grant network access then the app gets full network access. There is no partial network access, however. For instance, one would like to be able to set up a per-application firewall configuration. Unfortunately, it is quite complex and risky to set up networking, so we can’t expose it in a safe way for unprivileged use.

There are also a few more specialized permissions, like various levels of hardware device access and some other details. See man flatpak-metadata for the available settings.

All this lets us open up exactly what is needed for each application, which means we can run current Linux desktop applications without modifications. However, the long term goal is to introduce features so that applications can run without opening the sandbox. We’ll get to this plan in the next part.

Until then, happy flatpaking.

PHP version 5.6.30, 7.0.15 and 7.1.1

Posted by Remi Collet on January 20, 2017 06:07 AM

RPMs of PHP version 7.1.1 are available in the remi-php71 repository for Fedora 23-25 and Enterprise Linux (RHEL, CentOS).

RPMs of PHP version 7.0.15 are available in the remi repository for Fedora 25 and in the remi-php70 repository for Fedora 22-24 and Enterprise Linux (RHEL, CentOS).

RPMs of PHP version 5.6.30 are available in the remi repository for Fedora 22-24 and in the remi-php56 repository for Enterprise Linux.

PHP version 5.5 has reached its end of life and is no longer maintained by the project.

These versions are also available as Software Collections.

These versions fix some security bugs, so updating is strongly recommended.

Version announcements:

Installation: use the Configuration Wizard and choose your version and installation mode.

Replacement of default PHP by version 7.1 installation (simplest):

yum-config-manager --enable remi-php71
yum update

Parallel installation of version 7.1 as Software Collection (x86_64 only):

yum install php71

Replacement of default PHP by version 7.0 installation (simplest):

yum-config-manager --enable remi-php70
yum update

Parallel installation of version 7.0 as Software Collection (x86_64 only):

yum install php70

Replacement of default PHP by version 5.6 installation (simplest):

yum-config-manager --enable remi-php56
yum update

Parallel installation of version 5.6 as Software Collection (x86_64 only):

yum install php56

And soon in the official updates:

To be noted:

  • EL7 rpms are built using RHEL 7.2
  • EL6 rpms are built using RHEL 6.8
  • a lot of new extensions are also available, see the PECL extension RPM status page

Information, read:

Base packages (php)

Software Collections (php56 / php70)

Fedora Atomic Working Group update from 2017-01-17

Posted by Kushal Das on January 20, 2017 04:18 AM

This is an update from the Fedora Atomic Working Group based on the IRC meeting on 2017-01-17. 14 people participated in the meeting; the full log of the meeting can be found here.

OverlayFS partition

We decided to have a docker partition in Fedora 26. The root partition sizing will also need to be fixed. One can read all the discussion about this in the Pagure issue.

We also need help writing the documentation for migrating from Devicemapper -> Overlay -> back.

How to compose your own Atomic tree?

Jason Brooks will update his document located at Project Atomic docs.

docker-storage-setup patches require more testing

There are pending patches which will require more testing before merging.

Goals and PRD of the working group

Josh Berkus is updating the goals and PRD documentation for the working group. Both short term and long term goals can be seen at this etherpad. The previous Cloud Working Group’s PRD is much longer than most of the other groups’ PRDs, so we also discussed trimming the Atomic WG PRD.

Open floor discussion + other items

I updated the working group about a recent failure of the QCOW2 image on Autocloud. It appears that if we boot the images with only one VCPU and reboot after disabling the chronyd service, there is no defined time for the ssh service to be up and running.

Misc talked about the hardware plan for FOSP, and later he sent a detailed mail to the list on the same.

Antonio Murdaca (runcom) brought up the discussion about testing the latest Docker (1.13) and pushing it to F25. We decided to spend more time testing it and only then push it to Fedora 25; otherwise it may break Kubernetes/OpenShift. We will schedule a 1.13 testing week in the coming days.

Android apps, IMEIs and privacy

Posted by Matthew Garrett on January 19, 2017 11:36 PM
There's been a sudden wave of people concerned about the Meitu selfie app's use of unique phone IDs. Here's what we know: the app will transmit your phone's IMEI (a unique per-phone identifier that can't be altered under normal circumstances) to servers in China. It's able to obtain this value because it asks for a permission called READ_PHONE_STATE, which (if granted) means that the app can obtain various bits of information about your phone including those unique IDs and whether you're currently on a call.

Why would anybody want these IDs? The simple answer is that app authors mostly make money by selling advertising, and advertisers like to know who's seeing their advertisements. The more app views they can tie to a single individual, the more they can track that user's response to different kinds of adverts and the more targeted (and, they hope, more profitable) the advertising towards that user. Using the same ID between multiple apps makes this easier, and so using a device-level ID rather than an app-level one is preferred. The IMEI is the most stable ID on Android devices, persisting even across factory resets.

The downside of using a device-level ID is, well, whoever has that data knows a lot about what you're running. That lets them tailor adverts to your tastes, but there are certainly circumstances where that could be embarrassing or even compromising. Using the IMEI for this is even worse, since it's also used for fundamental telephony functions - for instance, when a phone is reported stolen, its IMEI is added to a blacklist and networks will refuse to allow it to join. A sufficiently malicious person could potentially report your phone stolen and get it blocked by providing your IMEI. And phone networks are obviously able to track devices using them, so someone with enough access could figure out who you are from your app usage and then track you via your IMEI. But realistically, anyone with that level of access to the phone network could just identify you via other means. There's no reason to believe that this is part of a nefarious Chinese plot.

Is there anything you can do about this? On Android 6 and later, yes. Go to settings, hit apps, hit the gear menu in the top right, choose "App permissions" and scroll down to phone. Under there you'll see all apps that have permission to obtain this information, and you can turn them off. Doing so may cause some apps to crash or otherwise misbehave, whereas newer apps may simply ask for you to grant the permission again and refuse to do so if you don't.

Meitu isn't especially rare in this respect. Over 50% of the Android apps I have handy request your IMEI, although I haven't tracked what they all do with it. It's certainly something to be concerned about, but Meitu isn't especially rare here - there are big-name apps that do exactly the same thing. There's a legitimate question over whether Android should be making it so easy for apps to obtain this level of identifying information without more explicit informed consent from the user, but until Google do anything to make it more difficult, apps will continue making use of this information. Let's turn this into a conversation about user privacy online rather than blaming one specific example.


Translating languages, countries and more - thank you Unicode

Posted by Jean-Baptiste Holcroft on January 19, 2017 11:00 PM

Sometimes you have to translate the names of languages in software. And when you come across “Fulani”, you are left a bit perplexed!

Fortunately, experts have been there before and have already done the translation work!

Making do

The Wikipedia contributors, who back almost everything with publications, are quite useful for specific words that already have a standardized translation. Fulani exists under the name Fula language, which leads us to the French page Peul. Thank you, Wikipedia contributors 😏.

Verify/automate/accelerate with the Unicode Common Locale Data Repository

The Unicode consortium maintains a huge, up-to-date list, whose name Wikipedia contributors have rendered in French as « Répertoire de données de paramètres régionaux classiques ».

It contains all sorts of translations of languages, countries, measurements, days of the week, etc. Going to the CLDR website, we download the latest public version, 30.0.3, and grab cldr-common-30.0.3.zip.

In the tree, we go into the common/main subdirectory and open the XML file fr.xml, which contains what we are looking for. Unsurprisingly, “languages” holds the translations of language names, “scripts” the translations of script names, and so on.
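
As a quick sketch, the whole lookup can be scripted (the download URL is an assumption based on the version named above, so adjust it if needed):

# fetch the CLDR common data and look up the French name for code "ff"
curl -O https://unicode.org/Public/cldr/30.0.3/cldr-common-30.0.3.zip
unzip cldr-common-30.0.3.zip -d cldr
grep 'type="ff"' cldr/common/main/fr.xml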

Well, Peul, just like Fulani, has the code “ff” or “ful”, so the translation is consistent.

Developer friends, if you have this kind of vocabulary to translate, don’t hesitate to build on Unicode!

The risk of not using Unicode: Romania = Romansh

In the Dictionary application, which fetches content from Wiktionary, there are lists of translated languages. Needing to translate into Romanian myself, I spotted two problems: the dictionary's name is translated as “Romanche” (Romansh) instead of “Roumain” (Romanian), and the app re-translates lists of values that are already translated in Android (see the support table).

Everything is in the bug report.

Happy translating!

Which movie most accurately forecasts the Trump presidency?

Posted by Daniel Pocock on January 19, 2017 07:31 PM

Many people have been scratching their heads wondering what the new US president will really do and what he really stands for. His alternating positions on abortion, for example, suggest he may simply be telling people what he thinks is most likely to win public support from one day to the next. Will he really waste billions of dollars building a wall? Will Muslims really be banned from the US?

As it turns out, several movies provide a thought-provoking insight into what could eventuate. What's more, two of them bear a creepy resemblance to the Trump phenomenon and to many of the problems in the world today.

Countdown to Looking Glass

On the classic cold war theme of nuclear annihilation, Countdown to Looking Glass is probably far more scary to watch on Trump eve than in the era when it was made. Released in 1984, the movie follows a series of international crises that have all come to pass: the assassination of a US ambassador in the middle east, a banking crisis and two superpowers in an escalating conflict over territory. The movie even picked a young Republican congressman for a cameo role: he subsequently went on to become speaker of the house. To relate it to modern times, you may need to imagine it is China, not Russia, who is the adversary but then you probably won't be able to sleep after watching it.

cleaning out the swamp?

The Omen

Another classic is The Omen. The star of this series of four horror movies, Damien Thorn, has a history that is eerily reminiscent of Trump's: born into a wealthy family; a series of disasters befalling every honest person he comes into contact with; a vast business empire acquired by inheritance; and, as he enters the world of politics in the third movie of the series, a scene in the Oval Office where he is flippantly advised that he shouldn't lose any sleep over any conflict of interest arising from his business holdings. Did you notice that Damien Thorn and Donald Trump even share the same initials, DT?

LXQt Spin proposed for Fedora 26, new test build available

Posted by Christian Dersch on January 19, 2017 05:58 PM
Around Christmas we announced some initial effort for a Fedora LXQt remix/spin. After some weeks of testing and tuning, reworking translation packages and updating the whole of LXQt to 0.11.x (x>0), the LXQt SIG decided to propose the LXQt Spin for inclusion in Fedora 26.

The current selection of applications:

  • LXQt 0.11.x
  • PCManFM-Qt (LXQt file manager)
  • Ark (archiver, from KDE)
  • Dragon (media player, from KDE)
  • KCalc (calculator, from KDE)
  • KWrite (text editor, from KDE)
  • LXImage-Qt (image viewer)
  • Psi+ (XMPP client)
  • qBittorrent (torrent client)
  • Qlipper (clipboard tool)
  • qpdfview (pdf and ps viewer)
  • Quassel (IRC client)
  • QupZilla (web browser)
  • Trojita (IMAP mail client)
  • Yarock (music player)
The set of applications is not yet fixed; we've chosen some KDE applications as they are Qt5 based and integrate well while having a small dependency footprint. In cases where LXQt provides an application (e.g. the LXImage-Qt image viewer), that one has been selected.

For configuration we included the LXQt config tools (lxqt-config and obconf-qt) of course; in addition we added lxappearance to be able to change GTK themes too. The theme itself is the Breeze theme known from KDE; it looks nice and is also provided for GTK, so the user can have a unified look. By default we've chosen the Openbox window manager; in addition the spin will contain KWin for those who would like to have compositing etc.

For software management we included dnfdragora, a nice graphical frontend for DNF, providing a Qt based GUI in our case (but as it uses the libyui abstraction layer, it can use GTK and curses too, as known from SUSE YaST). This is not yet included in Fedora, but is well on its way to arriving soon. Right now Kevin Kofler provides a COPR for it.

A new test build is available in the usual location; comments and ideas (like different applications which may fit better) should be shared in our project on pagure.

Episode 28 - RSA Conference 2017

Posted by Open Source Security Podcast on January 19, 2017 02:00 PM
Josh and Kurt discuss their involvement in the upcoming 2017 RSA conference: Open Source, CVEs, and Open Source CVE. Of course IoT and encryption manage to come up as topics.

Download Episode

Show Notes


Early module development with Fedora Modularity

Posted by Adam Samalik on January 19, 2017 12:01 PM

So you like the Fedora Modularity project – where we separate the lifecycle of software from the distribution – and you want to start with module development early? Maybe to have it ready for the modular Fedora 26 Server preview? Start developing your modulemd on your local system now, and have it ready for later when the Module Build Service is in production!

Defining your module

To have your module build, you need to start with writing a modulemd file which is a definition of your module including the components, API, and all the information necessary to build your module like specifying the build root and a build order for the packages. Let’s have a look at an example Vim module:

document: modulemd
version: 1
data:
    summary: A ubiquitous text editor
    description: >
        Vim is a highly configurable text editor built to make creating and
        changing any kind of text very efficient.
    license:
        module:
            - MIT
        content: []
    xmd: ~
    dependencies:
        buildrequires:
            generational-core: master
        requires:
            generational-core: master
    references:
        community: http://www.vim.org/
        documentation: http://www.vim.org/docs.php
        tracker: https://github.com/vim/vim/issues
    profiles:
        default:
            rpms:
                - vim-enhanced
        minimal:
            rpms:
                - vim-minimal
        graphical:
            rpms:
                - vim-X11
    api:
        rpms:
            - vim-minimal
            - vim-enhanced
            - vim-X11
    filter:
        rpms: ~
    components:
        rpms:
            vim-minimal:
                rationale: The minimal variant of VIM, /usr/bin/vi.
            vim-enhanced:
                rationale: The enhanced variant of VIM.
            vim-X11:
                rationale: The GUI variant of VIM.
            vim-common:
                rationale: Common files needed by all VIM variants.
            vim-filesystem:
                rationale: The directory structure used by VIM packages.

Notice that there is no information about the name or version of the module. That’s because the build system takes this information from the git repository from which the module is built:

  • Git repository name == module name
  • Git repository branch == module stream
  • Commit timestamp == module version

You can also see my own FTP module for reference.

To build your own module, you need to create a Git repository with the modulemd file. The name of your repo and the file must match the name of your module:

$ mkdir my-module
$ touch my-module/my-module.yml

The core idea of modules is that they include all their dependencies. Well, except the base packages found in the Base Runtime API – which hasn’t been defined yet. But don’t worry, you can use this list of binary packages in the meantime.

So the components list in your modulemd needs to include all the dependencies except the ones mentioned above. You can get a list of recursive dependencies for any package by using repoquery:

$ repoquery --requires --recursive --resolve PACKAGE_NAME

When you have this ready, you can start with building your module.

Building your module

To build a modulemd, you need to have the Module Build Service installed on your system. There are two ways of achieving that:

  1. Installing the module-build-service package with all its dependencies.
  2. Using a pre-built docker image and a helper script.

Both options provide the same result, so choose whichever you like better.

Option 1: module-build-service package

On Fedora rawhide, just install it by:

$ sudo dnf install module-build-service

I have also created a Module Build Service copr repo for Fedora 24 and Fedora 25:

$ sudo dnf copr enable asamalik/mbs
$ sudo dnf install module-build-service

To build your modulemd, run a command similar to the following:

$ mbs-manager build_module_locally file:///path/to/my-module?#master

The output will be a yum/dnf repository in the /tmp directory.

Option 2: docker image and a helper script

With this option you don’t need to install all the dependencies on your system, but it requires you to setenforce 0 before the build. :-(

You only need to clone the asamalik/build-module repository on GitHub and use the helper script as follows:

$ build_module ./my-module ./results

The output will be a yum/dnf repository in the path you have specified.

What’s next?

The next step would be installing your module on the Base Runtime and testing that it works. But as we are doing this pretty early, there is no Base Runtime at the moment I’m writing this. However, you can try your module in a container using a pre-built fake Base Runtime image.

To handcraft your modular container, please follow the Developing and Building Modules guide on our wiki, which provides all the necessary steps while showing you how modular containers might be built in the future infrastructure!

DevConf.cz 2017

Are you visiting DevConf.cz in Brno? There is a talk about Modularity and a workshop where you can try building your own module as well. Both can be found in the DevConf.cz 2017 Schedule.

  • Day 1 (Friday) at 12:30 – Fedora Modularity – How does it work?
  • Day 2 (Saturday) at 16:00 – Fedora Modularity – Build your own module!

See you there!

 

spyder 3 for Fedora

Posted by nonamedotc on January 19, 2017 02:56 AM

Spyder 3 was released some time back, and the latest version, 3.1.0, was released yesterday. I have been working on updating Spyder to 3.x for some time now. Towards this effort, I got the following packages reviewed and included in Fedora -

  1. python-QtPy
  2. python-QtAwesome
  3. python-flit
  4. python-entrypoints
  5. python-nbconvert
  6. python-pickleshare

In addition to this, the package python-ipykernel had to be reviewed. This was completed sometime towards the end of last year.

Now that all the packages are available (in different forms), I have put together a COPR repo where the spyder 3.1.0 package resides. I would like to get these packages tested before I submit them as a big update to Fedora 25.

COPR repo is here - nonamedotc/spyder3 - COPR repo

Of course, this repo can be directly enabled from a terminal -

dnf copr enable nonamedotc/spyder3

To install spyder along with ipython console from this repo, do

dnf install python{2,3}-{spyder,ipython}


Note: ipython package provided by this repo is version 5.1.0 (since ipykernel needs ipython >= 4.0.0). This will necessitate removing the ipython package provided by the Fedora repo. I have requested an update to ipython already [1].


When spyder3 (the python3 version of spyder) is launched, there will be a pop-up complaining that rope is not installed. This is because we do not yet have a python3 version of rope. Ignoring that should not cause any major issues.

Obligatory screenshot -


 

Please test these packages and let me know if there are issues so that I can fix them and submit an update. I am hoping to submit this as an update as soon as ipython is done.


[1] https://bugzilla.redhat.com/show_bug.cgi?id=1400383

 

The flatpak security model – part 1: The basics

Posted by Alexander Larsson on January 18, 2017 08:59 PM

This is the first part of a series talking about the approach flatpak takes to security and sandboxing.

First of all, a lot of people think of container technology like docker, rkt or systemd-nspawn when they think of linux sandboxing. However, flatpak is fundamentally different from these in that it is unprivileged.

What I mean is that all the above run as root, and to use them you either have to be root, or your access to it is equivalent to root. For instance, if you have access to the docker socket then you can get a full root shell with a command like:

docker run -t -i --privileged -v /:/host fedora chroot /host

Flatpak instead runs everything as the regular user.  To do this it uses a project called bubblewrap which is like a super-powered version of chroot, only you don’t have to be root to run it.

Bubblewrap can do more than just change the root; it lets you construct a custom filesystem mount tree for the process. Additionally, it lets you create namespaces to further isolate things from the host. For instance, if you use --unshare-pid, then your process will not see any processes from outside the sandbox.
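
A minimal invocation shows the idea; this is a sketch assuming typical host paths:

# read-only /usr from the host, fresh /proc and /dev, and a new PID
# namespace so the shell cannot see or touch host processes
bwrap --ro-bind /usr /usr \
      --symlink usr/lib64 /lib64 \
      --proc /proc \
      --dev /dev \
      --unshare-pid \
      sh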

Now, chroot is a root-only operation. How can it be that bubblewrap lets you do the same thing but doesn’t require root privileges? The answer is that it uses unprivileged user namespaces.

Inside such a user namespace you get a lot of capabilities that you don’t have outside it, such as creating new bind mounts or calling chroot. However, in order to be allowed to use this you have to set up a few process limits. In particular you need to set a process flag called PR_SET_NO_NEW_PRIVS. This causes all forms of privilege escalation (like setuid) to be disabled, which means the normal ways to escape a chroot jail don’t work.

Actually, I lied a bit above. We do use unprivileged user namespaces if we can, but many distributions disable them. The reason is that user namespaces open up a whole new attack surface against the kernel, allowing an unprivileged user access to lots of things that may not be perfectly adapted to unprivileged use. For instance, CVE-2016-3135 was a local root exploit which used a memory corruption in an iptables call. This is normally only accessible by root, but user namespaces made it user exploitable.

If user namespaces are disabled, bubblewrap can be built as a setuid helper instead. This still only lets you use the same features as before, and in many ways it is actually safer this way, because only a limited subset of the full functionality is exposed. For instance, you cannot use bubblewrap to exploit the iptables bug above, because it doesn’t set up iptables (and if it did, it wouldn’t pass untrusted data to it).

Long story short, flatpak uses bubblewrap to create a filesystem namespace for the sandbox. This starts out with a tmpfs as the root filesystem, and in this we bind-mount read-only copies of the runtime on /usr and the application data on /app. Then we mount various system things like a minimal /dev, our own instance of /proc and symlinks into /usr from /lib and /bin. We also enable all the available namespaces so that the sandbox cannot see other processes/users or access the network.

On top of this we use seccomp to filter out syscalls that are risky. For instance ptrace, perf, and recursive use of namespaces, as well as weird network families like DECnet.

In order for the application to be able to write data anywhere we bind mount $HOME/.var/app/$APPID/ into the sandbox, but this is the only persistent writable location.

In this sandbox we then spawn the application (after having dropped all increased permissions). This is a very limited environment, and there isn’t much the application can do. In the next part of this series we’ll start looking into how things can be opened up to allow the app to do more.

Recipes for you and me

Posted by Matthias Clasen on January 18, 2017 08:31 PM

Since I’ve last written about recipes, we’ve started to figure out what we can achieve in time for GNOME 3.24, with an eye towards delivering a useful application. The result is this plan, which should be doable.

But: your help is needed. We need more recipe contributions from the GNOME community to have a well-populated initial experience. Everybody who contributes a recipe before 3.24 will get a little thank-you from us, so don’t delay…

The 0.8.0 release that I’ve just created already contains the first steps of this plan. One thing we decided is that we don’t have the time and resources to make the ingredients view useful by March, so the Ingredients tab is gone for now.

At the same time, there’s a new feature here, and that is the blue tile leading to the shopping list view:

The design for this page is still a bit up in the air, so you should expect this to change in the next releases. I decided to merge it already anyway, since I am impatient, and this view already provides useful functionality. You can print the shopping list:

Beyond this, I’ve spent some time on polishing and fixing bugs. One thing that I’ve discovered to my embarrassment earlier this week is that exporting recipes from the flatpak did not actually work. I had only ever tested this with an un-sandboxed local build.

Sorry to everyone who tried to export their recipe and was left wondering why it didn’t work!

We’ve now fixed all the bugs that were involved here, both in recipes and in the file chooser portal and in the portal infrastructure itself, and exporting recipes works fine with the current flatpak, which, as always, you can install from here:

https://alexlarsson.github.io/test-releases/gnome-recipes.flatpakref

One related issue that became apparent during this bug hunt is that things work less than perfectly if the portals are not present on the host system. Until that becomes less likely, I’ve added a bit of code to make the failure less mysterious, and give you some idea how to fix it:

I think recipes is proving its value as a test bed and early adopter for flatpak and portals. At this point, it is using the file chooser portal, the account information portal, the print portal, the notification portal, the session inhibit portal, and it would also use the sharing portal, if we had that already.

I shouldn’t close this post without mentioning that you will have a chance to hear a bit from Elvin about the genesis of this application in the Fosdem design devroom. See you there!

Litecoin mining on "whatever"

Posted by Joerg Stephan on January 18, 2017 07:06 PM
I started mining litecoins for fun, so this is really not a post about the best way to mine or how to get rich.

This post is more about what happens if you use “what is lying around”, so I installed “minerd” on my Raspberry and Banana Pi, and also looked at what phones were still in the desk and installed mining software from their respective stores.
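
For reference, minerd is typically pointed at a pool with something like the following; the pool URL and credentials are placeholders:

minerd -a scrypt -o stratum+tcp://pool.example.com:3333 -u username.worker -p password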

In the last 5 days I managed to get

0.000157304676 LTC

by using some devices, whereas the Pis have been running the whole time and the additional hardware has been switched on and off sometimes.


  1. Banana Pi M1
    ARM Cortex-A7 dual-core (ARMv7-A) 1 GHz
    Mining average over 24 hrs: 0.8 KH/s
  2. Raspberry Pi 3 B
    64-bit ARMv8 quad-core CPU at 1.2GHz
    Mining average over 24 hrs: 1.2 KH/s
  3. Microsoft LUMIA 640
    Quad-core 1.2 GHz Cortex-A7
    Mining average over 24 hrs: 1.3 KH/s
  4. Lenovo A390t
    Dual-core 1.0 GHz Cortex-A9
    Mining average over 24 hrs: 0.6 KH/s
Some devices which have never run for a full 24 hrs:
  1. Lenovo Thinkpad T410i
    Intel(R) Core(TM) i5 CPU       M 430  @ 2.27GHz
    Mining peaks: 14 KH/s

Secure your Elasticsearch cluster and avoid ransomware

Posted by Peter Czanik on January 18, 2017 04:26 PM

Last week, news came out that unprotected MongoDB databases are being actively compromised: content copied and replaced by a message asking for a ransom to get it back. As The Register reports: Elasticsearch is next.

Protecting access to Elasticsearch with a firewall is not always possible. But even in environments where it is possible, many admins are not protecting their databases. Even if you cannot use a firewall, you can secure the connection to Elasticsearch by using encryption. Elasticsearch by itself does not provide any authentication or encryption capabilities. Still, there are many third-party solutions available, each with its own drawbacks and advantages.

X-pack (formerly: Shield) is the solution developed by Elastic.co, the company behind Elasticsearch. It is a commercial product (on first installation a 30 day trial license is installed) and offers many more possibilities than just securing your Elasticsearch cluster, including monitoring, reporting and alerting. Support is available in syslog-ng for Elasticsearch versions 2.X since version 3.7.

SearchGuard is developed by floragunn. It is a plugin for Elasticsearch offering encryption and authentication. All basic security features are open source and are available for free, enterprise features are available for a fee. Support is available in syslog-ng since version 3.9.1 when using the native Elasticsearch transport protocol. The SearchGuard component utilized by syslog-ng does not require a commercial license.
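
As a sketch, a syslog-ng destination using the SearchGuard client mode might look roughly like this (the values are placeholders, and the exact option set, especially for TLS, should be checked against the syslog-ng 3.9 documentation):

destination d_elasticsearch {
  elasticsearch2(
    client-mode("searchguard")
    cluster("my-es-cluster")
    index("syslog-${YEAR}.${MONTH}.${DAY}")
    type("messages")
  );
};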

Right now the HTTP client in syslog-ng does not support encrypted (HTTPS) connections. Proof-of-concept-level code is already available from Fabien Wernli (also known as Faxm0dem) on GitHub; hopefully it will be ready for general use soon.

As you can see, syslog-ng provides many different ways to connect securely to your Elasticsearch cluster. If you have not secured it yet and want to avoid paying a ransom, secure it now!

The post Secure your Elasticsearch cluster and avoid ransomware appeared first on Balabit Blog.

Improve your sleep by using Redshift on Fedora

Posted by Fedora Magazine on January 18, 2017 06:20 AM

The blue light emitted by most electronic devices is known for having a negative impact on our sleep. We could simply quit using each of our electronic devices after dark, as an attempt to improve our sleep. However, since that is not really convenient for most of us, a better way is to adjust the color temperature of your screen according to your surroundings. One of the most popular ways to achieve this is with the Redshift utility. Jon Lund Steffensen, the creator of Redshift, describes his program in the following way:

Redshift adjusts the color temperature of your screen according to your surroundings. This may help your eyes hurt less if you are working in front of the screen at night.

The Redshift utility only works in the X11 session on Fedora Workstation. So if you’re using Fedora 24, Redshift will work with the default login session. However, on Fedora 25, the default session at login is Wayland, so you will have to use the GNOME Shell extension instead. Note, too, that the GNOME Shell extension also works with X11 sessions.

Redshift utility

Installation

Redshift is in Fedora’s repos, so all we have to do to install it is run this command:

sudo dnf install redshift

The package also provides a GUI. To use this, install redshift-gtk instead. Remember, though, that the utility only works on X11 sessions.

Using the Redshift utility

Run the utility from the command line with a command like the following:

redshift -l 23.6980:133.8807 -t 5600:3400

In the above command, the -l 23.6980:133.8807 tells Redshift our current location as latitude:longitude (note that southern latitudes and western longitudes must be given as negative numbers, so 23.6980° S would be -23.6980). The -t 5600:3400 declares that during the day you want a colour temperature of 5600K, and 3400K at night.

The temperature is proportional to the amount of blue light emitted: a lower temperature implies a lower amount of blue light. I prefer to use 5600K (6500K is neutral daylight) during the day, and 3400K at night (anything lower makes me feel like I’m staring at a tomato), but feel free to experiment with it.

If you don’t specify a location, Redshift attempts to use the Geoclue method to determine your location coordinates. If this method doesn’t work, you can use one of the many websites and online maps to find the coordinates.

screenshot1

Don’t forget to set Redshift as an autostart command, and to check Jon’s website for more information.
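
One way to autostart it under GNOME is a freedesktop autostart entry. A minimal sketch, reusing the example coordinates and temperatures from above:

mkdir -p ~/.config/autostart
cat > ~/.config/autostart/redshift.desktop << 'EOF'
[Desktop Entry]
Type=Application
Name=Redshift
Exec=redshift -l 23.6980:133.8807 -t 5600:3400
EOF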

Redshift GNOME Shell extension

The utility does not work when running the Wayland display server (which is standard in Fedora 25). Fortunately, there is a handy GNOME Shell extension that will do the same job. To install it, run the following commands:

sudo dnf copr enable mystro256/gnome-redshift
sudo dnf install gnome-shell-extension-redshift

After installing from the COPR repo, log out of your Fedora Workstation and log back in, then enable the extension in the GNOME Tweak tool. For more information, check the gnome-redshift COPR repo, or the GitHub repo.

After enabling the extension, a little sun (or moon) icon appears in the top right of your GNOME shell. The extension also provides a settings dialog to tweak the redshift times and the temperature.

screenshot-from-2017-01-18-15-21-47

Related software

F.lux

Redshift can be seen as the open-source variant of F.lux. There is a Linux version of F.lux now. You could consider using it if you don’t mind using closed-source software, or if Redshift doesn’t work properly.

Twilight for Android

Twilight is similar to Redshift, but for Android. It makes reading on your smartphone or tablet late at night more comfortable.

Redshift plasmoid

This is the Redshift GUI version for KDE. You can find more information on GitHub.

All systems go

Posted by Fedora Infrastructure Status on January 18, 2017 12:48 AM
New status good: Everything seems to be working. for services: Fedora Wiki, Fedora People, Zodbot IRC bot, The Koji Buildsystem, Darkserver, Tagger, Package Database, Fedora pastebin service, Blockerbugs, Badges, FedoraHosted.org Services, Mirror Manager, Koschei Continuous Integration, Ipsilon, Mirror List, Fedora Infrastructure Cloud, Account System, Package maintainers git repositories, Fedora websites, Documentation website, COPR Build System, Package Updates Manager, Ask Fedora, Fedora Packages App, FreeMedia, Fedora Messaging Bus, Fedora elections, Mailing Lists, Fedora Calendar, Kerberos

Major service disruption

Posted by Fedora Infrastructure Status on January 18, 2017 12:47 AM
New status major: network outage at main DC for services: Fedora Wiki, Fedora People, Zodbot IRC bot, The Koji Buildsystem, Darkserver, Tagger, Package Database, Fedora pastebin service, Blockerbugs, Badges, FedoraHosted.org Services, Mirror Manager, Koschei Continuous Integration, Ipsilon, Mirror List, Fedora Infrastructure Cloud, Account System, Package maintainers git repositories, Fedora websites, Documentation website, COPR Build System, Package Updates Manager, Ask Fedora, Fedora Packages App, FreeMedia, Fedora Messaging Bus, Fedora elections, Mailing Lists, Fedora Calendar, Kerberos

Mea Culpa: Fedora Elections

Posted by Stephen Smoogen on January 17, 2017 05:30 PM
As announced here, here , and here, the Fedora Election cycle for the start of the 25 release is done. Congratulations to the winners. Now, if you notice, there were fewer than 250 voters in any of the elections, out of multiple thousands of eligible voters... and I was not one of them.

It is not as if the elections weren't announced beforehand, at the start, and right before they ended. Yet somehow... I missed every one of these emails. I caught various emails on NFS configuration changes, proposed changes for Fedora 26 and 27, and various retired packages... but I completely spaced on the elections. I was actually composing an email asking when they would be held when someone on IRC congratulated Kevin Fenzi about winning.

So, to the winners of this cycle of elections: congratulations. To all the people who put in the hard work of running the elections (and having run several, I know it is a LOT of hard work), my sincere apologies for somehow missing it.

Elections Retrospective, January 2017

Posted by Fedora Community Blog on January 17, 2017 04:49 PM

The results are in! The Fedora Elections for the Fedora 25 release cycle of FESCo, FAmSCo and the Council concluded on Tuesday, January 17th. The results are posted on the Fedora Voting Application and announced on the mailing lists. You can also find the full list of winning candidates below. I would also like to share some interesting statistics in this January 2017 Elections Retrospective.

January 2017 Elections Retrospective Report

In this election cycle, voter turnout was above its average level. This is great news, as it shows increased interest in community affairs among the Fedora people.

This election cycle was hit by some planning issues, as we were running the Elections over the Christmas 2016 period. At the beginning I was worried about the turnout because of Christmas, but fortunately those worries proved unfounded and we are more than good from this point of view.

Fedora Engineering Steering Committee (FESCo)

We had five vacant seats and seven nominations for the F25 cycle, with 267 voters casting their votes.

FESCo Winning Candidates Votes
Kevin Fenzi (nirik / kevin) [info] 1401
Adam Miller (maxamillion / maxamillion) [info] 1075
Jared Smith (jsmith / jsmith) [info] 988
Justin Forbes (jforbes / jforbes) [info] 735
Kalev Lember (kalev / kalev) [info] 691

Out of the five elected nominees, four (nirik, maxamillion, jsmith, and kalev) have been elected for a repeat term. One elected nominee (jforbes) has been elected for the first time.

Compared to the historical data, with 267 voters we are above the average of 215 voters.
fesco-elections-2017-01
The following chart shows how many people voted each day during the voting period.
fesco-elections-per-day-2017-01
More FESCo statistics can be found in the voting application.

Fedora Council

We had one vacant seat and five nominations for the Fedora 25 cycle, with 260 voters casting their votes.

Council Winning Candidate Votes
Robert Mayr (robyduck) [info] 743

The Fedora Council came into existence in November 2014, and hence we do not have much previous data. Historically, before we had a Council, there was a Board. On the chart below you can see the comparison between voter turnout for the Fedora Board elections and the Council elections. The average voter turnout for Council & Board elections together is 223, and for Council-only elections it is 220.

council-elections-2017-01

The profile for number of voters per day was similar to the one we saw for FESCo.

council-elections-per-day-2017-01

More Council statistics can be found in the voting application.

Fedora Ambassadors Steering Committee (FAmSCo)

We had seven vacant seats and thirteen nominations for the Fedora 25 cycle, with 247 voters casting their votes.

FAmSCo Winning Candidates Votes
Robert Mayr (robyduck) [info] 1623
Jona Azizaj (jonatoni) [info] 1576
Gabriele Trombini (mailga) [info] 1274
Giannis Konstantinidis (giannisk) [info] 1168
Itamar Reis Peixoto (itamarjp) [info] 1110
Frederico Lima (fredlima) [info] 1010
Sylvia Sanchez (Kohane / lailah) [info] 964

Due to the effort spent during the last several years on converting FAmSCo to FOSCo, it is difficult to directly compare turnout data across elections. However, we can state that during this election cycle we hit the best turnout ever (as far as records are available). The average turnout for FAmSCo is 161 voters. This cycle we hit 247 voters.

famsco-elections-2017-01

Again, we can see the same distribution of voters over the voting period as we saw for FESCo and the Council.

famsco-elections-per-day-2017-01

More statistics can be found in the Voting application.

Special Thanks

Congratulations to the winning candidates, and thank you to all the candidates who ran in this election! Community governance is core to the Fedora Project, and we couldn’t do it without your involvement and support.

A special thanks to bee2502 and jflory7 as well as to the members of the CommOps Team for helping organize another successful round of Elections!

And last but not least, thank YOU to all the Fedora community members who participated and voted in this election cycle. Stay tuned for more Elections Retrospective articles after future Elections!

The post Elections Retrospective, January 2017 appeared first on Fedora Community Blog.

negativo17.org nvidia packages should now work out of the box on optimus setups

Posted by Hans de Goede on January 17, 2017 01:31 PM
In this blog post I promised I would get back to people who want to use the nvidia driver on an optimus laptop.

The set of xserver patches I blogged about last time has landed upstream and in Fedora 25 (in xorg-x11-server 1.19.0-3 and newer), allowing the nvidia driver packages to drop in an xorg.conf snippet which makes the driver automatically work on optimus setups.

The negativo17.org nvidia packages now are using this, so if you install these, then the nvidia driver should just work on your laptop.
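
If you want to try them, installation from that repository typically looks like the following. This is a sketch based on the negativo17.org instructions; verify the current repo URL and package name there before running it:

sudo dnf config-manager --add-repo=https://negativo17.org/repos/fedora-nvidia.repo
sudo dnf install nvidia-driver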

Note that you should only install these drivers if you actually have a supported (new enough) nvidia GPU. These drivers replace the libGL implementation, so installing them on a system without an nvidia GPU will cause things to break. This will be fixed soon by switching to libglvnd as the official libGL provider and having both mesa and the nvidia driver provide "plugins" for libglvnd. I've actually just completed building a new enough libglvnd plus a libglvnd-enabled mesa for rawhide, so rawhide users will have libglvnd starting tomorrow.

Factory 2, Sprint 8 Report

Posted by Ralph Bean on January 16, 2017 10:06 PM

Happy New Year from the Factory 2.0 team.

Here's a reminder of our current priorities. We are:

  • Preparing elementary build infrastructure for the Fedora 26 Alpha release.
  • Deserializing pipeline processes that could be done more quickly in parallel.
  • Building a dependency chain database, so that we can build smarter rebuild automation and pipeline analytics.
  • Monitoring pipeline performance metrics, so that as we later improve things we can be sure we had an effect.

We are on track with respect to three of the four priorities: module build infrastructure will be ready before the F26 Alpha freeze. Our VMs are provisioned, we're working through the packaging rituals, and we'll be ready for an initial deployment shortly after devconf. Internally, our MVP of resultsdb and resultsdb-updater is working and pulling data from some early-adopter Platform Jenkins masters. Our internal performance measurement work is bearing fruit slowly but steadily: we have two key metrics updating automatically on our kibana dashboard, with two more in progress to be completed in the coming sprints.

We have made a conscious decision to put our work on the internal dependency chain database on hold. We're going to defer our deployment to production for a few months to ensure that our efforts don't collide with a separate release engineering project ongoing now.

Tangentially, we're glad to be assisting with the adoption of robosignatory for automatic rpm signing. It's an excellent example of upstream/downstream cooperation between the Fedora and RHEL services teams.

mbs-optimization, by jkaluza

This demo shows optimizations of module builds in the Module Build Service, comparing diagrams from the old and new versions of MBS.

<video autobuffer="autobuffer" controls="controls" height="350" width="600"> <source src="https://fedorapeople.org/groups/factory2/sprint-008//jkaluza-mbs-optimization.ogv"> </video>

resultsdb-updater-pdc-updater-updates, by mprahl

This demo shows the changes in ResultsDB-Updater and how they are reflected in ResultsDB. Additionally, it shows the progress on getting pdc-updater working internally.

<video autobuffer="autobuffer" controls="controls" height="350" width="600"> <source src="https://fedorapeople.org/groups/factory2/sprint-008//mprahl-resultsdb-updater-pdc-updater-updates.mp4"> </video>

developers-instance, by fivaldi

In this video, I present a local developer's instance of MBS using docker-compose. The aim is to provide the simplest way to run a custom MBS instance for dev/testing purposes. At the end, I show how to submit a test module build.

<video autobuffer="autobuffer" controls="controls" height="350" width="600"> <source src="https://fedorapeople.org/groups/factory2/sprint-006//fivaldi-developers-instance.ogv"> </video>

fedpkg-pdc-modulemd, by jkaluza

This demo shows the newly implemented "fedpkg module-build" command workflow and improved storage of module metadata in the PDC.

<video autobuffer="autobuffer" controls="controls" height="350" width="600"> <source src="https://fedorapeople.org/groups/factory2/sprint-006//jkaluza-fedpkg-pdc-modulemd.ogv"> </video>

mbs-scheduler-and-resultsdb-prod, by mprahl

This demo briefly explains the changes in the Module Build Service scheduler to use fedmsg's "Hub-Consumer" approach. Additionally, ResultsDB is briefly shown in production with results populated from ResultsDB-Updater.

<video autobuffer="autobuffer" controls="controls" height="350" width="600"> <source src="https://fedorapeople.org/groups/factory2/sprint-006//mprahl-mbs-scheduler-and-resultsdb-prod.mp4"> </video>

module-checks-in-taskotron, by threebean

In this demo, I show how we've wrapped elementary tests produced by the base-runtime team so that they can be executed and managed by the taskotron CI system in place in Fedora Infrastructure. Benefits include:

  • No lock-in to taskotron. Jenkins-job-builder could wrap the core test in a similar way.
  • An avocado-to-resultsdb translator is written which will be generally useful in future sprints.

Work on taskotron-trigger to automatically respond to dist-git events was implemented and merged upstream, but is pending a release and deployment.

<video autobuffer="autobuffer" controls="controls" height="350" width="600"> <source src="https://fedorapeople.org/groups/factory2/sprint-006//threebean-module-checks-in-taskotron.ogv"> </video>

What does security and USB-C have in common?

Posted by Josh Bressers on January 16, 2017 06:39 PM
I've decided to create yet another security analogy! You can’t tell, but I’m very excited to do this. One of my long-standing complaints about security is that there are basically no good analogies that make sense. We always try to talk about auto safety, or food safety, or maybe building security, or how about pollution. There’s always some sort of existing real-world scenario we try to warp and twist in a way that lets us tell a security story that makes sense. So far they’ve all failed. The analogy always starts out strong, then something happens that makes everything fall apart. I imagine a big part of this is because security is really new, but it’s also really hard to understand. It’s just not something humans are good at understanding.

The other day this article was sent to me by @kurtseifried
How Volunteer Reviewers Are Saving The World From Crummy—Even Dangerous—USB-C Cables

The TL;DR is essentially that the world of USB-C cables is sort of a modern day wild west. There’s no way to really tell which ones are good and which ones are bad, so there are some people who test the cables. It’s nothing official; they’re basically volunteers doing this in their free time. Their feedback is literally the only real way to decide which cables are good and which are bad. That’s sort of crazy if you think about it.

This really got me thinking though: it has a lot in common with our current security problems. We have a bunch of products and technologies. We don’t have a good way to tell if something is good or bad. There are some people who try to help with good information. But fundamentally most of our decisions are made with bad or incomplete data.

In the case of the cables, I see two practical ways out of this. The first is to have some sort of official testing lab: if something doesn’t pass testing, it can’t be sold. This makes sense; there are plenty of things on the market today that go through similar testing. If the product fails, it doesn’t get sold. In this case the comparable analogies hold up. Auto safety, electrical safety, HDMI; there are plenty of organizations that are responsible for ensuring the quality and safety of certain products. The cables would be no different.

A possible alternative to deal with this problem is to make sure every device is built in a way that assumes bad cables are possible, and deals with the situation in hardware. This would mean devices being smart enough to not draw too much power, or not provide too much power, and to know when there will be some sort of failure mode and disconnect. There are a lot of possibilities here, and to be perfectly honest, no device will be able to do this with 100% accuracy. More importantly though, no manufacturer will be willing to add this functionality because it would add cost, probably a lot of cost. It’s still a remote possibility though, and for the sake of the analogy, we’re going to go with it.

The first example twisted to cybersecurity would mean you need a nice way to measure security. There would be a lab or organization that is capable of doing the testing, then giving some sort of stamp of approval. This has proven to be a really hard thing to do in the past. The few attempts to do this have failed. I suspect it’s possible, just very difficult to do right. Today Mudge is doing some of this with the CITL, but other than that I’m not really aware of anything of substance. It’s a really hard problem to solve, but if anyone can do it right, it’s probably Mudge.

This then leads us to the second possibility, which is sort of how things work today. There is a certain expectation that an endpoint will handle certain situations correctly. Each endpoint has to basically assume anything talking to it is broken in some way. All data transferred must be verified. Executables must be signed and safely distributed. The networks the data flows across can’t really be trusted. Any connection to the machine could be an attacker and must be treated as such. This is proving to be very hard, though, and in the context of the cables it’s basically the crazy solution. Our current model of security is the crazy solution. I doubt anyone will argue with that.

This analogy certainly isn’t perfect, but the more I think about it the more I like it. I’m sure there are problems thinking about this in such a way, but for the moment, it’s something to think about at least. The goal is to tell a story that normal people can understand so we can justify what we want to do and why. Normal people don’t understand security, but they do understand USB cables.


Do you have a better analogy? Let me know @joshbressers on Twitter.

Episode 27 - Prove to me you are human

Posted by Open Source Security Podcast on January 16, 2017 03:45 PM
Josh and Kurt discuss NTP, authentication issues, network security, airplane security, AI, and Minecraft.

Download Episode
<iframe frameborder="no" height="150" scrolling="no" src="https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/tracks/302981179&amp;auto_play=false&amp;hide_related=false&amp;show_comments=true&amp;show_user=true&amp;show_reposts=false&amp;visual=true" width="100%"></iframe>

Show Notes


Digest of Fedora 25 Reviews

Posted by Jiri Eischmann on January 16, 2017 02:35 PM

Fedora 25 has been out for 2 months and it seems like a very solid release, maybe the best in the history of the distro. Feedback from the press and users has also been very positive. I took the time to put together a digest of the latest reviews:

Phoronix: Fedora 25 Is Quite Possibly My Most Favorite Release Yet

As a long-time Fedora fan and user going back to Fedora Core, Fedora 25 is quite possibly my most favorite Fedora release yet. With the state as of this week, it feels very polished and reliable and haven’t encountered any glaring bugs on any of my test systems. Thanks in large part due to the heavy lifting on ensuring GNOME 3.22 is a super-polished desktop release, Fedora 25 just feels really mature yet modern when using it.

Phoronix: Fedora 25 Turned Out Great, Definitely My Most Favorite Fedora Release

That’s the first time I’ve been so ambitious with a Fedora release, but in testing it over the past few weeks (and months) on a multitude of test systems, the quality has been excellent and by far is most favorite release going back to the Fedora Core days — and there’s Wayland by default too, as just the icing on the cake.

Distrowatch: Fedora 25 Review

Even when dealing with the various Wayland oddities and issues, Fedora 25 is a great distribution. Everything is reasonably polished and the default software provides a functional desktop for those looking for a basic web browsing, e-mail, and word processing environment. The additional packages available can easily turn Fedora into an excellent development workstation customized for a developer’s specific needs. If you are programming in most of the current major programming languages, Fedora provides you the tools to easily do so. Overall, I am very pleased using Fedora 25, but I am even more excited for future releases of Fedora as the various minor Wayland issues get cleaned up.

ZDNet: Fedora 25 Linux arrives with Wayland display support

Today, Fedora is once more the leading edge Linux distribution.

ArsTechnica: Fedora 25: With Wayland, Linux has never been easier (or more handsome)

Fedora 24 was very close to my favorite distro of the year, but with Fedora 25 I think it’s safe to say that the Fedora Project has finally nailed it. I still run a very minimal Arch install (with Openbox) on my main machine, but everywhere else—family and friends who want to upgrade, clients looking for a stable system and so on—I’ve been recommending Fedora 25.

…I have no qualms recommending both Fedora and Wayland. The best Linux distro of 2016 simply arrived at the last moment.

Hectic Geek: Fedora 25 Review: A Stable Release, But Slightly Slow to Boot (on rotational disks)

If you have a rotational disk, then Fedora 25 will be a little slow to boot and there is nothing you or I can do to fix it. But if you have an SSD, then you shall have no issues here. Other than that, I’m quite pleased with this release actually. Sure the responsiveness sucked the first time on, but as mentioned, it can be fixed, permanently. And the stability is also excellent.

Dedoimedo: And the best distro of 2016 is…

The author prefers Fedora 24 to 25, but Fedora is still the distro of the year for him:

Never once had I believed that Fedora would rise so highly, but rise it did. Not only is the 24th release a child of a long succession of slowly, gradually improving editions, it also washed away my hatred for Gnome 3, and I actually started using it, almost daily, with some fairly good results. Fedora 24 was so good that it broke Fedora. The latest release is not quite as good, but it is a perfectly sane compromise if you want to use the hottest loaf of modern technology fresh from the Linux oven.

OCS-Mag: Best GNOME distro of 2016

The same author, again not surprisingly, prefers 24, which is the best GNOME distro in his opinion:

Fedora 24 is a well-rounded and polished operating system, and with the right amount of proverbial pimping, its Gnome desktop offers a stylish yet usable formula to the common user, with looks and functionality balanced to a fair degree. But, let us not forget the extensions that make all this possible. Good performance, good battery life and everyday stuff aplenty should keep you happy and entertained. Among the Gnome bunch, it’s Funky Fedora that offers the best results overall. And thus we crown it the winner of the garden ornament competition of 2016.

The Register: Fedora 25: You’ve got that Wayland feelin’, oh, that Wayland feelin’

Fedora 25 WorkStation is hands down the best desktop Linux distro I tested in 2016. With Wayland, GNOME 3.22 and the excellent DNF package manager, I’m hard-pressed to think of anything missing. The only downside? Fedora lacks an LTS release, but now that updating is less harrowing, that’s less of a concern.

Bit Cannon: Finding an Alternative to Mac OS X

Wesley Moore was looking for an alternative to Mac OS X and his three picks were: Fedora, Arch Linux, and elementaryOS.

Fedora provided an excellent experience. I installed Fedora 25 just after its release. It’s built on the latest tech like Wayland and GNOME 3.22.

The Huffington Post: How To Break Free From Your Computer Operating System — If You Dare

Fedora is a gorgeous operating system, with a sleek and intuitive interface, a clean aesthetic, and it’s wicked fast.

ArsTechnica: Dell’s latest XPS 13 DE still delivers Linux in a svelte package

Not really a review of Fedora, but the author tried to install Fedora 25 on the new XPS13 and this is what he had to say:

As a final note, I did install and test both Fedora 25 and Arch on the new hardware and had no problems in either case. For Fedora, I went with the default GNOME 3.22 desktop, which, frankly, is what I think Dell should ship out of the box. It’s got far better HiDPI support than Ubuntu, and the developer tools available through Fedora are considerably more robust than most of what you’ll find in Ubuntu’s repos.

Looks like we’re on the right track and I’m sure Fedora 26 will be an even better release. We’ve got very interesting things in the works.


Fedora assembly IDE - SimpleASM.

Posted by mythcat on January 16, 2017 12:35 PM
This integrated development environment (IDE), named SimpleASM (SASM), lets you make applications using assembly language.
The good part for Linux users is that it is a cross-platform IDE for NASM, MASM, GAS and FASM, with syntax highlighting and a debugger.
I use FASM, so this helps me.
The debugger is gdb - the GNU Project Debugger - and the IDE supports working with many open projects.
If you want to use this IDE with Fedora, you can get it starting with Fedora 24.
The official web page is here.
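
Assuming the Fedora package is simply named sasm (an assumption; check with dnf search sasm if unsure), installation would be:

sudo dnf install sasm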

Use Docker remotely on Atomic Host

Posted by Fedora Magazine on January 16, 2017 08:00 AM

Atomic Host from Project Atomic is a lightweight container-based OS that can run Linux containers. It’s been optimized for use as a container run-time system for cloud environments. For instance, it can host a Docker daemon and containers. At times, you may want to run docker commands on that host and manage the server from elsewhere. This article shows you how to remotely access the Docker daemon of the Fedora Atomic Host, which you can download here. The entire process is automated by Ansible, which is a great tool when it comes to automating everything.

A note on security

We’ll secure the Docker daemon with TLS, since we’re connecting via the network. This process requires a client certificate and a server certificate. The OpenSSL package is used to create the certificate keys for establishing a TLS connection. Here, the Atomic Host is running the daemon, and our local Fedora Workstation acts as a client.

Before you follow these steps, note that any process on the client that can access the TLS certs now has full root access on the server. Thus, the client can do anything it wants to do on the server. Therefore, we need to give cert access only to the specific client host that can be trusted. You should copy the client certificates only to a client host completely under your control. Even in that case, client machine security is critical.

However, this method is only one way to remotely access the daemon. Orchestration tools often provide more secure controls. The simple method below works for personal experimenting, but may not be appropriate for an open network.

Getting the Ansible role

Chris Houseknecht wrote an Ansible role that creates all the certs required. This way you don’t need to run openssl commands manually. These are provided in an Ansible role repository. Clone it to your present working host.

$ mkdir docker-remote-access
$ cd docker-remote-access
$ git clone https://github.com/ansible/role-secure-docker-daemon.git

Create config files

Next, you must create an Ansible configuration file, an inventory and a playbook file to set up the client and daemon. The following instructions create client and server certs on the Atomic Host. Then, they fetch the client certs to the local machine. Finally, they configure the daemon and client so they talk to each other.

Here is the directory structure you need. Create each of the files below as shown.

$ tree docker-remote-access/
docker-remote-access/
├── ansible.cfg
├── inventory
├── remote-access.yml
└── role-secure-docker-daemon

ansible.cfg

 $ vim ansible.cfg
[defaults]
inventory=inventory

inventory

 $ vim inventory
[daemonhost]
'IP_OF_ATOMIC_HOST' ansible_ssh_private_key_file='PRIVATE_KEY_FILE'

Replace IP_OF_ATOMIC_HOST in the inventory file with the IP of your Atomic Host. Replace PRIVATE_KEY_FILE with the location of the SSH private key file on your local system.

remote-access.yml

$ vim remote-access.yml
---
- name: Docker Client Set up
  hosts: daemonhost
  gather_facts: no
  tasks:
    - name: Make ~/.docker directory for docker certs
      local_action: file path='~/.docker' state='directory'

    - name: Add Environment variables to ~/.bashrc
      local_action: lineinfile dest='~/.bashrc' line='export DOCKER_TLS_VERIFY=1\nexport DOCKER_CERT_PATH=~/.docker/\nexport DOCKER_HOST=tcp://{{ inventory_hostname }}:2376\n' state='present'

    - name: Source ~/.bashrc file
      local_action: shell source ~/.bashrc

- name: Docker Daemon Set up
  hosts: daemonhost
  gather_facts: no
  remote_user: fedora
  become: yes
  become_method: sudo
  become_user: root
  roles:
    - role: role-secure-docker-daemon
      dds_host: "{{ inventory_hostname }}"
      dds_server_cert_path: /etc/docker
      dds_restart_docker: no
  tasks:
    - name: fetch ca.pem from daemon host
      fetch:
        src: /root/.docker/ca.pem
        dest: ~/.docker/
        fail_on_missing: yes
        flat: yes
    - name: fetch cert.pem from daemon host
      fetch:
        src: /root/.docker/cert.pem
        dest: ~/.docker/
        fail_on_missing: yes
        flat: yes
    - name: fetch key.pem from daemon host
      fetch:
        src: /root/.docker/key.pem
        dest: ~/.docker/
        fail_on_missing: yes
        flat: yes
    - name: Remove Environment variable OPTIONS from /etc/sysconfig/docker
      lineinfile:
        dest: /etc/sysconfig/docker
        regexp: '^OPTIONS'
        state: absent

    - name: Modify Environment variable OPTIONS in /etc/sysconfig/docker
      lineinfile:
        dest: /etc/sysconfig/docker
        line: "OPTIONS='--selinux-enabled --log-driver=journald --tlsverify --tlscacert=/etc/docker/ca.pem --tlscert=/etc/docker/server-cert.pem --tlskey=/etc/docker/server-key.pem -H=0.0.0.0:2376 -H=unix:///var/run/docker.sock'"
        state: present

    - name: Remove client certs from daemon host
      file:
        path: /root/.docker
        state: absent

    - name: Reload Docker daemon
      command: systemctl daemon-reload
    - name: Restart Docker daemon
      command: systemctl restart docker.service

Access the remote Atomic Host

Now, run the Ansible playbook:

$ ansible-playbook remote-access.yml

Make sure that TCP port 2376 is open on your Atomic Host. If you’re using OpenStack, add TCP port 2376 to your security rule. If you’re using AWS, add it to your security group.

Now, a docker command run as a regular user on your workstation talks to the daemon of the Atomic host, and executes the command there. You don’t need to manually ssh or issue a command on your Atomic host. This allows you to launch containerized applications remotely and easily, yet securely.
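
To verify, open a new shell (so the environment variables written to ~/.bashrc are picked up) and run an ordinary docker command; it should report the remote daemon, not a local one. The nginx image below is just an example:

$ docker info                    # the server details should be those of the Atomic Host
$ docker run -d -p 80:80 nginx   # this container starts on the Atomic Host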

If you want to clone the playbook and the config file, there is a git repository available here.

docker-daemon


Image courtesy of Axel Ahoi — originally posted to Unsplash.

Windows Subsystem for Linux (beta)

Posted by mythcat on January 16, 2017 01:10 AM
This is the first release of Bash on Windows and it is branded "beta" deliberately - it's not yet complete! You should expect many things to work and for some things to fail! We greatly appreciate you using Bash on Windows and helping us identify the issues we need to fix in order to deliver a great experience.
Try this tutorial.
Fedora doesn't have a docker userspace yet, but SUSE comes with this feature.

Fedora - linux and shell.

Posted by mythcat on January 15, 2017 05:59 PM
The Linux command shell is a very useful and powerful tool that can help you with Fedora.
Let's see the common commands:
pwd - show current directory
ls - displays files/directories, with these options:
-a show all (including hidden)
-R recursive list
-r reverse order
-t sort by last modified
-S sort by file size
-l long listing format
-1 one file per line
-m comma-separated output
-Q quoted output
cd - change directory
mkdir - create a directory
rmdir - delete directory
cat - display contents of a file
cp - create a copy of a file
mv - rename or move a file
rm - remove a file

Pipes - let you send the output of one command to another command:
cmd1 | cmd2 stdout of cmd1 to cmd2
cmd1 |& cmd2 stderr of cmd1 to cmd2
cmd | tee file redirect stdout of cmd to a file and print it to screen
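
For example, combining commands from above (the file name is illustrative):

ls -S | head -5            five largest entries in the current directory
du -sh * | tee sizes.txt   print sizes and also save them to sizes.txt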

Command lists - combine commands into lists of commands:
cmd1 ; cmd2 run cmd1 then cmd2
cmd1 && cmd2 run cmd2 if cmd1 is successful
cmd1 || cmd2 run cmd2 if cmd1 is not successful
cmd & run cmd in a subshell, in the background

How to use some Linux commands:
mkdir dir make directory dir
rm file delete file
rm -r dir delete directory dir
rm -f file force delete file
rm -rf dir force delete directory dir - use with extreme CAUTION
cp file1 file2 copy file1 to file2
cp -r dir1 dir2 copy dir1 to dir2; create dir2 if it doesn't exist
mv file1 file2 rename or move file1 to file2; if file2 is an existing directory, moves file1 into directory file2
touch file Create or update file

File Operations
file file1 get type of file1 
cat file1 file2 concatenate file and output
less file1 view and paginate file1 
head file1 show first 10 lines of file1
tail file1 show last 10 lines of file1 
tail -f file1 output last lines of file1 as it changes
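
Putting a few of these together (the paths are illustrative):

cd /var/log && tail -f messages | grep -i error    follow the log, showing only error lines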

Setting up a retro gaming console at home

Posted by Kushal Das on January 15, 2017 06:17 AM

Commodore 64 was the first computer I ever saw, in 1989. Twice a year I used to visit my grandparents’ house in Kolkata, and I would get one or two hours to play with it. I remember how, after a few years, I tried to read a book on BASIC with the help of an English-to-Bengali dictionary. In 1993, my mother went for a year-long course for her job. I somehow managed to convince my father to buy me an Indian clone of the NES (Little Master) in the same year. That was also a life event for me. I had only one game cartridge; only after 1996 did the Chinese NES clones enter our village market.

Bringing back the fun

During 2014, I noticed how people were using Raspberry Pi(s) as NES consoles. I decided to configure my own on a Pi2. Last night, I re-installed the system.

Introducing RetroPie

RetroPie turns your Raspberry Pi into a retro-gaming console. You can either download the pre-installed image from the site, or you can install it on top of Raspbian Lite. I followed the latter path.

As a first step I downloaded Raspbian Lite. It was around 200MB in size.

# dcfldd bs=4M if=2017-01-11-raspbian-jessie-lite.img of=/dev/mmcblk0

I used the dcfldd command; you can use the dd command too. Detailed instructions are here.
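
For reference, an equivalent dd invocation would be something like the line below (same device path as above; double-check yours with lsblk first, as this overwrites the target):

# dd bs=4M if=2017-01-11-raspbian-jessie-lite.img of=/dev/mmcblk0 status=progress
# sync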

After booting up the newly installed Raspberry Pi, I just followed the manual installation instructions from the RetroPie wiki. I chose the basic install option at the top of the main installation screen. Note that the screenshot in the wiki is old. It took a few hours for the installation to finish. I have USB gamepads bought from Amazon, which got configured on the first boot screen. For the full instruction set, read the wiki page.

Happy retro gaming everyone :)

Updates from PyCon Pune, 12th January

Posted by Kushal Das on January 14, 2017 05:02 AM

This is a small post about PyCon Pune 2017. We had our weekly volunteers meet on 12th January in the hackerspace. You can view all the open items in the GitHub issue tracker. I am writing down the major updates below:

Registration

We have already reached the ideal number of registrations for the conference. Registration will be closed tomorrow, the 15th of January. Knowing the exact number of people attending will enable us to provide better facilities.

Hotel and travel updates

All international speakers have booked their tickets, and the visa application process is ongoing.

Child care

Nisha has contacted Angels Paradise Academy about providing childcare.

T-shirts for the conference

T-shirts will be ordered by this coming Tuesday. We want to have a final look at material from a different vendor this Sunday.

Speakers’ dinner

Anwesha is working on identifying possible venues.

Food for the conference

Nisha and Siddhesh identified Rajdhani as a possible vendor. They also tried out the Elite Meal box, and it was sufficient for one person. Providing box lunches will make the long lunch queues faster and easier to manage.

Big TODO items

  • Design of the badge.
  • Detailed instruction for the devsprint attendees.

Next meeting time

From now on we will be having two volunteer meets every week. The next meet is tomorrow at 3pm at the hackerspace. The address is reserved-bit, 337, Amanora Chambers (Above Amanora Mall), Hadapsar, Pune.

Asking for help with koji builds

Posted by Dennis Gilmore on January 14, 2017 03:14 AM

People often ask for help with koji builds in #fedora-releng or #fedora-devel. The number 1 mistake people make is to post links to log files. The problem with posting links to logs is that quite often the log file people point at is the wrong one. It helps in debugging when we have full access to all log files and to the parent and other child tasks in koji.

The easiest way for us to provide assistance is for you to point at the task in question; sometimes we have to look at other arches and other log files in order to figure out what exactly has gone wrong. So please, to help us help you better, post links to tasks and not log files.

Thank you

Harry Potter and The Jabber Spam

Posted by Matěj Cepl on January 13, 2017 03:21 PM

After many, many years of happily using XMPP, we were finally awarded the respect of spammers, and suddenly some of us (especially those who have their JID in their email signature) are getting a lot of spim.

Fortunately, the world of Jabber is not so defenceless, thanks to XEP-0016 (Privacy Lists). Not only is it possible to set up a list of known spammers (not only by their complete JIDs, but also by whole domains), it is also possible to build more complicated constructs.

Usually these constructs are not very well supported by GUIs, so most of the work must be done by sending plain XML stanzas to the XMPP stream. For example, with Pidgin one can open the XMPP Console by going to Tools/XMPP Console and selecting the appropriate account whose privacy lists are to be edited.

The whole system of ACLs consists of multiple lists. To get a list of all privacy lists on the particular server, we need to send this XMPP stanza:

<iq type='get' id='getlist1'>
        <query xmlns='jabber:iq:privacy'/>
</iq>

If the stanza is sent correctly and your server supports XEP-0016, then the server replies with the list of all privacy lists:

<iq id='getlist1' type='result'>
        <query xmlns='jabber:iq:privacy'>
                <default name='urn:xmpp:blocking'/>
                <list name='invisible'/>
                <list name='urn:xmpp:blocking'/>
        </query>
</iq>

To get the content of one particular list, we need to send this stanza:

<iq type='get' id='getlist2'>
    <query xmlns='jabber:iq:privacy'>
        <list name='urn:xmpp:blocking'/>
    </query>
</iq>

And again the server replies with this list:

<iq id='getlist2' type='result'>
    <query xmlns='jabber:iq:privacy'>
        <list name='urn:xmpp:blocking'>
            <item order='0' action='deny'
                value='talk.mipt.ru' type='jid'/>
            <item order='0' action='deny'
                value='im.flosoft.biz' type='jid'/>
            <item order='0' action='deny'
                value='nius.net' type='jid'/>
            <item order='0' action='deny'
                value='jabber.me' type='jid'/>
            <item order='0' action='deny'
                value='tigase.im' type='jid'/>
            <item order='0' action='deny'
                value='pisem.net' type='jid'/>
            <item order='0' action='deny'
                value='qip.ru' type='jid'/>
            <item order='0' action='deny'
                value='crypt.mn' type='jid'/>
            <item order='0' action='deny'
                value='atteq.com' type='jid'/>
            <item order='0' action='deny'
                value='j3ws.biz' type='jid'/>
            <item order='0' action='deny'
                value='jabber.dol.ru' type='jid'/>
            <item order='0' action='deny'
                value='vpsfree.cz' type='jid'/>
            <item order='0' action='deny'
                value='buckthorn.ws' type='jid'/>
            <item order='0' action='deny'
                value='pandion.im' type='jid'/>
        </list>
    </query>
</iq>

The server goes through every item in the list and decides based on the value of the action attribute. If the stanza under consideration does not match any item in the list, the whole system defaults to allow.

I was building a blocking list like this for some time (I even authored a simple Python script for adding new JIDs to the list), but it seems to be a road to nowhere. Spammers just keep generating new domains. The only workable solution seems to me to be a whitelist: some domains are allowed, but everything else is blocked.

See this list stanza sent to the server (the answer should be a simple one-line empty XML element):

<iq type='set' id='setwl1'>
    <query xmlns='jabber:iq:privacy'>
        <list name='urn:xmpp:whitelist'>
            <item type='jid' value='amessage.de'
                  action='allow' order='1'/>
            <item type='jid' value='ceplovi.cz'
                  action='allow' order='2'/>
            <item type='jid' value='cepl.eu'
                  action='allow' order='3'/>
            <item type='jid' value='dukgo.com'
                  action='allow' order='4'/>
            <item type='jid' value='eischmann.cz'
                  action='allow' order='5'/>
            <item type='jid' value='gmail.com'
                  action='allow' order='7'/>
            <item type='jid' value='gtalk2voip.com'
                  action='allow' order='8'/>
            <item type='jid' value='jabber.at'
                  action='allow' order='9'/>
            <item type='jid' value='jabber.cz'
                  action='allow' order='10'/>
            <item type='jid' value='jabber.fr'
                  action='allow' order='11'/>
            <item type='jid' value='jabber.org'
                  action='allow' order='12'/>
            <item type='jid' value='jabber.ru'
                  action='allow' order='13'/>
            <item type='jid' value='jabbim.cz'
                  action='allow' order='14'/>
            <item type='jid' value='jankratochvil.net'
                  action='allow' order='15'/>
            <item type='jid' value='kde.org'
                  action='allow' order='16'/>
            <item type='jid' value='loqui.im'
                  action='allow' order='17'/>
            <item type='jid' value='mac.com'
                  action='allow' order='18'/>
            <item type='jid' value='metajack.im'
                  action='allow' order='19'/>
            <item type='jid' value='njs.netlab.cz'
                  action='allow' order='20'/>
            <item type='jid' value='stpeter.im'
                  action='allow' order='21'/>
            <item type='jid' value='ucw.cz'
                  action='allow' order='22'/>
            <item action='deny' order='23'/>
        </list>
    </query>
</iq>

The server goes in order through all items in the list, and if the stanza doesn’t match any item, it hits the last item in the list, which denies access.

It is also useful to make sure the list we have actually created is the default:

<iq type='set' id='default1'>
    <query xmlns='jabber:iq:privacy'>
        <default name='urn:xmpp:whitelist'/>
    </query>
</iq>

So, now I am testing how it works (using jabberd2 version 2.4.0 from the RHEL-6/EPEL package as the server).

How to install Apache web server on Fedora

Posted by Fedora Magazine on January 13, 2017 08:00 AM

One of the most common uses for any Linux system is as a web server. By far the most prevalent and famous web server is Apache. Apache is readily available in Fedora in pre-packaged form. You can use it to host content and applications for free anywhere you have a server.

Installing Apache

First, install the software packages for the Apache server. The recommended way to install the server is as part of a group of related packages.

su -c 'dnf group install "Web Server"'

This command installs the entire Web Server package group. The group includes other commonly used tools such as:

  • PHP and Perl support
  • The squid caching proxy
  • Documentation
  • Traffic analysis tools

If for some reason you don’t want these helpful packages, you can install the web server by itself. Use this command:

su -c 'dnf install httpd'

The web server package, httpd, depends on some other packages. They must be installed for the web server to function. This command installs those dependencies.

Configuring the system

Next, you may need to configure the system so other computers can contact the web server. You can skip this step if you only want to test a web server on the same computer you’re on.

Fedora systems have a protective firewall by default. Therefore, you must open specific service ports in that firewall to let other computers connect. To open the specific firewall ports for the web server, run these commands:

su -c 'firewall-cmd --add-service=http --add-service=https --permanent'
su -c 'firewall-cmd --reload'

The two service ports opened are:

  • http — Port 80, used for standard, non-secure web communications
  • https — Port 443, used for secure web communications

Also note the reload command makes these services active. Therefore, if you haven’t made other firewall changes permanent, those changes are lost when you reload. If you only want to open these services temporarily, use this command:

su -c 'firewall-cmd --add-service=http --add-service=https'
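
Note that installing the package does not start the server. Before testing, start the httpd service, and enable it so it comes back after a reboot:

su -c 'systemctl start httpd.service'
su -c 'systemctl enable httpd.service'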

Testing the web server

Open a web browser on your system. Go to http://localhost and a web page like this appears:

Apache web server test page on Fedora

This page confirms your web server is running correctly.

Now what?

The next steps are entirely up to you. Here is one article with ideas for what to do with your new web server.
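
For a quick start, drop a page into Apache's default document root on Fedora, /var/www/html, and reload your browser (the page content here is just an example):

su -c 'echo "Hello from Apache on Fedora" > /var/www/html/index.html'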


Image courtesy of Markus Spiske — originally posted to Unsplash.

A 5 year old girl vs. CoderDojo

Posted by Alexander Todorov on January 13, 2017 07:48 AM

Adi @ Hello Ruby

In early December '16, together with my 5-year-old daughter, I visited an introductory workshop about the Hello Ruby book and another workshop organized by Coder Dojo Bulgaria. Later that month we also visited a Robo League competition in Sofia. The goal was to further Adriana's interest in technical topics and programming in particular, and to see how she would respond to the topics covered and to the format of the workshops and training materials. I have been keeping detailed notes, and today I'm publishing some of my observations.

The events that we visited were strictly for small children, and there were mentors who worked with the kids. Each mentor, depending on the event, works with up to 4 or 5 children. Parents were not allowed to interfere and I kept my distance on purpose, trying to observe and document as much as possible.

Hello Ruby

Hello Ruby is a small book with colorful illustrations about a girl who embarks on adventures in programming. Adriana considers it a fairy tale, although the book introduces lots of IT-related terms - Ruby and gems, Firefox, Snow Leopard, Django, etc. For a child these don't necessarily mean anything, but she was able to recognize my Red Hat fedora, which was depicted on one of the pages.

The workshop itself was the introduction of the Bulgarian translation, which I've purchased, and had the kids build a laptop using glue and paper icons. Mentors were explaining to the children what the various icons mean, that viruses are a bad thing for your computer, what a CPU and computer memory are, and everything else in between. A month later, when Adriana started building a new paper computer on her own (without being prompted by me), she told me that the colored icons were "information" that goes into the computer!

After the story part of the book there are exercises designed to develop analytical thinking. We did only a few in the beginning, where she had to create a list of the sequence of actions needed to make the bed or get dressed in the morning, etc. At the time Adriana didn't receive the game very well and had some trouble figuring out the smaller actions that comprise a larger activity. We didn't follow through with the game.

Code.org

At the second event she was exposed to studio.code.org! This time we were required to bring a working laptop and a mouse. I had no idea how these were going to be used. It turned out mentors gave each child a training course from code.org according to their age. Adriana started with Course #1 because she can't read on her own!

At first it seemed to me that Adi was a bit bored and didn't know what to do, staring cluelessly at the screen. Btw, this was her first session working with a computer on her own. After a while the mentor came and, I guess, explained what needed to be done, how the controls work and what the objective of the exercise was. After that I noticed she was working more independently and grew interested in the subject. She had a problem working with the mouse, so after 2 days I nudged her to use the TrackPoint and mouse buttons on a ThinkPad laptop. She uses them with both hands (so do I, btw) and is much more adept at controlling the cursor that way. If you are going to teach children how to work effectively with a computer, you may as well start by teaching them to work effectively with a track pad!

The courses are comprised of games and puzzles (which she's very good at) asking children to apply a very basic programming concept. For example, instruct an angry bird to move left or right by using blocks for each instruction. By the time the workshop was over, Adriana had completed 4 levels on her own.

Level 5 introduced a button for step-by-step execution of the program, also colloquially known as debugging :). For the first few exercises she had no idea what to do with this debugging button. Then the 6th exercise introduced a wrong starting sequence and everything snapped into place.

Level 7 introduced additional instructions. There are move left/right instructions as well as visit-a-flower and make-honey instructions. This level also introduces repeating instructions, for example make honey 2 times. At first that was confusing, but then she started to take notice of the numbers shown on screen and figured out how to build the proper sequence of blocks to complete the game. When she made mistakes she used the debugging button to figure out which block was not in place and remove it.

After this level Adi started making more mistakes, but more importantly she also started trying to figure them out on her own. My help was limited to asking questions like "what do you need to do", "where are you on the screen now", "what instructions do you need to execute to get where you want to be".

Level 8 introduces a new type of game: drawing shapes on the screen. The hardest part here is that you sometimes need to jump from one node to another. This is great for improving the child's spatial orientation skills.

Level 11 is a reading game in English. You need to instruct a bee to fly across different letters to complete a word shown on the screen. However, Adriana can't read, much less in English, although she understands and speaks English well for her age. In this case I believe she relied on pattern recognition to complete all exercises in this level. She would look at the target word and then identify the letters on the playing board. Next she would stack instruction blocks to program the movements of the bee towards her goal, as in previous exercises.

Level 13 introduces loops. It took Adriana 7 exercises to figure out what a loop is, identify its various elements, and learn how to construct it properly. She also said it was amusing to her. Almost immediately she was able to identify the length of the loop by herself and construct loops with only 1 block inside their body. Loops with 2 or more blocks inside their body were a bit harder.

Level 14 introduced nested loops, usually one or more instruction blocks paired with a loop block, nested inside another loop block. For example: repeat 3 times(move left, repeat 2 times(move down)). Again it took her about 6 exercises to figure them out. This is roughly the middle of the level.

Level 16 was quite hard. It had blocks with parameters where you have to type in some words, and animal characters will "speak these words" as if in a comic book. I'm not sure if there was supposed to be a text-to-speech engine integrated for this level, but it sounds like a good idea. Anyhow, this level was not on par with her skills.

The course completed with free-range drawing using instruction blocks and cycles. The image she drew was actually her name, where she had to guess how many scribbles the painter needed to make in one direction, then traverse back and go in another direction. She also had to figure out how big each letter needed to be so that it was possible to actually draw it, given the game's limitations in motion and direction. This final level required a lot of my help.

Summary

I have never had any doubts that small children are very clever and capable of understanding enormous amounts of information and new concepts. However, I'm amazed by how deep their understanding goes and how fast they are able to apply the new things they learn.

Through games and practical workshops I believe it is very easy to help children gain valuable skills for engineering professions. Even if they don't end up in engineering, the ability to clearly define goals and instructions, break down complex tasks into small chunks, and clearly communicate intentions is a great advantage. So is the ability to analyze a task on your own and use simple building blocks to achieve larger objectives.

I will continue to keep notes on Adi's progress but will very likely write about it less frequently. If you do have small children around you please introduce them to Hello Ruby and studio.code.org and help them learn!

Thanks for reading!

F25-20170111 Updated Lives released

Posted by Ben Williams on January 12, 2017 02:34 PM

I am happy to announce new F25-20170111 Updated Lives.

With F25 we are now using Livemedia-creator to build the updated lives.

Also, from now on we will only be releasing updated lives on even kernel point releases (for example 4.8.16; the next will be 4.9.2).

To build your own, please look at https://fedoraproject.org/wiki/Livemedia-creator-_How_to_create_and_use_a_Live_CD
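
For the impatient, an invocation looks roughly like the following. The flags and the kickstart path here are my assumptions from memory, so treat the wiki page above as canonical:

sudo livemedia-creator --make-iso --no-virt \
    --ks /usr/share/spin-kickstarts/fedora-live-workstation.ks \
    --releasever 25 --resultdir /var/lmc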

This new build of the F25 Updated Lives will save you 675M of updates after installing Workstation.

As always the isos can be found at http://tinyurl.com/Live-respins2


Episode 26 - Tell your sister, Stallman was right

Posted by Open Source Security Podcast on January 12, 2017 02:03 PM
Josh and Kurt end up discussing video game speed running, which is really just hacking. We also discuss the pitfalls of the modern world where you don't own your software or services. Stallman was right!

Download Episode

Show Notes





Exploring long JSON files with jq

Posted by Adam Young on January 12, 2017 01:39 PM

The JSON file format is used for marshalling data in lots of different applications. If you are new to an application and don’t know the data, it might be hard to visually parse the JSON and understand what you are seeing. The jq command line utility can help make it easier to scope in on a section of the file. This is a starting point.

Kubelet, the daemon that runs on a Kubernetes node, has a web API for returning stats. To query it from that node:

curl -k https://localhost:10250/stats/

However, the amount of text returned is several thousand lines.  The first few lines look like this:

$ curl -sk https://localhost:10250/stats/ | head 
{
 "name": "/",
 "subcontainers": [
 {
 "name": "/machine.slice"
 },
 {
 "name": "/system.slice"
 },
 {

Since the JSON top-level construct is a dictionary, we can use jq’s keys function to enumerate just the keys.

$ curl -sk https://localhost:10250/stats/ | jq keys
[
 "name",
 "spec",
 "stats",
 "subcontainers"
]

To view the subcontainers, use that key:

$ curl -sk https://localhost:10250/stats/ | jq .subcontainers
[
 {
 "name": "/machine.slice"
 },
 {
 "name": "/system.slice"
 },
 {
 "name": "/user.slice"
 }
]

The stats key returns an array:

$ curl -sk https://localhost:10250/stats/ | jq .stats | head
[
 {
 "timestamp": "2017-01-12T13:23:45.301168504Z",
 "cpu": {
 "usage": {
 "total": 420399104294,
 "per_cpu_usage": [
 202178115170,
 218220989124
 ],

How long is it? Use the length function. Note that jq functions are piped one into the next.

$ curl -sk https://localhost:10250/stats/ | jq ".stats | length"
9

Want to see the keys of an element?  Index it as an array:

$ curl -sk https://localhost:10250/stats/ | jq ".stats[0] | keys"
[
 "cpu",
 "diskio",
 "filesystem",
 "memory",
 "network",
 "task_stats",
 "timestamp"
]

To see a subelement, use the pipe format. For example, to see the timestamp of the first element:

$ curl -sk https://localhost:10250/stats/ | jq ".stats[0] | .timestamp"
"2017-01-12T13:29:16.162797308Z"

To see a value for all elements, remove the index from the array. Again, use the pipe notation:

$ curl -sk https://localhost:10250/stats/ | jq ".stats[] | .timestamp"
"2017-01-12T13:32:13.732338602Z"
"2017-01-12T13:32:25.713656307Z"
"2017-01-12T13:32:43.443936137Z"
"2017-01-12T13:33:02.796007138Z"
"2017-01-12T13:33:14.53537449Z"
"2017-01-12T13:33:32.540031699Z"
"2017-01-12T13:33:42.732536856Z"
"2017-01-12T13:33:53.235774027Z"
"2017-01-12T13:34:10.351984713Z"

This shows that the last element of the array is the latest. Use an index of -1 to reference this value:

$ curl -sk https://localhost:10250/stats/ | jq ".stats[-1] | .timestamp"
"2017-01-12T13:33:53.235774027Z"

 

Edit: added below.

To find an element of a list based on the value of a key, or the value of a sub-element, use the pipe notation within the parameter list of the call to select. I use a slightly different curl query here; note the summary element at the end. I want to get the pod entry that matches a section of a particular pod name.

curl -sk https://localhost:10250/stats/summary | jq '.pods[] | select(.podRef | .name | contains("virt-launcher-testvm"))'
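
If you don’t have a kubelet to query, here is a self-contained sketch of the same select pattern, with made-up pod names:

$ echo '{"pods": [{"podRef": {"name": "virt-launcher-testvm-1"}}, {"podRef": {"name": "etcd-0"}}]}' \
    | jq '.pods[] | select(.podRef | .name | contains("virt-launcher-testvm"))'
{
  "podRef": {
    "name": "virt-launcher-testvm-1"
  }
}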

Hackerspace in Pune

Posted by Kushal Das on January 12, 2017 09:31 AM

More than 10 years back, I met some genius minds from CCC Berlin at foss.in: Harald Welte, Milosch Meriac and Tim Pritlove. I was in complete awe at how much knowledge they had and how helpful they were. Milosch became my mentor, and I learned a lot of things from him, from the first steps of soldering to compiling firmware on my laptop.

In 2007, I visited Berlin for LinuxTag. I was staying with Milosch & Brita. I managed to spend 2 days in the CCC Berlin, and that was an unbelievable experience for me. All of the knowledge on display and all of the approachable people there made me ask Milosch how we could have something similar. The answer was simple: if there is no such club/organization, then create one with friends. That stayed in my mind forever.

In 2014, I gave a presentation at Hackerspace Santa Barbara, thanks to Gholms. I also visited Hackerspace SG during FOSSASIA 2015 and made many new friends. I went back to Hackerspace SG again in 2016, right after dropping our bags at the hotel; I just fell in love with the people and the place.

Meanwhile, over the years, we tried to have a small setup downstairs in the flat where Sayan and Chandan used to live. We had our regular Fedora meetups there, and some coding sessions for new projects. But it was not a full-time place for everyone to meet and have fun building things. We could not afford to rent space anywhere nearby, even with 4-5 friends joining in together. I discussed the dream of having a local hackerspace with our friends Siddhesh and Nisha Poyarekar, and that made a difference.

Birth of Hackerspace Pune

Nisha went ahead with the idea and founded https://reserved-bit.com, a hackerspace/makerspace in Pune. Last month I had to visit Kolkata urgently, which also means that I missed the initial days of setting up the space. But as soon as we came back to Pune, I visited it.

It is located at 337, the Amanora Chambers, just on top of the East Block at Amanora Mall. The location is great for another reason - if you are tired of hacking on things, you can go down and roam around in the mall. That also means we have Starbucks, KFC, and other food places just downstairs :)

There are lockers available for annual members. There is no place for cooking, but there is a microwave. You can find many other details on the website. For now, I am working from there in the afternoons and evenings. So, if you have free time, drop by this new space and say hi :)

Note 1: Today we are meeting there at 3pm for the PyCon Pune volunteers meet.

Flock 2017 bids are now being accepted (due 28 Feb 2017)

Posted by Fedora Community Blog on January 12, 2017 08:15 AM

It is time to start the bid process for this year’s Flock.  This year we are back in North America for Flock 2017. If you’d like to help host the event in your city, it’s time to start putting together a bid.  To find out what you need to do, read the wiki page. Bids are due by February 28, 2017, so do not wait to start.  It takes more time than you may realize to compile all the required information for a good bid.

Tips and advice for Flock 2017 planning

Keep in mind that committing to help plan a conference is a lot of work and shouldn’t be approached lightly. It’s a big time commitment, and as the local contact, you’re critical to the success of the event. Flock has been held successfully on college campuses and in hotels. We need to make sure that the space will both work for the conference and be affordable. Details are on the wiki page.

Not sure where to begin? You can view some of the previous winning bids for past years as a reference point for building your own bid. Check out some of these for examples:

Feel free to let me know if you have any other questions or need help getting your bid together.  If you’re not already subscribed to the flock-planning email list, you should also do so.

The post Flock 2017 bids are now being accepted (due 28 Feb 2017) appeared first on Fedora Community Blog.

OCSP: What, why, how?

Posted by Ingvar Hagelund on January 12, 2017 07:30 AM

While debugging a problem with OCSP, I had to sit down and understand what it really does and why. So what is OCSP, and why do we use it?

Read the rest of this entry

The Tale Of The Two-Day, One-Character Patch

Posted by Adam Williamson on January 12, 2017 02:57 AM

I’m feeling like writing a very long explanation of a very small change again. Some folks have told me they enjoy my attempts to detail the entire step-by-step process of debugging some somewhat complex problem, so sit back, folks, and enjoy…The Tale Of The Two-Day, One-Character Patch!

Recently we landed Python 3.6 in Fedora Rawhide. A Python version bump like that requires all Python-dependent packages in the distribution to be rebuilt. As usually happens, several packages failed to rebuild successfully, so among other work, I’ve been helping work through the list of failed packages and fixing them up.

Two days ago, I reached python-deap. As usual, I first simply tried a mock build of the package: sometimes it turns out we already fixed whatever had previously caused the build to fail, and simply retrying will make it work. But that wasn’t the case this time.

The build failed due to build dependencies not being installable – python2-pypandoc, in this case. It turned out that this depends on pandoc-citeproc, and that wasn’t installable because a new ghc build had been done without rebuilds of the set of pandoc-related packages that must be rebuilt after a ghc bump. So I rebuilt pandoc, and ghc-aeson-pretty (an updated version was needed to build an updated pandoc-citeproc which had been committed but not built), and finally pandoc-citeproc.

With that done, I could do a successful scratch build of python-deap. I tweaked the package a bit to enable the test suites – another thing I’m doing for each package I’m fixing the build of, if possible – and fired off an official build.

Now, looking at that build in the Koji web interface, you may notice something a bit odd: all the builds for the different arches succeeded (they’re shown in green), but the overall ‘State’ is “failed”. What’s going on there? Well, if you click “Show result”, you’ll see this:

BuildError: The following noarch package built differently on different architectures: python-deap-doc-1.0.1-2.20160624git232ed17.fc26.noarch.rpm
rpmdiff output was:
error: cannot open Packages index using db5 - Permission denied (13)
error: cannot open Packages database in /var/lib/rpm
error: cannot open Packages database in /var/lib/rpm
removed     /usr/share/doc/python-deap/html/_images/cma_plotting_01_00.png
removed     /usr/share/doc/python-deap/html/examples/es/cma_plotting_01_00.hires.png
removed     /usr/share/doc/python-deap/html/examples/es/cma_plotting_01_00.pdf
removed     /usr/share/doc/python-deap/html/examples/es/cma_plotting_01_00.png

So, this is a good example of where background knowledge is valuable. Getting from step to step in this kind of debugging/troubleshooting process is a sort of combination of logic, knowledge and perseverance. Always try to be logical and methodical. When you start out you won’t have an awful lot of knowledge, so you’ll need a lot of perseverance; hopefully, the longer you go on, the more knowledge you’ll pick up, and thus the less perseverance you’ll need!

In this case the error is actually fairly helpful, but I also know a bit about packages (which helps). Fedora allows arched packages with noarch subpackages, and this is how python-deap is set up: the main packages are arched, but there is a python-deap-docs subpackage that is noarch. We’re concerned with that package here. I also recalled a recent mailing list discussion of this “built differently on different architectures” error.

As discussed in that thread, we’re failing a Koji check specific to this kind of package. If all the per-arch builds succeed individually, Koji will take the noarch subpackage(s) from each arch and compare them; if they’re not all the same, Koji will consider this an error and fail the build. After all, the point of a noarch package is that its contents are the same for all arches and so it shouldn’t matter which arch build we take the noarch subpackage from. If it comes out different on different arches, something is clearly up.

So this left me with the problem of figuring out which arch was different (it’d be nice if the Koji message actually told us…) and why. I started out just looking at the build logs for each arch and searching for ‘cma_plotting’. This illustrates another important point: one of the most valuable approaches to have in your toolbox for this kind of work is just ‘searching for significant-looking text strings’. That might be a grep or it might be a web search, but you’ll probably wind up doing a lot of both. Remember good searching technique: try to find the most ‘unusual’ strings you can to search for, ones for which the results will be strongly correlated with your problem. This quickly told me that the problematic arch was ppc64. The ‘removed’ files were not present in that build, but they were present in the builds for all other arches.

So I started looking more deeply into the ppc64 build log. If you search for ‘cma_plotting’ in that file, you’ll see the very first result is “WARNING: Exception occurred in plotting cma_plotting”. That sounds bad! Below it is a long Python traceback – the text starting “Traceback (most recent call last):”.

So what we have here is some kind of Python thing crashing during the build. If we quickly compare with the build logs on other arches, we don’t see the same thing at all – there is no traceback in those build logs. Especially since this shows up right when the build process should be generating the files we know are the problem (the cma_plotting files, remember), we can be pretty sure this is our culprit.

Now this is a pretty big scary traceback, but we can learn some things from it quite easily. One is very important: we can see quite easily what it is that’s going wrong. If we look at the end of the traceback, we see that all the last calls involve files in /usr/lib64/python2.7/site-packages/matplotlib. This means we’re dealing with a Python module called matplotlib. We can quite easily associate that with the package python-matplotlib, and now we have our next suspect.

If we look a bit before the traceback, we can get a bit more general context of what’s going on, though it turns out not to be very important in this case. Sometimes it is, though. In this case we can see this:

+ sphinx-build-2 doc build/html
Running Sphinx v1.5.1

Again, background knowledge comes in handy here: I happen to know that Sphinx is a tool for generating documentation. But if you didn’t already know that, you should quite easily be able to find it out, by good old web search. So what’s going on is the package build process is trying to generate python-deap’s documentation, and that process uses this matplotlib library, and something is going very wrong – but only on ppc64, remember – in matplotlib when we try to generate one particular set of doc files.

So next I start trying to figure out what’s actually going wrong in matplotlib. As I mentioned, the traceback is pretty long. This is partly just because matplotlib is big and complex, but it’s more because it’s a fairly rare type of Python error – an infinite recursion. You’ll see the traceback ends with many, many repetitions of this line:

  File "/usr/lib64/python2.7/site-packages/matplotlib/mathtext.py", line 861, in _get_glyph
    return self._get_glyph('rm', font_class, sym, fontsize)

followed by:

  File "/usr/lib64/python2.7/site-packages/matplotlib/mathtext.py", line 816, in _get_glyph
    uniindex = get_unicode_index(sym, math)
  File "/usr/lib64/python2.7/site-packages/matplotlib/mathtext.py", line 87, in get_unicode_index
    if symbol == '-':
RuntimeError: maximum recursion depth exceeded in cmp

What ‘recursion’ means is pretty simple: it just means that a function can call itself. A common example of where you might want to do this is if you’re trying to walk a directory tree. In Python it might look a bit like this:

from pathlib import Path

def read_directory(directory):
    print(directory.name)
    for entry in directory.iterdir():
        if entry.is_file():
            print(entry.name)
        if entry.is_dir():
            read_directory(entry)  # the function calls itself here

To deal with directories nested in other directories, the function just calls itself. The danger is if you somehow mess up when writing code like this, and it winds up in a loop, calling itself over and over and never escaping: this is ‘infinite recursion’. Python, being a nice language, notices when this is going on, and bails after a certain number of recursions, which is what’s happening here.
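
You can watch Python’s guard rail do its job with a trivial, self-contained sketch (nothing to do with matplotlib, just the mechanism):

def loop():
    return loop()  # unconditionally calls itself, so it can never finish

try:
    loop()
except RuntimeError as exc:  # Python 3 raises RecursionError, a RuntimeError subclass
    print(exc)  # "maximum recursion depth exceeded"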

So now we know where to look in matplotlib, and what to look for. Let’s go take a look! matplotlib, like most everything else in the universe these days, is on GitHub, which is bad for ecosystem health but handy just for finding stuff. Let’s go look at the function from the backtrace.

Well, this is pretty long, and maybe a bit intimidating. But an interesting thing is, we don’t really need to know what this function is for – I actually still don’t know precisely (according to the name it should be returning a ‘glyph’ – a single visual representation for a specific character from a font – but it actually returns a font, the unicode index for the glyph, the name of the glyph, the font size, and whether the glyph is italicized, for some reason). What we need to concentrate on is the question of why this function is getting in a recursion loop on one arch (ppc64) but not any others.

First let’s figure out how the recursion is actually triggered – that’s vital to figuring out what the next step in our chain is. The line that triggers the loop is this one:

                return self._get_glyph('rm', font_class, sym, fontsize)

That’s where it calls itself. It’s kinda obvious that the authors expect that call to succeed – it shouldn’t run down the same logical path, but instead get to the ‘success’ path (the return font, uniindex, symbol_name, fontsize, slanted line at the end of the function) and thus break the loop. But on ppc64, for some reason, it doesn’t.

So what’s the logic path that leads us to that call, both initially and when it recurses? Well, it’s down three levels of conditionals:

    if not found_symbol:
        if self.cm_fallback:
            <other path>
        else:
            if fontname in ('it', 'regular') and isinstance(self, StixFonts):
                return self._get_glyph('rm', font_class, sym, fontsize)

So we only get to this path if found_symbol is not set by the time we reach that first if, then if self.cm_fallback is not set, then if the fontname given when the function was called was ‘it’ or ‘regular’ and if the class instance this function (actually method) is a part of is an instance of the StixFonts class (or a subclass). Don’t worry if we’re getting a bit too technical at this point, because I did spend a bit of time looking into those last two conditions, but ultimately they turned out not to be that significant. The important one is the first one: if not found_symbol.

By this point, I’m starting to wonder if the problem is that we’re failing to ‘find’ the symbol – in the first half of the function – when we shouldn’t be. Now there are a couple of handy logical shortcuts we can take here that turned out to be rather useful. First we look at the whole logic flow of the found_symbol variable and see that it’s a bit convoluted. From the start of the function, there are two different ways it can be set True – the if self.use_cmex block and then the ‘fallback’ if not found_symbol block after that. Then there’s another block that starts if found_symbol: where it gets set back to False again, and another lookup is done:

    if found_symbol:
    (...)
        found_symbol = False
        font = self._get_font(new_fontname)
        if font is not None:
            glyphindex = font.get_char_index(uniindex)
            if glyphindex != 0:
                found_symbol = True

At first, though, we don’t know if we’re even hitting that block, or if we’re failing to ‘find’ the symbol earlier on. It turns out, though, that it’s easy to tell – because of this earlier block:

    if not found_symbol:
        try:
            uniindex = get_unicode_index(sym, math)
            found_symbol = True
        except ValueError:
            uniindex = ord('?')
            warn("No TeX to unicode mapping for '%s'" %
                 sym.encode('ascii', 'backslashreplace'),
                 MathTextWarning)

Basically, if we don’t find the symbol there, the code logs a warning. We can see from our build log that we don’t see any such warning, so we know that the code does initially succeed in finding the symbol – that is, when we get to the if found_symbol: block, found_symbol is True. That logically means that it’s that block where the problem occurs – we have found_symbol going in, but where that block sets it back to False then looks it up again (after doing some kind of font substitution, I don’t know why, don’t care), it fails.

The other thing I noticed while poking through this code is a later warning. Remember that the infinite recursion only happens if fontname in ('it', 'regular') and isinstance(self, StixFonts)? Well, what happens if that’s not the case is interesting:

            if fontname in ('it', 'regular') and isinstance(self, StixFonts):
                return self._get_glyph('rm', font_class, sym, fontsize)
            warn("Font '%s' does not have a glyph for '%s' [U+%x]" %
                 (new_fontname,
                  sym.encode('ascii', 'backslashreplace').decode('ascii'),
                  uniindex),
                 MathTextWarning)

that is, if that condition isn’t satisfied, instead of calling itself, the next thing the function does is log a warning. So it occurred to me to go and see if there are any of those warnings in the build logs. And, whaddayaknow, there are four such warnings in the ppc64 build log:

/usr/lib64/python2.7/site-packages/matplotlib/mathtext.py:866: MathTextWarning: Font 'rm' does not have a glyph for '1' [U+1d7e3]
  MathTextWarning)
/usr/lib64/python2.7/site-packages/matplotlib/mathtext.py:867: MathTextWarning: Substituting with a dummy symbol.
  warn("Substituting with a dummy symbol.", MathTextWarning)
/usr/lib64/python2.7/site-packages/matplotlib/mathtext.py:866: MathTextWarning: Font 'rm' does not have a glyph for '0' [U+1d7e2]
  MathTextWarning)
/usr/lib64/python2.7/site-packages/matplotlib/mathtext.py:866: MathTextWarning: Font 'rm' does not have a glyph for '-' [U+2212]
  MathTextWarning)
/usr/lib64/python2.7/site-packages/matplotlib/mathtext.py:866: MathTextWarning: Font 'rm' does not have a glyph for '2' [U+1d7e4]
  MathTextWarning)

but there are no such warnings in the logs for other arches. That’s really rather interesting. It makes one possibility very unlikely: that we do reach the recursed call on all arches, but it fails on ppc64 and succeeds on the other arches. It’s looking far more likely that the problem is the “re-discovery” bit of the function – the if found_symbol: block where it looks up the symbol again – is usually working on other arches, but failing on ppc64.

So just by looking at the logical flow of the function, particularly what happens in different conditional branches, we’ve actually been able to figure out quite a lot, without knowing or even caring what the function is really for. By this point, I was really focusing in on that if found_symbol: block. And that leads us to our next suspect. The most important bit in that block is where it actually decides whether to set found_symbol to True or not, here:

        font = self._get_font(new_fontname)
        if font is not None:
            glyphindex = font.get_char_index(uniindex)
            if glyphindex != 0:
                found_symbol = True

I didn’t actually know whether it was failing because self._get_font didn’t find anything, or because font.get_char_index returned 0. I think I just played a hunch that get_char_index was the problem, but it wouldn’t be too difficult to find out by just editing the code a bit to log a message telling you whether or not font was None, and re-running the test suite.

Anyhow, I wound up looking at get_char_index, so we need to go find that. You could work backwards through the code and figure out what font is an instance of so you can find it, but that’s boring: it’s far quicker just to grep the damn code. If you do that, you get various results that are calls of it, then this:

src/ft2font_wrapper.cpp:const char *PyFT2Font_get_char_index__doc__ =
src/ft2font_wrapper.cpp:    "get_char_index()\n"
src/ft2font_wrapper.cpp:static PyObject *PyFT2Font_get_char_index(PyFT2Font *self, PyObject *args, PyObject *kwds)
src/ft2font_wrapper.cpp:    if (!PyArg_ParseTuple(args, "I:get_char_index", &ccode)) {
src/ft2font_wrapper.cpp:        {"get_char_index", (PyCFunction)PyFT2Font_get_char_index, METH_VARARGS, PyFT2Font_get_char_index__doc__},

Which is the point at which I started mentally buckling myself in, because now we’re out of Python and into C++. Glorious C++! I should note at this point that, while I’m probably a half-decent Python coder at this point, I am still pretty awful at C(++). I may be somewhat or very wrong in anything I say about it. Corrections welcome.

So I buckled myself in and went for a look at this ft2font_wrapper.cpp thing. I’ve seen this kind of thing a couple of times before, so by squinting at it a bit sideways, I could more or less see that this is what Python calls an extension module: basically, it’s a Python module written in C or C++. This gets done if you need to create a new built-in type, or for speed, or – as in this case – because the Python project wants to work directly with a system shared library (in this case, freetype), either because it doesn’t have Python bindings or because the project doesn’t want to use them for some reason.
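
If you’ve never seen one, here’s a minimal sketch of what such a module can look like, using the Python 2.7 C API since that’s what we’re dealing with here (demo and shout are made-up names):

#include <Python.h>

/* A toy extension module exposing one function. After building it,
   Python code can do: import demo; demo.shout("hi")  ->  "hi!" */
static PyObject *demo_shout(PyObject *self, PyObject *args)
{
    const char *text;

    /* "s" converts a Python string to a C char pointer */
    if (!PyArg_ParseTuple(args, "s:shout", &text)) {
        return NULL;
    }
    return PyString_FromFormat("%s!", text);
}

static PyMethodDef demo_methods[] = {
    {"shout", (PyCFunction)demo_shout, METH_VARARGS, "Append a '!'"},
    {NULL, NULL, 0, NULL}
};

PyMODINIT_FUNC initdemo(void)
{
    Py_InitModule("demo", demo_methods);
}

Anyway, back to ft2font_wrapper.cpp.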

This code pretty much provides a few classes for working with Freetype fonts. It defines a class called matplotlib.ft2font.FT2Font with a method get_char_index, and that’s what the code back up in mathtext.py is dealing with: that font we were dealing with is an FT2Font instance, and we’re using its get_char_index method to try and ‘find’ our ‘symbol’.

Fortunately, this get_char_index method is actually simple enough that even I can figure out what it’s doing:

static PyObject *PyFT2Font_get_char_index(PyFT2Font *self, PyObject *args, PyObject *kwds)
{
    FT_UInt index;
    FT_ULong ccode;

    if (!PyArg_ParseTuple(args, "I:get_char_index", &ccode)) {
        return NULL;
    }

    index = FT_Get_Char_Index(self->x->get_face(), ccode);

    return PyLong_FromLong(index);
}

(If you’re playing along at home for MEGA BONUS POINTS, you now have all the necessary information and you can try to figure out what the bug is. If you just want me to explain it, keep reading!)

There’s really not an awful lot there. It’s calling FT_Get_Char_Index with a couple of args and returning the result. Not rocket science.

In fact, this seemed like a good point to start just doing a bit of experimenting to identify the precise problem, because we’ve reduced the problem to a very small area. So this is where I stopped just reading the code and started hacking it up to see what it did.

First I tweaked the relevant block in mathtext.py to just log the values it was feeding in and getting out:

        font = self._get_font(new_fontname)
        if font is not None:
            glyphindex = font.get_char_index(uniindex)
            warn("uniindex: %s, glyphindex: %s" % (uniindex, glyphindex))
            if glyphindex != 0:
                found_symbol = True

Sidenote: how exactly to just print something out to the console when you’re building or running tests can vary quite a bit depending on the codebase in question. What I usually do is just look at how the project already does it – find some message that is being printed when you build or run the tests, and then copy that. Thus in this case we can see that the code is using this warn function (it’s actually warnings.warn), and we know those messages are appearing in our build logs, so…let’s just copy that.

Then I ran the test suite on both x86_64 and ppc64, and compared. This told me that the Python code was passing the same uniindex values to the C code on both x86_64 and ppc64, but getting different results back – that is, I got the same recorded uniindex values, but on x86_64 the resulting glyphindex value was always something larger than 0, but on ppc64, it was sometimes 0.

The next step should be pretty obvious: log the input and output values in the C code.

index = FT_Get_Char_Index(self->x->get_face(), ccode);
printf("ccode: %lu index: %u\n", ccode, index);

Another sidenote: one of the more annoying things with this particular issue was just being able to run the tests with modifications and see what happened. First, I needed an actual ppc64 environment to use. The awesome Patrick Uiterwijk of Fedora release engineering provided me with one. Then I built a .src.rpm of the python-matplotlib package, ran a mock build of it, and shelled into the mock environment. That gives you an environment with all the necessary build dependencies and the source and the tests all there and prepared already. Then I just copied the necessary build, install and test commands from the spec file. For a simple pure-Python module this is all usually pretty easy and you can just check the source out and do it right in your regular environment or in a virtualenv or something, but for something like matplotlib, which has this C++ extension module too, it’s more complex. The spec builds the code, then installs it, then runs the tests out of the source directory with PYTHONPATH=BUILDROOT/usr/lib64/python2.7/site-packages, so the code that was actually built and installed is used for the tests. When I wanted to modify the C part of matplotlib, I edited it in the source directory, then re-ran the ‘build’ and ‘install’ steps, then ran the tests; if I wanted to modify the Python part I just edited it directly in the BUILDROOT location and re-ran the tests. When I ran the tests on ppc64, I noticed that several hundred of them failed with exactly the bug we’d seen in the python-deap package build – this infinite recursion problem. Several others failed due to not being able to find the glyph, without hitting the recursion. (It turned out the package maintainer had disabled the tests on ppc64, and so Fedora 24+’s python-matplotlib has been broken on ppc64 since about April.)
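
In command form, that workflow is roughly the following; the fedora-rawhide-ppc64 config name is an assumption on my part, so use whichever mock config matches your target:

$ fedpkg srpm                                    # build a .src.rpm from the package checkout
$ mock -r fedora-rawhide-ppc64 --rebuild python-matplotlib-*.src.rpm
$ mock -r fedora-rawhide-ppc64 --shell           # shell in, tweak code, re-run the tests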

So anyway, with that modified C code built and used to run the test suite, I finally had a smoking gun. Running this on x86_64 and ppc64, the logged ccode values were totally different. The values logged on ppc64 were huge. But as we know from the previous logging, there was no difference in the value when the Python code passed it to the C code (the uniindex value logged in the Python code).

So now I knew: the problem lay in how the C code took the value from the Python code. At this point I started figuring out how that worked. The key line is this one:

if (!PyArg_ParseTuple(args, "I:get_char_index", &ccode)) {

That PyArg_ParseTuple function is what the C code is using to read in the value that mathtext.py calls uniindex and it calls ccode, the one that’s somehow being messed up on ppc64. So let’s read the docs!

This is one unusual example where the Python docs, which are usually awesome, are a bit difficult, because that’s a very thin description which doesn’t provide the references you usually get. But all you really need to do is read up – go back to the top of the page, and you get a much more comprehensive explanation. Reading carefully through the whole page, we can see pretty much what’s going on in this call. It basically means that args is expected to be a structure representing a single Python object, a number, which we will store into the C variable ccode. The tricky bit is that second arg, "I:get_char_index". This is the ‘format string’ that the Python page goes into a lot of helpful detail about.

As it tells us, PyArg_ParseTuple “use[s] format strings which are used to tell the function about the expected arguments…A format string consists of zero or more “format units.” A format unit describes one Python object; it is usually a single character or a parenthesized sequence of format units. With a few exceptions, a format unit that is not a parenthesized sequence normally corresponds to a single address argument to these functions.” Next we get a list of the ‘format units’, and I is one of those:

 I (integer) [unsigned int]
    Convert a Python integer to a C unsigned int, without overflow checking.

You might also notice that the list of format units include several for converting Python integers to other things, like i for ‘signed int’ and h for ‘short int’. This will become significant soon!

The :get_char_index bit threw me for a minute, but it’s explained further down:

“A few other characters have a meaning in a format string. These may not occur inside nested parentheses. They are: … : The list of format units ends here; the string after the colon is used as the function name in error messages (the “associated value” of the exception that PyArg_ParseTuple() raises).” So in our case here, we have only a single ‘format unit’ – I – and get_char_index is just a name that’ll be used in any error messages this call might produce.

So now we know what this call is doing. It’s saying “when some Python code calls this function, take the args it was called with and parse them into C structures so we can do stuff with them. In this case, we expect there to be just a single arg, which will be a Python integer, and we want to convert it to a C unsigned integer, and store it in the C variable ccode.”

(If you’re playing along at home but you didn’t get it earlier, you really should be able to get it now! Hint: read up just a few lines in the C code. If not, go refresh your memory about architectures…)

And once I understood that, I realized what the problem was. Let’s read up just a few lines in the C code:

FT_ULong ccode;

Unlike Python, C and C++ are statically typed languages. That just means that all variables must be declared to be of a specific type, unlike Python variables, which you don’t have to declare explicitly and which can change type any time you like. This is a variable declaration: it’s simply saying “we want a variable called ccode, and it’s of type FT_ULong”.

If you know anything at all about C integer types, you should know what the problem is by now (you probably worked it out a few paragraphs back). But if you don’t, now’s a good time to learn!

There are several different types you can use for storing integers in C: short, int, long, and long long. This is basically all about efficiency: you can only put a small number in a short, but if you only need to store small numbers, it might be more efficient to use a short than a long. Theoretically, when you use a short the compiler will allocate less memory than when you use an int, which uses less memory again than a long, which uses less than a long long. Practically speaking some of them wind up being the same size on some platforms, but the basic idea’s there.
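
You don’t even have to write C to check what your platform does; a quick sketch with Python’s ctypes module will report the sizes:

import ctypes

# On typical 64-bit Linux (the LP64 model) this prints 2, 4, 8, 8.
for ctype in (ctypes.c_short, ctypes.c_int, ctypes.c_long, ctypes.c_longlong):
    print(ctype.__name__, ctypes.sizeof(ctype))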

All the types have signed and unsigned variants. The difference there is simple: signed numbers can be negative, unsigned ones can’t. Say an int is big enough to let you store 101 different values: a signed int would let you store any number between -50 and +50, while an unsigned int would let you store any number between 0 and 100.

Now look at that ccode declaration again. What is its type? FT_ULong. That ULong…sounds a lot like unsigned long, right?

Yes it does! Here, have a cookie. C code often declares its own aliases for standard C types like this; we can find Freetype’s in its API documentation, which I found by the cunning technique of doing a web search for FT_ULong. That finds us this handy definition: “A typedef for unsigned long.”

Aaaaaaand herein lies our bug! Whew, at last. As, hopefully, you can now see, this ccode variable is declared as an unsigned long, but we’re telling PyArg_ParseTuple to convert the Python object such that we can store it as an unsigned int, not an unsigned long.

But wait, you think. Why does this seem to work OK on most arches, and only fail on ppc64? Again, some of you will already know the answer, good for you, now go read something else. 😉 For the rest of you, it’s all about this concept called ‘endianness’, which you might have come across and completely failed to understand, like I did many times! But it’s really pretty simple, at least if we skate over it just a bit.

Consider the number “forty-two”. Here is how we write it with numerals: 42. Right? At least, that’s how most humans do it, these days, unless you’re a particularly hardy survivor of the fall of Rome, or something. This means we humans are ‘big-endian’. If we were ‘little-endian’, we’d write it like this: 24. ‘Big-endian’ just means the most significant element comes ‘first’ in the representation; ‘little-endian’ means the most significant element comes last.

All the arches Fedora supports except for ppc64 are little-endian. On little-endian arches, this error doesn’t actually cause a problem: even though we used the wrong format unit, the value winds up being correct. On (64-bit) big-endian arches, however, it does cause a problem – when you tell PyArg_ParseTuple to convert to an unsigned int, but store the result into a variable that was declared as an unsigned long, you get a completely different value (it’s multiplied by 2^32). The reasons for this involve getting into a more technical understanding of little-endian vs. big-endian (we actually have to get into the icky details of how values are really represented in memory), which I’m going to mostly skip since this post is already long enough.
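
You can, however, see the mechanics in miniature without any C, using Python’s struct module. This sketch mimics what the buggy call does: it writes 42 as a 4-byte unsigned int into the start of a zeroed 8-byte slot, then reads the whole slot back as an 8-byte unsigned value:

import struct

buf = bytearray(8)                          # a zeroed 8-byte 'unsigned long'
struct.pack_into('<I', buf, 0, 42)          # little-endian: 2a 00 00 00 00 00 00 00
print(struct.unpack_from('<Q', buf)[0])     # 42 -- looks fine

buf = bytearray(8)
struct.pack_into('>I', buf, 0, 42)          # big-endian: 00 00 00 2a 00 00 00 00
print(struct.unpack_from('>Q', buf)[0])     # 180388626432, i.e. 42 * 2**32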

But you don’t really need to understand it completely, certainly not to be able to spot problems like this. All you need to know is that there are little-endian and big-endian arches, and little-endian are far more prevalent these days, so it’s not unusual for low-level code to have weird bugs on big-endian arches. If something works fine on most arches but not on one or two, check if the ones where it fails are big-endian. If so, then keep a careful eye out for this kind of integer type mismatch problem, because it’s very, very likely to be the cause.

So now all that remained to do was to fix the problem. And here we go, with our one character patch:

diff --git a/src/ft2font_wrapper.cpp b/src/ft2font_wrapper.cpp
index a97de68..c77dd83 100644
--- a/src/ft2font_wrapper.cpp
+++ b/src/ft2font_wrapper.cpp
@@ -971,7 +971,7 @@ static PyObject *PyFT2Font_get_char_index(PyFT2Font *self, PyObject *args, PyObj
     FT_UInt index;
     FT_ULong ccode;

-    if (!PyArg_ParseTuple(args, "I:get_char_index", &ccode)) {
+    if (!PyArg_ParseTuple(args, "k:get_char_index", &ccode)) {
         return NULL;
     }

There’s something I just love about a one-character change that fixes several hundred test failures. 🙂 As you can see, we simply change the I – the format unit for unsigned int – to k – the format unit for unsigned long. And with that, the bug is solved! I applied this change on both x86_64 and ppc64, re-built the code and re-ran the test suite, and observed that several hundred errors disappeared from the test suite on ppc64, while the x86_64 tests continued to pass.

So I was able to send that patch upstream, apply it to the Fedora package, and once the package build went through, I could finally build python-deap successfully, two days after I’d first tried it.

Bonus extra content: even though I’d fixed the python-deap problem, as I’m never able to leave well enough alone, it wound up bugging me that there were still several hundred other failures in the matplotlib test suite on ppc64. So I wound up looking into all the other failures, and finding several other similar issues, which got the failure count down to just two sets of problems that are too domain-specific for me to figure out, and actually also happen on aarch64 and ppc64le (they’re not big-endian issues). So to both the people running matplotlib on ppc64…you’re welcome 😉

Seriously, though, I suspect without these fixes, we might have had some odd cases where a noarch package’s documentation would suddenly get messed up if the package happened to get built on a ppc64 builder.

Flock 2017 Bids Now Open

Posted by Brian "bex" Exelbierd on January 12, 2017 12:00 AM

It is time to start the bid process for this year’s Flock. This year we are back in North America for Flock 2017. If you’d like to help host the event in your city, it’s time to start putting together a bid. To find out what you need to do, read the wiki page. Bids are due by February 28, 2017, so do not wait to start. It takes more time than you may realize to compile all the required information for a good bid.

Tips, advice and more details are available in the original posting on the Fedora Community Blog from 12 January 2017.