Fedora People

Fedora Atomic Host available in Digital Ocean

Posted by Fedora Magazine on April 29, 2017 06:07 AM

The latest release of Fedora Atomic Host was announced earlier this week, and for the first time is also available on Digital Ocean. The Project Atomic blog has more details, including how to set up a new instance via either the Digital Ocean web interface or the doctl CLI.


A New Site, A Fresh Start

Posted by Mo Morsi on April 29, 2017 03:59 AM

I started this blog 10 years ago. How the world has changed... (and yet is still the same...)

Recently I noticed my site was inaccessible. No 404, no error response, just a blank page. After a brief moment of panic, I ssh'd into my host provider and breathed a sigh of relief upon discovering all db & fs entities intact, including the instance of Drupal which my site was based on (a horribly outdated instance mind you, and note I said was based). In any case, I suspect my (cheap) host provider updated their version of PHP or some critical library, with the effect that my Drupal instance stopped working.


Having struggled with PHP & Drupal many times over the years, I was finally ready to go cold turkey, and migrated the blog to Middleman, which brings the awesomeness of Rails to static site generation. I am very much in love with Middleman right now; it's the perfect tool for this problem domain. It's incredibly easy to set up a new site, use any high-level templating / markup / styling language to customize your frontend, throw in any JS or other framework to handle dynamic interactions (including emscripten to run C in the browser), and you're good to go. Tailoring things on the fly is a cinch thanks to the convenient embedded webserver sporting live-reloading, and when you're ready to push to production it's a single command to build the static HTML. A quick rsync -azP synchronizes it with your webserver, and now your site is available to the world at blazing speeds!

Anyways, enough Middleman gushing (but seriously, check it out!). In addition to the port, I rethemed the site, so be sure to check out the new design if you're reading this via RSS. Note mobile browser UIs aren't currently supported, so no angry emails if you can't read things on your phone! (I know they're coming...)

Be sure to stay subscribed to github for updates. I'm hoping virtfs-refs will see some love soon if I can figure out how to extend the current fs parsing mechanisms with file content retrieval. We've also been prototyping designs for the PI Switch project I mentioned a while back; more updates to come as things progress.

Keep surfing!!!

Automated *non*-critical path update functional testing for Fedora

Posted by Adam Williamson on April 28, 2017 11:06 PM

Yep, this here is a sequel to my most recent best-seller, Automated critical path update functional testing for Fedora 🙂

When I first thought about running update tests with openQA, I wasn’t actually thinking about testing critical path packages. I just made that the first implementation because it was easy. But I first thought about doing it when we added the FreeIPA tests to openQA – it seemed pretty obvious that it’d be handy to run the same tests on FreeIPA-related updates as well as running them on the nightly development release composes. So all along, I was planning to come up with a way to do that too.

Funnily enough, right after I push out the critpath update testing stuff, a FreeIPA-related update that broke FreeIPA showed up, and Stephen Gallagher poked me on IRC and said “hey, it sure would be nice if we could run the openQA tests on FreeIPA-related updates!”, so I said “funny you should ask…”

I bumped the topic up my todo list a bit, and wrote it that afternoon, and now it’s deployed in production. For now, it’s pretty simple: we just have a hand-written list of packages that we want to run some of the update tests for, whenever an update shows up with one of those packages in it. Simple enough, but it works: whenever an update containing one of those packages is submitted or edited, the server update tests (including the FreeIPA tests) will get run, and the results will be visible in Bodhi.
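The triggering logic is simple enough to sketch. The following is illustrative only, not the actual scheduler code, and the package names other than postgresql are hypothetical:

```python
# Hypothetical hand-written watchlist; the real list lives in the scheduler config.
WATCHED_PACKAGES = {"freeipa", "sssd", "389-ds-base", "postgresql"}

def should_run_update_tests(update_packages):
    """Return True if the update contains at least one watched package."""
    return bool(WATCHED_PACKAGES & set(update_packages))
```

So an update containing, say, postgresql would trigger the server update tests, while an unrelated update would not.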

Here’s a run on the staging instance that was triggered using the new code; since I sent it to the production instance no relevant updates have been submitted or edited, but it should work just the same there. So from now on whenever our FreeIPA-ish overlords submit an update, we’ll get an idea of whether it breaks everything right away.

We can extend this system to other packages, but I couldn’t think of any (besides postgresql, which I threw in there) which would really benefit from the current update tests but aren’t already in the critical path (all the important bits of GNOME are in the critical path, for example, so all the desktop update tests get run on all GNOME updates already). If you can think of any, go ahead and let us know.

PHP version 7.0.19RC1 and 7.1.5RC1

Posted by Remi Collet on April 28, 2017 01:12 PM

Release Candidate versions are available in the remi-test repository for Fedora and Enterprise Linux (RHEL / CentOS) to allow more people to test them. They are available as Software Collections for parallel installation, the perfect solution for such tests (x86_64 only), and also as base packages.

RPMs of PHP version 7.0.19RC1 are available as an SCL in the remi-test repository, and as base packages in the remi-test repository for Fedora 25 or the remi-php70-test repository for Fedora 23 and Enterprise Linux 6.

RPMs of PHP version 7.1.5RC1 are available as an SCL in the remi-test repository, and as base packages in the remi-php71-test repository for Fedora 23 and Enterprise Linux 6.

PHP version 5.6 is now in security-only mode, so no more RCs will be released.

Installation: read the Repository configuration and choose your version.

Parallel installation of version 7.0 as Software Collection:

yum --enablerepo=remi-test install php70

Parallel installation of version 7.1 as Software Collection:

yum --enablerepo=remi-test install php71

Update of system version 7.0:

yum --enablerepo=remi-php70,remi-php70-test update php\*

Update of system version 7.1:

yum --enablerepo=remi-php71,remi-php71-test update php\*

Notice: version 7.1.5RC1 is also available in Fedora rawhide (for QA).

Note: the RC version is usually the same as the final version (no changes accepted after RC, except for security fixes).

Software Collections (php70, php71)

Base packages (php)

How many disks can you add to a (virtual) Linux machine? (contd)

Posted by Richard W.M. Jones on April 28, 2017 10:22 AM

In my last post I tried to see what happens when you add thousands of virtio-scsi disks to a Linux virtual machine. Above 10,000 disks the qemu command line grew too long for the host to handle. Several people pointed out that I could use the qemu -readconfig parameter to read the disks from a file. So I modified libguestfs to allow that. What will be the next limit?


Linux uses a strange scheme for naming disks which I’ve covered before on this blog. In brief, disks are named /dev/sda through /dev/sdz, then /dev/sdaa through /dev/sdzz, and after 18,278 drives we reach /dev/sdzzz. What’s special about zzz? Nothing really, but historically Linux device drivers would fail after this, although that is not a problem for modern Linux.
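This naming scheme is a bijective base-26 encoding of the drive index. A small sketch of the mapping (illustrative, not kernel code):

```python
def disk_name(index):
    """Map a 0-based drive index to its Linux disk name:
    0 -> sda, 25 -> sdz, 26 -> sdaa, ... (bijective base-26 on a-z)."""
    n = index + 1
    suffix = ""
    while n > 0:
        n -= 1
        suffix = chr(ord("a") + n % 26) + suffix
        n //= 26
    return "sd" + suffix

print(disk_name(18277))  # sdzzz -- the 18,278th drive
```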


In any case I created a Linux guest with 20,000 drives with no problem, except for the enormous boot time: it was over 12 hours, at which point I killed it. Most of the time was being spent in:

-   72.62%    71.30%  qemu-system-x86  qemu-system-x86_64  [.] drive_get
   - 72.62% drive_get
      - 1.26% __irqentry_text_start
         - 1.23% smp_apic_timer_interrupt
            - 1.00% local_apic_timer_interrupt
               - 1.00% hrtimer_interrupt
                  - 0.82% __hrtimer_run_queues
                       0.53% tick_sched_timer

Drives are stored inside qemu on a linked list, and the drive_get function iterates over this linked list, so of course everything is extremely slow when this list grows long.

QEMU bug filed: https://bugs.launchpad.net/qemu/+bug/1686980

Edit: Dan Berrange posted a hack which gets me past this problem, so now I can add 20,000 disks.

The guest boots fine, albeit taking about 30 minutes (and udev hasn’t completed device node creation in that time, it’s still going on in the background).

><rescue> ls -l /dev/sd[Tab]
Display all 20001 possibilities? (y or n)
><rescue> mount
/dev/sdacog on / type ext2 (rw,noatime,block_validity,barrier,user_xattr,acl)

As you can see the modern Linux kernel and userspace handles “four letter” drive names like a champ.

Over 30,000

I managed to create a guest with 30,000 drives. I had to give the guest 50 GB (yes, not a mistake) of RAM to get this far. With less RAM, disk probing fails with:

scsi_alloc_sdev: Allocation failure during SCSI scanning, some SCSI devices might not be configured

I’d seen SCSI probing run out of memory before, and I made a back-of-the-envelope calculation that each disk consumed 200 KB of RAM. However that cannot be correct — there must be a non-linear relationship between number of disks and RAM used by the kernel.
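The rough arithmetic behind that suspicion (my numbers, derived only from the figures above):

```python
disks = 30_000

# A linear 200 KB per disk would predict only ~5.7 GB of RAM:
predicted_gb = disks * 200 / (1024 * 1024)
print(round(predicted_gb, 1))  # 5.7

# But the guest actually needed 50 GB, i.e. ~1748 KB per disk:
actual_kb_per_disk = 50 / disks * 1024 * 1024
print(round(actual_kb_per_disk))  # 1748
```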

Because my development machine simply doesn’t have enough RAM to go further, I wasn’t able to add more than 30,000 drives, so that’s where we have to end this little experiment, at least for the time being.

><rescue> ls -l /dev/sd???? | tail
brw------- 1 root root  66, 30064 Apr 28 19:35 /dev/sdarin
brw------- 1 root root  66, 30080 Apr 28 19:35 /dev/sdario
brw------- 1 root root  66, 30096 Apr 28 19:35 /dev/sdarip
brw------- 1 root root  66, 30112 Apr 28 19:35 /dev/sdariq
brw------- 1 root root  66, 30128 Apr 28 19:35 /dev/sdarir
brw------- 1 root root  66, 30144 Apr 28 19:35 /dev/sdaris
brw------- 1 root root  66, 30160 Apr 28 19:35 /dev/sdarit
brw------- 1 root root  66, 30176 Apr 28 19:24 /dev/sdariu
brw------- 1 root root  66, 30192 Apr 28 19:22 /dev/sdariv
brw------- 1 root root  67, 29952 Apr 28 19:35 /dev/sdariw

Fedora 26 will look awesome with supplemental wallpapers

Posted by Fedora Magazine on April 28, 2017 08:00 AM

The Fedora Design team works with the community each release on a set of 16 additional wallpapers. Users can install and use these to supplement the standard wallpaper. The Fedora Design team encourages submissions from the whole community. Fedora contributors then use the Nuancier app to vote on the top 16 to include.

Voting has closed on the extra wallpapers for Fedora 26. Fedora contributors had 15 days to choose from 92 submissions, and a total of 257 contributors voted. The results page for the voting contains the breakdown of votes, as well as links to the full-size versions of the images. They chose the following 16 backgrounds to be included in Fedora 26.

Fedora 26 wallpaper - Bluebird
Fedora 26 wallpaper - Bluerose
Fedora 26 wallpaper - Alternative Blue

We congratulate all the winners, and we look forward to many high-quality submissions for Fedora 27.

Boring rpm tricks

Posted by Laura Abbott on April 27, 2017 06:00 PM

Several of my tasks over the past month or so have involved working with the monstrosity that is the kernel.spec file. The kernel.spec file is about 2000 lines of functions and macros to produce everything kernel related. There have been proposals to split the kernel.spec up into multiple spec files to make it easier to manage. This is difficult to accomplish since everything is generated from the same source packages, so for now we are stuck with the status quo, which is roughly macros all the way down. The wiki has a good overview of what all goes into the kernel.spec file. I'm still learning about how RPM and spec files work all the time, but I've gotten better at figuring out how to debug problems. These are some miscellaneous tips that are not actually novel but were new to me.

Most .spec files override a set of default macros. The default macros are defined at @RPMCONFIGDIR@/macros which typically gets expanded to /usr/lib/rpm/macros. More usefully, you can put %dump anywhere in your spec file and it will dump out the current set of macros that are defined. While we're talking about macros, be very careful about whether to check if a macro is undefined vs. set to 0. This is a common mistake in general but I seem to get bit by it more in spec files than anywhere else.
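The undefined-vs-zero pitfall looks like this in a spec file (a minimal sketch; %myfeature is a made-up macro name):

```
# Breaks when %myfeature is undefined: the unexpanded %{myfeature}
# is not a number, so the %if fails to parse.
%if %{myfeature}
BuildRequires: somelib-devel
%endif

# Safe idiom: "0%{?myfeature}" expands to "0" when the macro is
# undefined and to "01" when it is defined as 1, so the test always parses.
%if 0%{?myfeature}
BuildRequires: somelib-devel
%endif
```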

Sometimes you just want to see what the spec file looks like when it's expanded. rpmspec -P <spec file> is a fantastic way to do this. You can use the -D option to override various macros. This is a cheap way to see what a spec file might look like on other architectures (Is it the best way to see what a spec file looks like for another arch? I'll update this with a note if someone shows me another way).

One of my projects has been looking at debuginfo generation for the kernel. The kernel invokes many of the scripts directly for historical reasons. Putting bash -x before a script to make it print out the commands makes it much easier to see what's going on.

Like I said, none of these are particularly new to experienced packagers but my day gets better when I have some idea of how to debug a problem.

FCAIC in the House, part III

Posted by Brian "bex" Exelbierd on April 27, 2017 12:25 PM

Hello, it’s me.

Ok, not that “Hello”. I’ve been writing quarterly updates on what I’m working on to help the Fedora Community. If you’re new to the party, welcome. I have the privilege of being the current Fedora Community Action and Impact Coordinator. I wrote last week on the Red Hat Community blog about what this role means and how it interacts with the world.

So, without further ado, let me update you on what I’ve been working on relative to my goals.

Read more over at the Fedora Magazine where this was originally posted.

FCAIC in the House, part III

Posted by Fedora Magazine on April 27, 2017 12:17 PM

Hello, it’s me.

Ok, not that “Hello”. I’ve been writing quarterly updates on what I’m working on to help the Fedora Community. If you’re new to the party, welcome. I have the privilege of being the current Fedora Community Action and Impact Coordinator. I wrote last week on the Red Hat Community blog about what this role means and how it interacts with the world.

So, without further ado, let me update you on what I’ve been working on relative to my goals.

How’d I do?

I listed these goals in my last update:

  • Get to know the community
  • Budget.Next
  • FAmSCo and FOSCo
  • Fedora Docs Publishing
  • Events
  • Packaging

Get to know the community

As I keep saying, this is a never-ending goal. I keep meeting amazing, passionate, intelligent and helpful contributors to the Fedora Project.  As part of this goal I attended both DevConf.cz and FOSDEM. At DevConf.cz I got to focus on one area of the project by participating in the Diversity FAD. FOSDEM was its usual glory and I got to interact with the EMEA Ambassadors there.

I’ve started publishing my conference and travel schedule in my weekly Slice of Cake updates. If we’re going to be near each other, let me know so we can meet and say “hello.”

Speaking of my weekly updates, they are designed to be quick takes on the highlights of the actual things I did and not a high-level summary like this post.  Are these (or this post) useful to you?  Let me know in the comments, by email, on IRC, or via whatever other communication method you like.  Help me with your motivational comments and constructive feedback please!


Budget.Next

As you know, Budget.Next is the project to change the way Fedora manages money.  A new fiscal year for Fedora began on March 1, 2017.  The budget has been updated by the council to get us through the end of Quarter 1.  There is a lot of conversation going on about the mission statement right now, so the council hasn’t fully allocated the budget for the year.

However, allocations are policy decisions.  The budget process is a mechanical one designed to keep our spending and income open and transparent.  To that end, the regional treasurers and credit card holders (Neville Cross (Yn1v), Mohd Izhar Firdaus Ismail (izhar), Abdel G. Martinez L. (potty), Zacharias Mitzelos (mitzie), Joerg Simon (jsimon), and Andrew Ward (award3535)) and I have been putting our transactions into a Pagure repository and now we have a website to view the results on.  The site is currently being manually built, but is usually current.  I haven’t published the Fedora Community Blog post announcing the site that I promised last time, yet.  I am sorry about that.  It is still a goal of mine to get it out soon.  The highlights are:

  1. Built a data storage system using ledger, a plain text accounting system that has been packaged in Fedora for a while.
  2. Began storing transaction data in a Pagure repository.
  3. Wrote some basic reports to show the overall data and position for our project and the regions.
  4. This quarter we began publishing the new budget website.

My work on updated reimbursement policies and proposals for more formalized methods of using sponsored travel are stalled right now as I have too much to do.  I hope that I can get to these in the upcoming quarter.

Interested in helping out? Feel free to contact me right now. On the technical side, I’d love some help from folks interested in Ruby, AsciiDoc, Jenkins, testing (CI – Continuous Integration) and automated deployments (CD – Continuous Deployment). On the policy and procedure side, let me know about ideas and help me draft a great way forward for us. This is a great project for new contributors and junior coders or system administrators.

FOSCo (and FAmSCo)

The Fedora Ambassador Steering Committee (FAmSCo) has been working well and I am seeing great stuff.  I am so happy to see this group of contributors tackling so many tough issues.  The ideas behind FOSCo have been shelved by the council and there is now a new Mindshare position.  I am looking forward to seeing where Robert Mayr takes it.  I hope you’ll join me in helping him succeed.

Fedora Docs publishing

Our documentation reboot work continues. The documentation team has decided to move to AsciiDoc and modular writing.  The process has been very slow and this is an area where new contributors are definitely welcome.

I’ve been working on my AsciiBinder based proposal for the new tooling using the Fedora Budget website as a proof of concept.  I’ll include more details when I write the formal site announcement.

Interested in helping out? Get involved with the Docs Project or feel free to contact me right now. On the technical side, I’d love some help from folks interested in format conversions (DocBook->AsciiDoc – think perl, python, bash, etc.), ruby, AsciiDoc, Jenkins, testing (CI – Continuous Integration) and automated deployments (CD – Continuous Deployment). We also need help on the writing side with modular writing and general updates.


Events

Planning for Flock 2017 is in progress.  Flock will be held in Cape Cod, Massachusetts, USA from 28 August – 1 September. We are making some changes to the registration and CFP engine before we make the formal announcement.  If you haven’t already, join the flock-planning mailing list to stay informed and help out.


Packaging

I successfully packaged DayJournal for Fedora.  It was a rewarding and educational experience to have gone through the packaging process.  Even if you don’t ultimately publish a package, you should try to package something to understand the process.

What’s next?

For the next few months, I’d like to focus on the following:

  • Get to know the community
  • Budget.Next
  • Fedora Docs Publishing
  • Events

I am continuing my work on four of my goals for the new quarter.  I’ve got a lot of conference related travel as well as some personal holidays coming up so I don’t know that I can take on much that is new.  These remain critical priorities for me, so this is where I want to invest my energy.

I’ve talked a lot about what I hope to accomplish in the future while describing my outcomes above.  Therefore, I will just list summary goal statements for the next quarter below:

  • Get to know the community: I want to meet you! Where are you? Who are you? Let’s meet!
  • Budget.Next: I’d like the website to auto-publish after commits to the repository.
  • Fedora Docs Publishing: I’d like the documentation team to have a full proof of concept of my proposed workflow to test.  Ideally, I’d like us to publish F26 this way.
  • Events: Flock will be ready to go.  I’ll successfully represent Fedora at the events I travel to.  Other events that want my help will have it.

Let me know if I’ve missed anything. Let me know if you have input into what I’m doing or want to help. And by all means let me know what we can work on together. I can’t do it all alone (and I don’t want to!) and I can’t even help with everything I want to, but I want to make sure my work is helping the community move forward.


Posted by Daniel Lara on April 27, 2017 11:24 AM
dnfdragora is a frontend for DNF, based on rpmdragora from Mageia.

Installing it is very simple:

$ sudo dnf install dnfdragora dnfdragora-gui -y

or, as root:

# dnf install dnfdragora dnfdragora-gui -y

That's it: it's installed and ready to use.

Reference guide:


Endpoint visibility and monitoring using osquery and syslog-ng

Posted by Peter Czanik on April 27, 2017 09:41 AM

Using osquery you can ask questions about your machine using an SQL-like language. For example, you can query running processes, logged in users, installed packages and syslog messages as well. You can make queries on demand, and also schedule them to run regularly. The results of the scheduled queries are logged to a file.

From this post, you will learn how to:

  • send log messages to osquery,
  • read osquery logs using syslog-ng, and
  • parse the JSON-based log messages of osquery, so selected fields can be forwarded to Elasticsearch or other destinations expecting name-value pairs.


You can easily perform all of these tasks using the syslog-ng log management solution.


Send log messages to osquery

To be able to query log messages, osquery needs to store them in its own database. It can collect syslog messages through a pipe, but only accepts messages in a specific format.
To configure osquery to accept syslog messages, you can either add parameters to the osqueryd command line or add them to a file. This file is usually /etc/osquery/osquery.flags and expects each parameter on a separate line.

Set the following parameters, then restart the osqueryd service:


Once you have restarted the osqueryd service, it’s time to configure syslog-ng (if you don’t have syslog-ng installed yet, install it from the repository of your distribution, or find a package on the syslog-ng website. If you are not familiar with syslog-ng, check out the quickstart section in the documentation).

Add the following snippets to the syslog-ng.conf file:

# Reformat log messages in a format that osquery accepts
rewrite r_csv_message {
  set("$MESSAGE", value("CSVMESSAGE") );
  subst("\"","\"\"", value("CSVMESSAGE"), flags(global) );
};

template t_csv {
};

# Sends messages to osquery
destination d_osquery {
  pipe("/var/osquery/syslog_pipe" template(t_csv));
};

# Stores messages sent to osquery in a log file, useful for troubleshooting
destination d_osquery_copy {
  file("/var/log/csv_osquery" template(t_csv));
};

# Log path to send incoming messages to osquery
log {
  source(s_sys);
  destination(d_osquery);
  # destination(d_osquery_copy);
};
The rewrite is needed to make sure that quotation marks are escaped. The template re-formats the messages as expected by osquery. Binaries provided by the osquery project expect syslog messages in this pipe: you might need to change the location if you compiled osquery yourself. If you want to see what messages are sent to osquery you can uncomment the “d_osquery_copy” destination in the log path. The “s_sys” source refers to your local log messages and might be different on your system (this example is from CentOS).
Note that you should not forward log messages from a central syslog-ng server to osquery, as it was designed with single machines in mind, both for performance and sizing. By default, osquery preserves the last hundred thousand messages. If you have a larger network, hundred thousand messages can arrive on a central syslog server in a matter of seconds.
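The subst() rewrite above implements standard CSV escaping: double any embedded quote, then wrap the field in quotes. In Python terms (just an illustration of the transformation, not part of the configuration):

```python
def csv_escape(message):
    """CSV-style escaping: double embedded quotes, then quote the field."""
    return '"' + message.replace('"', '""') + '"'

print(csv_escape('user "root" logged in'))  # "user ""root"" logged in"
```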


Collect and parse osquery logs

By default, osquery stores all of its log messages under the /var/log/osquery directory. While configuring and debugging osquery, you can use the different .INFO and .WARNING files in this directory to figure out what went wrong. If you configured osqueryd to do periodical queries about your system, the results will go to a file called osqueryd.results.log in the same directory. The format of this file is JSON, which means that in addition to forwarding its content, syslog-ng can also parse the messages. This has many advantages:

  • you can create filters based on individual fields from the messages
  • you can limit which fields to store, or can create additional fields
  • if you want to store the messages in Elasticsearch, you can add the date in the required format, and send the messages to Elasticsearch directly from syslog-ng.


The following configuration reads the osquery log file, parses it, filters for a given event (not really useful here, just as an example) and stores the results in Elasticsearch. For easier understanding I broke the configuration into smaller pieces; you can find the full configuration at the end of my post. You can append it to your syslog-ng.conf or place it in a separate file under the /etc/syslog-ng/conf.d directory in many Linux distributions and on FreeBSD.
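Under the hood this is ordinary JSON processing; the equivalent of the parse-and-filter steps in plain Python looks like this (the sample record is made up, but its field names follow osquery's result-log format):

```python
import json

def module_from_result(line, query_name):
    """Parse one osquery result line; return the reported module name if
    the record came from the given scheduled query, else None."""
    record = json.loads(line)
    if record.get("name") == query_name:
        return record["columns"]["name"]
    return None

# Illustrative sample record:
line = json.dumps({
    "name": "pack_incident-response_kernel_modules",
    "hostIdentifier": "host1",
    "action": "added",
    "columns": {"name": "kvm", "size": "659456"},
})
print(module_from_result(line, "pack_incident-response_kernel_modules"))  # kvm
```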

First we need to read the osquery log file, and make sure that syslog-ng does not try to parse it as a syslog message:

source s_osquery {
  file("/var/log/osquery/osqueryd.results.log" flags(no-parse));
};
Next we need to parse the log messages with the JSON parser, so we have access to the name-value pairs of the JSON-based log messages. The prefix option makes sure that names parsed from JSON do not collide with names used by syslog-ng internally.

parser p_json {
  json-parser (prefix("osquery."));
};
Then we define a filter, in this case searching for messages related to loading Linux kernel modules. This is just an example, you can easily filter for any fields, combine it with the inlist() filter to filter for a list of values, and so on.

filter f_modules {
  "${osquery.name}" eq "pack_incident-response_kernel_modules"
};
We need to store the logs somewhere, so here we define an Elasticsearch destination. (You can read more about logging to Elasticsearch with syslog-ng here.)

destination d_elastic {
  elasticsearch2 (
    template("$(format-json --scope rfc5424 --scope nv-pairs --exclude MESSAGE --exclude DATE --key ISODATE)")
  );
};

As a last step, we connect all of these building blocks together using a log statement. If you do not want to filter the forwarded logs, comment out the filter line:

log {
  source(s_osquery);
  parser(p_json);
  filter(f_modules);
  destination(d_elastic);
};

Here is the complete configuration for a better copy & paste experience:

source s_osquery {
  file("/var/log/osquery/osqueryd.results.log" flags(no-parse));
};

parser p_json {
  json-parser (prefix("osquery."));
};

filter f_modules {
  "${osquery.name}" eq "pack_incident-response_kernel_modules"
};

destination d_elastic {
  elasticsearch2 (
    template("$(format-json --scope rfc5424 --scope nv-pairs --exclude MESSAGE --exclude DATE --key ISODATE)")
  );
};

log {
  source(s_osquery);
  parser(p_json);
  filter(f_modules);
  destination(d_elastic);
};


This was just a quick introduction to osquery and syslog-ng. These examples are good to whet your appetite, but you should read the official documentation if you plan to use it in production, because it requires a more in-depth knowledge of syslog-ng and osquery to produce useful results. Save the following references:


If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or you can even chat with us. For a long list of possibilities, check our contact page at https://syslog-ng.org/contact-us/. On Twitter, I am available as @PCzanik.


The post Endpoint visibility and monitoring using osquery and syslog-ng appeared first on Balabit Blog.

Compiling / Playing NetHack 3.6.0 on Fedora

Posted by Mo Morsi on April 26, 2017 08:43 PM

The following are the simplest instructions required to compile NetHack 3.6.0 for Fedora 25.

Why might you want to compile NetHack from source, instead of simply installing the package (sudo dnf install nethack)? For many reasons. Applying patches for custom game mechanics. Running an alternate frontend. And more!

While the official Linux instructions are complete, they are pretty involved and must be followed exactly for things to work. To give the dev team credit, they’ve been supporting a plethora of platforms and environments for 20+ years (and the number is still increasing). A consolidated guide was written for compiling NetHack from scratch on Ubuntu/Debian, but nothing existed for Fedora… until now!

# On a fresh Fedora installation (with updates) install the dependencies:

$ sudo dnf install ncurses-devel libXt-devel libXaw-devel byacc flex

# Download the NetHack (3.6.0) source tarball from the official site and unpack it:

$ tar xzvf [download]
$ cd nethack-3.6.0/

# Run the base setup utility for Linux:

$ cd sys/unix
$ ./setup.sh hints/linux
$ cd ../..

# Edit [include/unixconf.h] to uncomment the following line…

#define LINUX

# Edit [include/config.h] to uncomment the following line…

#define X11_GRAPHICS

# Edit [src/Makefile] and update the following lines…


# …to look like so


# Edit [Makefile] to uncomment the following line

VARDATND = x11tiles NetHack.ad pet_mark.xbm pilemark.xpm rip.xpm

# In the previous line, apply this bugfix by changing…


# …to


# Build and install the game

$ make all
$ make install

# Finally create the [~/.nethackrc] config file and populate it with the following:

OPTIONS=windowtype:x11

# To play:

$ ~/nh/install/games/nethack

Go get that Amulet!

Keybase on Fedora: crypto for everyone

Posted by Fedora Magazine on April 26, 2017 08:00 AM

Keybase is a service that makes a security web of trust usable for everyone. It uses encryption to provide secure communications — including chat, file sharing, and publishing documents. But it extends encryption into a social context, like Github or Gitlab do for project and source code control. Like other reputable secure systems, Keybase doesn’t rely on secret source code, and is based on free software.

Proving your identity

Most people use some form of social network today. Quite a few of these are already supported, including:

  • Twitter
  • Github
  • Reddit
  • Facebook
  • …and more

Proving the authenticity of a social account or site involves posting special coded information. These proofs show that an account or site truly belongs to you. Then other users can also verify your identity through these proofs. Arbitrary websites under your control are also supported.

The proofs are stored in a blockchain that ensures integrity and authenticity as you make changes.
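The integrity property comes from chaining: each new entry's hash covers the previous one, so tampering with any earlier proof invalidates every later link. A toy sketch of the idea (not Keybase's actual chain format):

```python
import hashlib

def chain(entries):
    """Return the running hash chain over a list of entries."""
    prev, hashes = "", []
    for entry in entries:
        prev = hashlib.sha256((prev + entry).encode()).hexdigest()
        hashes.append(prev)
    return hashes

good = chain(["proof: twitter", "proof: github"])
tampered = chain(["proof: evil", "proof: github"])
print(good[-1] != tampered[-1])  # True: a changed first entry breaks the last link
```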

Getting started with Keybase.io

To get started, download and install the package for your system. This example assumes you have an Intel 64-bit processor, like most computers today. If you have a very old 32-bit system, use i386 instead of amd64 below. The package takes some time to download, depending on your internet connection speed.

sudo dnf install https://prerelease.keybase.io/keybase_amd64.rpm

Now run the initial startup application:


The following screen appears, encouraging you to use encryption.

Keybase initial terminal run

The following window appears if you’re running a graphical desktop:

Keybase initial GUI window

If you have an account on Keybase.io, you can sign in using the app window that appears. You can also create a new account.

When you create a new account, you’re asked to name the computer from which you’re using the service. That way the service can alert you if there’s a login from a new device.

You’ll also receive a special proof that you should write down and store in a secure location. It will look similar to this:

Keybase default proof

This default proof is recorded like other devices or computers where you log in. It lets you access your Keybase account from another computer in case your known computers are lost, broken, or stolen. The long list of words makes the proof easy to type in, but very hard to guess.

To prove an additional service or site is authentically yours, select the Prove function and follow the instructions. The instructions will differ depending on the nature of the service or site. The more sites you prove, the better the level of authenticity you’re providing.

If you’re not running a graphical desktop, or prefer the terminal, type this to see a list of available commands:

keybase help

If you want to prove more services, for example, run the keybase prove command. For help with a specific command such as prove, type:

keybase help prove

Making crypto social

If you’re using Fedora Workstation, notice a Keybase icon appears in the extra status icon tray at the bottom left of your screen. Select it to bring up the main app window. From this window, you can carry out different secure tasks.

By default, the window shows your encrypted folders. These are stored on your system using a FUSE plugin, and you can access them through the /keybase path. You can use these folders just like any other folder on your system. When you store documents there, they are automatically encrypted. Items in the public folder are available for others on Keybase to see. The other users can be certain these documents are authentic, thanks to the way GPG encryption and signing work.
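Programs can treat these folders like any other path. Here is a minimal sketch: the /keybase mount point and the public/&lt;user&gt; layout are Keybase's real defaults, but the helper, user name, and stand-in base directory are made up so the example runs without Keybase installed:

```python
from pathlib import Path

def publish(base: Path, user: str, name: str, text: str) -> Path:
    """Write a document under <base>/public/<user>/.

    With base=Path("/keybase") this lands in your real public folder,
    where Keybase signs it for you automatically."""
    target = base / "public" / user / name
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(text)
    return target

# Dry run against a throwaway directory instead of the real mount:
doc = publish(Path("/tmp/keybase-demo"), "exampleuser", "hello.txt", "signed by me\n")
print(doc)
```

With Keybase running, swapping the base path for Path("/keybase") is all it takes to publish for real.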

When you select the people icon, your profile window appears. From this window you can access your list of followers and those you’re following on Keybase. “Following” is the Keybase expression of verifying identity.

Using the Keybase main window, you can share files with other users either publicly or privately. Users signed into Keybase receive notifications when new files are shared to them or the contents of a shared folder change. You can also enjoy encrypted chat with other users with whom you’re connected.

Keybase main window

You’ll know when someone shares files or chats with you that they’re really who they claim to be. In part this is because they’ve verified their identity based on social accounts and properties. Because many other Keybase users recognize those accounts you can be more certain of their identity.

New badge: Red Hat Summit 2017 !

Posted by Fedora Badges on April 26, 2017 06:23 AM
Red Hat Summit 2017: You visited Fedora at the Red Hat Summit in 2017!

libVirt on Hetzner

Posted by Fabio Alessandro Locati on April 26, 2017 12:00 AM

After many years of using Hetzner as a server provider, having rented multiple servers from them for various purposes, I decided to rent a server with 128 GB of RAM to do some tests with many (virtualized) machines on top of CentOS.

As often happens, hosting providers put in place a lot of security measures that sometimes make doing simple stuff more complex. The first approach I tried was using the (only) Ethernet interface as a bridged interface, but that did not bring me very far. Speaking with support, they pointed out that this was impossible in my setup, so I moved to the second option: the broute.

In the broute approach, a bridge is added on top of the interface, but all the traffic gets routed. With the broute the configuration is very easy; in fact, the following steps are enough to install and properly configure libVirt in such an environment.

Let’s start installing the needed software.

yum install bridge-utils libvirt libvirt-daemon-kvm qemu-kvm virt-install libguestfs-tools

In my case, I’ll use qemu-kvm as the virtualisation engine, which is why I’m installing it. I also install virt-install and libguestfs-tools since I want to make some changes to the images before running them.

The second step is to start and enable the libvirtd daemon:

systemctl enable libvirtd
systemctl start libvirtd

We can now move on to configuring the networking. In my case, I had the following situation IP-wise:

  • 1 primary IPv4 address
  • 1 primary IPv6 network (2a01:4f8:10a:390a::1/64)
  • 1 additional IPv4 /28 network

My goal was to create 3 different networks:

  • an internal network that is not routable but where all VMs can talk to each other (and to the underlying system)
  • a public network where all machines get a public IP
  • a private network that is routable through NAT and therefore the machines can connect to internet but not be reached from outside

Prepare the OS

The first step is to ensure that the kernel rp_filter will not drop the packets during the routing process. To do so, you need to put the following in the file /etc/sysctl.d/98-rp-filter.conf:

net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.all.rp_filter = 0
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0

In case you also want to be able to use nested-virtualisation acceleration, you need to put in /etc/modprobe.d/kvm_intel.conf:

options kvm-intel nested=1
options kvm-intel enable_shadow_vmcs=1
options kvm-intel enable_apicv=1
options kvm-intel ept=1

Even if it’s not mandatory, I always suggest upgrading all your packages. To do so, you can run:

yum update -y

At this point you should reboot your box to ensure that all kernel options are properly loaded and that you are running the latest kernel version.

To make IPv6 work, it’s necessary to change the IPv6 prefix in /etc/sysconfig/network-scripts/ifcfg-enp0s2 (or the file that identifies your primary network interface) from /64 to /128. This will allow the bridge to obtain all the other IPv6 addresses available in that network and assign them to your virtual machines. To configure the three networks, we are going to use virsh with XML files that describe each network.

Network creation

The file that describes the internal network is going to be called /tmp/internal.xml and contains:

<network>
  <name>internal</name>
  <bridge name="br-internal" stp="on" delay="0"/>
  <ip address="" netmask=""/>
</network>

As you can see, we are just declaring an IP range, the bridge name (br-internal) and the network name (internal). We can now create the network:

virsh net-define /tmp/internal.xml
virsh net-start internal
virsh net-autostart internal

The second network is in the file /tmp/public.xml and contains the following:

<network>
  <name>public</name>
  <bridge name="br-public"/>
  <forward mode="route"/>
  <ip address="" netmask=""/>
  <ip family="ipv6" address="2a01:4f8:10a:390a::1" prefix="64"/>
</network>

This is very similar to the previous one, with a couple of differences:

  • we are declaring a forward mode (route) that will allow this network to speak with the other networks available to the physical box, which will behave as a router
  • we are declaring an IPv4 class (which is the additional IPv4/28 class)
  • we are declaring an IPv6 class (which is the primary IPv6/64 class)

We can now create the network:

virsh net-define /tmp/public.xml
virsh net-start public
virsh net-autostart public

The third file, called /tmp/private.xml contains:

<network>
  <name>private</name>
  <bridge name="br-private" stp="on" delay="0"/>
  <forward mode="nat">
      <port start="1024" end="65535"/>
  </forward>
  <ip address="" netmask=""/>
</network>

This is very similar to the internal one, with the addition of the forward section, where we declare the ability to forward packets toward the internet via NAT.

We can now create the network:

virsh net-define /tmp/private.xml
virsh net-start private
virsh net-autostart private
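All three network definitions share the same skeleton, so if you create many of them it can be handy to generate the XML instead of writing it by hand. A small sketch using only the Python standard library (the helper is my own illustration, not part of libvirt; addresses are left empty as placeholders, just as above):

```python
import xml.etree.ElementTree as ET

def libvirt_network_xml(name, bridge, forward_mode=None):
    """Build a libvirt <network> document like the ones fed to virsh net-define."""
    net = ET.Element("network")
    ET.SubElement(net, "name").text = name
    ET.SubElement(net, "bridge", name=bridge, stp="on", delay="0")
    if forward_mode is not None:
        fwd = ET.SubElement(net, "forward", mode=forward_mode)
        if forward_mode == "nat":
            # restrict the source ports used for NAT, as in private.xml
            ET.SubElement(fwd, "port", start="1024", end="65535")
    ET.SubElement(net, "ip", address="", netmask="")  # fill in your range
    return ET.tostring(net, encoding="unicode")

print(libvirt_network_xml("private", "br-private", forward_mode="nat"))
```

The generated document can be written to a file and passed to virsh net-define exactly as above.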

If you now run virsh net-list you can see the networks:

 Name                 State      Autostart     Persistent
----------------------------------------------------------
 default              active     yes           yes
 internal             active     yes           yes
 public               active     yes           yes
 private              active     yes           yes

In case you want to remove the default one (mainly for cleanliness):

virsh net-destroy default
virsh net-undefine default

Create the virtual machine

We can now move on to starting our first virtual machine. I already had a qcow2 RHEL 7 image in /var/lib/libvirt/images/rhel-guest-image-7.3-35.x86_64.qcow2. The first step is to copy it to a new file that will become the disk of the VM:

cp /var/lib/libvirt/images/rhel-guest-image-7.3-35.x86_64.qcow2 /var/lib/libvirt/images/vm00.qcow2

At this point we can customize the image with virt-customize to make the machine able to run properly:

virt-customize -a /var/lib/libvirt/images/vm00.qcow2 \
    --root-password password:PASSWORD_HERE \
    --ssh-inject root:file:/root/.ssh/fale.pub \
    --run-command 'systemctl disable cloud-init; systemctl mask cloud-init' \
    --run-command "echo -e 'DEVICE=eth0\nONBOOT=yes\nBOOTPROTO=none\nIPADDR=\nNETMASK=\nSCOPE=\"peer\"\nGATEWAY=\nIPV6INIT=yes\nIPV6ADDR=2a01:4f8:10a:390a::10/64\nIPV6_DEFAULTGW=2a01:4f8:10a:390a::2' > /etc/sysconfig/network-scripts/ifcfg-eth0" \
    --run-command "echo -e 'DEVICE=eth1\nONBOOT=yes\nBOOTPROTO=none\nIPADDR=\nNETMASK=' > /etc/sysconfig/network-scripts/ifcfg-eth1" \
    --selinux-relabel

With this command we are going to perform a lot of changes. In order:

  • set a root password, since by default the RHEL qcow image has none and therefore it is not possible to log in
  • inject an SSH key for root
  • disable cloud-init, since it will not be able to connect to anything and would fail
  • configure eth0, which is going to be attached to public
  • configure eth1, which is going to be attached to private
  • make SELinux re-label the whole filesystem to ensure that all files are properly labeled

At this point we can create the machine with:

virt-install \
    --import \
    --ram 8192 \
    --os-variant rhel7 \
    --disk path=/var/lib/libvirt/images/vm00.qcow2,device=disk,bus=virtio,format=qcow2 \
    --network network:public \
    --network network:private \
    --name vm00

You can now connect via SSH to the machine.

Flock Cod Registration Form Design

Posted by Máirín Duffy on April 25, 2017 10:00 PM

Flock logo (plain)

We’re prepping the regcfp site for Flock to open up registrations and CFP for Flock. As a number of changes are underfoot for this year’s Flock compared to previous Flocks, we’ve needed to change up the registration form accordingly. (For those interested, the discussion has been taking place on the flock-planning list).

This is a second draft of those screens after the first round of feedback. The first screen is going to spoil most of the surprises herein.

First screen – change announcements, basic details

On the first screen, we announce a few changes that will be taking place at this year’s Flock. The most notable one is that we’ll now have partial Flock funding available, in an attempt to fund as many Fedora volunteers as possible to enable them to come to Flock. Another change is the addition of a nominal (~$25 USD) registration fee. We had an unusually high number of no-shows at the last Flock, which cost us funding that could have been used to bring more people to Flock. This registration fee is meant to discourage no-shows and enable more folks to come.

Flock registration mockup.

Second screen – social details, personal requirements

This is the screen where you can fill out your badge details as well as indicate your personal requirements (T-shirt size, dietary preferences/restrictions, etc.)

Second Flock registration screen - personal details for badge and prefs (dietary, etc.)

Third screen – no funding needed

Depending on feedback, the next section may be split into a separate form, or be conditional on whether or not the registrant is requesting funding. The reason we would want to split funding requests into a separate form is that applicants will need to do some research into cost estimates for their travel, which could take some time, and we don’t want the form to time out while that’s going on.

Anyhow, this is what this page of the form looks like if you don’t need funding. Here we offer folks who don’t need funding an opportunity to help out other attendees.

Third screen – travel details

This is the travel details page for those seeking financial assistance; it’s rather long, as we’ve many travel options, domestic and international.

Fourth screen – funding request review

This is a summary of the total funding request cost as well as the breakdown of partial funding options. I’d really like to hear your feedback on this, if it’s confusing or if it makes sense. Are there too many partial options?

mockup providing partial funding options

Final screen – summary

This screen is just a summary of everything submitted as well as information about next steps.

final screen - registration summary and next steps

What do you think?

Do these seem to make sense? Any confusion or issues come up as you were reading through them? Please let me know. You can drop a comment or join the convo on flock-planning.


(Update: Changed the language of the first questions in both of the 3rd screens; there were confusing double-negatives pointed out by Rebecca Fernandez. Thanks for the help!)

LibreOffice in The Matrix [m]

Posted by Eike Rathke on April 25, 2017 08:45 PM
If you use the Riot app (or any other) to connect to the Matrix network and communicate, there's now a LibreOffice room that is bridged with the #libreoffice IRC channel on freenode.net. IRC channels have been bridged to Matrix for some time, but so far you had to type
/join #freenode_#libreoffice:matrix.org
in your Matrix mobile app, or in the browser app know that the bridge exists and select it from the list. Now you can just search the available Matrix rooms for LibreOffice and join. This should be a convenient way to join a chat with other LibreOffice users for people who otherwise don't use IRC, or don't want to install an IRC app just for this on their mobile, smartphone or tablet.

The #libreoffice IRC channel and thus the LibreOffice matrix room is dedicated to user questions around all LibreOffice applications. Join and enjoy, get help and help others.

Encrypt all the Fedora Project

Posted by Till Maas on April 25, 2017 08:37 PM


It seems that thanks to the hard work of the Fedora Infrastructure team we can soon enforce HTTPS for all of the Fedora Project – at least for all hosts within *.fedoraproject.org. To make sure that this does not awfully break everything, it would be awesome if you could test whether we need to fix something for you first. If you use Chromium or Chrome, you can easily enforce HTTPS for fedoraproject.org and its subdomains:

  1. Go to the net internals settings at chrome://net-internals#hsts
  2. Put fedoraproject.org in the input field for domain
  3. Check the Include subdomains for STS checkbox
  4. Click on the Add button

Afterwards you should notice that all requests to any fedoraproject.org URL go to HTTPS by default. If you notice any problems with this, please note them in the Fedora Infrastructure ticket #2888. Let me know if you figure out how to do this in Firefox and I will add the instructions here as well.
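The manual entry above mimics what a site's Strict-Transport-Security response header would normally tell the browser. A small sketch of how a client interprets such a header (the header value below is an invented example, not what fedoraproject.org actually sends):

```python
def parse_hsts(value):
    """Parse a Strict-Transport-Security header into (max_age, include_subdomains)."""
    max_age, include_sub = None, False
    for directive in value.split(";"):
        directive = directive.strip().lower()
        if directive.startswith("max-age="):
            max_age = int(directive.split("=", 1)[1])
        elif directive == "includesubdomains":
            include_sub = True
    return max_age, include_sub

# Equivalent of the manual chrome://net-internals entry above:
print(parse_hsts("max-age=15768000; includeSubDomains"))  # (15768000, True)
```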

Reverse engineering ComputerHardwareIds.exe with winedbg

Posted by Richard Hughes on April 25, 2017 07:49 PM

In an ideal world vendors could use the same GUID value for hardware matching in Windows and Linux firmware. When installing firmware and drivers in Windows vendors can always use some generated HardwareID GUIDs that match useful things like the BIOS vendor and the product SKU. It would make sense to use the same scheme as Microsoft. There are a few issues in an otherwise simple plan.

The first, solved with a simple kernel patch I wrote (awaiting review by Jean Delvare), exposes a few more SMBIOS fields into /sys/class/dmi/id that are required for the GUID calculation.

The second problem is a little more tricky. We don’t actually know how Microsoft joins the strings, what encoding is used, or more importantly the secret namespace UUID used to seed the GUID. The only thing we have got is the closed source ComputerHardwareIds.exe program in the Windows DDK. This, luckily, runs in Wine although Wine isn’t able to get the system firmware data itself. This can be worked around, and actually makes testing easier.

So, some research. All we know from the MSDN page is that “Each hardware ID string is converted into a GUID by using the SHA-1 hashing algorithm”, which actually tells us quite a bit. Generating a GUID from a SHA-1 hash means this has to be a type 5 UUID.

The reference code for a type-5 UUID is helpfully available in the IETF RFC document so it’s quite quick to get started with research. From a few minutes of searching online, the most likely symbols the program will be using are the BCrypt* set of functions. From the RFC code, we call the checksum generation update function with first the encoded namespace (aha!) and then the encoded joined string (ahaha!). For Win32 programs, BCryptHashData is the function we want to trace.

So, to check:

wine /home/hughsie/ComputerHardwareIds.exe /mfg "To be filled by O.E.M."

…matches the reference HardwareID-14 output from Microsoft. So onto debugging, using +relay shows all the calling values and return values from each Win32 exported symbol:

WINEDEBUG=+relay winedbg --gdb ~/ComputerHardwareIds.exe
Wine-gdb> b BCryptHashData
Wine-gdb> r ~/ComputerHardwareIds.exe /mfg "To be filled by O.E.M." /family "To be filled by O.E.M."
005b:Call bcrypt.BCryptHashData(0011bab8,0033fcf4,00000010,00000000) ret=0100699d
Breakpoint 1, 0x7ffd85f8 in BCryptHashData () from /lib/wine/bcrypt.dll.so

Great, so this is the secret namespace. The first parameter is the context, the second is the data address, the third is the length (0x10 is exactly the size of a raw GUID) and the fourth is the flags — so let’s print out the data so we can see what it is:

Wine-gdb> x/16xb 0x0033fcf4
0x33fcf4:	0x70	0xff	0xd8	0x12	0x4c	0x7f	0x4c	0x7d
0x33fcfc:	0x00	0x00	0x00	0x00	0x00	0x00	0x00	0x00

Using either the uuid in python, or uuid_unparse in libuuid, we can format the namespace to 70ffd812-4c7f-4c7d-0000-000000000000 — now this doesn’t look like a randomly generated UUID to me! Onto the next thing, the encoding and joining policy:

Wine-gdb> c
005f:Call bcrypt.BCryptHashData(0011bb90,00341458,0000005a,00000000) ret=010069b3
Breakpoint 1, 0x7ffd85f8 in BCryptHashData () from /lib/wine/bcrypt.dll.so
Wine-gdb> x/90xb 0x00341458
0x341458:	0x54	0x00	0x6f	0x00	0x20	0x00	0x62	0x00
0x341460:	0x65	0x00	0x20	0x00	0x66	0x00	0x69	0x00
0x341468:	0x6c	0x00	0x6c	0x00	0x65	0x00	0x64	0x00
0x341470:	0x20	0x00	0x62	0x00	0x79	0x00	0x20	0x00
0x341478:	0x4f	0x00	0x2e	0x00	0x45	0x00	0x2e	0x00
0x341480:	0x4d	0x00	0x2e	0x00	0x26	0x00	0x54	0x00
0x341488:	0x6f	0x00	0x20	0x00	0x62	0x00	0x65	0x00
0x341490:	0x20	0x00	0x66	0x00	0x69	0x00	0x6c	0x00
0x341498:	0x6c	0x00	0x65	0x00	0x64	0x00	0x20	0x00
0x3414a0:	0x62	0x00	0x79	0x00	0x20	0x00	0x4f	0x00
0x3414a8:	0x2e	0x00	0x45	0x00	0x2e	0x00	0x4d	0x00
0x3414b0:	0x2e	0x00
Wine-gdb> q

So there we go. The encoding looks like UTF-16 (as expected, much of the Windows API is this way) and the joining character seems to be &.
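Putting the observations together — type-5 (SHA-1) UUID generation per RFC 4122, the namespace recovered from the debugger, UTF-16-LE encoding, and & as the join character — the scheme can be sketched in a few lines of Python (my reconstruction from the traces above, not Microsoft's code):

```python
import hashlib
import uuid

# Namespace recovered from the first BCryptHashData call above.
HARDWAREID_NS = uuid.UUID("70ffd812-4c7f-4c7d-0000-000000000000")

def hardware_id(*fields):
    """Type-5 (SHA-1) UUID over the '&'-joined, UTF-16-LE-encoded fields."""
    name = "&".join(fields).encode("utf-16-le")
    digest = hashlib.sha1(HARDWAREID_NS.bytes + name).digest()
    raw = bytearray(digest[:16])
    raw[6] = (raw[6] & 0x0F) | 0x50  # set version 5
    raw[8] = (raw[8] & 0x3F) | 0x80  # set RFC 4122 variant
    return uuid.UUID(bytes=bytes(raw))

# The /mfg and /family strings from the winedbg session join to 45
# UTF-16 characters, i.e. the 0x5a = 90 bytes seen in the second trace:
joined = "&".join(["To be filled by O.E.M."] * 2).encode("utf-16-le")
print(len(joined))  # 90
print(hardware_id("To be filled by O.E.M.", "To be filled by O.E.M."))
```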

I’ve written some code in fwupd so that this happens:

$ fwupdmgr hwids
Computer Information
BiosVendor: LENOVO
BiosVersion: GJET75WW (2.25 )
Manufacturer: LENOVO
Family: ThinkPad T440s
ProductName: 20ARS19C0C
ProductSku: LENOVO_MT_20AR_BU_Think_FM_ThinkPad T440s
EnclosureKind: 10
BaseboardManufacturer: LENOVO
BaseboardProduct: 20ARS19C0C

Hardware IDs
{c4159f74-3d2c-526f-b6d1-fe24a2fbc881}   <- Manufacturer + Family + ProductName + ProductSku + BiosVendor + BiosVersion + BiosMajorRelease + BiosMinorRelease
{ff66cb74-5f5d-5669-875a-8a8f97be22c1}   <- Manufacturer + Family + ProductName + BiosVendor + BiosVersion + BiosMajorRelease + BiosMinorRelease
{2e4dad4e-27a0-5de0-8e92-f395fc3fa5ba}   <- Manufacturer + ProductName + BiosVendor + BiosVersion + BiosMajorRelease + BiosMinorRelease
{3faec92a-3ae3-5744-be88-495e90a7d541}   <- Manufacturer + Family + ProductName + ProductSku + BaseboardManufacturer + BaseboardProduct
{660ccba8-1b78-5a33-80e6-9fb8354ee873}   <- Manufacturer + Family + ProductName + ProductSku
{8dc9b7c5-f5d5-5850-9ab3-bd6f0549d814}   <- Manufacturer + Family + ProductName
{178cd22d-ad9f-562d-ae0a-34009822cdbe}   <- Manufacturer + ProductSku + BaseboardManufacturer + BaseboardProduct
{da1da9b6-62f5-5f22-8aaa-14db7eeda2a4}   <- Manufacturer + ProductSku
{059eb22d-6dc7-59af-abd3-94bbe017f67c}   <- Manufacturer + ProductName + BaseboardManufacturer + BaseboardProduct
{0cf8618d-9eff-537c-9f35-46861406eb9c}   <- Manufacturer + ProductName
{f4275c1f-6130-5191-845c-3426247eb6a1}   <- Manufacturer + Family + BaseboardManufacturer + BaseboardProduct
{db73af4c-4612-50f7-b8a7-787cf4871847}   <- Manufacturer + Family
{5e820764-888e-529d-a6f9-dfd12bacb160}   <- Manufacturer + EnclosureKind
{f8e1de5f-b68c-5f52-9d1a-f1ba52f1f773}   <- Manufacturer + BaseboardManufacturer + BaseboardProduct
{6de5d951-d755-576b-bd09-c5cf66b27234}   <- Manufacturer

Which basically matches the output of ComputerHardwareIds.exe on the same hardware. If the kernel patch gets into the next release I’ll merge the fwupd branch to master and allow vendors to start using the Microsoft HardwareID GUID values.

Brightness control doesn’t work in i3wm. Due to xbacklight

Posted by Luca Ciavatta on April 25, 2017 07:37 PM

I’m very pleased with i3wm, but day by day I need to adjust something. This time, on my new laptop, the brightness hardware keys don’t work as expected.

The issue on the Lenovo Thinkpad with Fedora 26 is the same as on the Acer ES1-111 with Ubuntu 17.04. Underneath i3wm sit, respectively, the GNOME and Unity window managers. On both GNOME 3 and Unity 7, the hardware brightness keys work fine.

So, the problem is with i3wm only.

Light. A GNU/Linux application to control backlights


Many solutions. Changing the brightness

Looking into the infinite knowledge of Google, I found a lot of solutions.

The issue lies with the xbacklight command: I tested it and it doesn’t work from a terminal session either. So, the following lines don’t work.

# Screen brightness controls
bindsym XF86MonBrightnessUp exec xbacklight -inc 20 # increase screen brightness
bindsym XF86MonBrightnessDown exec xbacklight -dec 20 # decrease screen brightness

Most of the solutions I found involved writing a script that changes the brightness with some bash commands (note that a plain sudo echo VALUE > file would not work, since the redirection happens outside sudo):

$ echo VALUE | sudo tee /sys/class/backlight/intel_backlight/brightness

Then save the script somewhere, perhaps in your user bin folder or somewhere else in your PATH, and call it from your i3wm .config file. The script needs super-user (sudo) privileges, so you must allow it to run as root when launched from the configuration file.

Anyway, I preferred a different approach. I found an alternative to xbacklight command, a GNU/Linux application to control backlights called simply light. You can find light over GitHub.

light is packaged only for Arch Linux, so you will find packages only for that distribution. Luckily, light is really lightweight and easy to compile on every other distribution.

So, just download or clone the source of light:

$ git clone https://github.com/haikarainen/light.git

Compile and install it:

$ make

$ sudo make install

On Fedora 26 and Ubuntu 17.04 all went fine. You can check it:

$ light --help

And, finally, replace the lines of code in the i3wm .config file:

# Screen brightness controls
# bindsym XF86MonBrightnessUp exec xbacklight -inc 20 # increase screen brightness
# bindsym XF86MonBrightnessDown exec xbacklight -dec 20 # decrease screen brightness
bindsym XF86MonBrightnessUp exec light -A 5 # increase screen brightness
bindsym XF86MonBrightnessDown exec light -U 5 # decrease screen brightness

The post Brightness control doesn’t work in i3wm. Due to xbacklight appeared first on cialu.net.

How many disks can you add to a (virtual) Linux machine?

Posted by Richard W.M. Jones on April 25, 2017 07:32 PM
><rescue> ls -l /dev/sd[tab]
Display all 4001 possibilities? (y or n)

Just how many virtual hard drives is it practical to add to a Linux VM using qemu/KVM? I tried to find out. I started by modifying virt-rescue to raise the limit on the number of scratch disks that can be added¹: virt-rescue --scratch=4000

I hit some interesting limits in our toolchain along the way.

256

256 is the maximum number of virtio-scsi disks in unpatched virt-rescue / libguestfs. A single virtio-scsi controller supports 256 targets, with up to 16384 SCSI logical units (LUNs) per target. We were assigning one disk per target, and giving them all unit number 0, so of course we couldn’t add more than 256 drives, but virtio-scsi supports very many more. In theory each virtio-scsi controller could support 256 x 16,384 = 4,194,304 drives. You can even add more than one controller to a guest.

About 490-500

At around 490-500 disks, any monitoring tools which are using libvirt to collect disk statistics from your VMs will crash (https://bugzilla.redhat.com/show_bug.cgi?id=1440683).

About 1000

qemu uses one file descriptor per disk (maybe two per disk if you are using ioeventfd). qemu quickly hits the default open file limit of 1024 (ulimit -n). You can raise this to something much larger by creating this file:

$ cat /etc/security/limits.d/99-local.conf
# So we can run qemu with many disks.
rjones - nofile 65536

It’s called /etc/security for a reason, so you should be careful adjusting settings here except on test machines.
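After raising the limit (and logging in again), you can confirm what a process actually gets; a quick sanity check from Python:

```python
import resource

# (soft, hard) limit on open file descriptors; qemu needs roughly one
# descriptor (or two with ioeventfd) per virtual disk, plus its usual handful.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(soft, hard)
disks = 4000
print("room for %d disks:" % disks, soft >= 2 * disks + 64)
```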

About 4000

The Linux guest kernel uses quite a lot of memory simply enumerating each SCSI drive. My default guest had 512 MB of RAM (no swap), and ran out of memory and panicked when I tried to add 4000 disks. The solution was to increase guest RAM to 8 GB for the remainder of the test.

Booting with 4000 disks took 10 minutes² and free shows about a gigabyte of memory disappears:

><rescue> free -m
              total        used        free      shared  buff/cache   available
Mem:           7964         104        6945          15         914        7038
Swap:             0           0           0

What was also surprising is that increasing the number of virtual CPUs from 1 to 16 made no difference to the boot time (in fact it was a bit slower). So even though SCSI LUN probing is not deterministic, it appears that it is not running in parallel either.

About 8000

If you’re using libvirt to manage the guest, it will fail at around 8000 disks because the XML document describing the guest is too large to transfer over libvirt’s internal client to daemon connection (https://bugzilla.redhat.com/show_bug.cgi?id=1443066). For the remainder of the test I instructed virt-rescue to run qemu directly.

My guest with 8000 disks took 77 minutes to boot. About 1.9 GB of RAM was missing, and my ballpark estimate is that each extra drive takes about 200KB of kernel memory.

Between 10,000 and 11,000

We pass the list of drives to qemu on the command line, with each disk taking perhaps 180 bytes to express. Somewhere between 10,000 and 11,000 disks, this long command line fails with:

qemu-system-x86_64: Argument list too long
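A back-of-the-envelope check agrees with where this failure shows up (assuming the common 2 MiB ARG_MAX on Linux; the real ceiling is somewhat lower because the environment and per-argument pointer overhead also count):

```python
import os

arg_max = os.sysconf("SC_ARG_MAX")  # commonly 2097152 (2 MiB) on Linux
bytes_per_disk = 180                # rough size of one drive argument
print(arg_max, arg_max // bytes_per_disk)
# With a 2 MiB ARG_MAX this allows roughly 2097152 / 180 = 11,650 disks,
# right around where qemu reports "Argument list too long".
```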

To be continued …

So that’s the end of my testing, for now. I managed to create a guest with 10,000 drives, but I was hoping to explore what happens when you add more than 18278 drives since some parts of the kernel or userspace stack may not be quite ready for that.

Continue to part 2 …


¹That command will not work with the virt-rescue program found in most Linux distros. I have had to patch it extensively and those patches aren’t yet upstream.

²Note that the uptime command within the guest is not an accurate way to measure the boot time when dealing with large numbers of disks, because it doesn’t include the time taken by the BIOS which has to scan the disks too. To measure boot times, use the wallclock time from launching qemu.

Thanks: Paolo Bonzini

Edit: 2015 KVM Forum talk about KVM’s limits.

Fedora Installation Workshop in Ranchi

Posted by Mohan on April 25, 2017 06:52 PM
Fedora Installation workshop was organized at Ranchi, Jharkhand, India on 23 April, 2017 to introduce Fedora OS to local students and computer users. The workshop was conducted by Mohan Prakash and was attended mostly by undergraduate students. Fedora DVDs and stickers were distributed. The participants used Fedora Live and also installed Fedora on their machines. Mohan Prakash spoke about important packages shipped with the Fedora DVD and introduced different websites related to Fedora.

Red Hat job opening for Linux Graphics stack developer

Posted by Christian F.K. Schaller on April 25, 2017 05:53 PM

So we have a new job opening for someone interested in joining our team and working on improving the Linux graphics stack. The focus of this job will be on GPU-compute-related work, but you should also expect to spend time improving the graphics driver stack in general. We are looking for someone at the Principal Engineer level, but I recommend that you apply even if you don’t feel you are quite at that level yet, because, to be fair, people with the kind of experience we are looking for are few and far between. In the end there is a chance we will instead hire two more junior developers, if we have candidates with the right profile.

We are quite flexible on working location for this job, so for the right candidate working remotely is definitely a possibility. And of course if you are interested in joining us at one of our offices that is an option too, for instance we have existing team members working out of our Boston (USA), Brno(Czech Republic), Brisbane (Australia) and Munich (Germany) offices.

GPU Compute is rapidly growing in importance and use so this is your chance to be in the middle of it and work for what I personally think is one of the best companies in the world to work for.

So be sure to submit an application through the Red Hat hiring portal.

Keep working with Linux. Without Linux you will have no future: study.

Posted by Fernando Espinoza on April 25, 2017 02:59 PM

by Crespo. We are in the year 2014, living in an era in which technological advances grow very quickly, above all thanks to several companies that have driven the development of various kinds of technology, whether in physical form or in the form of software. Google, Facebook, Samsung, Microsoft, the Linux Foundation, Apple, etc. are companies [...]

FSFE Fellowship Representative, OSCAL'17 and other upcoming events

Posted by Daniel Pocock on April 25, 2017 12:57 PM

The Free Software Foundation of Europe has just completed the process of electing a new fellowship representative to the General Assembly (GA) and I was surprised to find that out of seven very deserving candidates, members of the fellowship have selected me to represent them on the GA.

I'd like to thank all those who voted, the other candidates and Erik Albers for his efforts to administer this annual process.

Please consider becoming an FSFE fellow or donor

The FSFE runs on the support of both volunteers and financial donors, including organizations and individual members of the fellowship program. The fellowship program is not about money alone, it is an opportunity to become more aware of and involved in the debate about technology's impact on society, for better or worse. Developers, users and any other supporters of the organization's mission are welcome to join, here is the form. You don't need to be a fellow or pay any money to be an active part of the free software community and FSFE events generally don't exclude non-members, nonetheless, becoming a fellow gives you a stronger voice in processes such as this annual election.

Attending OSCAL'17, Tirana

During the election period, I promised to keep on doing the things I already do: volunteering, public speaking, mentoring, blogging and developing innovative new code. During May I hope to attend several events, including OSCAL'17 in Tirana, Albania on 13-14 May. I'll be running a workshop there on the Debian Hams blend and Software Defined Radio. Please come along and encourage other people you know in the region to consider attending.

What is your view on the Fellowship and FSFE structure?

Several candidates made comments about the Fellowship program and the way individual members and volunteers are involved in FSFE governance. This is not a new topic, and debate about it is very welcome; I would be particularly interested to hear any concerns or ideas for improvement that people may contribute. One of the best places to share these ideas would be the FSFE's discussion list.

In any case, the fellowship representative cannot single-handedly overhaul the organization. I hope to be a constructive part of the team, and that whenever my term comes to an end, the organization and the free software community in general will be stronger and happier in some way.

Project Idea - PI Sw1tch

Posted by Mo Morsi on April 25, 2017 12:07 PM

While gaming is not high on my agenda anymore (... or rather at all), I have recently been mulling buying a new console, to serve as a home entertainment center as much as a gaming system.

Having owned several generations of PlayStation and Sega products, I found a few new consoles catching my eye. While the most "open" solution, the Steambox, sort of fizzled out, Nintendo's latest console, the Switch, does seem to stand out from the crowd. The balance between power and portability looks like a good fit, and given Nintendo's previous successes, it wouldn't be surprising if it became a hit.

In addition to serving the separate home and mobile gaming markets, new entertainment systems need to provide seamless integration between the two environments, as well as offer comprehensive data and information access capabilities. After all, what would be the point of a gaming tablet if you couldn't watch YouTube on it! Neal Stephenson recently touched on this in his latest TechCrunch talk, expressing a vision of technology that is more integrated / synergized with our immediate environment. While mobile solutions these days offer a lot in terms of processing power, nothing quite offers the comfort or immersion that a console / home entertainment solution provides (not to mention that mobile phones make horrendous interfaces for gaming purposes!)

Being the geek that I am, this naturally led me to thinking about developing a hybrid mechanism of my own, based on open / existing solutions, so that it could be prototyped and demonstrated quickly. Having recently bought a Raspberry PI (after putting my Arduino to use in my last microcontroller project), and a few other odds and ends, I whipped up the following:

Pi sw1tch

The idea is simple: the Raspberry PI would act as the 'console', with a plethora of games and 'apps' available (via open repositories, Steam, emulators, and many more... not to mention Nethack!). It would be anchorable to the wall, desk, or any other surface using a 3D-printed mount, and made portable via a cheap wireless controller / LCD display / battery pack setup (tied together through another custom 3D-printed bracket). The entire rig could be assembled quickly and would be easy to use: simply snap the PI into the wall mount to play on your TV; remove it and snap it into the controller bracket to take it on the go.

I suspect the power component is going to be the most difficult to nail down, finding an affordable USB power source that is lightweight but offers sufficient juice to drive the Raspberry PI w/ LCD might be tricky. But if this is done correctly, all components will be interchangeable, and one can easily plug in a lower-power microcontroller and/or custom hardware component for a tailored experience.

If there is any interest, let me know via email. If 3 or so people commit, this could be done in a weekend! (stay tuned for updates!)

Episode 44 - Bug Bounties vs Pen Testing

Posted by Open Source Security Podcast on April 25, 2017 12:06 PM
Josh and Kurt discuss Lego, bug bounties, pen testing, thought leadership, cars, lemons, entropy, and CVE.

Download Episode
<iframe frameborder="no" height="150" scrolling="no" src="https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/tracks/319388588&amp;auto_play=false&amp;hide_related=false&amp;show_comments=true&amp;show_user=true&amp;show_reposts=false&amp;visual=true" width="100%"></iframe>

Show Notes

QElectroTech on the road to 0.6

Posted by Remi Collet on April 25, 2017 09:12 AM

RPMs of QElectroTech version 0.6rc1 (release candidate), an application to design electric diagrams, are available in remi-test for Fedora and Enterprise Linux 7.

While version 0.5, available in the official repository, is already 16 months old, the project is working on a new major version of its electric diagrams editor.

Official web site : http://qelectrotech.org/.

Installation by YUM :

yum --enablerepo=remi-test install qelectrotech

RPMs (version 0.60~rc1-1) are available for Fedora ≥ 23 and Enterprise Linux 7 (RHEL, CentOS, ...)
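On Fedora releases where dnf has replaced yum, the equivalent invocation should be the following (a sketch, assuming the remi and remi-test repository definitions are already installed):

```shell
# Install QElectroTech from the remi-test repository using dnf
dnf --enablerepo=remi-test install qelectrotech
```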

Follow this entry, which will be updated on each new version (beta, RC, ...) until the final version is released.

Notice: a Copr / QElectroTech repository also exists, which provides "development" versions (0.60-dev for now).

FLISoL 2017 | Panamá

Posted by Jose A Reyes H on April 25, 2017 05:04 AM
Last Saturday, April 22, FLISoL was held at the Universidad Tecnológica de Panamá, with a total of 150 attendees over the course of the event, including students and professionals from several provinces of the country: Panamá Oeste, Colón and Panamá. Many…

Viewing JSON in Firefox

Posted by Eduardo Villagrán Morales on April 25, 2017 02:07 AM
For a while, Firefox had been rendering JSON responses loaded via URL in a friendly viewer:
For some reason, after an update, it stopped doing so and displayed JSON responses as plain text:
To re-enable this view, enter about:config in the URL bar. This opens Firefox's advanced configuration. On entering, you will see a warning that the changes you make could cause problems, which is true for some settings.
After accepting the risk, type "json" into the search field and then set "devtools.jsonview.enabled" to true.
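The same preference can also be set persistently from the command line via the profile's user.js file; a minimal sketch (the profile directory name is an example, find yours under ~/.mozilla/firefox/):

```shell
# Append the pref to the profile's user.js so it survives upgrades.
# The profile folder name below is illustrative, not a real path.
profile=~/.mozilla/firefox/abcd1234.default
echo 'user_pref("devtools.jsonview.enabled", true);' >> "$profile/user.js"
```

Firefox reads user.js at startup and copies its values into the active configuration.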

Automated critical path update functional testing for Fedora

Posted by Adam Williamson on April 25, 2017 01:08 AM

A little thing I’ve been working on lately finally went live today…this thing:

openQA test results in Bodhi

Several weeks ago now, I adapted Fedora’s openQA to run an appropriate subset of tests on critical path updates. We originally set up our openQA deployment strictly to run tests at the distribution compose level, but I noticed that most of the post-install tests would actually also be quite useful things to test for critical path updates, too.

First, I set up a slightly different openQA workflow that starts from an existing disk image of a clean installed system, downloads the packages from a given update, sets up a local repository containing the packages, and runs dnf -y update before going ahead with the main part of the test.
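Sketched as shell commands, that setup step looks roughly like the following (the paths, repo id, and tooling here are my own illustration, not the actual openQA worker code):

```shell
# Illustrative sketch: turn a directory of downloaded update RPMs into a
# local dnf repository, then update the clean image from it.
mkdir -p /tmp/update-under-test
cd /tmp/update-under-test
# (the worker downloads the update's RPMs into this directory first)
createrepo_c .                      # generate repodata for the local packages
cat > /etc/yum.repos.d/update-under-test.repo <<'EOF'
[update-under-test]
name=Update under test
baseurl=file:///tmp/update-under-test
enabled=1
gpgcheck=0
EOF
dnf -y update                       # apply the update before the main test part
```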

Then, I adapted our openQA scheduler to trigger this workflow whenever a critical path update is submitted or edited, and forward the results to ResultsDB.

All of this went into production a few weeks ago, and the tests have been run on every critical path update since then. But there was a big piece missing: making the information easily available to the update submitter (and anyone else interested). I wanted to make the results visible in Bodhi, alongside the Taskotron results, so I sent a patch for Bodhi, and the new Bodhi release with that change included was deployed to production today.

The last two Bodhi releases actually make some other great improvements to the display of automated test results, thanks to Ryan Lerch and Randy Barlow. The results are actually retrieved from ResultsDB by client-side Javascript every time someone views an update. Previously, this was done quite inefficiently and the results were shown at the top of the main update page, which meant they would show up piecemeal for several seconds after the page had mostly loaded, which was rather annoying especially for large updates.

Now the results are retrieved in a much more efficient manner and shown on a separate tab, where a count of the results is displayed once they’ve all been retrieved.

So with Bodhi 2.6, you should have a much more pleasant experience viewing automated test results in Bodhi’s web UI – and for critical path updates, you’ll now see results from openQA functional testing as well as Taskotron tests!

At present, the tests openQA runs fall into three main groups:

  1. Some simple ‘base’ tests, which check that SELinux is enabled, service manipulation (enabling, disabling, starting and stopping services) works, no default-enabled services fail to start, and updating the system with dnf works.

  2. Some desktop tests (currently run only on GNOME): launching and using a graphical terminal works, launching Firefox and doing some basic tests in it works, and updating the system with the graphical updater (GNOME Software in GNOME’s case) works.

  3. Some server tests: is the firewall configured and working as expected, is Cockpit enabled by default, and does it basically work, and both server and client tests for the database server (PostgreSQL) and domain controller (FreeIPA) server roles.

So if any of these fail for a critical path update, you should be able to see it. You can click any of the results to see the openQA webUI view of the test.

At present you cannot request a re-run of a single test; we're thinking about mechanisms for allowing this. You can cause the entire set of openQA tests to be run again by editing the update: you don't have to add or remove any builds; any kind of edit (just changing a character in the description) will do.

If you need help interpreting any openQA test results, please ask on the test@ mailing list or drop by #fedora-qa. garretraziel or I should be available there most of the time.

Please do send along any thoughts, questions, suggestions or complaints to test@ or as a comment on this blog post. We’ll certainly be looking to extend and improve this system in future!

systemd ❤ meson

Posted by Zbigniew Jędrzejewski-Szmek on April 24, 2017 10:00 PM

After hearing good things about meson for a long time, I decided to take the plunge and started working on porting the build system of systemd to meson. In our case "build system" is really a system — 11.5k lines in configure.ac and two Makefile.am files. This undertaking was bigger than I expected. Even though I had the initial patch compiling most of the code after a weekend of work, it took another three weeks and 80 patches [1] to bring it to a mergeable state. There are still minor issues outstanding, but the pull request has been merged, so I want to take the opportunity to celebrate and summarize my impressions of meson.

It has been an immense privilege and pleasure to receive feedback and advice from contributors on both sides. On the systemd side, contributors Michael Biebl, Evgeny Vereshchagin, Michael Olbrich, Mike Gilbert, and Lennart Poettering reviewed the pull request multiple times, providing a long stream of issues to fix, along with hints and patches. But also from the other side, meson contributors Igor Gnatenko, Jussi Pakkanen, Nirbheek Chauhan, and TingPing reviewed the patchset and provided many useful suggestions. In addition, since systemd is a fairly complicated project, I filed quite a few bugs against meson [2], received responses to many of them immediately, and some have already been fixed. This makes me very optimistic about meson's future — even though there are some shortcomings, the community is extremely responsive and meson seems to improve very quickly.

Finally I want to give a shout out to Michal Sojka who wrote meson-mode for emacs and also fixed all reported bugs incredibly quickly.

Why meson?

In case you didn't know, meson is a Python-based build configuration system that performs detection and configuration and generates ninja rules to do the actual compilation [3]. The project is young — it was started right before Christmas 2012 — but it has recently been picked up by various high-profile projects including mesa, gstreamer, gnome-builder, etc.

Why would one want to replace a working build system with something new?

For systemd, there are basically 2½ reasons:

  • the build is faster. This sounds like a minor issue, but quick builds make development easier. Detailed statistics are provided below [update: will be provided in a subsequent note, this one is long enough already], but the summary is that under meson a full configuration and build is an order of magnitude faster, and for partial rebuilds the gap is even bigger. To quote Lennart Poettering

    you can't overestimate the relevance of the speed of building systemd: it's one of the most defining factors of making hacking systemd fun, and keeping people focused.

  • the configuration language is simpler. For historical reasons, autoconf uses a mixture of m4 and shell, and automake has its own DSL that is similar-but-not-the-same as make. Under meson this is replaced by a single pythonesque language that is used to declare the configuration options, environment checks, dependencies, and compilation and installation rules. This lowers the bar for contributors, and removes many gotchas.

    To get a taste for the syntax, compare meson

    if cc.has_function('getrandom', prefix : '''#include <sys/random.h>''')
             conf.set('USE_SYS_RANDOM_H', 1,
                      description: 'sys/random.h is usable')
             conf.set10('HAVE_DECL_GETRANDOM', 1)
    else
             have = cc.has_function('getrandom', prefix : '''#include <linux/random.h>''')
             conf.set10('HAVE_DECL_GETRANDOM', have)
    endif

    with autoconf

    AS_IF([test "x$ac_cv_header_sys_random_h" = "xyes"],
          [AC_DEFINE([USE_SYS_RANDOM_H], [], [sys/random.h is usable])
           AC_CHECK_DECLS([getrandom], [], [], [[
#include <sys/random.h>
]])],
          [AC_CHECK_DECLS([getrandom], [], [], [[
#include <linux/random.h>
]])])
    As a trivial example, we had occasional bugs in the old build system where a line continuation was omitted (e.g. c22569eeea, fe582db94b) resulting in strange build failures. Under the new build system I get a useful error message:

    Meson encountered an error in file src/shared/meson.build, line 114, column 44:
    Expecting rbracket got comma.
            shared_sources += ['seccomp-util.c',,]
    Meson encountered an error in file src/shared/meson.build, line 115, column 27:
    Expecting rbracket got string.
    For a block that started at 114,26
            shared_sources += ['seccomp-util.c'
  • the half reason is that meson + ninja provide slightly better error reporting in the case of build failures. By default the compilation log is very terse, with \r used to constantly overwrite the status, keeping an uneventful build to one line. But when something goes wrong, the command is printed along with its full output. The result is especially superior in the case of multi-threaded compilation: on my workstation I normally use make -j12, and under make finding the error requires scrolling back through pages of interleaved logs to find the failure point.

    The way that the commands themselves are reported is also nicer under ninja. Compare meson:

    FAILED: src/shared/systemd-shared-233@sha/seccomp-util.c.o
    ccache cc '-Isrc/shared/systemd-shared-233@sha' '-Isrc/shared'
    '-Isrc/basic' '-Isrc/journal' '-I../src/shared'
    '-Isrc/libsystemd-network' '-I../src/libsystemd-network'
    '-I../src/libsystemd/sd-network' '-I../src/libsystemd/sd-netlink'
    '-I../src/libsystemd/sd-id128' '-I../src/libsystemd/sd-hwdb'
    '-I../src/libsystemd/sd-device' '-I../src/libsystemd/sd-bus'
    '-Isrc/core' '-I../src/core' '-Isrc/libudev' '-I../src/libudev'
    '-Isrc/udev' '-I../src/udev' '-Isrc/login' '-I../src/login'
    '-Isrc/timesync' '-I../src/timesync' '-Isrc/resolve'
    '-I../src/resolve' '-I../src/journal' '-Isrc/systemd'
    '-I../src/systemd' '-I../src/basic' '-I/usr/include/blkid'
    '-I/usr/include/uuid' '-fdiagnostics-color=always' '-pipe'
    '-D_FILE_OFFSET_BITS=64' '-Wall' '-Winvalid-pch' '-std=gnu99'
    '-O0' '-g' '-Wundef' '-Wlogical-op' '-Wmissing-include-dirs'
    '-Wold-style-definition' '-Wpointer-arith' '-Winit-self'
    '-Wdeclaration-after-statement' '-Wfloat-equal'
    '-Wsuggest-attribute=noreturn' '-Werror=missing-prototypes'
    '-Werror=missing-declarations' '-Werror=return-type'
    '-Werror=incompatible-pointer-types' '-Werror=format=2'
    '-Wstrict-prototypes' '-Wredundant-decls' '-Wmissing-noreturn'
    '-Wshadow' '-Wendif-labels' '-Wstrict-aliasing=2'
    '-Wwrite-strings' '-Wno-unused-parameter'
    '-Wno-missing-field-initializers' '-Wno-unused-result'
    '-Wno-format-signedness' '-Werror=overflow' '-Wdate-time'
    '-Wnested-externs' '-ffast-math' '-fno-common'
    '-fdiagnostics-show-option' '-fno-strict-aliasing'
    '-fvisibility=hidden' '-fstack-protector'
    '-fstack-protector-strong' '-fPIE' '--param=ssp-buffer-size=4'
    '-Werror=shadow' '-include' 'config.h' '-fPIC' '-pthread'
    '-fvisibility=default' '-MMD' '-MQ'
    'src/shared/systemd-shared-233@sha/seccomp-util.c.o' '-MF'
    'src/shared/systemd-shared-233@sha/seccomp-util.c.o.d' -o
    'src/shared/systemd-shared-233@sha/seccomp-util.c.o' -c
    ../src/shared/seccomp-util.c: In function ‘seccomp_restrict_archs’:
    ../src/shared/seccomp-util.c:1305:15: error: expected ‘=’, ‘,’, ‘;’,
                                          ‘asm’ or ‘__attribute__’ before ‘r’
             int r r;
    ../src/shared/seccomp-util.c:1305:15: error: ‘r’ undeclared…

    with automake + libtool (make V=1):

    /bin/sh ./libtool --tag=CC --mode=compile gcc -DHAVE_CONFIG_H
    -I. -I..  -include ./config.h -DPKGSYSCONFDIR=\"/etc/systemd\"
    -DROOTPREFIX=\"/usr\" -DRANDOM_SEED_DIR=\"/var/lib/systemd/\"
    -DQUOTACHECK=\"/usr/sbin/quotacheck\" -DKEXEC=\"/usr/sbin/kexec\"
    -DUMOUNT_PATH=\"/usr/bin/umount\" -DLIBDIR=\"/usr/lib64\"
    -DROOTLIBDIR=\"/usr/lib64\" -DROOTLIBEXECDIR=\"/usr/lib/systemd\"
    -I ../src -I ./src/basic -I ../src/basic -I ../src/shared
    -I ./src/shared -I ../src/network -I ../src/locale -I ../src/login
    -I ../src/journal -I ./src/journal -I ../src/timedate
    -I ../src/timesync -I ../src/nspawn -I ../src/resolve
    -I ./src/resolve -I ../src/systemd -I ./src/core -I ../src/core
    -I ../src/libudev -I ../src/udev -I ../src/udev/net -I ./src/udev
    -I ../src/libsystemd/sd-bus -I ../src/libsystemd/sd-event
    -I ../src/libsystemd/sd-login -I ../src/libsystemd/sd-netlink
    -I ../src/libsystemd/sd-network -I ../src/libsystemd/sd-hwdb
    -I ../src/libsystemd/sd-device -I ../src/libsystemd/sd-id128
    -I ../src/libsystemd-network
    -D__SANE_USERSPACE_TYPES__ -pipe -Wall -Wextra -Wundef -Wlogical-op
    -Wmissing-include-dirs -Wold-style-definition -Wpointer-arith -Winit-self
    -Wdeclaration-after-statement -Wfloat-equal -Wsuggest-attribute=noreturn
    -Werror=missing-prototypes -Werror=implicit-function-declaration
    -Werror=missing-declarations -Werror=return-type -Werror=incompatible-pointer-types
    -Werror=format=2 -Wstrict-prototypes -Wredundant-decls -Wmissing-noreturn
    -Wshadow -Wendif-labels -Wstrict-aliasing=2 -Wwrite-strings
    -Wno-unused-parameter -Wno-missing-field-initializers -Wno-unused-result
    -Wno-format-signedness -Werror=overflow -Wdate-time -Wnested-externs
    -ffast-math -fno-common -fdiagnostics-show-option -fno-strict-aliasing
    -fvisibility=hidden -fstack-protector -fstack-protector-strong -fPIE
    --param=ssp-buffer-size=4 -Werror=shadow
    -D__SANE_USERSPACE_TYPES__ -pipe -Wall -Wextra -Wundef -Wlogical-op
    -Wmissing-include-dirs -Wold-style-definition -Wpointer-arith -Winit-self
    -Wdeclaration-after-statement -Wfloat-equal -Wsuggest-attribute=noreturn
    -Werror=missing-prototypes -Werror=implicit-function-declaration
    -Werror=missing-declarations -Werror=return-type -Werror=incompatible-pointer-types
    -Werror=format=2 -Wstrict-prototypes -Wredundant-decls -Wmissing-noreturn
    -Wshadow -Wendif-labels -Wstrict-aliasing=2 -Wwrite-strings
    -Wno-unused-parameter -Wno-missing-field-initializers -Wno-unused-result
    -Wno-format-signedness -Werror=overflow -Wdate-time -Wnested-externs
    -ffast-math -fno-common -fdiagnostics-show-option -fno-strict-aliasing
    -fvisibility=hidden -fstack-protector -fstack-protector-strong -fPIE
    --param=ssp-buffer-size=4 -Werror=shadow
    -D__SANE_USERSPACE_TYPES__ -pipe -Wall -Wextra -Wundef -Wlogical-op
    -Wmissing-include-dirs -Wold-style-definition -Wpointer-arith -Winit-self
    -Wdeclaration-after-statement -Wfloat-equal -Wsuggest-attribute=noreturn
    -Werror=missing-prototypes -Werror=implicit-function-declaration
    -Werror=missing-declarations -Werror=return-type -Werror=incompatible-pointer-types
    -Werror=format=2 -Wstrict-prototypes -Wredundant-decls -Wmissing-noreturn
    -Wshadow -Wendif-labels -Wstrict-aliasing=2 -Wwrite-strings
    -Wno-unused-parameter -Wno-missing-field-initializers -Wno-unused-result
    -Wno-format-signedness -Werror=overflow -Wdate-time -Wnested-externs
    -ffast-math -fno-common -fdiagnostics-show-option -fno-strict-aliasing
    -fvisibility=hidden -fstack-protector -fstack-protector-strong -fPIE
    --param=ssp-buffer-size=4 -Werror=shadow
    -I/usr/include/blkid -I/usr/include/uuid
    -D__SANE_USERSPACE_TYPES__ -pipe -Wall -Wextra -Wundef -Wlogical-op
    -Wmissing-include-dirs -Wold-style-definition -Wpointer-arith -Winit-self
    -Wdeclaration-after-statement -Wfloat-equal -Wsuggest-attribute=noreturn
    -Werror=missing-prototypes -Werror=implicit-function-declaration
    -Werror=missing-declarations -Werror=return-type -Werror=incompatible-pointer-types
    -Werror=format=2 -Wstrict-prototypes -Wredundant-decls -Wmissing-noreturn
    -Wshadow -Wendif-labels -Wstrict-aliasing=2 -Wwrite-strings
    -Wno-unused-parameter -Wno-missing-field-initializers -Wno-unused-result
    -Wno-format-signedness -Werror=overflow -Wdate-time -Wnested-externs
    -ffast-math -fno-common -fdiagnostics-show-option -fno-strict-aliasing
    -fvisibility=hidden -fstack-protector -fstack-protector-strong -fPIE
    --param=ssp-buffer-size=4 -Werror=shadow -Wno-pointer-arith
    -I/usr/include/uuid -fvisibility=default -g -O0 -MT
    src/shared/libsystemd_shared_la-seccomp-util.lo -MD -MP -MF
    src/shared/.deps/libsystemd_shared_la-seccomp-util.Tpo -c -o
    src/shared/libsystemd_shared_la-seccomp-util.lo `test -f
    'src/shared/seccomp-util.c' || echo
    libtool: compile: gcc -DHAVE_CONFIG_H -I. -I..
    -include ./config.h
    -DROOTPREFIX=\"/usr\" -DRANDOM_SEED_DIR=\"/var/lib/systemd/\"
    -DQUOTACHECK=\"/usr/sbin/quotacheck\" -DKEXEC=\"/usr/sbin/kexec\"
    -DUMOUNT_PATH=\"/usr/bin/umount\" -DLIBDIR=\"/usr/lib64\"
    -DROOTLIBDIR=\"/usr/lib64\" -DROOTLIBEXECDIR=\"/usr/lib/systemd\"
    -I ../src -I ./src/basic -I ../src/basic -I ../src/shared
    -I ./src/shared -I ../src/network -I ../src/locale -I ../src/login
    -I ../src/journal -I ./src/journal -I ../src/timedate
    -I ../src/timesync -I ../src/nspawn -I ../src/resolve
    -I ./src/resolve -I ../src/systemd -I ./src/core -I ../src/core
    -I ../src/libudev -I ../src/udev -I ../src/udev/net -I ./src/udev
    -I ../src/libsystemd/sd-bus -I ../src/libsystemd/sd-event
    -I ../src/libsystemd/sd-login -I ../src/libsystemd/sd-netlink
    -I ../src/libsystemd/sd-network -I ../src/libsystemd/sd-hwdb
    -I ../src/libsystemd/sd-device -I ../src/libsystemd/sd-id128
    -I ../src/libsystemd-network
    -D__SANE_USERSPACE_TYPES__ -pipe -Wall -Wextra -Wundef -Wlogical-op
    -Wmissing-include-dirs -Wold-style-definition -Wpointer-arith -Winit-self
    -Wdeclaration-after-statement -Wfloat-equal -Wsuggest-attribute=noreturn
    -Werror=missing-prototypes -Werror=implicit-function-declaration
    -Werror=missing-declarations -Werror=return-type -Werror=incompatible-pointer-types
    -Werror=format=2 -Wstrict-prototypes -Wredundant-decls -Wmissing-noreturn
    -Wshadow -Wendif-labels -Wstrict-aliasing=2 -Wwrite-strings
    -Wno-unused-parameter -Wno-missing-field-initializers -Wno-unused-result
    -Wno-format-signedness -Werror=overflow -Wdate-time -Wnested-externs
    -ffast-math -fno-common -fdiagnostics-show-option -fno-strict-aliasing
    -fvisibility=hidden -fstack-protector -fstack-protector-strong -fPIE
    --param=ssp-buffer-size=4 -Werror=shadow
    -D__SANE_USERSPACE_TYPES__ -pipe -Wall -Wextra -Wundef -Wlogical-op
    -Wmissing-include-dirs -Wold-style-definition -Wpointer-arith -Winit-self
    -Wdeclaration-after-statement -Wfloat-equal -Wsuggest-attribute=noreturn
    -Werror=missing-prototypes -Werror=implicit-function-declaration
    -Werror=missing-declarations -Werror=return-type -Werror=incompatible-pointer-types
    -Werror=format=2 -Wstrict-prototypes -Wredundant-decls -Wmissing-noreturn
    -Wshadow -Wendif-labels -Wstrict-aliasing=2 -Wwrite-strings
    -Wno-unused-parameter -Wno-missing-field-initializers -Wno-unused-result
    -Wno-format-signedness -Werror=overflow -Wdate-time -Wnested-externs
    -ffast-math -fno-common -fdiagnostics-show-option -fno-strict-aliasing
    -fvisibility=hidden -fstack-protector -fstack-protector-strong -fPIE
    --param=ssp-buffer-size=4 -Werror=shadow
    -D__SANE_USERSPACE_TYPES__ -pipe -Wall -Wextra -Wundef -Wlogical-op
    -Wmissing-include-dirs -Wold-style-definition -Wpointer-arith -Winit-self
    -Wdeclaration-after-statement -Wfloat-equal -Wsuggest-attribute=noreturn
    -Werror=missing-prototypes -Werror=implicit-function-declaration
    -Werror=missing-declarations -Werror=return-type -Werror=incompatible-pointer-types
    -Werror=format=2 -Wstrict-prototypes -Wredundant-decls -Wmissing-noreturn
    -Wshadow -Wendif-labels -Wstrict-aliasing=2 -Wwrite-strings
    -Wno-unused-parameter -Wno-missing-field-initializers -Wno-unused-result
    -Wno-format-signedness -Werror=overflow -Wdate-time -Wnested-externs
    -ffast-math -fno-common -fdiagnostics-show-option -fno-strict-aliasing
    -fvisibility=hidden -fstack-protector -fstack-protector-strong -fPIE
    --param=ssp-buffer-size=4 -Werror=shadow
    -I/usr/include/blkid -I/usr/include/uuid
    -D__SANE_USERSPACE_TYPES__ -pipe -Wall -Wextra -Wundef -Wlogical-op
    -Wmissing-include-dirs -Wold-style-definition -Wpointer-arith -Winit-self
    -Wdeclaration-after-statement -Wfloat-equal -Wsuggest-attribute=noreturn
    -Werror=missing-prototypes -Werror=implicit-function-declaration
    -Werror=missing-declarations -Werror=return-type -Werror=incompatible-pointer-types
    -Werror=format=2 -Wstrict-prototypes -Wredundant-decls -Wmissing-noreturn
    -Wshadow -Wendif-labels -Wstrict-aliasing=2 -Wwrite-strings
    -Wno-unused-parameter -Wno-missing-field-initializers -Wno-unused-result
    -Wno-format-signedness -Werror=overflow -Wdate-time -Wnested-externs
    -ffast-math -fno-common -fdiagnostics-show-option -fno-strict-aliasing
    -fvisibility=hidden -fstack-protector -fstack-protector-strong -fPIE
    --param=ssp-buffer-size=4 -Werror=shadow -Wno-pointer-arith
    -I/usr/include/uuid -fvisibility=default -g -O0 -MT
    src/shared/libsystemd_shared_la-seccomp-util.lo -MD -MP -MF
    src/shared/.deps/libsystemd_shared_la-seccomp-util.Tpo -c
    ../src/shared/seccomp-util.c -fPIC -DPIC -o
    ../src/shared/seccomp-util.c: In function ‘seccomp_restrict_archs’:
    ../src/shared/seccomp-util.c:1305:15: error: expected ‘=’, ‘,’, ‘;’,
                                          ‘asm’ or ‘__attribute__’ before ‘r’
             int r r;
    ../src/shared/seccomp-util.c:1305:15: error: ‘r’ undeclared…
    Makefile:18833: recipe for target 'src/shared/libsystemd_shared_la-seccomp-util.lo' failed
    make: *** [src/shared/libsystemd_shared_la-seccomp-util.lo] Error 1
    make: Leaving directory '/home/zbyszek/src/systemd/build-autotools'

    With meson it is natural to put all the defines into config.h, which reduces clutter. It also prints the final command line, while automake first calls libtool (and also includes a shell check to determine if the input file is in build or source directory), and libtool calls the compiler, so the same stuff is printed twice.

    (Yes, some of the compilation options are needlessly repeated in both cases. Under automake, they seem to be repeated quite a bit. I think nobody really noticed before; this stuff is so painful to look at.)

    I graded this as half of a reason, because autotools does a pretty good job with its V=0/1/2 options, and customizable prefixes for different commands. I think there's still room for improvement under meson. For example, the command line would be much easier to read if the unneeded quoting was dropped.
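To make the speed and simplicity comparison concrete, here is roughly what the day-to-day developer loop looks like under each system (a sketch; the flags and directory names are illustrative):

```shell
# autotools: regenerate, configure, build (all in the source tree)
./autogen.sh && ./configure && make -j12

# meson: configure once into a separate build directory, then build with ninja;
# ninja re-runs the configuration step automatically when meson.build changes
meson build && ninja -C build
```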


In the end, the meson patchset is 76 files with 7.5k lines. Pretty much all options that the old build system supported are also available under the new one. One thing which doesn't yet work is builds with link-time optimization.

One area in which meson seems inferior to Makefiles is in support for shell pipelines. In systemd we have a lot of semi-automatically generated sources, e.g.:

     $(AM_V_at)$(MKDIR_P) $(dir $@)
     $(AM_V_GEN)$(CPP) $(CFLAGS) $(AM_CPPFLAGS) $(CPPFLAGS) \
          -dM -include net/if_arp.h - </dev/null | \
          $(AWK) '/^#define[ \t]+ARPHRD_[^ \t]+[ \t]+[^ \t]/ { print $$2; }' | \
          sed -e 's/ARPHRD_//' >$@

Under meson, this has to be exported to an external script (hence the total of 76 files), or stuffed into a target which calls sh -c '...' inline. The first solution results in a proliferation of tiny files. There's also the problem that there's no easy way to pass $CPP, $AWK, $CPPFLAGS, because those paths are not exported as variables. The second solution is hard to read, and program paths have to be substituted through awkward string formatting. There's also the issue that meson does not escape regexp-y characters correctly. I hope that in the future meson will get a way to construct calls to arbitrary external programs inline.

All said, this was my very first experience with meson (I hadn't even compiled anything with ninja before), and (with lots of help from people) I built a replacement which is almost a match in features and works significantly faster than a build system which had had years to mature. I wouldn't be surprised if meson became the default Linux build system.


[1] I include a changelog on patches that have been updated in response to comments, and the longest one is currently at v12. Even though it technically is still one patch, it doesn't feel like that.

Also, this count excludes "cleanup" patches that came out of this work, but were submitted as separate pull requests because they impact autotools and meson builds equally.

[3] meson supports other backends, but since systemd is Linux-only, those are not relevant.

Nethack Encyclopedia Reduxd

Posted by Mo Morsi on April 24, 2017 05:23 PM

I've been working on way too many projects recently... Alas, I was able to slip in some time to update the NetHack Encyclopedia app on the Android MarketPlace (first released nearly 5 years ago!).

Version 5.3 brings several features, including new useful tools. The first is the Message Searcher, which allows the user to quickly query the many cryptic game messages by substring & context. Additionally, the Game Tracker has been implemented, facilitating player, item, and level identification in a persistent manner. Simply enter entity attributes as they are discovered, and the tracker will deduce the remaining missing information based on its internal algorithm. This is on top of many enhancements to the backend, including the incorporation of a searchable item database.

The logic of the application has been heavily refactored & cleaned up; the code has come a long way since first being written. By and large, I feel pretty comfortable with the Android platform at this point. It has its nuances, but all platforms do, and it's pretty easy to go from concept to implementation.

As far as the game itself, I have a ways to go before retrieving the Amulet! It's quite a challenge, but you learn with every replay, and thus you get closer. Ascension will be mine! (someday)

Nethack 5.3 screen1 Nethack 5.3 screen2 Nethack 5.3 screen3 Nethack 5.3 screen4

Lessons on Aikido and Life via Splix

Posted by Mo Morsi on April 24, 2017 05:23 PM

Recently I stumbled upon splix, my new obsession: a game with simple mechanics that unfold into a complex competitive challenge requiring fast reflexes and dynamic tactics.

Splix intro

At the core the rule set is very simple:

  • surround territory to claim it
  • do not allow other players to hit your tail (you lose... game over)

Splix overextended

While in your territory you have no tail, rendering you invulnerable, but during battles territory is always changing, and you don't want to get caught deep on an attack just to be surrounded by an enemy who swaps the territory alignment to his!

Splix deception

The simple dynamic yields an unbelievable amount of strategy & tactics to excel at, while at the same time requiring quick calculation and planning. A foolhardy player will just rush into enemy territory to attempt to capture squares and attack his opponent, but a smart player will bait his opponent into his sphere of influence through tactful strikes and misdirection.

Splix bait

Furthermore we see age old adages such as "better to run and fight another day" and the wisdom of pitting opponents against each other. Alliances are always shifting in splix; it simply takes a single tap from any other player to end your game. So while you may be momentarily coordinating with another player to surround and obliterate a third, watch your back, as the alliance may dissolve at the first opportunity (not to mention the possibility of outside players appearing at any time!)

Splix alliance

All in all, I've found careful observation and quick action to yield the most successful results on the battlefield. The ideal kill is from behind an opponent who has perilously invaded your territory too deeply. Beyond this, lurking at the border so as to goad the enemy into a foolhardy / reckless attack is a robust tactic, provided you have built up the reflexes and coordination to quickly move in and out of territory which is constantly changing. Make sure you don't fall victim to your own trick and overpenetrate the enemy border!

Splix bait2

Another tactic to deal w/ an overly aggressive opponent is to fall back slightly into your safe zone, then quickly return to the front afterwards, perhaps at a different angle or via a different route. Often a novice opponent will see the retreat as a sign of fear or weakness and become overconfident, penetrating deep into your territory in the hopes of securing a large portion quickly. By returning to the front at an unexpected moment, you will catch the opponent off guard and be able to destroy them before they have a chance to retreat to their safe zone.

Splix draw out

Of course if the opponent employs the same strategy, a player can take a calculated risk and drive a distance into the enemy territory before returning to the safe zone. By paying attention to the percentage of visible territory which the player's vulnerability zone occupies, and to the relative position of the opponent, they should be able to gauge the distance to which they can extend so as to ensure a safe return. Taking large amounts of territory quickly is psychologically damaging to an opponent, especially one undergoing attacks on multiple fronts.

Splix draw out2

If all else fails to overcome a strong opponent, a reasonable retreat followed by an alternate attack vector may result in success. Since in splix we know that a safe zone corresponds to only one enemy, if we can gauge / guess where they are, we can attempt to alter the dynamics of the battle accordingly. If we see that an opponent has stretched far beyond the mass of his safe zone via a single / thin channel, we can attempt to cut him off, preventing a retreat without crossing our sphere of influence.

Splix changing

This dynamic becomes even more pronounced if we can encircle an opponent and start slowly reducing his control of the board. By methodically and gradually taking enemy territory we can drive an opponent in a desired direction, perhaps towards a wall or another player.

Splix tactics2

Regardless of the situation, the true strategist will always be shuffling his tactics and actions to adapt to the board and set up the conditions for guaranteed victory. At no point should another player be underestimated or trusted. Even a new player with little territory can pose a threat to the top of the leaderboard given the right conditions and timing. The victorious will stay calm in the heat of battle, and use careful observation, timing, and quick reflexes to win the game.

(endnote: the game *requires* a keyboard; it can be played via smartphone (swiping) but the arrow keys yield the fastest feedback)


Posted by Bodhi on April 24, 2017 04:58 PM

Special instructions

  1. The database migrations have been trimmed in this release. To upgrade to this version of Bodhi
    from a version prior to 2.3, first upgrade to Bodhi 2.3, 2.4, or 2.5, run the database
    migrations, and then upgrade to this release.
  2. Bodhi cookies now expire, but cookies created before 2.6.0 will not automatically expire. To
    expire all existing cookies so that only expiring tickets exist, you will need to change
    authtkt.secret to a new value in your settings file.

Dependency adjustments

  • zope.sqlalchemy is no longer a required dependency (#1414).
  • WebOb is no longer a directly required dependency, though it is still indirectly required through


  • The web UI footer has been restyled to fit better with the new theme (#1366).
  • A link to documentation has been added to the web UI footer (#1321).
  • The bodhi CLI now supports editing updates (#937).
  • The CLI's USERNAME environment variable is now documented, and its --user flag is
    clarified (28dd380).
  • The icons that we introduced in the new theme previously didn't have titles.
    Consequently, a user might not have known what these icons meant. Now if a user
    hovers over these icons, they get a description of what they mean, for
    example: "This is a bugfix update" or "This update is in the critical path".
  • Update pages with lots of updates look cleaner (#1351).
  • Update page titles are shorter now for large updates (#957).
  • Add support for alternate architectures to the MasherThread.wait_for_sync() (#1343).
  • Update lists now also include type icons next to the updates (5983d99).
  • Testing updates use a consistent label color now (6233064).
  • openQA results are now displayed in the web ui (450dbaf).
  • Bodhi cookies now expire. There is a new authtkt.timeout setting that sets Bodhi's session
    lifetime, defaulting to 1 day.


  • Comments that don't carry karma don't count as a user's karma vote (#829).
  • The web UI now uses the update alias instead of the title so editors of large updates can click
    the edit button (#1161).
  • Initialize the bugtracker in main() instead of on import so that docs can be built without
    installing Bodhi (#1359).
  • Make the release graph easier to read when there are many datapoints (#1172).
  • Optimize the JavaScript that loads automated test results from ResultsDB (#983).
  • Bodhi's testing approval comment now respects the karma reset event (#1310).
  • pop and copy now lazily load the configuration (#1423).

Development improvements

  • A new automated PEP-257 test has been introduced to enforce docblocks across the codebase.
    Converting the code will take some time, but the code will be expanded to fully support PEP-257
    eventually. A few modules have now been documented.
  • Test coverage is now 84%.
  • The Vagrant environment now has vim with a simple vim config to make sure spaces are used instead
    of tabs (#1372).
  • The Package database model has been converted into a single-table inheritance model, which will
    aid in adding multi-type support to Bodhi. A new RpmPackage model has been added (#1392).
  • The database initialization code is unified (e9a2604).
  • The base model class now has a helpful query property (8167f26).
  • .pyc files are now removed when running the tests in the dev environment (9e9adb6).
  • An unused inherited column has been dropped from the builds table (e8a95b1).

Release contributors

The following contributors submitted patches for Bodhi 2.6.0:

  • Jeremy Cline
  • Ryan Lerch
  • Clement Verna
  • Caleigh Runge-Hottman
  • Bianca Nenciu
  • Adam Williamson
  • Ankit Raj Ojha
  • Jason Taylor
  • Randy Barlow

Search and Replace The VIM Way

Posted by Mo Morsi on April 24, 2017 04:18 PM

Did you know that it is 2017 and the VIM editor still does not have a decent multi-file search and replacement mechanism?! While you can always roll your own, it’s rather cumbersome, and even though some would say this isn’t in the spirit of an editor such as VIM, a large community has emerged around extending it in ways to behave more like a traditional IDE.

Having written about doing something similar to this via the cmd line a while back, and having recently refactored a large amount of code that involved lots of renaming, I figured it was time to write a plugin to do just that: rename strings across source files, using grep and sed.

Before we begin, it should be noted that this is of most use with a ‘rooting’ plugin like vim-rooter. By using this, you will ensure vim is always running in the root directory of the project you are working on, regardless of the file being modified. Thus all search & replace commands will be run relative to the top project dir.

To install vsearch, we use Vundle. Setup & installation of that is out of scope for this article, but I highly recommend familiarizing yourself with Vundle as it’s the best Vim plugin management system (in my opinion).

Once Vundle is installed, using vsearch is as simple as adding the following to your ~/.vim/vimrc:

Plugin ‘movitto/vim-vsearch’

Restart Vim and run :PluginInstall to install vsearch from github. Now you’re good to go!

vsearch provides two commands :VSearch and :VReplace.

VSearch simply runs grep and displays the results, without interrupting the buffer you are currently editing.

VReplace runs a search in a similar manner to VSearch but also performs an in-memory string replacement using the specified args. This is displayed to the user, who is prompted for confirmation. Upon receiving it, the plugin executes sed and reports the results.
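For illustration, here is a standalone Ruby sketch (hypothetical code, not the plugin's actual implementation) of the same search-then-replace flow that vsearch drives through grep and sed:

```ruby
# Hypothetical sketch of the vsearch flow: a "search" pass (grep's role)
# followed by an in-place "replace" pass (sed's role), run against every
# file under a project root.
require 'tmpdir'

# Replace every occurrence of +pattern+ with +replacement+ in all files
# under +root+, returning the list of files that were changed.
def vreplace(root, pattern, replacement)
  changed = []
  Dir.glob(File.join(root, '**', '*')).each do |f|
    next unless File.file?(f)
    text = File.read(f)
    next unless text.include?(pattern)              # the "search" (grep) phase
    File.write(f, text.gsub(pattern, replacement))  # the "replace" (sed) phase
    changed << f
  end
  changed
end

# Demonstrate on a throwaway project directory.
RESULT = Dir.mktmpdir do |dir|
  File.write(File.join(dir, 'a.rb'), "old_name = 1\n")
  File.write(File.join(dir, 'b.rb'), "puts 'unrelated'\n")
  vreplace(dir, 'old_name', 'new_name')
  File.read(File.join(dir, 'a.rb'))   # => "new_name = 1\n"
end
```

The real plugin shells out to grep and sed instead of reading files in Ruby, but the two-phase shape (find matches, confirm, then rewrite) is the same.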

VirtFS New Plugin Guide

Posted by Mo Morsi on April 24, 2017 03:27 PM

Having recently extracted much of the FS interface from MiQ into virtfs plugins, it was a good time to write a guide on how to write a new plugin from scratch. It is attached below.

This document details the process of writing a new VirtFS plugin from scratch.

Plugins may be written for many targets, from traditional filesystems (EXT, FAT, XFS), to filesystem-like entities, such as databases and object repositories, to things completely unrelated altogether. Once written, VirtFS will use the plugin to expose the underlying component via the Ruby Filesystem API. Simply issue File & Dir calls to files under the specified mountpoint, and VirtFS will take care of the remaining details.

This guide assumes basic familiarity with the Ruby language and the gem project format. In this tutorial we will be creating a new gem called virtfs-hellofs for our ‘hello’ filesystem, based on a simple JSON map.

Note, the end result can be seen at virtfs-hellofs

Initial Project Layout

Create a new working directory with the following contents (each file is filled in over the course of this guide):

  virtfs-hellofs/
      Gemfile
      Rakefile
      virtfs-hellofs.gemspec
      lib/virtfs-hellofs.rb
      lib/virtfs/hellofs.rb
      lib/virtfs/hellofs/fs.rb
      lib/virtfs/hellofs/dir.rb
      lib/virtfs/hellofs/file.rb
      lib/virtfs/hellofs/superblock.rb
      lib/virtfs/hellofs/version.rb

TODO: a generator [patches are welcome!]

Required Components

The following components are required to define a full-fledged filesystem plugin:

  • A ‘mounting’ mechanism - Allows VirtFS to load your FS at the specified filesystem path / mountpoint.

  • Core File and Dir classes and class methods - VirtFS maps standard Ruby FS operations to their equivalent plugin calls

  • FS specific representations - the internal representation of filesystem constructs being implemented so as to satisfy the core class calls

Upon instantiation, a fs-specific ‘blocklike device’ is often required so as to provide block-level seek/read/write operations (such as from a physical disk, disk image, or other).

Eventually this will be implemented via a separate abstraction hierarchy, but for the time being virt-disk provides basic functionality to read simple file-based “devices”. Since we are only using a simple in-memory JSON-based fs, we do not need to pull in virt_disk here.

Core functionality

First we will define the FS class providing our filesystem interface:


  module VirtFS::HelloFS
    class FS
      include DirClassMethods
      include FileClassMethods

      attr_accessor :mount_point, :superblock

      # Return bool indicating if device contains
      # a HelloFS instance
      def self.match?(device)
        Superblock.new(self, device)
        true
      rescue => err
        false
      end

      # Initialize new HelloFS instance w/ the
      # specified device
      def initialize(device)
        @superblock = Superblock.new(self, device)
      end

      # Return root directory of the filesystem
      def root_dir
        superblock.root_dir
      end

      def thin_interface?
        true
      end

      def umount
        @mount_point = nil
      end
    end # class FS
  end # module VirtFS::HelloFS

Here we see a few things, particularly the inclusion of the Dir and File class-method mixins satisfying the VirtFS API (more on those later) and the instantiation of a HelloFS-specific Superblock construct.

In the #match? method we verify the superblock of the underlying device matches that required by HelloFS, and we specify various core callbacks needed by VirtFS (particularly the #umount and #thin_interface? methods; see this for more details on thin vs. thick interfaces).
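The probe pattern behind #match? can be sketched standalone (plain Ruby, hypothetical class names): attempt to construct the superblock, and treat any failure as "not our filesystem":

```ruby
# Standalone sketch of the match? probe pattern (ProbeSuperblock / ProbeFS
# are hypothetical stand-ins, not part of the plugin).
class ProbeSuperblock
  def initialize(device)
    # For HelloFS the "device" is just a Hash; a real FS would check
    # magic numbers / on-disk metadata here instead.
    raise ArgumentError, "not a HelloFS device" unless device.is_a?(Hash)
  end
end

class ProbeFS
  def self.match?(device)
    ProbeSuperblock.new(device)  # constructing the superblock IS the probe
    true
  rescue => _err
    false                        # any failure means "not our filesystem"
  end
end

ProbeFS.match?({ "f1" => "foobar" })  # => true
ProbeFS.match?("raw bytes")           # => false
```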

The superblock class for HelloFS is simple: we implement our ‘filesystem’ through a simple JSON map, passed in on instantiation.


module VirtFS::HelloFS
  # Top level filesystem construct.
  # In our case, we simply create a new
  # root directory from the HelloFS
  # json hash, but in most cases this
  # would parse / read top level metadata
  class Superblock
    attr_accessor :fs, :device

    def initialize(fs, device)
      @fs     = fs
      @device = device
    end

    def root_dir
      Dir.new(self, device)
    end
  end # class Superblock
end # module VirtFS::HelloFS


In the previous section the core fs class included two mixins, DirClassMethods and FileClassMethods implementing the VirtFS filesystem interface.


module VirtFS::HelloFS
  class FS
    # VirtFS Dir API implementation, dispatches
    # calls to underlying HelloFS constructs
    module DirClassMethods
      def dir_delete(p)
      end

      def dir_entries(p)
        dir = get_dir(p)
        return nil if dir.nil?
        dir.glob_names
      end

      def dir_exist?(p)
        !get_dir(p).nil?
      rescue
        false
      end

      def dir_foreach(p, &block)
        r = get_dir(p).try(:glob_names)
                      .try(:each, &block)
        block.nil? ? r : nil
      end

      def dir_mkdir(p, permissions)
      end

      def dir_new(fs_rel_path, hash_args, _open_path, _cwd)
        get_dir(fs_rel_path)
      end

      private

      def get_dir(p)
        names = p.split(/[\\\/]/)

        dir = get_dir_r(names)
        raise "Directory '#{p}' not found" if dir.nil?
        dir
      end

      def get_dir_r(names)
        return root_dir if names.empty?

        # Check for this path in the cache.
        fname = names.join('/')

        name = names.pop
        pdir = get_dir_r(names)
        return nil if pdir.nil?

        de = pdir.find_entry(name)
        return nil if de.nil?

        Directory.new(self, superblock, de.inode)
      end
    end # module DirClassMethods
  end # class FS
end # module VirtFS::HelloFS

This module implements the standard Ruby Dir Class operations including retrieving & modifying directory contents, and checking for file existence.

Particularly noteworthy is the get_dir method which returns the FS specific dir instance.
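To make the recursive lookup concrete, here is a standalone sketch in plain Ruby (no VirtFS required; `lookup` is a hypothetical helper, not part of the plugin) of resolving a path against a nested hash like the one HelloFS uses as its backing store:

```ruby
# Resolve a "/"-separated path against a nested Hash, mirroring the
# walk that get_dir / get_dir_r perform. Returns the entry (a Hash for
# a "directory", a String for a "file") or nil when not found.
def lookup(tree, path)
  names = path.split(/[\\\/]/).reject(&:empty?)  # same separator regexp as get_dir
  names.reduce(tree) do |node, name|
    break nil unless node.is_a?(Hash) && node.key?(name)
    node[name]
  end
end

fs = { "f1" => "foobar", "d1" => { "sf1" => "fignewton" } }
lookup(fs, "/d1")      # => { "sf1" => "fignewton" }
lookup(fs, "/d1/sf1")  # => "fignewton"
lookup(fs, "/nope")    # => nil
```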


module VirtFS::HelloFS
  class FS
    # VirtFS file class implementation, dispatches requests
    # to underlying HelloFS constructs
    module FileClassMethods
      def file_atime(p)
      end

      def file_blockdev?(p)
      end

      def file_chardev?(p)
      end

      def file_chmod(permission, p)
        raise "writes not supported"
      end

      def file_chown(owner, group, p)
        raise "writes not supported"
      end

      def file_ctime(p)
      end

      def file_delete(p)
      end

      def file_directory?(p)
        f = get_file(p)
        !f.nil? && f.dir?
      end

      def file_executable?(p)
      end

      def file_executable_real?(p)
      end

      def file_exist?(p)
      end

      def file_file?(p)
        f = get_file(p)
        !f.nil? && f.file?
      end

      def file_ftype(p)
      end

      def file_grpowned?(p)
      end

      def file_identical?(p1, p2)
      end

      def file_lchmod(permission, p)
      end

      def file_lchown(owner, group, p)
      end

      def file_link(p1, p2)
      end

      def file_lstat(p)
      end

      def file_mtime(p)
      end

      def file_owned?(p)
      end

      def file_pipe?(p)
      end

      def file_readable?(p)
      end

      def file_readable_real?(p)
      end

      def file_readlink(p)
      end

      def file_rename(p1, p2)
      end

      def file_setgid?(p)
      end

      def file_setuid?(p)
      end

      def file_size(p)
      end

      def file_socket?(p)
      end

      def file_stat(p)
      end

      def file_sticky?(p)
      end

      def file_symlink(oname, p)
      end

      def file_symlink?(p)
      end

      def file_truncate(p, len)
      end

      def file_utime(atime, mtime, p)
      end

      def file_world_readable?(p)
      end

      def file_world_writable?(p)
      end

      def file_writable?(p)
      end

      def file_writable_real?(p)
      end

      def file_new(f, parsed_args, _open_path, _cwd)
        file = get_file(f)
        raise Errno::ENOENT, "No such file or directory" if file.nil?
        File.new(file, superblock)
      end

      private

      def get_file(p)
        dir, fname = VfsRealFile.split(p)

        begin
          dir_obj = get_dir(dir)
          dir_entry = dir_obj.nil? ? nil : dir_obj.find_entry(fname)
        rescue RuntimeError
          dir_entry = nil
        end

        dir_entry
      end
    end # module FileClassMethods
  end # class FS
end # module VirtFS::HelloFS

The FileClassMethods module provides all the FS-specific functionality needed by Ruby to dispatch File class calls (the File API has a larger footprint than Dir's, hence the need for more methods here).

Here we see many methods are not yet implemented. This is OK for the purposes of use in VirtFS, but note that any calls to the corresponding methods on a mounted filesystem will fail.

File and Dir classes

The final missing piece of the puzzle is the File and Dir classes. These provide standard interfaces through which VirtFS can extract file and dir information.


module VirtFS::HelloFS
  # File class representation, responsible for
  # managing corresponding dir_entry attributes
  # and file content.
  # For HelloFS, files are simple in memory strings
  class File
    attr_accessor :superblock, :dir_entry

    def initialize(superblock, dir_entry)
      @superblock = superblock
      @dir_entry  = dir_entry
    end

    def to_h
      { :directory? => dir?,
        :file?      => file?,
        :symlink?   => false }
    end

    def dir?
      dir_entry.is_a?(Hash)
    end

    def file?
      dir_entry.is_a?(String)
    end

    def fs
      superblock.fs
    end

    def size
      dir? ? 0 : dir_entry.size
    end

    def close
    end
  end # class File
end # module VirtFS::HelloFS


module VirtFS::HelloFS
  # Dir class representation, responsible
  # for managing corresponding dir_entry
  # attributes
  # For HelloFS, dirs are simply nested
  # json maps
  class Dir
    attr_accessor :sb, :dir_entry

    def initialize(sb, dir_entry)
      @sb        = sb
      @dir_entry = dir_entry
    end

    def close
    end

    def glob_names
      dir_entry.keys
    end

    def find_entry(name, type = nil)
      dir = type == :dir
      fle = type == :file

      return nil unless glob_names.include?(name)
      return nil if (dir && !dir_entry[name].is_a?(Hash)) ||
                    (fle && !dir_entry[name].is_a?(String))

      dir ? Dir.new(sb, dir_entry[name]) :
            File.new(sb, dir_entry[name])
    end
  end # class Dir
end # module VirtFS::HelloFS

Again these are fairly straightforward, providing access to the underlying JSON map in a filesystem-like manner.
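The type checks inside find_entry can be illustrated standalone (`entry_type` is a hypothetical helper, not part of the plugin): in the JSON map a Hash value represents a directory and a String value represents a file:

```ruby
# Map a named entry in a HelloFS-style nested Hash to its type, the same
# Hash/String distinction find_entry uses to decide Dir.new vs File.new.
def entry_type(dir_hash, name)
  case dir_hash[name]
  when Hash   then :dir
  when String then :file
  end           # anything else (including a missing key) => nil
end

root = { "f1" => "foobar", "d1" => { "sf1" => "fignewton" } }
entry_type(root, "d1")    # => :dir
entry_type(root, "f1")    # => :file
entry_type(root, "nope")  # => nil
```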


To finish, we’ll populate the project components required by every rubygem:


require "virtfs/hellofs.rb"


require "virtfs/hellofs/version"
require_relative 'hellofs/fs.rb'
require_relative 'hellofs/dir'
require_relative 'hellofs/file'
require_relative 'hellofs/superblock'


lib/virtfs/hellofs/version.rb:

module VirtFS
  module HelloFS
    VERSION = "0.1.0"
  end
end

virtfs-hellofs.gemspec:

lib = File.expand_path('../lib', __FILE__)
$LOAD_PATH.unshift(lib) unless $LOAD_PATH.include?(lib)
require 'virtfs/hellofs/version'

Gem::Specification.new do |spec|
  spec.name          = "virtfs-hellofs"
  spec.version       = VirtFS::HelloFS::VERSION
  spec.authors       = ["Cool Developers"]

  spec.summary       = %q{An HELLO based filesystem module for VirtFS}
  spec.description   = %q{An HELLO based filesystem module for VirtFS}
  spec.homepage      = "https://github.com/ManageIQ/virtfs-hellofs"
  spec.license       = "Apache 2.0"

  spec.files         = `git ls-files -z`.split("\x0").reject { |f| f.match(%r{^(test|spec|features)/}) }
  spec.bindir        = "exe"
  spec.executables   = spec.files.grep(%r{^exe/}) { |f| File.basename(f) }
  spec.require_paths = ["lib"]

  spec.add_dependency "activesupport"
  spec.add_development_dependency "bundler"
  spec.add_development_dependency "rake", "~> 10.0"
  spec.add_development_dependency "rspec", "~> 3.0"
  spec.add_development_dependency "factory_girl"


Gemfile:

source 'https://rubygems.org'

gem 'virtfs', "~> 0.0.1",
    :git => "https://github.com/ManageIQ/virtfs.git",
    :branch => "master"

# Specify your gem's dependencies in virtfs-hellofs.gemspec
gemspec

group :test do
  gem 'virt_disk', "~> 0.0.1",
      :git => "https://github.com/ManageIQ/virt_disk.git",
      :branch => "initial"
end


require "bundler/gem_tasks"
require "rspec/core/rake_task"


task :default => :spec

Packaging It Up

Building virtfs-hellofs.gem is as simple as running:

rake build

in the project directory.

The gem will be written to the ‘pkg’ subdir and is ready for subsequent use / upload to rubygems.


To verify the plugin, create a test module which simply mounts a FS instance and dumps the directory contents:


require 'json'
require 'virtfs'
require 'virtfs/hellofs'

PATH = JSON.parse(File.read('hello.fs'))

exit 1 unless VirtFS::HelloFS::FS.match?(PATH)
fs = VirtFS::HelloFS::FS.new(PATH)

VirtFS.mount fs, '/'
puts VirtFS::VDir.entries('/')

We can create a simple JSON filesystem for testing purposes:


  "f1" : "foobar",
  "f2" : "barfoo",
  "d1" : { "sf1" : "fignewton",
           "sd1" : { "t" : "s" } }

Run the script, and if the directory contents are printed, you have verified your FS!


rspec and factory_girl were added as development dependencies to the project and testing the new filesystem is as simple as adding new unit tests.

For ‘real’ filesystems, the plugin author will need to generate a ‘blocklike device’ image and populate it w/ the necessary test data.

Because large block image files are not conducive to source repository systems and automated build systems, virtfs-camcorderfs can be used to record and play back disk interactions in a local dev environment, producing text-based ‘cassettes’ which may be used to replicate disk interactions. See virtfs-camcorderfs for usage details.

Next Steps

We added barebones VirtFS functionality for our hellofs filesystem backend. From here, we can continue expanding upon this, providing read, write, and query support. Once implemented, VirtFS will use this filesystem like any other, providing seamless interchangeability!

I have seen the future, and it is bug bounties

Posted by Josh Bressers on April 24, 2017 02:23 PM

Every now and then I see something on a blog or Twitter about how you can't replace a pen test with a bug bounty. For a long time I agreed with this, but I've recently changed my mind. I know this isn't a super popular opinion (yet), and I don't think either side of this argument is exactly right. Fundamentally the future of looking for issues will not be a pen test. They won't really be bug bounties either, but I'm going to predict pen testing will evolve into what we currently call bug bounties.

First let's talk about a pen test. There's nothing wrong with getting a pen test, I'd suggest everyone goes through a few just to see what it's like. I want to be clear that I'm not saying pen testing is bad. I'm going to be making the argument why it's not the future. It is the present, many organizations require them for a variety of reasons. They will continue to be a thing for a very long time. If you can only pick one thing, you should probably choose a pen test today as it's at least a known known. Bug bounties are still known unknowns for most of us.

I also want to clarify that internal pen testing teams don't fall under this post. Internal teams are far more focused and have special knowledge that an outside company never will. It's my opinion that an internal team is and will always be superior to an outside pen test or bug bounty. Of course a lot of organizations can't afford to keep a dedicated internal team, so they turn to the outside.

So anyhow, it's time for a pen test. You find a company to conduct it, and you scope what will be tested (it can't be everything). You agree on various timelines, then things get underway. After perhaps a week of testing, you have a very, very long and detailed report of what was found. Here's the thing about a pen test: you're paying someone to look for problems. You will get what you pay for; you'll get a list of problems, usually a huge list. Everyone knows that the bigger the list, the better the pen test! But here's the dirty secret. Most of the results won't ever be fixed. Most results will fall below your internal bug bar. You paid for a ton of issues, you got a ton of issues, then you threw most of them out. Of course it's quite likely there will be high priority problems found, which is great. Those are what you really care about, not all the unexciting problems that make up 95% of the report. What's your cost per issue fixed from that pen test?
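To make the cost-per-issue point concrete, here's a quick back-of-the-envelope calculation with entirely made-up numbers:

```ruby
# Illustrative (hypothetical) numbers: a pen test that costs $20,000 and
# reports 200 issues, of which only 10 clear the internal bug bar and
# actually get fixed.
pen_test_cost  = 20_000.0
issues_found   = 200
issues_fixed   = 10

cost_per_found = pen_test_cost / issues_found  # => 100.0  (looks cheap)
cost_per_fixed = pen_test_cost / issues_fixed  # => 2000.0 (the real number)
```

The gap between those two figures is the argument: per issue *found* the report looks like a bargain, but per issue *fixed* it is twenty times more expensive.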

Now let's look at how a bug bounty works. You find a company to run the bounty (it's probably not worth doing this yourself, there are many logistics). You scope what will be tested. You can agree on certain timelines and/or payout limits. Then things get underway. Here's where it's very different though. You're paying for the scope of bounty, you will get what you pay for, so there is an aspect of control. If you're only paying for critical bugs, by definition, you'll only get critical bugs. Of course there will be a certain amount of false positives. If I had to guess it's similar to a pen test today, but it's going to decrease as these organizations start to understand how to cut down on noise. I know HackerOne is doing some clever things to prevent noise.

My point in this whole post revolves around getting what you pay for: essentially a cost-per-issue-fixed model instead of the current cost-per-issue-found model. The real difference is that in the case of a bug bounty, you can control the scope of incoming issues. In no way am I suggesting a pen test is a bad idea; I'm simply suggesting that a 200 page report isn't very useful. Of course if a pen test returned three issues, you'd probably be pretty upset when paying the bill. We all have finite resources, so naturally we can't and won't fix minor bugs. It's just how things work. Today at best you'll get about the same results from a bug bounty and a pen test, but I see a bug bounty as having room to improve. I don't think the pen test model is full of exciting innovation.

All this said, not every product and company will be able to attract enough interest in a bug bounty. Let's face it, the real purpose behind all this is to raise the security profiles of everyone involved. Some organizations will have to use a pen test like model to get their products and services investigated. This is why the bug bounty program won't be a long term viable option. There are too many bugs and not enough researchers.

Now for the bit about the future. In the near future we will see the pendulum swing from pen testing to bug bounties. The next swing of the pendulum after bug bounties will be automation. Humans aren't very good at digging through huge amounts of data, but computers are. What we're really good at, and computers are (currently) really bad at, is finding new and exciting ways to break systems. We once thought double free bugs couldn't be exploited. We didn't see a problem with NULL pointer dereferences. Someone once thought deserializing objects was a neat idea. I would rather see humans working on the future of security instead of exploiting the past. The future of the bug bounty can be new attack methods instead of finding bugs. We have some work to do; I've not seen an automated scanner that I'd even call "almost not terrible". It will happen though, tools always start terrible and get better through the natural march of progress. The road to this unicorn future will pass through bug bounties. However, if we don't have automation ready on the other side, it's nothing but dragons.

Auto-generating build-requires for packages built with Maven

Posted by Mikolaj Izdebski on April 24, 2017 09:34 AM
For a few months all Fedora packages built using Maven in rawhide have had auto-generated build-requires included in their build.logs, which after some adjustments can be copied to spec files. This blog post shows how to extract them from Koji build logs.

Running rawhide app in chroot by example of Eclipse

Posted by Mikolaj Izdebski on April 24, 2017 09:34 AM
How to install and run a single application from Fedora rawhide without having to install a full rawhide machine.

Slice of Cake #6

Posted by Brian "bex" Exelbierd on April 24, 2017 09:30 AM

A slice of cake

Last week as FCAIC I:

  • Participated in the second week of the opensource.com blogging challenge. If you missed the first week post it is here. You should definitely participate in the future weeks!
  • The new budget website for Fedora is live! I need to write an announcement. I also need to work with Fedora-Infrastructure to get more of the building automated. If you want to help, ping me.
  • An amazing meeting with mizmo, puiterwijk, and stickster to plan the changes needed to regcfp and other details for Flock.
  • Finalized the GSoC Fedora selections for 2017 (due a bit after this post goes live). Students get notified in May, not before - thems the rules from the Googles!

A la Mode

I also made some personal progress doing:

  • More work on my taxes and beginning to re-evaluate my understanding of how the Czech Republic views my financial life. Taxes for two countries ain’t easy.
  • Upgraded from F24 to F25 … I know I am late to the party.

Cake Around the World

I’ll be traveling to:

  • Back to Beantown (Boston, MA USA) for Red Hat Summit from 2-5 May. If you’re in the neighborhood, you know the deal.
  • Community Leadership Summit in Austin, TX from 6-7 May and OSCON is also in Austin, TX from 8-12 May. I may have to leave on 12 May early for …
  • OSCAL in Tirana, Albania from 13-14 has accepted a talk proposal from me (eep!) so definitely come and give me some audience love.


Posted by Engels Antonio on April 24, 2017 09:25 AM
Flutter is a $20 wireless ARM development board with over 1 km (half-mile) range; secured using 256-bit AES encryption; built with Open Source hardware and Open Source software. Exciting times ahead!


Posted by Engels Antonio on April 24, 2017 09:25 AM
Gaim is now Pidgin! Following a legal settlement with AOL, Gaim has been renamed Pidgin and its 2.0.0 release is now available.

SHA-1 Cracked!

Posted by Engels Antonio on April 24, 2017 09:25 AM
Chinese Professor Cracks Fifth Data Security Algorithm
SHA-1 added to list of "accomplishments"
Central News Agency, Jan 11, 2007
http://en.epochtimes.com/news/7-1-11/50336.html

Associate professor Wang Xiaoyun of Beijing's Tsinghua University and Shandong University of Technology has cracked SHA-1, a widely used data security algorithm.

TAIPEI—Within four years, the U.S. government will cease to use SHA-1 (Secure Hash Algorithm) for digital signatures, and convert to a new and more advanced "hash" algorithm, according to the article "Security Cracked!" from New Scientist. The reason for this change is that associate professor Wang Xiaoyun of Beijing's Tsinghua University and Shandong University of Technology, and her associates, have already cracked SHA-1.

Wang also cracked MD5 (Message Digest 5), the hash algorithm most commonly used before SHA-1 became popular. Previous attacks on MD5 required over a million years of supercomputer time, but Wang and her research team obtained results using ordinary personal computers.

In early 2005, Wang and her research team announced that they had succeeded in cracking SHA-1. In addition to the U.S. government, well-known companies like Microsoft, Sun, Atmel, and others have also announced that they will no longer be using SHA-1.

Two years ago, Wang announced at an international data security conference that her team had successfully cracked four well-known hash algorithms—MD5, HAVAL-128, MD4, and RIPEMD—within ten years. A few months later, she cracked the even more robust SHA-1.

Focus and Dedication

According to the article, Wang's research focuses on hash algorithms. A hash algorithm is a mathematical procedure for deriving a "fingerprint" of a block of data. The hash algorithms used in cryptography are "one-way": it is easy to derive hash values from inputs, but very difficult to work backwards, finding an input message that yields a given hash value.

Cryptographic hash algorithms are also resistant to "collisions": that is, it is computationally infeasible to find any two messages that yield the same hash value. Hash algorithms' usefulness in data security relies on these properties, and much research focuses on this area. Recent years have seen a stream of ever-more-refined attacks on MD5 and SHA-1—including, notably, Wang's team's results on SHA-1, which permit finding collisions in SHA-1 about 2,000 times more quickly than brute-force guessing. Wang's technique makes attacking SHA-1 efficient enough to be feasible.

MD5 and SHA-1 are the two most extensively used hash algorithms in the world. These two algorithms underpin many digital signature and other security schemes in use throughout the international community. They are widely used in banking, securities, and e-commerce. SHA-1 has been recognized as a cornerstone of modern Internet security.

According to the article, in the early stages of Wang's research, other researchers also tried to crack these algorithms, but none succeeded. This is why, after 15 years, hash research had come to be seen as a hopeless pursuit in many scientists' minds. Wang's method of cracking algorithms differs from others'. Although such analysis usually cannot be done without computers, according to Wang, the computer only assisted in cracking the algorithm. Most of the time, she calculated by hand, and designed her methods by hand.

"Hackers crack passwords with bad intentions," Wang said. "I hope efforts to protect against password theft will benefit [from this]. Password analysts work to evaluate the security of data encryption and to search for even more secure … algorithms."

"On the day that I cracked SHA-1," she added, "I went out to eat. I was very excited. I knew I was the only person who knew this world-class secret." Within ten years, Wang cracked the five biggest names in cryptographic hash algorithms.

Many people would think the life of this scientist must be monotonous, but "That ten years was a very relaxed time for me," she says. During her work, she bore a daughter and cultivated a balcony full of flowers. The only mathematics-related habit in her life is that she remembers the license plates of taxi cabs.

With additional reporting by The Epoch Times.
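The hash properties the article describes ("fingerprint", one-way, collision-resistant) are easy to poke at with Python's standard hashlib; this is just a sketch of computing digests, not of the collision attack itself, which is far beyond a few lines:

```python
import hashlib

# A hash algorithm derives a fixed-size "fingerprint" from a block of data.
msg = b"abc"

# SHA-1: collision resistance is broken by Wang-style attacks.
print(hashlib.sha1(msg).hexdigest())
# a9993e364706816aba3e25717850c26c9cd0d89d

# SHA-2 family: the recommended replacement for new designs.
print(hashlib.sha256(msg).hexdigest())
# ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad

# "One-way": computing forward is trivial; inverting a digest is infeasible.
# "Collision-resistant": finding two messages with the same digest should be
# infeasible -- the property Wang's team undermined for MD5 and SHA-1.
```

Note that the attacks find *collisions* (two attacker-chosen messages with the same digest), which already breaks digital-signature schemes even though inverting a given digest remains hard.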


Posted by Engels Antonio on April 24, 2017 09:25 AM
This is a very timely reminder!

[Devel] [PATCH][DOCUMENTATION] The namespaces compatibility list doc

---------------------------- Original Message ----------------------------
Subject: [Devel] [PATCH][DOCUMENTATION] The namespaces compatibility list doc
From: "Pavel Emelyanov" <xemul@openvz.org>
Date: Fri, November 16, 2007 5:34 pm
To: "Andrew Morton" <akpm@osdl.org>
Cc: "Linux Containers" <containers@lists.osdl.org>,
    "Cedric Le Goater" <clg@fr.ibm.com>,
    "Theodore Tso" <tytso@mit.edu>,
    "Linux Kernel Mailing List" <linux-kernel@vger.kernel.org>
--------------------------------------------------------------------------

From time to time people begin discussions about how the namespaces are
working/going-to-work together.

Ted T'so proposed to create some document that describes what problems a
user may have when he/she creates some new namespace, but keeps others
shared. I liked this idea, so here's the initial version of such a
document with the problems I currently have in mind and can describe
somewhat audibly - the "namespaces compatibility list".

The Documentation/namespaces/ directory is about to contain more docs
about the namespaces stuff.

Thanks to Cedric for notes and spell checks on the doc.

Signed-off-by: Pavel Emelyanov <xemul@openvz.org>

---

commit 83061c56e1c4dcd54d48a62b108d219a7f5279a0
Author: Pavel <pavel@xemulnb.sw.ru>
Date:   Fri Nov 16 12:25:53 2007 +0300

    Namespaces compatibility list

diff --git a/Documentation/00-INDEX b/Documentation/00-INDEX
index 910e511..3ead06b 100644
--- a/Documentation/00-INDEX
+++ b/Documentation/00-INDEX
@@ -262,6 +262,8 @@ mtrr.txt
 	- how to use PPro Memory Type Range Registers to increase performance.
 mutex-design.txt
 	- info on the generic mutex subsystem.
+namespaces/
+	- directory with various information about namespaces
 nbd.txt
 	- info on a TCP implementation of a network block device.
 netlabel/
diff --git a/Documentation/namespaces/compatibility-list.txt b/Documentation/namespaces/compatibility-list.txt
new file mode 100644
index 0000000..9c9e5c1
--- /dev/null
+++ b/Documentation/namespaces/compatibility-list.txt
@@ -0,0 +1,33 @@
+	Namespaces compatibility list
+
+This document contains the information about the problems a user
+may have when creating tasks living in different namespaces.
+
+Here's the summary. This matrix shows the known problems that
+occur when tasks share some namespace (the columns) while living
+in different other namespaces (the rows):
+
+      UTS   IPC   VFS   PID   User  Net
+UTS    X
+IPC          X     1
+VFS                X
+PID          1     1     X
+User               2           X
+Net                                  X
+
+1. Both the IPC and the PID namespaces provide IDs to address
+   objects inside the kernel. E.g. a semaphore with its ipcid or
+   a process group with its pid.
+
+   In both cases, tasks shouldn't try exposing this ID to some
+   other task living in a different namespace via a shared filesystem
+   or IPC shmem/message. The fact is that this ID is only valid
+   within the namespace it was obtained in and may refer to some
+   other object in another namespace.
+
+2. Intentionally, two equal user ids in different user namespaces
+   should not be equal from the VFS point of view. In other
+   words, user 10 in one user namespace shouldn't have the same
+   access permissions to files belonging to user 10 in another
+   namespace. But currently this is not so.
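The namespaces this patch documents can be inspected on any modern Linux system via /proc (these entries postdate the 2007 patch, but are standard today); a quick sketch, no root required:

```shell
# Every namespace a process belongs to appears as a symlink under /proc/<pid>/ns.
ls /proc/self/ns/

# Each link target reads "<type>:[<inode>]"; two processes share a namespace
# exactly when these values match -- which is why the IDs discussed in the
# patch (pids, ipcids) only make sense relative to a namespace.
readlink /proc/self/ns/uts
readlink /proc/self/ns/pid
```

Comparing `readlink /proc/$A/ns/pid` and `readlink /proc/$B/ns/pid` for two pids is the simplest way to check whether they would hit the cross-namespace pitfalls in the compatibility list above.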

Dual Core

Posted by Engels Antonio on April 24, 2017 09:25 AM
I caused a kernel panic while playing around with some ReiserFS parameters on my Fedora Core 6 Athlon64 X2 test box. Instead of freezing as expected, Linux continued to run! I managed to duplicate the error. The box froze this time. I found out that the first kernel panic only took out one of the cores. This was why Linux continued to run. It took a second panic to take out the other core and crash the box. Cool!

Joining Sun

Posted by Engels Antonio on April 24, 2017 09:25 AM
News from Debian Master Ian Murdock: http://ianmurdock.com/2007/03/19/joining-sun/

I saw my first Sun workstation about 15 years ago, in 1992. I was a business student at Purdue University, and a childhood love for computers had just been reawakened. I was spending countless hours in the basement of the Math building, basking in the green phosphorescent glow of a Z29 and happily exploring every nook and cranny of the Sequent Symmetry upstairs. It didn’t take too long to discover, though, just a short walk away in the computer science building, several labs full of Sun workstations. Suddenly, the Z29 didn’t have quite the same allure. A few months later, I walked over to the registrar’s office and changed my major to computer science. (OK, advanced tax accounting had something to do with it too.)

Everything I know about computing I learned on those Sun workstations, as did so many other early Linux developers; I even had my own for a while, after I joined the University of Arizona computer science department in 1997. But within a year, the Suns were starting to disappear, replaced by Pentiums running Red Hat Linux. More and more people coming through university computer science programs were cutting their teeth on Linux, much as I had on Sun. Pretty soon, Sun was increasingly seen by this new generation as the vendor who didn’t “get it”, and Sun’s rivals did a masterful job running with that and painting the company literally built on open standards as “closed”. To those of us who knew better, it was a sad thing to watch.

The last several years have been hard for Sun, but the corner has been turned. As an outsider, I’ve watched as Sun has successfully embraced x86, pioneered energy efficiency as an essential computing feature, open sourced its software portfolio to maximize the network effects, championed transparency in corporate communications, and so many other great things. Now, I’m going to be a part of it.

And, so, I’m excited to announce that, as of today, I’m joining Sun to head up operating system platform strategy. I’m not saying much about what I’ll be doing yet, but you can probably guess from my background and earlier writings that I’ll be advocating that Solaris needs to close the usability gap with Linux to be competitive; that while I believe Solaris needs to change in some ways, I also believe deeply in the importance of backward compatibility; and that even with Solaris front and center, I’m pretty strongly of the opinion that Linux needs to play a clearer role in the platform strategy.

It is with regrets that I leave the Linux Foundation, but if you haven’t figured it out already, Sun is a company I’ve always loved, and being a part of it was an opportunity I simply could not pass up. I think the world of the people at the LF, particularly my former FSG colleagues with whom I worked so closely over the past year and a half: Jim Zemlin, Amanda McPherson, Jeff Licquia, and Dan Kohn. And I still very much believe in the core LF mission, to prevent the fragmentation of the Linux platform. Indeed, I’m remaining in my role as chair of the LSB—and Sun, of course, is a member of the Linux Foundation.

Anyway. Watch this space. This is going to be fun!


Posted by Engels Antonio on April 24, 2017 09:25 AM
After using Gitosis for almost 3 years, I have made the switch to Gitolite ... and it über rocks!
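For anyone curious what the switch buys you: Gitolite's access control lives in a single conf file inside the gitolite-admin repo, pushed like any other change. A minimal sketch (repo and user names are made up for illustration):

```conf
# conf/gitolite.conf -- access rules are plain text, versioned in git
@devs       =   alice bob

repo testing
    RW+     =   @devs     # devs may push, rewind, and delete branches
    R       =   daemon    # read-only export via git-daemon
```

Granting someone access is then just `git commit` + `git push` on gitolite-admin, with no server-side fiddling, which is a big part of why it "über rocks" compared to Gitosis.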


Posted by Engels Antonio on April 24, 2017 09:25 AM
Related news about my favorite filesystem, from kerneltrap.org:

From: Kobajashi Zaghi [email blocked]
To: linux-kernel
Subject: The Future of ReiserFS development
Date: Wed, 11 Oct 2006 10:53:02 +0200

Hi!

Hans Reiser arrested on suspicion of murder.

http://sfgate.com/cgi-bin/article.cgi?f=/c/a/2006/10/10/BAGERLM3RR15.DTL

What is the plan? Could I migrate from reiserfs to another journaling
filesystem? How will this trouble affect reiserfs development?

I hope Hans is innocent.

Thanks,
-- Kobi

From: Jan Engelhardt [email blocked]
Subject: Re: The Future of ReiserFS development
Date: Wed, 11 Oct 2006 13:20:39 +0200 (MEST)

> What is the plan? Could I migrate from reiserfs to another journaling
> filesystem? How will this trouble affect reiserfs development?

Since development has pretty much ceased already, there is nothing to
lose if you continue to use reiserfs.

-`J'

From: Alan Cox [email blocked]
Subject: Re: The Future of ReiserFS development
Date: Wed, 11 Oct 2006 18:56:44 +0100

Ar Mer, 2006-10-11 am 10:53 +0200, ysgrifennodd Kobajashi Zaghi:
> Hans Reiser arrested on suspicion of murder.
>
> http://sfgate.com/cgi-bin/article.cgi?f=/c/a/2006/10/10/BAGERLM3RR15.DTL
>
> What is the plan? Could I migrate from reiserfs to another journaling
> filesystem? How will this trouble affect reiserfs development?

Reiserfs is written by a team of people at Namesys, and particularly
with reiserfs3 people at SuSE and elsewhere as well.

Alan

From: Alexander Lyamin [email blocked]
Subject: Re: The Future of ReiserFS development
Date: Wed, 11 Oct 2006 20:41:03 +0400

Well, this is a correct statement if we are talking about 3.6; it's
only bugfixes lately, although the SuSE people used to add some new
stuff like ACL support.

As for reiser4, we are still going through review, thanks to AKPM:
chunking out patches, fixing issues, and generally cleaning house.
Yes, we are rather shaken and stressed at the moment, although I
cannot say we didn't see it coming.

I, personally, really like how the US police acted exactly like their
Russian counterpart: e.g. sitting on their ass for a whole month,
waiting, so they can declare a person officially missing and then just
press charges against whoever looks most vulnerable. Well, probably I
am wrong. Time will show.

What WE (e.g. the reiser4 dev people) are planning to do:

Short term (present + 6 months): We will just buzz along as usual,
chunking out patches and going through review, while pursuing existing
business opportunities to get some funding.

Long term (6 months from now and beyond): If it goes the way we hope
it will go, well... we will do fine. If it goes bad, that is where it
becomes tricky. We will try to appoint a proxy to run the Namesys
business.

That's it for now.

--
"the liberation loophole will make it clear.."
lex lyamin

Recalled to Active Duty

Posted by Engels Antonio on April 24, 2017 09:25 AM
I'll be handling Total Linux 41 this June: http://bluepoint.com.ph/enroll/ Lock and load!