Fedora People

Does virt-v2v preserve sparseness?

Posted by Richard W.M. Jones on October 18, 2018 10:50 AM

A common question is whether virt-v2v preserves sparseness (aka thin provisioning) when you convert a guest from VMware to a KVM-based hypervisor like oVirt/RHV or OpenStack. The very short answer is: no. The medium answer is: the question doesn’t really make sense. For the long answer, read on …

First of all we need to ask: what is thin provisioning? For complex and essentially historical reasons, when you provision a new guest you have to decide how much maximum disk space you want it to have. Let’s say you choose 10 GB. However, because a new guest install might only take, say, 1 GB, you can also decide whether you want the whole 10 GB to be preallocated up front, or whether you want the disk to be thin provisioned or sparse, meaning it’ll only take up 1 GB of host space, but that will grow as the guest creates new files. There are pros and cons to preallocation. Preallocating the full 10 GB obviously takes a lot of extra space (and what if the guest never uses it?), but it can improve guest performance in some circumstances.

That is what happens initially. By the time we come to do a virt-v2v conversion that guest may have been running for years and years. Now what does the disk look like? It doesn’t matter if the disk was initially thin provisioned or fully allocated, what matters is what the guest did during those years.

Did it repeatedly fill up the disk and/or delete those files? – In which case your initially thin provisioned guest could now be fully allocated.

Did it have trimming enabled? Your initially preallocated guest might now have become sparsely allocated.

In any case VMware doesn’t store this initial state, nor does it make it very easy to find out which bits of the disk are actually backed by host storage and which bits are sparse (well, maybe this is possible, but not using the APIs we use when liberating your guests from VMware).

Also, as I explained in this talk (slides) from a few years ago, virt-v2v tries to avoid copying any unused, zeroed or deleted parts of the disk for efficiency reasons, and so it will always make the disk maximally sparse when copying it (subject to what the target hypervisor does, read on).

When virt-v2v comes to creating the target guest, the default is to create a maximally sparse guest, but there are two ways to change this:

  1. You can specify the -oa preallocated option, where virt-v2v will try to ask the target hypervisor to fully preallocate the target disks of the guest (see the sketch after this list).
  2. For some hypervisors, especially RHV, your choice of backend storage may mean you have no option but to use preallocated disks (unfortunately I cannot give clear advice here, best to ask a RHV expert).
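For illustration, here is a hedged sketch of a conversion using -oa preallocated; the vCenter path, guest name, and RHV export storage domain are hypothetical:

$ virt-v2v -ic vpx://vcenter.example.com/Datacenter/esxi vmware_guest \
    -o rhv -os rhv.example.com:/export -oa preallocated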

The basic rule is that when converting guests you need to think about whether you want the guest to be sparse or preallocated after conversion, based on your own performance vs storage criteria. Whether it happened to be thin provisioned when you set it up on VMware years earlier isn’t a relevant issue.

New in nbdkit: Create an ISO image on the fly

Posted by Richard W.M. Jones on October 18, 2018 09:26 AM

nbdkit is the pluggable Network Block Device server that Eric and I wrote. I have submitted a talk to FOSDEM next February about the many weird and wonderful ways you can use nbdkit as a flexible replacement for loopback mounting.

Anyway, new in nbdkit 1.7.6 you can now create ISO 9660 (CD-ROM) disk images on the fly from a directory:

# nbdkit iso /boot params="-JrT"
# nbd-client -b 512 localhost /dev/nbd0
# file -bsL /dev/nbd0
ISO 9660 CD-ROM filesystem data 'CDROM'
# mount /dev/nbd0 /tmp/mnt
# ls /tmp/mnt
config-4.18.0-0.rc8.git2.1.fc29.x86_64
config-4.19.0-0.rc1.git3.2.fc30.x86_64
config-4.19.0-0.rc6.git0.1.fc30.x86_64
efi
extlinux
grub2
[etc]
# umount /tmp/mnt
# nbd-client -d /dev/nbd0
# killall nbdkit

That ISO wouldn’t actually be bootable, but you could create one (eg. an El Torito ISO) by adding the appropriate extra parameters.
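As a hedged sketch, El Torito boot parameters could be passed through like this, assuming the directory tree contains an isolinux/ directory with the usual boot files (these are standard genisoimage options, not nbdkit-specific ones):

# nbdkit iso /path/to/tree params="-JrT -b isolinux/isolinux.bin -c isolinux/boot.cat -no-emul-boot -boot-load-size 4 -boot-info-table"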

To head off the first question: If you copy files into the directory while nbdkit is running, do they appear in the ISO? Answer: No! This is largely impossible with the way Linux block devices work.

Fedora Women’s Day 2018 – Lima, Peru

Posted by Fedora Community Blog on October 18, 2018 08:30 AM

On September 22, 2018 we celebrated Fedora Women’s Day in Lima, Perú at PUCP (Pontifical Catholic University of Peru). The Fedora Women’s Day event seeks to integrate women into the world of Free Software. This year, I had the opportunity to be one of the organizers.

[Image: Fedora Women's Day 2018 promotional poster design]

Topics

  • What is Free Software? by Giohanny Falla
    • She gave an introduction to Free Software and also invited attendees to apply for GSoC and Outreachy.
  • Women to Power by Solanch Ccasa
    • She talked about success stories and the inclusion of women in technology.
  • Fedora Loves Python by Lizbeth Lucar
    • She talked about the use and importance of Python, and attendees worked through mini challenges with Python exercises.
  • Design for Entrepreneurs by Sheyla Breña
    • She told us about her experiences as a business designer.

Attendees

[Image: 20 people in total attended who did not have prior knowledge of Linux]

Wrapping up FWD in Lima, Peru

We finished the talks with “Design for Entrepreneurs”. For the workshop, all attendees tried out Inkscape and GIMP on Fedora. Finally, we took a snack and coffee break.

We ended with questions, answers, and group photos.

[Image: Final group photo with all attendees of Fedora Women’s Day 2018 in Lima, Peru]

Thanks for the support and organization to the Penguin Codes community, Blender Peru, AIIS PUCP, Diego Balbuena, and Fredy Hernandez.

The post Fedora Women’s Day 2018 – Lima, Peru appeared first on Fedora Community Blog.

Creating a Self Trust In Keystone

Posted by Adam Young on October 18, 2018 02:44 AM

Let’s say you are an administrator of an OpenStack cloud, which means you are pretty much all-powerful in the deployment. Now you need to perform some operation, but you don’t want to do it with full admin privileges. Why? Well, do you work as root on your Linux box? I hope not. Here’s how to set up a self trust for a reduced set of roles on your token.

First, get a regular token, but use the --debug option to see what the project ID, role ID, and your user ID actually are:
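A sketch of that step (--debug is a global option of the openstack client; the IDs appear in the authentication response it prints):

$ openstack token issue --debug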

In my case, they are … long uuids.

I’ll trim them down both for obscurity and to make them more legible. Here is the command to create the trust:

openstack trust create --project 9417f7 --role 9fe2ff 154741 154741

Mine returned:

+--------------------+----------------------------------+
| Field              | Value                            |
+--------------------+----------------------------------+
| deleted_at         | None                             |
| expires_at         | None                             |
| id                 | 26f8d2                           |
| impersonation      | False                            |
| project_id         | 9417f7                           |
| redelegation_count | 0                                |
| remaining_uses     | None                             |
| roles              | _member_                         |
| trustee_user_id    | 154741                           |
| trustor_user_id    | 154741                           |
+--------------------+----------------------------------+

On my system, role ID 9fe2ff is the _member_ role.

Note that, if you are Admin, you need to explicitly grant yourself the _member_ role, or use an implied role rule that says admin implies member.
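If you do need to grant it explicitly, a sketch using the trimmed IDs from above:

$ openstack role add --user 154741 --project 9417f7 _member_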

Now, you can get a reduced scope token. Unset the variables that are used to scope the token, since you want to scope to the trust now.

$ unset OS_PROJECT_DOMAIN_NAME 
$ unset OS_PROJECT_NAME 
$ openstack token issue --os-trust-id  26f8d2eaf1404489ab8e8e5822a0195d
+------------+----------------------------------+
| Field      | Value                            |
+------------+----------------------------------+
| expires    | 2018-10-18T10:31:57+0000         |
| id         | f16189                           |
| project_id | 9417f7                           |
| user_id    | 154741                           |
+------------+----------------------------------+

This still requires you to authenticate with your userid and password. An even better mechanism is the new Application Credentials API. It works much the same way, but uses a separate, explicitly created secret. More about that next time.

Manage virtual machines in QEMU/KVM easily with QEMUzier

Posted by Alvaro Castillo on October 17, 2018 06:00 PM

QEMUzier is a small script, licensed under GPL v2.0, that I have put together to manage virtual machines in a very simple way with the QEMU emulation software, modified to use KVM for full virtualization on Linux.

For the moment it is adapted for use on Fedora distributions.

To install it:

GitLab

git clone https://gitlab.com/sincorchetes/qemuzier

GitHub

git clone https://github.com/sincorchetes/qemuzier

When we run the script, it will...

Design faster web pages, part 2: Image replacement

Posted by Fedora Magazine on October 17, 2018 08:00 AM

Welcome back to this series on building faster web pages. The last article talked about what you can achieve just through image compression. The example started with 1.2MB of browser fat, and reduced down to a weight of 488.9KB. That’s still not fast enough! This article continues the browser diet to lose more fat. You might think that partway through this process things are a bit crazy, but once finished, you’ll understand why.

Preparation

Once again this article starts with an analysis of the web pages. Use the built-in screenshot function of Firefox to make a screenshot of the entire page. You’ll also want to install Inkscape using sudo:

$ sudo dnf install inkscape

If you want to know how to use Inkscape, there are already several articles in Fedora Magazine. This article will only explain some basic tasks for optimizing an SVG for web use.

Analysis

Once again, this example uses the getfedora.org web page.

[Image: Getfedora page with graphics marked]

This analysis is better done graphically, which is why it starts with a screenshot. The screenshot above marks all graphical elements of the page. In two cases, or rather four, the Fedora websites team has already taken measures to replace images. The icons for social media are glyphs from a font, and the language selector is an SVG.

There are several options for replacing images:

HTML5 Canvas

Briefly, HTML5 Canvas is an HTML element that allows you to draw with the help of scripts, mostly JavaScript, although it’s not widely used yet. As you draw with the help of scripts, the element can also be animated. Some examples of what you can achieve with HTML Canvas include this triangle pattern, animated wave, and text animation. In this case, though, it seems not to be the right choice.

CSS3

With Cascading Style Sheets you can draw shapes and even animate them. CSS is often used for drawing elements like buttons. However, more complicated graphics via CSS are usually only seen in technical demonstration pages. This is because complex graphics are still easier to create visually than in code.

Fonts

Using fonts to style web pages is another approach, and Fontawesome is quite popular. For instance, you could replace the Flavor and the Spin icons with a font in this example. There is a negative side to using this method, which will be covered in the next part of this series, but it can be done easily.

SVG

This graphics format has existed for a long time and was always supposed to be used in the browser. For a long time not all browsers supported it, but that’s history. So the best way to replace pictures in this example is with SVG.

Optimizing SVG for the web

To optimize an SVG for internet use requires several steps.

SVG is an XML dialect. Components like circle, rectangle, or text paths are described with nodes. Each node is an XML element. To keep the code clean, an SVG should use as few nodes as possible.

The SVG example is a circular icon with a coffee mug on it. There are three options to describe it in SVG.

Circle element with the mug on top

<circle
style="opacity:1;fill:#717d82;fill-opacity:1;stroke:none;stroke-width:9.51950836;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1;paint-order:markers fill stroke"
id="path36"
cx="68.414307"
cy="130.71523"
r="3.7620001" />

Circular path with the mug on top

<path
style="opacity:1;fill:#717d82;fill-opacity:1;stroke:none;stroke-width:1.60968435;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1;paint-order:markers fill stroke"
d="m 68.414044,126.95318 a 3.7618673,3.7618673 0 0 0 -3.76153,3.76204 3.7618673,3.7618673 0 0 0 3.76153,3.76205 3.7618673,3.7618673 0 0 0 3.76206,-3.76205 3.7618673,3.7618673 0 0 0 -3.76206,-3.76204 z"
id="path20" />

Single path

<path
style="opacity:1;fill:#717d82;fill-opacity:1;stroke:none;stroke-width:1.60968435;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1;paint-order:markers fill stroke"
d="m 68.414044,126.95318 a 3.7618673,3.7618673 0 0 0 -3.76153,3.76204 3.7618673,3.7618673 0 0 0 3.76153,3.76205 3.7618673,3.7618673 0 0 0 3.76206,-3.76205 3.7618673,3.7618673 0 0 0 -3.76206,-3.76204 z m -1.21542,0.92656 h 2.40554 c 0.0913,0.21025 0.18256,0.42071 0.27387,0.63097 h 0.47284 v 0.60099 h -0.17984 l -0.1664,1.05989 h 0.24961 l -0.34779,1.96267 -0.21238,-0.003 -0.22326,1.41955 h -2.12492 l -0.22429,-1.41955 -0.22479,0.003 -0.34829,-1.96267 h 0.26304 l -0.16692,-1.05989 h -0.1669 v -0.60099 h 0.44752 c 0.0913,-0.21026 0.18206,-0.42072 0.27336,-0.63097 z m 0.12608,0.19068 c -0.0614,0.14155 -0.12351,0.28323 -0.185,0.42478 h 2.52336 c -0.0614,-0.14155 -0.12248,-0.28323 -0.18397,-0.42478 z m -0.65524,0.63097 v 0.21911 l 0.0594,5.2e-4 h 3.35844 l 0.0724,-5.2e-4 v -0.21911 z m 0.16846,0.41083 0.1669,1.05937 h 2.80603 l 0.16693,-1.05937 -1.57046,0.008 z m -0.061,1.25057 0.27956,1.5782 1.34411,-0.0145 1.34567,0.0145 0.28059,-1.5782 z m 1.62367,1.75441 -1.08519,0.0124 0.19325,1.2299 h 1.79835 l 0.19328,-1.2299 z"
id="path2714"
inkscape:connector-curvature="0" />

You can probably see that the code becomes more complex and needs more characters to describe the shape. More characters in a file result, of course, in a larger file size.

Node cleaning

If you open an example SVG in Inkscape and press F2, that activates the Node tool. You should see something like this:

[Image: Inkscape – Node tool activated]

There are 5 nodes that aren’t necessary in this example: the ones in the middle of the lines. To remove them, select them one by one with the activated Node tool and press the Del key. After this, select the nodes which define these lines and make them corners again using the toolbar tool.

[Image: Inkscape – Node tool, make a node a corner]

If you don’t make them corners again, handles that define a curve get saved, which increases the file size. You have to do this node cleaning by hand, as it can’t be effectively automated. Now you’re ready for the next stage.

Use the Save as function and choose Optimized svg. A dialogue window opens where you can select what to remove or keep.

[Image: Inkscape – dialog window for Save As Optimized SVG]

Even the little SVG in this example got down from 3.2 KB to 920 bytes, less than a third of its original size.

Back to the getfedora page: The grey voronoi pattern used in the background of the main section, after our optimization from Part 1 of this series, is down to 164.1 KB versus the original 211.12 KB size.

The original SVG it was exported from is 1.9 MB in size. After these SVG optimization steps, it’s only 500.4KB. Too big? Well, the current blue background is 564.98 KB in size. But there’s only a small difference between the SVG and the PNG.

Compressed files

$ ls -lh
insgesamt 928K
-rw-r--r--. 1 user user 161K 19. Feb 19:44 grey-pattern.png
-rw-rw-r--. 1 user user 160K 18. Feb 12:23 grey-pattern.png.gz
-rw-r--r--. 1 user user 489K 19. Feb 19:43 greyscale-pattern-opti.svg
-rw-rw-r--. 1 user user 112K 19. Feb 19:05 greyscale-pattern-opti.svg.gz

This is the output of a small test I did to visualize this topic. You can see that the raster graphic, the PNG, is already compressed and can’t be compressed much further. The opposite is true of the SVG, an XML file: it is just text and can be compressed to less than a fourth of its size. As a result it is now around 50 KB smaller than the PNG.

Modern browsers can handle compressed files natively. Therefore, a lot of web servers have switched on mod_deflate (Apache) or gzip (nginx). That’s how we save space during delivery. Check whether it’s enabled on your server here.
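As a sketch, enabling compression for SVG looks roughly like this (assuming mod_deflate is loaded on Apache; the nginx directives belong in the http block):

# Apache, e.g. in the vhost configuration or .htaccess
AddOutputFilterByType DEFLATE image/svg+xml

# nginx, in the http block
gzip on;
gzip_types image/svg+xml;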

Tooling for production

First of all, nobody wants to always optimize SVG in Inkscape. You can run Inkscape without a GUI in batch mode, but there’s no option to convert from Inkscape SVG to optimized SVG. You can only export raster graphics this way. But there are alternatives:

  • SVGO (which seems not to be actively developed)
  • Scour

This example will use scour for optimization. To install it:

$ sudo dnf install scour

To automatically optimize an SVG file, run scour similarly to this:

[user@localhost ]$ scour INPUT.svg OUTPUT.svg -p 3 --create-groups --renderer-workaround --strip-xml-prolog --remove-descriptive-elements --enable-comment-stripping --disable-embed-rasters --no-line-breaks --enable-id-stripping --shorten-ids

This is the end of part two, in which you learned how to replace raster images with SVG and how to optimize SVGs for web use. Stay tuned to the Fedora Magazine for part three, coming soon.

Initial thoughts on MongoDB's new Server Side Public License

Posted by Matthew Garrett on October 16, 2018 10:43 PM
MongoDB just announced that they were relicensing under their new Server Side Public License. This is basically the Affero GPL except with section 13 largely replaced with new text, as follows:

If you make the functionality of the Program or a modified version available to third parties as a service, you must make the Service Source Code available via network download to everyone at no charge, under the terms of this License. Making the functionality of the Program or modified version available to third parties as a service includes, without limitation, enabling third parties to interact with the functionality of the Program or modified version remotely through a computer network, offering a service the value of which entirely or primarily derives from the value of the Program or modified version, or offering a service that accomplishes for users the primary purpose of the Software or modified version.

“Service Source Code” means the Corresponding Source for the Program or the modified version, and the Corresponding Source for all programs that you use to make the Program or modified version available as a service, including, without limitation, management software, user interfaces, application program interfaces, automation software, monitoring software, backup software, storage software and hosting software, all such that a user could run an instance of the service using the Service Source Code you make available.


MongoDB admit that this license is not currently open source in the sense of being approved by the Open Source Initiative, but say: We believe that the SSPL meets the standards for an open source license and are working to have it approved by the OSI.

At the broadest level, AGPL requires you to distribute the source code to the AGPLed work[1] while the SSPL requires you to distribute the source code to everything involved in providing the service. Having a license place requirements around things that aren't derived works of the covered code is unusual but not entirely unheard of - the GPL requires you to provide build scripts even if they're not strictly derived works, and you could probably make an argument that the anti-Tivoisation provisions of GPL3 fall into this category.

A stranger point is that you're required to provide all of this under the terms of the SSPL. If you have any code in your stack that can't be released under those terms then it's literally impossible for you to comply with this license. I'm not a lawyer, so I'll leave it up to them to figure out whether this means you're now only allowed to deploy MongoDB on BSD because the license would require you to relicense Linux away from the GPL. This feels sloppy rather than deliberate, but if it is deliberate then it's a massively greater reach than any existing copyleft license.

You can definitely make arguments that this is just a maximalist copyleft license, the AGPL taken to extreme, and therefore it fits the open source criteria. But there's a point where something is so far from the previously accepted scenarios that it's actually something different, and should be examined as a new category rather than already approved categories. I suspect that this license has been written to conform to a strict reading of the Open Source Definition, and that any attempt by OSI to declare it as not being open source will receive pushback. But definitions don't exist to be weaponised against the communities that they seek to protect, and a license that has overly onerous terms should be rejected even if that means changing the definition.

In general I am strongly in favour of licenses ensuring that users have the freedom to take advantage of modifications that people have made to free software, and I'm a fan of the AGPL. But my initial feeling is that this license is a deliberate attempt to make it practically impossible to take advantage of the freedoms that the license nominally grants, and this impression is strengthened by it being something that's been announced with immediate effect rather than something that's been developed with community input. I think there's a bunch of worthwhile discussion to have about whether the AGPL is strong and clear enough to achieve its goals, but I don't think that this SSPL is the answer to that - and I lean towards thinking that it's not a good faith attempt to produce a usable open source license.

(It should go without saying that this is my personal opinion as a member of the free software community, and not that of my employer)

[1] There's some complexities around GPL3 code that's incorporated into the AGPLed work, but if it's not part of the AGPLed work then it's not covered


NOTICE: Major problem with nrpe-3.2.1-6 in EPEL-7

Posted by Stephen Smoogen on October 16, 2018 07:28 PM
During the summer, I worked on updating nrpe to a newer version and made changes to the systemd startup to match the provided one. Part of this was adding PIDfile so that systemd could send signals and monitor the correct nrpe daemon as there had been bugs where systemctl was unable to restart the daemon.

I tested nrpe-3.2.1-6 on my systems and had no problems, and then put it in epel-testing for a couple of months waiting for some testing. This is where I made a mistake: I forgot about it, and I also did not thoroughly test nrpe updates from very old versions. My update tests had been with more recent versions, which had this line in the configuration:


pid_file = /var/run/nrpe/nrpe.pid

which made sure that my tests worked fine. The daemon started up and ran without problems, created the PID file in the correct place, and so on. However, if you had a configuration management system with an older template for the file, or had previously touched your /etc/nagios/nrpe.cfg, you will have problems: yum update will fail to restart nrpe, and other errors will occur.

One fix would be to update the config file to the newer version in the 3.2.x series, but that is not going to work for a lot of people.

I have worked with Andrea Veri on a functional change which allows systemctl to work properly without needing the pid_file. This is done by removing the PIDFile setting and making the startup a simple rather than a forking daemon. I have built nrpe-3.2.1-8 and it should show up in epel-testing in the next day or so.
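To illustrate the idea, here is a sketch of the kind of unit change involved; this is not necessarily the exact unit shipped in the package, and the nrpe flags are assumptions:

# Before: systemd tracks the forked daemon via a PID file
[Service]
Type=forking
PIDFile=/var/run/nrpe/nrpe.pid
ExecStart=/usr/sbin/nrpe -c /etc/nagios/nrpe.cfg -d

# After: nrpe stays in the foreground and systemd tracks it directly
[Service]
Type=simple
ExecStart=/usr/sbin/nrpe -c /etc/nagios/nrpe.cfg -f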

Please, please test this and see if it works. If it really works (i.e. it is still running 24 hours after the update), please add karma to it at:

https://bodhi.fedoraproject.org/updates/FEDORA-EPEL-2018-7f7330f37a 

Thank you.

How five Queen songs went mainstream in totally different ways

Posted by Justin W. Flory on October 16, 2018 09:00 AM

Originally published on the MusicBrainz blog.


Making graphs is easy. Making intuitive, easy-to-understand graphs? It’s harder than most people think. At the Rochester Institute of Technology, the ISTE-260 (Designing the User Experience) course teaches the language of design to IT students. For an introductory exercise in the class, students are tasked with visualizing any data set they desire. Students David Kim, Jathan Anandham, Justin W. Flory, and Scott Tinker used the MusicBrainz database to look at how five different Queen songs went mainstream in different ways.

Five factors of Queen

In our mini data science experiment, we decided to look at five unique data points available to us via MusicBrainz Works:

  • Number of recorded covers
  • Number of artists who covered a song
  • Release year
  • Year of last recorded cover
  • Time elapsed between release year and year of last recorded cover

Originally, we looked at songs from different artists, but decided to look at five recordings from the same artist. With Queen being a notoriously famous band, there were several data points to work with in terms of how often a song was covered.

[Image: Studying five Queen songs: Another One Bites the Dust, Bohemian Rhapsody, Don’t Stop Me Now, Fat Bottomed Girls, We Will Rock You]

Making sense of the data

A few explanations are necessary for some of the data, especially the difference in number of covers and number of artists. Don’t Stop Me Now, Fat Bottomed Girls, and We Will Rock You had the same number of recorded covers as number of artists who have covered the song. Why were Another One Bites the Dust and Bohemian Rhapsody different?

As it turns out, Another One Bites the Dust had more covers than the number of artists who have covered the song. This happens because some artists have covered the song twice (e.g. once on a studio release and another on a live recording release). On the other hand, Bohemian Rhapsody had more artists covering it than number of covers because some recordings featured multiple artists on the same cover (e.g. the 1992 live performance with Elton John and Axl Rose).

The data opens many interesting questions. Why have some songs persisted longer than others (in terms of recent covers)? Have these songs impacted culture and society in different ways? How have they permeated culture? Is there geographical bias in the data?

This exercise was an exploratory assignment, but we had fun visualizing it and ended up learning an interesting pattern in music data.

Check out the presentation and paper

If you’re interested in the full details, the slides and a short paper about the presentation are available online. They provide deeper context for the research and the visualization details based on different design concepts.

You can see what else David Kim, Jathan Anandham, Justin W. Flory, and Scott Tinker are up to on LinkedIn. Thanks for tuning in to this adventure into music data analysis, powered by MusicBrainz!


Photo by Matthias Wagner on Unsplash.

The post How five Queen songs went mainstream in totally different ways appeared first on Justin W. Flory's Blog.

CommOps takeaways from Flock 2018

Posted by Fedora Community Blog on October 16, 2018 08:30 AM

The annual Fedora contributor conference, Flock, took place from August 8-11, 2018. Several members of the Community Operations (CommOps) team were present for the conference. We also held a half-day team sprint for team members and interested people to participate and share feedback with the team.

This blog post summarizes some of the high-level takeaways and next steps for CommOps based on the feedback we received.

What we talked about

Our team sprint was significant for planning future goals and milestones for CommOps. Our sprint was a hands-on session split between Fedora Appreciation Week planning and exploring new questions to answer in fedmsg metrics.

  • Fedora Appreciation Week
    • Revisited timeline, updated roadmap (see latest comments in #110)
    • First draft of new roadmap in Etherpad
  • Metrics, fedmsg, and data analysis
    • Grimoire dashboard delayed – but how can we start answering questions now with tools we have today?
    • Interactive activity to generate ideas on most valuable / interesting metrics to review
    • Tentative plan on how to start

Deeper into metrics and data

The metrics discussion was the longest part of our session and also the least planned for. The CommOps session was intentionally unstructured to benefit from the people in the room. A fedmsg developer joined us for the first half of the session, and along with many others, we explored possibilities for questions we can answer about Fedora with today’s data and tools.

Reviewing Grimoire

An interactive Grimoire dashboard with graphs, charts, and other visualizations is still desired, but we lost our lead developer this year. We looked to see if it is possible to salvage the plan, and decided to put it on hold until we can find development interest and support for a Perceval fedmsg ingestion plugin.

Since Grimoire is blocked, the discussion drove towards answering data questions today with the tools we have now.

Brainstorming on sticky notes

What do you do when you’re in a room full of people with different backgrounds and interests, and you don’t know how to make things less awkward? Thankfully, sticky notes are convenient to get everyone present involved and draw out unique ideas because of the diversity in the room. Next, we distributed sticky notes and asked a broad question: what questions do you want answered about Fedora?

The responses were quite interesting and we had a lot of questions to sort through and arrange. We categorized and sorted similar ideas together and created three buckets, based on the buckets we identified at the CommOps 2018 FAD.

[Image: Scratch notes from brainstorming at the Fedora Project Community Operations (CommOps) team sprint at Flock 2018 in Dresden, Germany]

The three “buckets” identified were as follows:

  • Individual contributors
  • Teams / sub-projects
  • Overall Fedora Project community

Top metrics questions to explore

For each bucket, we identified a top question we thought was both interesting and valuable, and could also be answered with fedmsg tools we have in place today.

Question 1: Who contributes in a single, narrow place versus broadly and in many places?

Do we have many contributors in specific, narrow places versus contributors working in many different areas? If so, could we better connect the narrow places to other parts of the community that find their input valuable? An example is connecting a sub-project or team to something like the Fedora Magazine or CommBlog for improved visibility.

Question 2: What do drive-by contributors do?

Do we have many activities that are small (in terms of time needed to contribute) yet generate high levels of activity from drive-by contributors? Can we give some of these high-interest places more visibility across the wider community?

Thinking ahead to 2019

For the rest of 2018, our hands are full with Fedora Appreciation Week and other tasks (like migrating from our mailing list to Discourse). However, we started to brainstorm high priority tasks to look at in 2019.

  • Answer identified metrics questions from session
  • Begin writing on-boarding guide for publication

Better format for next time

One suggestion for next year’s Flock conference: we had better engagement when focusing on narrow, specific topics in our sub-project than when giving our session a broad, general “team planning” focus. We were better able to engage with everyone who decided to show up (since they had some sort of interest if they joined our session). When the scope was narrow and specific, people had several ideas and we opened an intersectionalist view on many of the things we’re doing (which is really what CommOps is all about).

We’re making progress toward wrapping up 2018 on a high note, and Flock provided an opportunity for us to work together and plan for the future. All CommOps contributors today are remote, so the precious time a few of us get to spend together is valuable for planning ahead for the long stretches when we are apart and limited to IRC, mailing lists, and other text communication.

We hope to see you next year at the CommOps session at Flock!

The post CommOps takeaways from Flock 2018 appeared first on Fedora Community Blog.

Contributing to OSP upstream a.k.a. Peer Review

Posted by Pablo Iranzo Gómez on October 16, 2018 05:32 AM

Introduction

In the article "Contributing to OpenStack" we covered how to prepare accounts and prepare your changes for submission upstream (and even how to find low-hanging fruit to start contributing).

Here, we'll cover what happens behind the scenes to get a change published.

Upstream workflow

Peer review

Upstream contributions to OSP and other projects are based on peer review; that means that once a new set of code has been submitted, several validation steps are required before it gets merged.

The last command executed (git-review) in the submit sequence (in the prior article) will effectively submit the patch to the defined git review service (git-review -s does the required setup process) and will print a URL that can be used to access the review.

Each project might have a different review platform, but usually for OSP it's https://review.openstack.org while for other projects it can be https://gerrit.ovirt.org, https://gerrithub.io, etc. (this is defined in the .gitreview file in the repository).

A sample .gitreview file looks like:

[gerrit]
host=review.gerrithub.io
port=29418
project=citellusorg/citellus.git

For a review example, we'll use one from GerritHub, from the Citellus project:

https://review.gerrithub.io/#/c/380646/

Here, we can see that we're looking at review 380646, and that's the link that allows us to check the submitted changes (the one printed when executing git-review).

CI tests (Verified +1)

Once a review has been submitted, the bots are usually the first ones to pick it up and run the defined unit tests on the new changes, to ensure that it doesn't break anything (based on what is defined to be tested).

This is a critical point as:

  • Tests need to be defined if new code is added or modified, to ensure that later updates don't break this new code without someone being aware.
  • The infrastructure should be able to test it (for example, you might need some specific hardware to test a card or network configuration).
  • The environment should be sane, so that prior runs don't affect the validation.

OSP CI can be checked at 'Zuul', http://zuul.openstack.org/, where you can input the number of your review and see how the different bots are running CI tests on it, or whether it's still queued.

If everything is OK, the bot will vote Verified +1 on your change, allowing others to see that it should not break anything based on the tests performed.

In the case of OSP, there are also third-party CIs that can validate changes on third-party systems. For some of them, the votes count towards or against the proposed change; for others it's just a comment to take into account.

Sometimes you know that your code is right, but there's a failure because of the infrastructure; in those cases, writing a new comment saying recheck will schedule a new CI test run.

This is common during busy periods, when it's harder for the scheduler to get available resources for the review validation. Also, sometimes there are errors in the CI configuration that must be fixed in order to validate those changes.

Note: you can run some of the tests on your own system to diagnose issues faster by running tox. This sets up virtual environments for the tests to run in, so it's easier to catch issues before upstream CI does (and it's always a good idea to run tox even before submitting the review with git-review, to detect errors early).

This is, however, not always possible, as some changes include requirements like testing upgrades, full environment deployments, etc. that cannot be met without the required preparation steps or even the infrastructure.

Code Review+2

This is probably the longest step. It requires peers to be added as reviewers (you can get an idea of the names based on other reviews submitted for the same component), or they will pick up new reviews as those pop up on notification channels or pending queues.

Here, you must prepare mentally for everything: developers could suggest a different approach, highlight other problems, or just leave small nit comments about fixes like formatting, spacing, variable naming, etc.

After each suggested comment or change, repeat the workflow for submitting a new patchset, but make sure you keep the same review ID (by keeping the Change-Id that is appended to the commit message): this allows the code review platform to identify this change as an update to a prior one, letting you for example compare changes across versions (and also notifying the prior reviewers of new changes).

Once reviewers are OK with your code, and with some core developers also agreeing, you'll see some voting happening (-2..+2), indicating whether they like the change in its current form or not.

Once you get Code Review +2, and with the prior Verified +1, you're almost ready to get the change merged.

Workflow+1

OK, the last step is to have someone with Workflow permissions give a +1. This 'seals' the change, saying that everything is OK (as it has CR +2 and Verified +1) and the change is valid.

This vote will trigger another build by CI, and when it finishes, the change will be merged into the upstream code. Congratulations!

Cannot merge, please rebase

Sometimes, your change touches the same files that other programmers changed in code merged before yours, so there's no way to automatically rebase the change. In this case the bad news is that you need to:

git checkout master # to change to the master branch
git pull # to pull the latest upstream changes
git checkout yourbranch # to get back to your change branch
git rebase master # to apply your changes on top of current master

After this step, it might be required to manually fix the code to resolve the conflicts and follow the instructions given by git to mark them as resolved.

Once that's done, remember to proceed as with any patchset you submitted before:

git commit --amend # to commit the new changes on the same commit Id you used
git-review # to upload a new version of the patchset

This will start the validation process over, but once completed, the change will be merged.

How do we do it with Citellus?

In Citellus we've replicated more or less what we have upstream, even the use of tox.

Citellus uses https://gerrithub.io (a free service that hooks into GitHub and allows doing peer review there).

We've set up a machine that runs Jenkins to do CI on the tests we've defined (mostly for the Python wrapper and some unit tests); what it effectively does is run tox. We also use the https://travis-ci.org free tier to repeat the same on another platform.

Tox is a tool that lets you define several commands that are executed inside Python virtual environments, so without touching your system libraries, new ones can be installed or removed just within the boundaries of that test. It helps run the following (a sketch of a matching tox.ini follows the list):

  • pep8 (Python formatting compliance)
  • py27 (python 2.7 environment test)
  • py35 (python 3.5 environment test)
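As referenced above, a minimal sketch of what a tox.ini along these lines might look like; the deps and test command are illustrative assumptions, not Citellus's actual configuration:

[tox]
envlist = pep8,py27,py35

[testenv]
deps = -r{toxinidir}/test-requirements.txt
commands = python -m unittest discover tests

[testenv:pep8]
deps = flake8
commands = flake8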

The py tests just validate that the code can run on both base Python versions; what they do is run the defined unit testing scripts under each interpreter.

For a local test, you can run tox and it will go through the different tests defined and report status. If everything is OK, your new code review should also pass CI.

Jenkins will do the +1 on Verified, and core reviewers will give +2 and merge the change once validated.

Hope you enjoy!

Pablo

PSA: System update fails when trying to remove rtkit-0.11-19.fc29

Posted by Kamil Páral on October 15, 2018 11:20 AM
Recently a bug in rtkit packaging has been fixed, but the update will fail on all Fedora 29 pre-release installations that have rtkit installed (Workstation has it for sure). The details and the workaround are described here:

 

Announcing Linux Autumn 2018

Posted by Rafał Lużyński on October 15, 2018 11:07 AM

Autumn

If you have ever wondered, like I have, whether there will be an autumn (the Linux Autumn) this year, then the answer is: yes.

Linux Autumn is an annual meeting of Free Software and Linux enthusiasts from Poland, organized since 2003, which means this year is its 16th edition. This year it takes place in Ustroń in southern Poland from 9 to 11 November. The town is the same as last year, but in a different hotel.

As the place is located near the Czech and Slovak border, we would like to invite more people, both speakers and attendees, from other countries. We are aware of the strong presence of Fedora contributors in Brno and other nearby cities just across the border.

This conference has always been mostly Polish (in terms of the language), but there has always been at least one foreign speaker who gave a talk in English. It has always been a chicken-and-egg problem: there are not many English talks because there are not many foreign attendees; on the other hand, there are not many foreign attendees because there are not many English talks. I think we will all be happy to change this. We already have one foreign speaker confirmed; others are in progress.

Currently the registration is open and the organizers are still accepting talk proposals; the CfP deadline has been extended to 19 October. Please hurry with your talk proposal!

If you don’t know what Linux Autumn is about, please see my articles about the event in 2017 and in 2016, or see the organizers’ website.

Fedora/RISC-V now mirrored as a Fedora “alternative” architecture

Posted by Richard W.M. Jones on October 15, 2018 09:48 AM

Fedora/RISC-V packages are now available at https://dl.fedoraproject.org/pub/alt/risc-v/repo/fedora/29/latest/. These packages get mirrored further by the Fedora mirror system, eg. to https://mirror.math.princeton.edu/pub/alt/risc-v/repo/fedora/29/latest/

If you grab the latest nightly Fedora builds you can get the mirrors by editing the /etc/yum.repos.d/*.repo file.
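A hypothetical sketch of what such a repo entry might look like; the section name, the layout under the URL, and the GPG settings are assumptions, so check the nightly images for the real file:

[fedora-riscv]
name=Fedora RISC-V
baseurl=https://mirror.math.princeton.edu/pub/alt/risc-v/repo/fedora/29/latest/
enabled=1
gpgcheck=0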

Also we got some additional help so we now have loads more build hosts! These were provided by Facebook with hosting by Oregon State University Open Source Lab (see cfarm), so thanks to them.

Thanks to David Abdurachmanov and Laurent Guerby for doing all the work (I did nothing).

Running Linux containers as a non-root with Podman

Posted by Fedora Magazine on October 15, 2018 08:00 AM

Linux containers are processes with certain isolation features provided by a Linux kernel — including filesystem, process, and network isolation. Containers help with portability — applications can be distributed in container images along with their dependencies, and run on virtually any Linux system with a container runtime.

Although container technologies have existed for a very long time, Linux containers were widely popularized by Docker. The word “Docker” can refer to several different things, including the container technology and tooling, the community around it, or the Docker Inc. company. However, in this article, I’ll be using it to refer to the technology and the tooling that manages Linux containers.

What is Docker

Docker is a daemon that runs on your system as root, and manages running containers by leveraging features of the Linux kernel. Apart from running containers, it also makes it easy to manage container images — interacting with container registries, storing images, managing container versions, etc. It basically supports all the operations you need to run individual containers.

But even though Docker is a very handy tool for managing Linux containers, it has two drawbacks: it is a daemon that needs to run on your system, and it needs to run with root privileges, which might have certain security implications. Both of those, however, are being addressed by Podman.

Introducing Podman

Podman is a container runtime providing very similar features to Docker’s. And as already hinted, it doesn’t require any daemon to run on your system, and it can also run without root privileges. So let’s have a look at some examples of using Podman to run Linux containers.

Running containers with Podman

One of the simplest examples could be running a Fedora container, printing “Hello world!” in the command line:

$ podman run --rm -it fedora:28 echo "Hello world!"

Building an image using the common Dockerfile works the same way as it does with Docker:

$ cat Dockerfile
FROM fedora:28
RUN dnf -y install cowsay

$ podman build . -t hello-world
... output omitted ...

$ podman run --rm -it hello-world cowsay "Hello!"

To build containers, Podman calls another tool called Buildah in the background. You can read a recent post about building container images with Buildah — not just using the typical Dockerfile.
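For reference, a sketch of the equivalent direct Buildah invocation (buildah bud is short for build-using-dockerfile):

$ buildah bud -t hello-world .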

Apart from building and running containers, Podman can also interact with container registries. To log in to a container registry, for example the widely used Docker Hub, run:

$ podman login docker.io

To push the image I just built, I first need to tag it so it refers to the specific container registry and my personal namespace, and then simply push it.

$ podman tag hello-world docker.io/asamalik/hello-world
$ podman push docker.io/asamalik/hello-world

By the way, have you noticed how I run everything as a non-root user? Also, there is no big fat daemon running on my system!

Installing Podman

Podman is available by default on Silverblue — a new generation of Linux Workstation for container-based workflows. To install it on any Fedora release, simply run:

$ sudo dnf install podman

Episode 118 - Cloudflare's IPFS and onion service

Posted by Open Source Security Podcast on October 15, 2018 01:39 AM
Josh and Kurt talk about Cloudflare's new IPFS and Onion services. One brings distributed blockchain files to the masses, the other lets you host your site on tor easily.


Show Notes

Cleaning up systemd journal logs on Fedora

Posted by Josef Strzibny on October 14, 2018 03:13 PM

systemd journal logs take a lot of space after a while. Let’s wipe them out!

First you might be interested in how much space the journal actually takes:

# journalctl --disk-usage
Archived and active journals take up 72.0M in the file system.

Now you know whether that’s too much or not. In case it is, use the --vacuum-size option to limit the size of the log (everything above that will be deleted). Here is me running the vacuum with a 10MB limit:

# journalctl --vacuum-size=10M
Vacuuming done, freed 0B of archived journals from /var/log/journal/d0c1c31ca63b4654a92792c004b69295.

As you can see, no space was freed up in my case. Why is that? Reading the man page reveals that running --vacuum-size= has only an indirect effect on the output shown by --disk-usage, as the latter includes active journal files, while the vacuuming operation only operates on archived journal files.

We also learn about the --vacuum-time option, which limits the vacuum by time (it can be combined with the previous option):

# journalctl --vacuum-size=10M --vacuum-time=2d
Deleted archived journal /var/log/journal/d0c1c31ca63b4654a92792c004b69295/user-1000@0005746cd1587966-78bd9d00691c4f53.journal~ (8.0M).
Vacuuming done, freed 8.0M of archived journals from /var/log/journal/d0c1c31ca63b4654a92792c004b69295.

Above I am deleting entries older than 2 days.

But what about those active files, you ask? We need to rotate the log first with journalctl --rotate:

# journalctl --rotate
# journalctl --vacuum-size=10M --vacuum-time=1s

Using --rotate in combination with 1s (retaining only logs from the last second) brings the disk usage down almost to zero; however, it probably won’t be exactly zero. If we want to be certain all log files are removed, we need to delete them manually from /var/log/journal. They always end with .journal. (I do not recommend removing them this way, but it’s the only way --disk-usage will show exactly 0B…)

After the cleanup we might want to prevent excessive log size in the future. For that we can look up the SystemMaxUse option in the /etc/systemd/journald.conf configuration file.

# cat /etc/systemd/journald.conf
...
SystemMaxUse=50M
...

50M will limit the size of logs to 50MB maximum.

After editing the journald.conf file, restart the systemd-journald service:

# systemctl restart systemd-journald.service

FEDORA WOMEN'S DAY 2018

Posted by Solanch69 on October 13, 2018 08:07 PM

On September 22, Fedora Women's Day was celebrated in Lima. It is an annual event that seeks to integrate women into the world of Free Software. This year, I had the opportunity to be one of the organizers; the venue was the Pontificia Universidad Catolica del Peru, Lima, Peru. Behind… Continue reading FEDORA WOMEN'S DAY 2018

Fedora 28 : Fix XFS internal error XFS_WANT_CORRUPTED_GOTO .

Posted by mythcat on October 13, 2018 08:00 PM
It is a common kernel error that appears under certain conditions:
...kernel: XFS internal error XFS_WANT_CORRUPTED_GOTO...
# xfs_repair -L /dev/mapper/fedora-root
# xfs_repair -L /dev/mapper/fedora-swap
# reboot
This will fix the error. Note that xfs_repair -L zeroes the metadata log and can lose recent changes, so run it only on an unmounted filesystem (e.g. from rescue media) and only when the log cannot be replayed.

NeuroFedora update: week 41

Posted by Ankur Sinha "FranciscoD" on October 13, 2018 05:11 PM

In week 41, we finally announced NeuroFedora to the community on the mailing list and on the Fedora Community Blog. So, it is officially a thing!

There is a lot of software available in NeuroFedora already. You can see the list here. If you use software that is not on our list, please suggest it to us using the suggestion form.

In week 41:

  • NEST was updated to version 2.16.0. It is in testing for both Fedora 28 and Fedora 29. They should both move to the stable repositories in a few days. This new version does not support 32 bit architectures, so I've had to drop support for those.
  • libneurosim has now been submitted for review. NEST must be built with libneurosim support for PyNN to work with it properly. So PyNN will have to wait until this review is approved and NEST rebuilt.

I am hoping to spend some time on NeuroFedora every week, and I will provide regular updates as I do. Feedback is always welcome. You can get in touch with us here.

-d in go get

Posted by Sayan Chowdhury on October 13, 2018 03:20 PM

Saturday, I am sitting at a Starbucks in Bangalore, trying my hand at a Golang project. I come across this -d argument in go get:

The go cli help says:

The -d flag instructs get to stop after downloading the packages; that is,
it instructs get not to install the packages.

Wonderful! So, if you just want to download a Golang project for the sake of contributing, you can use:

go get -d k8s.io/kubernetes

... and it will download the package for you, after which you can start working on the project.

Using ZRAM as swap on Fedora

Posted by Peter Robinson on October 13, 2018 12:33 PM

One of the changes I made for Fedora 29 was adding the use of ZRAM as swap on ARM. Using compressed RAM for swap on constrained single board computers has performance advantages because RAM is an order of magnitude faster than most of the attached storage; in the case of SD/eMMC and related flash storage it also saves on the wear and tear of the flash, thereby extending the life of the storage device.

The use of ZRAM as swap isn’t limited to constrained SBCs though; I also use it on my x86 laptop to great effect. It’s also very simple to set up.

# dnf install zram
# systemctl enable zram-swap.service
# reboot

And that’s it! Simple right? To see how it’s being used there are three commands that are useful:

# systemctl status zram-swap.service
● zram-swap.service - Enable compressed swap in memory using zram
   Loaded: loaded (/usr/lib/systemd/system/zram-swap.service; enabled; vendor preset: disabled)
   Active: active (exited) since Tue 2018-10-09 22:13:24 BST; 3 days ago
 Main PID: 1177 (code=exited, status=0/SUCCESS)
    Tasks: 0 (limit: 4915)
   Memory: 0B
   CGroup: /system.slice/zram-swap.service

Oct 09 22:13:24 localhost zramstart[1177]: Setting up swapspace version 1, size = 7.4 GiB (7960997888 bytes)
Oct 09 22:13:24 localhost zramstart[1177]: no label, UUID=d79b7cf6-41e7-4065-90a9-000811c654b4
Oct 09 22:13:24 localhost zramstart[1177]: Activated ZRAM swap device of 7961 MB
Oct 09 22:13:24 localhost systemd[1]: Started Enable compressed swap in memory using zram.
# swapon
NAME       TYPE      SIZE   USED PRIO
/dev/zram0 partition 7.4G 851.8M   -2
# zramctl
NAME       ALGORITHM DISKSIZE   DATA  COMPR  TOTAL STREAMS MOUNTPOINT
/dev/zram0 lz4           7.4G 848.3M 378.4M 389.9M       8 [SWAP]
#

When I was researching the use of ZRAM there was a lot of information online. A lot of implementations sliced the zram device into multiple slices to balance them across CPUs, but this is outdated information: the zram support in recent kernels is multi-threaded, so there’s no longer a performance advantage to having multiple smaller swap devices, and having a single larger swap space allows the kernel to use it more effectively.

In Fedora, all the pieces of the implementation are stored in the package source repo, so those interested in using zram for other use cases are free to test it. Bugs and RFEs can be reported as issues in Pagure or in RHBZ like any other package.

FPgM report: 2018-41

Posted by Fedora Community Blog on October 12, 2018 09:04 PM

Here’s your report of what has happened in Fedora Program Management this week. The Fedora 29 Final Go/No-Go and Release Readiness meetings are next week.

I’ve set up weekly office hours in #fedora-meeting. Drop by if you have any questions or comments about the schedule, Changes, elections or anything else.

Help requests

Announcements

Upcoming meetings

Fedora 29 Status

Fedora 30 Status

Fedora 30 includes a Change that will cause ambiguous python shebangs to error.  A list of failing builds is available on Taskotron.

Accepted changes

The post FPgM report: 2018-41 appeared first on Fedora Community Blog.

Ghost hard drive

Posted by Casper on October 12, 2018 08:05 PM
I was running commands at random (well, not really: I was trying to produce a list of the connected hard drives), when suddenly an anomaly appeared.

When I run ls /dev/sd*, I get the following output:

/dev/sda  /dev/sda1  /dev/sdb  /dev/sdb1  /dev/sdb2  /dev/sdb3  /dev/sdc

The problem is that I have only 2 hard drives connected over SATA, sda and sdb, identified with smartctl, and nothing else. So where does this sdc disk come from?

# smartctl -i /dev/sdc
smartctl 6.5 2016-05-07 r4318 [x86_64-linux-4.16.9-200.fc27.x86_64] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

/dev/sdc: Unknown USB bridge [0x05e3:0x0723 (0x9451)]
Please specify device type with the -d option.

Use smartctl -h to get a usage summary


Smartctl indicates that this is not a regular hard drive, so it is something else. The useful piece of information smartctl provided is the device identifier 0x05e3:0x0723, which will serve as a filter for grep.
A quick search through lspci turned up nothing. I then looked in lsusb, and oh joy:

# lsusb|grep 05e3
Bus 003 Device 003: ID 05e3:0723 Genesys Logic, Inc. GL827L SD/MMC/MS Flash Card Reader

And the mystery is solved! sdc is present even when the SD card reader is empty!
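Incidentally, lsblk can show the transport bus of each disk directly, which would have given the answer at a glance (nothing here is specific to my machine):

$ lsblk --nodeps -o NAME,TRAN,TYPE,SIZE    # card readers show up with TRAN=usb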

[F29] Take part in the test day dedicated to Modularity

Posted by Charles-Antoine Couret on October 12, 2018 08:20 AM

Today, Friday October 12, is a day dedicated to one specific test: Modularity. During the development cycle, the quality assurance team dedicates a few days to particular components or new features in order to surface as many problems on the subject as possible.

It also provides a list of specific tests to perform. You just have to follow them, compare your result with the expected result, and report it.

What does this test consist of?

Modularity is an outcome of the Fedora.next effort, begun in 2014. The goal is to decouple the life cycle of applications from that of Fedora, so that a Fedora user gains more flexibility. It becomes possible, for example, to choose a version of Python that is newer or older than the one available by default. Previously this was only possible by choosing another version of Fedora, which could be constraining.

Modules behave like additional repositories offering sets of packages meant to replace those in the official repositories. But of course, all of this is managed by the Fedora project with the same quality and the same maintainers.

The major change for Fedora 29 is that this feature is now available natively in the other editions of Fedora; until now only Server benefited from it.

For the moment Fedora offers a few modules, such as Docker, Django, NodeJS and the Go language.

Today's tests cover:

  • Listing the available and installed modules;
  • Installing a new module;
  • Enabling a new module;
  • Updating a module.

As you can see, these tests are fairly simple and should take only a few minutes; the commands involved are sketched below. You will of course need to install the modular repository first (the fedora-repos-modular package).
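For reference, the dnf commands these tests exercise look roughly like this (the nodejs module and its stream number are only an example):

$ sudo dnf install fedora-repos-modular    # enable the modular repository
$ dnf module list                          # list available and installed modules
$ sudo dnf module enable nodejs:10         # enable a module stream
$ sudo dnf module install nodejs:10        # install the module's packages
$ sudo dnf module update nodejs            # update an installed module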

How can you take part?

You can go to the test results page to list the available tests and report your results. The wiki page sums up how the day is organized.

If you need help while running the tests, feel free to drop by IRC for a hand on the #fedora-test-days and #fedora-fr channels (in English and French respectively) on the Freenode server.

If you hit a bug, you should report it on Bugzilla. If you don't know how, don't hesitate to consult the corresponding documentation.

And although a specific day is dedicated to these tests, you can still run them a few days later without any problem! The results will remain broadly relevant.

Kiwi TCMS team updates

Posted by Kiwi TCMS on October 12, 2018 08:10 AM

I am happy to announce that our team is steadily growing! As we work through our roadmap (status update here) and on-board new team members, I am starting to feel the need for a bit more structure and organization behind the scenes. I also wish for consistent contributions to the project (commit early, commit often) so I can better estimate the resources that we have!

I am also actively discussing Kiwi TCMS with lots of people at various conferences, generating many ideas for the future. The latest SEETEST in Belgrade was particularly fruitful. Some of these ideas are pulling in different directions and I need help to keep them under control!

Development-wise, I sometimes lose track of what's going on and who's doing what between working on Kiwi TCMS, preparing for conferences and venues to promote the project, doing code review for other team members, trying not to forget to check in on progress (especially by interns), recruiting fresh blood and thinking about the overall future of the project. Our user base is growing and there are days when I feel like everything is happening at once, or that something needs to be implemented ASAP (which is usually true anyway)!

Meet Rayna Stankova in the role of our team coach! Reny is a director for Women Who Code Sofia, a senior QA engineer at VMware, a mentor with CoderDojo Bulgaria and a long-time friend of mine. Although she is an experienced QA in her own right, she will be contributing to the people side of Kiwi TCMS and less so technically!

Her working areas will be planning and organization:

  • help us (re)define the project vision and goals
  • work with us on roadmaps and action plans so we can meet the project goals faster
  • help us (self) organize so that we are more efficient, including checking progress and blockers (aka enforcer) and meet the aforementioned consistency point
  • serve as our professional coach, motivator and somebody who will take care of team health (yes I really suck at motivating others)

and generally serving as another very experienced member of the team!

We did a quick brainstorming yesterday and started to produce results (#smileyface)! We do have a team docs space to share information (non-public for now, will open it gradually as we grow) and came up with the idea to use Kiwi TCMS as a check-list for our on-boarding/internship process!

I don't know how it will play out but I do expect from the team to self-improve, be inspired, become more focused and more productive! All of this also applies to myself, even more so!

Existing team members progress

Last year we started with 2 existing team members (Tony and myself) and 3 new interns (Ivo, Kaloyan and Tseko) who built this website!

Tony is the #4 contributor to Kiwi TCMS in terms of number of commits and is on track to surpass one of the original authors (before Kiwi TCMS was forked)! He's been working mostly on internal refactoring and resolving the thousands of pylint errors that we had (down to around 500 I think). This summer Tony and I visited the OSCAL conference in Tirana and hosted an info booth for the project.

Ivo is the #5 contributor in terms of number of commits. He learned very quickly and is working on getting rid of the remaining pylint errors. His ability to adapt and learn is actually quite impressive. Last month he co-hosted a git workshop at HackConf, a 1000+ person IT event in Sofia.

Kaloyan did most of the work on our website initially (IIRC). He is now studying in the Netherlands and is not active on the project. We are working to reboot his on-boarding, and I hope he will find the time to contribute to Kiwi TCMS regularly.

From the starting team only Tseko decided to move on to other ventures after he contributed to the website.

Internship program

At Kiwi TCMS we have a set of training programs that teach all the necessary technical skills before we let anyone actively work on the project, let alone become a team member.

Our new interns are Denitsa Uzunova and Desislava Koleva. Both of them come from the Vratsa Software Community and were mentors at the recently held CodeWeek hackathon in their home city! I wish them fast learning and good luck!

Happy testing!

Command line quick tips: Reading files different ways

Posted by Fedora Magazine on October 12, 2018 08:00 AM

Fedora is delightful to use as a graphical operating system. You can point and click your way through just about any task easily. But you’ve probably seen there is a powerful command line under the hood. To try it out in a shell, just open the Terminal application in your Fedora system. This article is one in a series that will show you some common command line utilities.

In this installment you’ll learn how to read files in different ways. If you open a Terminal to do some work on your system, chances are good that you’ll need to read a file or two.

The whole enchilada

The cat command is well known to terminal users. When you cat a file, you’re simply displaying the whole file on the screen. Really, what’s happening under the hood is that the file is read one line at a time, and each line is written to the screen.

Imagine you have a file with one word per line, called myfile. To make this clear, the file will contain the word equivalent for a number on each line, like this:

one
two
three
four
five

So if you cat that file, you’ll see this output:

$ cat myfile
one
two
three
four
five

Nothing too surprising there, right? But here’s an interesting twist. You can also cat that file backward. For this, use the tac command. (Note that Fedora takes no blame for this debatable humor!)

$ tac myfile
five
four
three
two
one

The cat command also lets you ornament the output in different ways, in case that’s helpful. For instance, you can number lines:

$ cat -n myfile
     1 one
     2 two
     3 three
     4 four
     5 five

There are additional options that will show special characters and other features. To learn more, run the command man cat, and when done just hit q to exit back to the shell.

Picking over your food

Often a file is too long to fit on a screen, and you may want to be able to go through it like a document. In that case, try the less command:

$ less myfile

You can use your arrow keys as well as PgUp/PgDn to move around the file. Again, you can use the q key to quit back to the shell.

There’s also a more command, an older UNIX pager that less improves upon. If it’s important to you to still see the file’s contents when you’re done, you might want to use it. The less command brings you back to the shell the way you left it, and clears the display of any sign of the file you looked at.

Just the appetizer (or dessert)

Sometimes the output you want is just the beginning of a file. For instance, the file might be so long that when you cat the whole thing, the first few lines scroll past before you can see them. The head command will help you grab just those lines:

$ head -n 2 myfile
one
two

In the same way, you can use tail to just grab the end of a file:

$ tail -n 3 myfile
three
four
five
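You can also combine these commands to pull lines from the middle of a file: take the first few lines with head, then keep the last lines of those with tail. With the sample file above, this yields lines three and four:

$ head -n 4 myfile | tail -n 2
three
four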

Of course these are only a few simple commands in this area. But they’ll get you started when it comes to reading files.

PHP version 7.1.23 and 7.2.11

Posted by Remi Collet on October 11, 2018 03:24 PM

RPM of PHP version 7.2.11 are available in the remi repository for Fedora 28-29 and in the remi-php72 repository for Fedora 26-27 and Enterprise Linux ≥ 6 (RHEL, CentOS).

RPM of PHP version 7.1.23 are available in remi repository for Fedora 26-27 and in remi-php71 repository for Enterprise Linux (RHEL, CentOS).

No security fixes this month, so there are no updates for versions 5.6.38 and 7.0.32.

PHP version 5.5 has reached its end of life and is no longer maintained by the PHP project.

These versions are also available as Software Collections in the remi-safe repository.

Version announcements:

Installation: use the Configuration Wizard and choose your version and installation mode.

Replacement of default PHP by version 7.2 installation (simplest):

yum-config-manager --enable remi-php72
yum update php\*

Parallel installation of version 7.2 as Software Collection (x86_64 only):

yum install php72

Replacement of default PHP by version 7.1 installation (simplest):

yum-config-manager --enable remi-php71
yum update

Parallel installation of version 7.1 as Software Collection (x86_64 only):

yum install php71

And soon in the official updates:

To be noted:

  • EL7 rpms are built using RHEL-7.5
  • EL6 rpms are built using RHEL-6.10
  • a lot of new extensions are also available, see the PECL extension RPM status page

For more information, read:

Base packages (php)

Software Collections (php56 / php70 / php71 / php72)

Moving away from the 1.6 freedesktop runtime

Posted by Alexander Larsson on October 11, 2018 02:08 PM

A flatpak runtime contains the basic dependencies that an application needs. It is shared by applications so that application authors don’t have to bother with complicated low-level dependencies, but also so that these dependencies can be shared and get shared updates.

Most flatpaks these days use the freedesktop runtime or one of its derivatives (like the Gnome and KDE runtimes). Historically, these have been using the 1.6 version of the freedesktop runtime, which is based on Yocto.

The 1.6 runtime served well to kickstart flatpak and flathub, but it is getting quite long in the tooth. We still fix security issues in it now and then, but it has not seen a lot of maintenance recently. Additionally, not a lot of people know enough Yocto to work on it, so we were never able to build a larger community around it.

However, earlier this summer a complete reimplementation, version 18.08, was announced, and starting with version 3.30 the Gnome runtime is now based on it as well, with a KDE version in the works. This runtime is built with BuildStream, making it much easier to work with, which has resulted in a much larger team working on it. Partly this is due to the awesome fact that Codethink has several people paid to work on this, but there is also lots of community support.

The result is a better supported, easier to maintain runtime with more modern content. What we need to do now is to phase out the old runtime and start using the new one in apps.

So, this is a call to action!

Anyone who maintains a flatpak application, especially on flathub, please try to move to a runtime based on 18.08. And if you have any problems, please report them to the upstream freedesktop-sdk project.
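For most applications the move amounts to bumping two fields in the flatpak-builder manifest; a minimal sketch (the app id and command here are hypothetical):

{
    "app-id": "org.example.MyApp",
    "runtime": "org.freedesktop.Platform",
    "runtime-version": "18.08",
    "sdk": "org.freedesktop.Sdk",
    "command": "myapp"
}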

Introducing SkipTheLine

Posted by Sayan Chowdhury on October 11, 2018 11:51 AM

2013


The year I graduated from a third-tier college in Durgapur. I did not expect much from the college, mostly because of its poor placement record.

To get myself a job, I was primarily going through hasjob.co and any job opportunities that came through Bangpypers mailing list.

Finally, I landed an internship, which later turned into a full-time role at HackerEarth, through hasjob.co.


2018

5 years later, finding a job at a suitable company is still tough. In the last 5 years, I have come to know about more options, be it AngelList, Hasjob, StackOverflow, Facebook job groups, etc.

But the problem, I think, is still the same: even if you apply through these websites, a lot of the time you don't get a reply from the company you applied to. On the other hand, referrals work. I've referred good people to companies I know. This way, there is a chance of getting an interview at the company. Cracking the interview is a different story altogether, which I believe depends a lot on the candidate.

Recently, I got to know that Prashant, one of my school seniors and a close friend, who is famously known in the tech community for his "Bitcoin Wedding", started an effort called "SkipTheLine".

SkipTheLine is a newsletter where Prashant publishes the profiles of three developers. These developers are from the community; they are active in open source, or competitive programming, or just good at the technologies they work on. He then introduces the developers to each company's point of contact via email, and the candidate and the POC take the discussion forward.

I personally love the initiative he took, as at the end of the day if people I know come asking for a job referral I can just direct them to SkipTheLine. Prashant has quite a strong standing in the startup community and does great work connecting folks with some really good startups across the country.

I personally know people who got hired within a few days of their newsletter being published, so if you are looking for a job, do fill out the SkipTheLine form.

If you are looking to hire, do drop me an email at gmail AT yudocaa DOT in.

Modularity Test Day 2018-10-12

Posted by Fedora Community Blog on October 11, 2018 09:31 AM
Fedora 29 Modularity Test Day

Friday, 2018-10-12 is the Fedora 29 Modularity Test Day!
We need your help to test if everything runs smoothly!

Why Modularity Test Day?

Many of you will have read the amazing article which came out a few months ago!
Modularity is one of the major Changes[1] in Fedora 29, and we will be testing to make sure that all the functionality performs as it should.
Modularity is testable today on any Workstation, Lab or Spin, and we will focus on testing the functionality.
It’s also pretty easy to join in: all you’ll need is Fedora 29 (which you can grab from the wiki page).

We need your help!

All the instructions are on the wiki page, so please read through and come help us test! As always, the event will be in #fedora-test-day on Freenode IRC.

Share this!

Help promote the Test Day and share the article in your own circles! Use any of the buttons below to help spread the word.

The post Modularity Test Day 2018-10-12 appeared first on Fedora Community Blog.

Updated packages of varnish-6.0.1 with matching vmods, for el6 and el7

Posted by Ingvar Hagelund on October 11, 2018 09:24 AM

Recently, the Varnish Cache project released an updated upstream version 6.0.1 of Varnish Cache. This is a maintenance and stability release of varnish 6.0. I have updated the fedora rawhide package, and also updated the varnish 6.0 copr repo with packages for el6 and el7 based on the fedora package. A selection of matching vmods is also included in the copr repo.

Packages are available at https://copr.fedorainfracloud.org/coprs/ingvar/varnish60/
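To try them on el7, something like the following should work (a sketch, assuming the yum copr plugin is available there):

$ sudo yum install yum-plugin-copr
$ sudo yum copr enable ingvar/varnish60
$ sudo yum install varnish varnish-modules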

The following vmods are available:

Included in varnish-modules:
vmod-bodyaccess
vmod-cookie
vmod-header
vmod-saintmode
vmod-tcp
vmod-var
vmod-vsthrottle
vmod-xkey

Packaged separately:
vmod-curl
vmod-digest
vmod-geoip
vmod-memcached
vmod-querystring
vmod-uuid

Please test and report bugs. If there is enough interest, I may consider pushing these to fedora as well.

Varnish Cache is a powerful and feature rich front side web cache. It is also very fast, and that is, fast as in powered by The Dark Side of the Force. On steroids. And it is Free Software.

Redpill Linpro is the market leader for professional Open Source and Free Software solutions in the Nordics, though we have customers from all over. For professional managed services, all the way from small web apps, to massive IPv4/IPv6 multi data center media hosting, and everything through container solutions, in-house, data center, and cloud, contact us at www.redpill-linpro.com.

Kiwi TCMS winter conference presence

Posted by Kiwi TCMS on October 11, 2018 08:53 AM

We are happy to announce that OpenFest, the biggest open source conference in Bulgaria, has provided an info booth for our project. This year the event will be held on the 3rd and 4th of November at Sofia Tech Park!

Last time the team went to a conference together we had a lot of fun! Join us at OpenFest to learn more about Kiwi TCMS and have fun with us!


In case you are unable to visit Sofia, which you totally should, you can catch up with us in Russia until the end of the year:

Feel free to ping us at @KiwiTCMS or look for the kiwi bird logo and come to say hi. Happy testing!

Fedora Women’s Day 2018 – Mexico City

Posted by Fedora Community Blog on October 11, 2018 08:30 AM
Fedora Women's Day 2018 event report

Fedora Women’s Day (FWD) is a day to celebrate and bring visibility to female contributors in open source projects, including Fedora. The initiative is led by Fedora’s Diversity and Inclusion team. The number of women in tech has been increasing year over year, further highlighting the importance of a more inclusive culture in tech.

On September 21, we had our first Fedora Women’s Day at UAM Azcapotzalco (Mexico City), and we loved doing it.

<figure class="wp-block-image">Taken during the first Fedora Women's Day at UAM Azcapotzalco (Mexico City)<figcaption>Taken during the first Fedora Women’s Day at UAM Azcapotzalco (Mexico City)</figcaption></figure>

Activities

The agenda was:

<figure class="wp-block-image">Vivana Nava presenting on Women in Tech during Fedora Women's Day 2018 at UAM Azcapotzalco (Mexico City)<figcaption>Vivana Nava presenting on Women in Tech during Fedora Women’s Day 2018 at UAM Azcapotzalco (Mexico City)</figcaption></figure>

Fedora Women’s Day in numbers

16 attendees

  • 14 women.
  • 2 men.
  • 8 pizzas.

All the attendees are in Science, Technology, Engineering, and Math (STEM) Careers

<figure class="wp-block-image">chart 2</figure>

Thanks D&I Team

The girls who participated in this event are very grateful for this initiative and want to host the event again next year. Thanks!

<figure class="wp-block-image">Group photo with all attendees at end of Fedora Women's Day 2018 event in Mexico City<figcaption>Group photo with all attendees at end of Fedora Women’s Day 2018 event in Mexico City</figcaption></figure>

Photo by Arièle Bonte on Unsplash

The post Fedora Women’s Day 2018 – Mexico City appeared first on Fedora Community Blog.

Report from the Embedded and Kernel Recipes 2018

Posted by Charles-Antoine Couret on October 10, 2018 09:12 PM

I had already mentioned that I would be attending the Embedded and Kernel Recipes 2018.

Kernel-recipes-entry.jpg

It was an enriching, if rather packed, experience. The talks followed one another and the breaks gave rise to many instructive conversations. I really liked the format: there was no running around or choosing between two talks, since everything is designed around a single room and a single topic, and you can then discuss the talks you have seen, including with the speaker. Being relatively few (about a hundred) makes the exchanges, and the smooth running of the event, much easier.

Mozilla's offices were indeed superb, even if it was a bit chilly overall. Too bad they are leaving the premises soon. The setup for the conference was of really good quality. I understand better now why this room has been used for so many events.

It was an opportunity to meet quite a few people, including some I already knew, such as a former colleague from my Marseille days and Benjamin Tissoire, co-maintainer of the kernel input subsystem. A few kernel personalities were present, of course. And being in Paris, we were able to grab a bite with some Fedora contributors. Again, that was very pleasant.

Draw-Embedded-Recipes-Couret.jpg

I was able to give my talk on updating embedded systems in good conditions. An artist present during the event even drew a nice portrait of it, very amusing and original. The video of the talk, a first for me in English, is available here.

We also received a board from Libre Computer (Le Potato), which is always appreciated. Having no personal use for it (I have an RPi 1 lying in a drawer), my workplace took it for workshops and other tests.

It really was a very good week. Thanks to the organizers for their work, of impeccable quality. The atmosphere, the comfort, but also the quality of the topics covered: it was all very good. Thanks also to my employer for giving me this opportunity. It makes you want to go back; it is a pity that it is so hard to get a spot, for lack of free seats, but that is also what makes this event so special.

Couret-Swupdate-Conf.jpeg

Until next time, I hope.

Fedora 28: Testing Blender 2.80

Posted by mythcat on October 10, 2018 08:09 AM
I tested the new Blender 2.80 alpha 2 and it is working well.
You can download it from the official download page.
The next step: extract the tar.bz2 archive and run blender from the newly created folder.
I tried to create a package, but it seems the tool does not work with the .spec file.
Here is a screenshot of Blender 3D running on my Fedora 28.
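For reference, the manual steps come down to this (the tarball name is illustrative; use whatever the download page gives you):

$ tar xjf blender-2.80-alpha2-linux-glibc224-x86_64.tar.bz2   # extract the archive
$ cd blender-2.80-alpha2-linux-glibc224-x86_64
$ ./blender                                                   # run it from the new folder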

Design faster web pages, part 1: Image compression

Posted by Fedora Magazine on October 10, 2018 08:00 AM

Lots of web developers want to achieve fast loading web pages. As more page views come from mobile devices, making websites look better on smaller screens using responsive design is just one side of the coin. Browser Calories can make the difference in loading times, which satisfies not just the user but search engines that rank on loading speed. This article series covers how to slim down your web pages with tools Fedora offers.

Preparation

Before you start to slim down your web pages, you need to identify the core issues. For this, you can use Browserdiet. It’s a browser add-on available for Firefox, Opera, Chrome, and other browsers. It analyzes the performance values of the currently open web page, so you know where to start slimming down.

Next you’ll need some pages to work on. The example screenshot shows a test of getfedora.org. At first it looks very simple and responsive.

Browser Diet – values of getfedora.org

However, BrowserDiet’s page analysis shows there are 1.8MB in files downloaded. Therefore, there’s some work to do!

Web optimization

There are over 281 KB of JavaScript files, 203 KB more in CSS files, and 1.2 MB in images. Start with the biggest issue — the images. The tool set you need for this is GIMP, ImageMagick, and optipng. You can easily install them using the following command:

sudo dnf install gimp imagemagick optipng

For example, take the following file which is 6.4 KB:

First, use the file command to get some basic information about this image:

$ file cinnamon.png
cinnamon.png: PNG image data, 60 x 60, 8-bit/color RGBA, non-interlaced

The image — which is only in grey and white — is saved in 8-bit/color RGBA mode. That’s not as efficient as it could be.

Start GIMP so you can set a more appropriate color mode. Open cinnamon.png in GIMP. Then go to Image>Mode and set it to greyscale. Export the image as PNG with compression factor 9. All other settings in the export dialog should be the default.

$ file cinnamon.png
cinnamon.png: PNG image data, 60 x 60, 8-bit gray+alpha, non-interlaced

The output shows the file’s now in 8bit gray+alpha mode. The file size has shrunk from 6.4 KB to 2.8 KB. That’s already only 43.75% of the original size. But there’s more you can do!

You can also use the ImageMagick tool identify to provide more information about the image.

$ identify cinnamon.png
cinnamon.png PNG 60x60 60x60+0+0 8-bit Grayscale Gray 2831B 0.000u 0:00.000

This tells you the file is 2831 bytes. Jump back into GIMP, and export the file. In the export dialog disable the storing of the time stamp and the alpha channel color values to reduce this a little more. Now the file output shows:

$ identify cinnamon.png
cinnamon.png PNG 60x60 60x60+0+0 8-bit Grayscale Gray 2798B 0.000u 0:00.000

Next, use optipng to losslessly optimize your PNG images. There are other tools that do similar things, including advdef (which is part of advancecomp), pngquant and pngcrush.

Run optipng on your file. Note that this will replace your original:

$ optipng -o7 cinnamon.png 
** Processing: cinnamon.png
60x60 pixels, 2x8 bits/pixel, grayscale+alpha
Reducing image to 8 bits/pixel, grayscale
Input IDAT size = 2720 bytes
Input file size = 2812 bytes

Trying:
 zc = 9 zm = 8 zs = 0 f = 0 IDAT size = 1922
 zc = 9 zm = 8 zs = 1 f = 0 IDAT size = 1920
 
Selecting parameters:
 zc = 9 zm = 8 zs = 1 f = 0 IDAT size = 1920

Output IDAT size = 1920 bytes (800 bytes decrease)
Output file size = 2012 bytes (800 bytes = 28.45% decrease)

The option -o7 is the slowest to process, but provides the best end results. You’ve knocked 800 more bytes off the file size, which is now 2012 bytes.

To optimize all of the PNGs in a directory, use this command:

$ optipng -o7 -dir=<directory> *.png

The -dir option lets you give a target directory for the output. If this option is not used, optipng overwrites the original images.

Choosing the right file format

When it comes to pictures for use on the internet, you have the choice between several formats, chiefly PNG and JPG.

JPG-LS and JPG 2000 are not widely used. Only a few digital cameras support these formats, so they can be ignored. aPNG is an animated PNG, and not widely used either.

You could save a few more bytes by changing the compression rate or choosing another file format. The first option isn’t available in GIMP, as it’s already using the highest compression rate. Since there are no alpha channels in the picture, you can choose JPG as the file format instead. For now, use the default value of 90% quality; you could lower it to 85%, but then aliasing effects become visible. This saves a few more bytes:

$ identify cinnamon.jpg
cinnamon.jpg JPEG 60x60 60x60+0+0 8-bit sRGB 2676B 0.000u 0:00.000

This conversion to the right color space, plus choosing JPG as the file format, alone brought the file size down from 23 KB to 12.3 KB, a reduction of nearly 50%.

PNG vs. JPG: quality and compression rate

So what about the rest of the images? This method would work for all the other pictures, except the Fedora “flavor” logos and the logos for the four foundations. Those are presented on a white background.

One of the main differences between PNG and JPG is that JPG has no alpha channel, so it can’t handle transparency. If you rework these images as JPGs on a white background, you can reduce the file size from 40.7 KB to 28.3 KB; a sketch of this rework is shown below.
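With ImageMagick, flattening onto a white background and converting can be done in one command (the file names here are hypothetical):

$ convert logo.png -background white -flatten -quality 90 logo.jpg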

Now there are four more images you can rework: the backgrounds. For the grey background, set the mode to greyscale again. With this bigger picture, the savings are also bigger. It shrinks from 216.2 KB to 51.0 KB, now barely 25% of its original size. All in all, you’ve shrunk 481.1 KB down to 191.5 KB, only 39.8% of the starting size.

Quality vs. Quantity

Another difference between PNG and JPG is quality. PNG is a losslessly compressed raster graphics format. JPG, by contrast, shrinks the file through lossy compression, which affects quality. That doesn’t mean you shouldn’t use JPG, though. But you have to find a balance between file size and quality.

Achievement

This is the end of Part 1. After following the techniques described above, here are the results:

You brought the image payload down to 488.9 KB, versus 1.2 MB at the start. That’s only about a third of the size, just through optimizing with optipng. This page can probably be made to load faster still. On the scale from snail to hypersonic, it hasn’t reached racing car speed yet!

Finally you can check the results in Google Insights, for example:

In the Mobile area the page gained 10 points in scoring, but is still in the Medium sector. It looks totally different for Desktop, which went from 62/100 to 91/100, up into Good. As mentioned before, this test isn’t the be-all and end-all. Consider scores such as these to help you go in the right direction. Keep in mind you’re optimizing for the user experience, not for a search engine.

Bodhi 3.10.1 released

Posted by Bodhi on October 09, 2018 08:23 PM

This release fixes a crash while composing modular repositories (#2631).

NeuroFedora SIG: Call For Participation

Posted by Fedora Community Blog on October 09, 2018 08:17 PM

I’ve recently resurrected the NeuroFedora SIG. Many thanks to Igor and the others who worked on it in the past and have given us a firm base.

NeuroFedora: The goal

The (current) goal of the NeuroFedora SIG is to make Fedora an easy-to-use platform for neuroscientists.

Neuroscience is an extremely multidisciplinary field. It brings together mathematicians, chemists, biologists, physicists, psychologists, engineers (electrical and others), computer scientists and more. A lot of software is used in neuroscience nowadays:

  • data collection, analysis, and sharing
  • lots of image processing (a lot of ML is used here, think Data Science)
  • simulation of brain networks (https://neuron.yale.edu/neuron/, http://nest-simulator.org/)
  • dissemination of scientific results (peer reviewed and otherwise, think LaTeX)

Neuroscience isn’t just about understanding how the brain functions; we also want to understand how it processes information, how it “computes”. (Some of you will already be aware of the Human Brain Project, a flagship EU project.) Now, given that a large proportion of neuroscientists are not trained in computer science, a lot of time and effort is spent setting up systems and installing software (often from source). This can be hard for people not well-versed in build systems and so on. So, at NeuroFedora, we will try to provide a ready-to-use Fedora based system for neuroscientists to work with, so they can quickly get their environment set up and work on the science.

Please join us!

If you are interested in neuroscience, please consider joining the SIG. Packaging software is only one way in which one can contribute. Writing docs and answering questions about the software in NeuroFedora are other ways too, for example. You can get in touch with us here.

What is in it for you?

In general, it will increase your awareness of neuroscience (which is a fascinating field—but of course, I am biased). We also hope to use the Fedora classroom sessions to host beginner level classes on using the software we package. If you’d like to get into neuroscience research work, it is an excellent opportunity to learn.

Fedora and Science

In general, furthering Open Science is quite in line with our goal of furthering FOSS; Open Science shares the philosophy of FOSS. The data, the tools, and the results should be accessible to all to understand, use, learn from, and develop. I’ve just written to the Mindshare team asking if we can get the various science-related SIGs together and do more. You can find my e-mail here.

Comments/suggestions/feedback/questions are all welcome!

NeuroFedora logo designed by Terezahl from the Fedora Design team

(This is based on an e-mail that was initially sent to the devel mailing list).

The post NeuroFedora SIG: Call For Participation appeared first on Fedora Community Blog.

Fedora at LinuxDays 2018 in Prague

Posted by Jiri Eischmann on October 09, 2018 01:53 PM

LinuxDays, the biggest Linux event in the Czech Republic, took place at the Faculty of Information Technology of the Czech Technical University in Prague. The number of registered attendees was a bit lower this year, possibly because of the municipal and senate elections happening on Friday and Saturday, but it still got close to the 1300 mark.

Besides a busy schedule of talks and workshops, the conference also has a pretty large booth area, and as every year I organized the Fedora one. I drove to Prague with Carlos Soriano and Felipe Borges from the Red Hat desktop team on Saturday morning, and we were joined by František Zatloukal (Fedora QA) at the booth.

<figure class="wp-caption alignnone" data-shortcode="caption" id="attachment_1559" style="width: 1354px">linuxdays-fedora<figcaption class="wp-caption-text">František and me at the booth.</figcaption></figure>

Our focus this year was Silverblue and Modularity. I prepared one laptop with an external monitor to showcase Silverblue, the atomic version of Fedora Workstation. I must say that people’s interest in Silverblue surprised me. There were even some coming back the next day saying: “It sounded so cool yesterday and I couldn’t resist and installed it when I got home and played with it in the night…” With Silverblue comes distribution of applications in Flatpak, and there was a lot of user interest in this direction as well.

<figure class="wp-caption aligncenter" data-shortcode="caption" id="attachment_1560" style="width: 179px">DSC_0563<figcaption class="wp-caption-text">Reasons to use Fedora.</figcaption></figure>

I was hoping for more interest in Modularity, but people don’t seem to be so aware of it. It doesn’t have the same reach outside the Fedora Project as Flatpak does, and it’s not so easy to explain its benefits and use cases. We as a project have to do a better job selling it.

The highlight of Saturday came when one of the sysadmins at the National Library of Technology, which is on the same campus, took us to the library to show us the public computers running Fedora Workstation. It’s 120 computers with over 1000 users (in the last 90 days). Those computers serve a very diverse group of users, from elderly people to computer science students. And they have received very few complaints since they switched from Windows to Fedora. They’ve also hit almost no problems as sysadmins. They only mentioned one corner-case bug in GDM, which we promised to look into.

<figure class="wp-caption alignnone" data-shortcode="caption" id="attachment_1561" style="width: 1354px">linuxdays-ntk<figcaption class="wp-caption-text">Carlos and Felipe checking out Fedora in the library.</figcaption></figure>

It was also interesting to see the setup. They authenticate users against AD using the SSSD client and mount /home from a remote server using NFS. They enable several GNOME Shell extensions by default: AlternateTab (because of Windows users), Places (to show the Places menu next to Activities)… They also created one custom extension that replaces the “Power Off” button with a “Log Out” button in the user menu, because users are not supposed to power the computers off. They also generate very useful stats of application usage from the “recently-used” XML files that GNOME creates for the menu of frequently used applications. All computers are administered using Ansible scripts; a sketch of two of those pieces follows.
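None of this is exotic. For instance, the NFS mount is one line in /etc/fstab, and the default extensions can be set with gsettings (the server name and the exact extension set below are hypothetical):

# /etc/fstab: mount home directories from a central NFS server
nfs.example.org:/export/home  /home  nfs  defaults,_netdev  0 0

$ gsettings set org.gnome.shell enabled-extensions "['places-menu@gnome-shell-extensions.gcampax.github.com', 'alternate-tab@gnome-shell-extensions.gcampax.github.com']"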

<figure class="wp-caption alignnone" data-shortcode="caption" id="attachment_1562" style="width: 1200px">Dj_-sJtW0AY6u0Q<figcaption class="wp-caption-text">Default wallpaper with instructions.</figcaption></figure>

The only talk I attended on Saturday was “Why and How I Switched to Flatpak for App Distribution and Development in Sandbox” by Jiří Janoušek, who develops Nuvola apps. It was an interesting talk, and thanks to his experience developing and distributing apps on Linux, Jiří was able to name and describe all the problems with app distribution on Linux and explain why Flatpak helps.

On Sunday, we organized a workshop to teach how to build flatpaks. It was the only disappointment of the weekend. Only 3 people showed up, and none of them really needed to learn to build flatpaks. We’ll run the same workshop at OpenAlt in Brno, and if the attendance is also low, we’ll know that a workshop aimed primarily at app developers is not a good fit for such conferences. But it was not a complete waste of time; we discussed some questions around Flatpak and worked on flatpaking applications. The result is GNOME Recorder already available in Flathub and Datovka in the review process.

The event is also a great opportunity to talk to many people from the Czech community and other global FLOSS projects. SUSE traditionally has a lot of people there, and there were also people from Xfce, FFMPEG, FreeBSD, Vim, LibreOffice…


Farewell, application menus!

Posted by Allan Day on October 09, 2018 01:25 PM

Application menus – or app menus, as they are often called – are the menu that you see in the GNOME 3 top bar, with the name and icon for the current app. These menus have been with us since the beginning of the GNOME 3.0 series, but we’re planning on retiring them for the next GNOME release (version 3.32). This post is intended to provide some background on this change, as well as information on how the transition will happen.

<figure class="wp-caption alignnone" id="attachment_2751" style="width: 1920px"><figcaption class="wp-caption-text">The development version of Web, whose app menu has been removed</figcaption></figure>

Background

When app menus were first introduced, they were intended to play two main roles. First, they were supposed to contain application-level menu items, like Preferences, About and Quit. Secondly, they were supposed to indicate which app was focused.

Unfortunately, we’ve seen app menus not performing well over the years, despite efforts to improve them. People don’t always engage with them. Often, they haven’t realised that the menus are interactive, or they haven’t remembered that they’re there.

My feeling is that this hasn’t been helped by the fact that we’ve had a split between app menus and the menus in application windows. With two different locations for menu items, it becomes easy to look in the wrong place, particularly when one menu is more frequently visited than the other.

One of the other issues we’ve had with application menus is that adoption by third-party applications has been limited. This has meant that they’re often empty, other than the default quit item, and people have learned to ignore them.

As a result of these issues, there’s a consensus that they should be removed.

The plan

<figure class="wp-caption alignnone" id="attachment_2753" style="width: 1920px">Software, which has moved its app menu into the window<figcaption class="wp-caption-text">Software, which has also removed its app menu</figcaption></figure>

We are planning on removing application menus from GNOME in time for the next release, version 3.32. The application menu will no longer be shown in the top bar (neither the menu or the application name and icon will be shown). Each GNOME application will move the items from its app menu to a menu inside the application window (more detailed design guidelines are on the GNOME Gitlab instance).

If an application fails to remove its app menu by 3.32, it will be shown in the app’s header bar, using the fallback UI that is already provided by GTK. This means that there’s no danger of menu items not being accessible, if an app fails to migrate in time.

We are aiming to complete the entire migration away from app menus in time for GNOME 3.32, and to avoid being in an awkward in-between state for the next release. The new menu arrangement should feel natural to existing GNOME users, and they hopefully shouldn’t experience any difficulties.

The technical changes involved in removing app menus are quite simple, but there are a lot of apps to convert (so far we’ve fixed 11 out of 63!). Therefore, help with this initiative would be most welcome, and it’s a great opportunity for new contributors to get involved.

App menus, it was nice knowing you…

Building Fedora Vagrant boxes for VirtualBox using Packer

Posted by Amit Saha on October 09, 2018 01:00 PM

In a previous post, I shared that we are going to have Fedora Scientific Vagrant boxes with the upcoming Fedora 29 release. A few weeks back, I wanted to try out a more recent build to script some of the testing I do on Fedora Scientific boxes, to make sure that the expected libraries and programs are installed. Unexpectedly, vagrant ssh would not succeed.
I filed an issue with rel-eng, where it was suggested I check whether a package in Fedora Scientific was mucking around with the SSH config. To do that, I had to find a way to build the Vagrant boxes manually.

The post here seems to be one way of doing it. Unfortunately, I was in a Windows environment when I wanted to build the box, so I needed to try something else. chef/bento uses Packer, and hence this approach looked promising.

After creating a config file for Fedora 29 and making sure I had my kickstart files right, the following command builds a VirtualBox Vagrant image:

$ packer build -force -only=virtualbox-iso .\fedora-29-scientific-x86_64.json
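For context, here is a heavily trimmed sketch of what the virtualbox-iso part of such a template might look like (the ISO path, checksum and kickstart file name are placeholders, not the values I actually used):

{
  "builders": [{
    "type": "virtualbox-iso",
    "iso_url": "file:///path/to/Fedora-Scientific-netinst-x86_64-29.iso",
    "iso_checksum_type": "sha256",
    "iso_checksum": "replace-with-the-real-checksum",
    "http_directory": "http",
    "boot_command": ["<tab> inst.ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/fedora-scientific.ks<enter>"],
    "ssh_username": "vagrant",
    "ssh_password": "vagrant",
    "shutdown_command": "sudo systemctl poweroff"
  }],
  "post-processors": ["vagrant"]
}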

Once I had the box build environment ready, it was then a matter of manually commenting and uncommenting packages and package groups in the kickstart file to find the culprit.

I am writing an introductory book to web application deployment

Posted by Josef Strzibny on October 09, 2018 10:23 AM

I decided to write a book (at the very least attempt to). And yes, there will be some Fedora inside!

Who is the target audience?

Everybody who wants to start with system administration for the purposes of web server deployment. Beginners or false beginners. Ideally people with web development experience who are comfortable with the command line.

What will it be about?

The book will touch on some general topics in system administration and provide practical examples, including deployment of a Ruby on Rails & PostgreSQL application. I might add other stacks once I have this ready.

What will be most likely in the book?

I am slowly working on the final Table of Contents. Here is something that will be there:

  • Creating Virtual Private Server (VPS) on something like Digital Ocean (Fedora/CentOS)
  • Managing users, processes, services
  • Basic Nginx configuration
  • Running with SELinux
  • PostgreSQL database setup
  • SSL certificates with Let’s Encrypt for HTTPS
  • git-push deployment for convenience

In general it’s an intersection of various things that make up for a web application deployment on a VPS.

What will it be not about?

There will be no Ansible, no Chef, no Salt, no Terraform. I think that would be too much for this introductory book. I might include a configuration management chapter that discusses the topic in general, though.

How can I follow up with the progress on the book?

Check out vpsformakers.com. I will continuously update it as I progress and there is an option to join a mailing list for any book related news.

Firefox on Wayland update

Posted by Martin Stransky on October 09, 2018 09:38 AM

As a next step in the Wayland effort we have fresh new Firefox packages [1] with all the goodies from Firefox 63/64 (Nightly) for you. They come with better (and fixed) rendering, v-sync support, and working HiDPI. Support for hi-res displays is not perfect yet and more fixes are on the way, thanks to Jan Horak, who wrote those patches.

The builds also ship the PipeWire WebRTC patch for desktop sharing, created by Jan Grulich and Tomas Popela. Wayland applications are isolated from the desktop and don’t have access to other windows (as they do on X11), so PipeWire supplies the missing functionality, working along with the browser sandbox.

I think the rendering is generally covered now and the browser should work smoothly with the Wayland backend. That’s also a reason why I’m making it the default on Fedora 30 (Rawhide), with the firefox-x11 package available as an X11 fallback. Fedora 29 and earlier stay with the default X11 backend, and Wayland is provided by the firefox-wayland package; see below for how to opt in.
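On those releases, opting in is a single package install (a sketch assuming the stock dnf tooling; the firefox-wayland package ships its own launcher):

$ sudo dnf install firefox-wayland
$ firefox-wayland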

And there’s surely some work left to make Firefox perfect on Wayland: for instance, correctly placing popups on Gtk 3.24, updating WebRender/EGL, fixing the KDE compositor, and so on.

[1] Fedora 27 Fedora 28 Fedora 29

Goodbye JJB, Hello Jenkins Pipeline

Posted by Randy Barlow on October 08, 2018 09:21 PM

I've spent the last couple of weeks tidying up Bodhi's Continuous Integration story. This essentially comes down to two pieces: writing a new test running script for the CI system to use (and for humans too, in the development environment!), and switching from Jenkins Job Builder to Jenkins Pipeline …

Flatpak, after 1.0

Posted by Matthias Clasen on October 08, 2018 05:45 PM

Flatpak 1.0 happened a while ago, and we now have a stable base. We’re up to the 1.0.3 bug-fix release at this point, and hope to get 1.0.x adopted in all major distros before too long.

Does that mean that Flatpak is done? Far from it! We have just created a stable branch and started landing some queued-up feature work on the master branch. This includes things like:

  • Better life-cycle control with ps and kill commands (see the sketch after this list)
  • Logging and history
  • File copy-paste and DND
  • A better testsuite, including coverage reports
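A rough sketch of those life-cycle commands as they currently work on the master branch (the app id is hypothetical, and details may change before a stable release):

$ flatpak ps                     # list running flatpak instances
$ flatpak kill org.example.App   # terminate a running instance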

Beyond these, we have a laundry list of things we want to work on in the near future, including

  • Using host GL drivers (possibly with libcapsule)
  • Application renaming and end-of-life migration
  • A portal for dconf/gsettings (a stopgap measure until we have D-Bus container support)
  • A portal for webcam access
  • More tests!

We are also looking at improving the scalability of the flathub infrastructure. The repository has grown to more than 400 apps, and buildbot is not really meant to be used the way we use it.

What about releases?

We have not set a strict schedule, but the consensus seems to be that we are aiming for roughly quarterly releases, with more frequent devel snapshots as needed. Looking at the calendar, that would mean we should expect a stable 1.2 release around the end of the year.

Open for contribution

One of the easiest ways to help Flatpak is to get your favorite applications onto flathub, either by packaging them yourself, or by convincing the upstream to do it.

If you feel like contributing to Flatpak itself, please do! Flatpak is still a young project, and there are plenty of small to medium-size features that could be added. The tests are also a nice place to stick your toe in, see if you can improve the coverage a bit, and maybe find a bug or two.

Or, if that is more your thing, we have a nice design for improving the flatpak commandline user experience that is waiting to be implemented.

[F29] Take part in the test day dedicated to upgrading

Posted by Charles-Antoine Couret on October 08, 2018 10:01 AM

Today, Monday October 8, is a day dedicated to one specific test: upgrading Fedora. During the development cycle, the quality assurance team dedicates a few days to particular components or new features in order to surface as many problems on the subject as possible.

It also provides a list of specific tests to perform. You just have to follow them, compare your result with the expected result, and report it.

What does this test consist of?

We are close to the release of the final Fedora 29. For this launch to be a success, we need to make sure the upgrade mechanism works correctly. That is, your Fedora 27 or 28 should become a Fedora 29 without reinstallation, keeping your documents, your settings and your programs. A very big update, in short.

Today's tests cover:

  • Upgrading from Fedora 27 or 28, with an encrypted system or not;
  • The same as above, but with KDE as the environment, or any Spin;
  • The same with the Server edition instead of Workstation;
  • Using GNOME Software rather than dnf.

Fedora has indeed offered for some time now the possibility of upgrading graphically with GNOME Software, or on the command line with dnf (sketched below). In both cases the download happens while you use your computer normally; once it is ready, the installation takes place during a reboot.
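For the dnf path, the test boils down to the standard system-upgrade plugin commands:

$ sudo dnf upgrade --refresh                          # bring the current release up to date
$ sudo dnf install dnf-plugin-system-upgrade          # install the upgrade plugin if needed
$ sudo dnf system-upgrade download --releasever=29    # download the new release's packages
$ sudo dnf system-upgrade reboot                      # reboot to apply the upgrade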

For those who want to enjoy F29 before its official release, take this opportunity to run the test, so that the experience benefits everyone. :-)

Comment y participer ?

Vous pouvez vous rendre sur la page des tests pour lister les tests disponibles et rapporter vos résultats. La page wiki récapitule les modalités de la journée.

Si vous avez besoin d'aide lors du déroulement des tests, n'hésitez pas de faire un tour sur IRC pour recevoir un coup de main sur les canaux #fedora-test-days et #fedora-fr (respectivement en anglais et en français) sur le serveur Freenode.

En cas de bogue, il est nécessaire de le rapporter sur le BugZilla. Si vous ne savez pas faire, n'hésitez pas à consulter la documentation correspondante.

De plus, si une journée est dédiée à ces tests, il reste possible de les effectuer quelques jours plus tard sans problème ! Les résultats seront globalement d'actualité.

Play Windows games on Fedora with Steam Play and Proton

Posted by Fedora Magazine on October 08, 2018 08:00 AM

Some weeks ago, Steam announced a new addition to Steam Play: Linux support for Windows games using Proton, a fork of WINE. This capability is still in beta, and not all games work. Here are some more details about Steam Play and Proton.

According to the Steam website, there are new features in the beta release:

  • Windows games with no Linux version currently available can now be installed and run directly from the Linux Steam client, complete with native Steamworks and OpenVR support.
  • DirectX 11 and 12 implementations are now based on Vulkan, which improves game compatibility and reduces performance impact.
  • Fullscreen support has been improved. Fullscreen games seamlessly stretch to the desired display without interfering with the native monitor resolution or requiring the use of a virtual desktop.
  • Improved game controller support. Games automatically recognize all controllers supported by Steam. Expect more out-of-the-box controller compatibility than even the original version of the game.
  • Performance for multi-threaded games has been greatly improved compared to vanilla WINE.

Installation

If you’re interested in trying Steam Play with Proton, just follow these easy steps. (Note that you can skip the steps for enabling the Steam beta if you have the latest updated version of Steam installed; in that case you no longer need the beta to use Proton.)

Open up Steam and log in to your account. This example screenshot shows support for only 22 games before enabling Proton.

Now click on the Steam option at the top of the client. This displays a drop-down menu. Then select Settings.

Now the Settings window pops up. Select the Account option, and next to Beta participation, click Change.

Now change None to Steam Beta Update.

Click on OK and a prompt asks you to restart.

Let Steam download the update. This can take a while depending on your internet speed and computer resources.

After restarting, go back to the Settings window. This time you’ll see a new Steam Play option. Make sure the check boxes for Enable Steam Play for supported titles, Enable Steam Play for all titles and Use this tool instead of game-specific selections from Steam are enabled. The compatibility tool should be Proton.

The Steam client asks you to restart. Do so, and once you log back into your Steam account, your game library for Linux should be extended.

Installing a Windows game using Steam Play

Now that you have Proton enabled, install a game. Select the title you want and you’ll find the process is similar to installing a normal game on Steam, as shown in these screenshots.

After the game is done downloading and installing, you can play it.

Some games may be affected by the beta nature of Proton. The game in this example, Chantelise, had no audio and a low frame rate. Keep in mind this capability is still in beta, and Fedora is not responsible for the results. If you’d like to read further, the community has created a Google doc with a list of games that have been tested.

Episode 117 - Will security follow Linus' lead on being nice?

Posted by Open Source Security Podcast on October 08, 2018 12:01 AM
Josh and Kurt talk about Linus' effort to work on his attitude. What will this mean for security and IT in general?


Show Notes

Pushing composed images to vSphere

Posted by Weldr on October 08, 2018 12:00 AM

Weldr, aka Composer, can generate images suitable for uploading to a VMware ESXi or vSphere system and running as virtual machines there. The images have the right format and include the necessary agents.

Prerequisites

We’ll use Fedora 29 as our OS of choice for running this. Run it in its own VM with at least 8 gigabytes of memory and 40 gigabytes of disk space, since Lorax makes some changes to the operating system it’s running on.

First install Composer:

$ sudo yum install lorax-composer cockpit-composer cockpit composer-cli

Next, make sure to turn off SELinux enforcement on the system. Lorax doesn’t yet work properly with SELinux running, as it installs an entire OS image in an alternate directory:

$ sudo setenforce 0
$ sudo sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config

Now start the lorax-composer system service:

$ sudo systemctl enable --now lorax-composer.socket

If you’re going to use Cockpit UI to drive Composer (see below), you can also enable it like this:

$ sudo systemctl enable --now cockpit.socket
$ sudo firewall-cmd --add-service=cockpit && sudo firewall-cmd --add-service=cockpit --permanent

Compose an image from the CLI

To compose an image in Composer from the command line, we first have to have a blueprint defined. This blueprint describes what goes into the image. For the purposes of this example we’ll use the example-http-server blueprint, which builds an image that contains a basic HTTP server.

Because VMware deployments typically do not have cloud-init configured to inject user credentials into virtual machines, we must perform that task ourselves in the blueprint. Use the following command to save the blueprint to an example-http-server.toml file in the current directory:

$ composer-cli blueprints save example-http-server

Add the following lines to the end of the example-http-server.toml file to set the initial root password to foobar. You can also use a crypted password string for the password or set an SSH key.

[[customizations.user]]
name = "root"
password = "foobar"
key = "..."

Now save the blueprint back into composer with the following command:

$ composer-cli blueprints push example-http-server.toml

Run the following command to start a compose. Notice that we pass the image type vmdk, which indicates we want an image in the Virtual Machine Disk format, appropriate for pushing to VMware.

$ sudo composer-cli compose start example-http-server vmdk
Compose 55070ff6-d637-40fe-80f9-9518f2ee0f21 added to the queue

Now check the status of the compose like this:

$ sudo composer-cli compose status
55070ff6-d637-40fe-80f9-9518f2ee0f21 RUNNING  Mon Oct  8 11:40:50 2018 example-http-server 0.0.1 vmdk

To diagnose a failure or look for more detailed progress, see:

$ sudo journalctl -fu lorax-composer
...

When it’s done you can download the resulting image into the current directory:

$ sudo composer-cli compose image 55070ff6-d637-40fe-80f9-9518f2ee0f21
55070ff6-d637-40fe-80f9-9518f2ee0f21-disk.ami: 4460.00 MB

Pushing and using the image

You can upload the image into vSphere via HTTP, or by pushing it into your shared VMware storage. We’ll use the former mechanism. Click on Upload Files in vCenter:

Upload files

When you create a VM, on the Device Configuration page, delete the default New Hard Disk and use the drop-down to select an Existing Hard Disk disk image:

Disk Image Selection

And lastly, make sure you use an IDE device as the Virtual Device Node for the disk you create. The default is SCSI, which would result in an unbootable virtual machine.

Disk Image Selection