July 01, 2016

Clearing the Keystone Environment

If you spend a lot of time switching between different clouds, different users, or even different projects for the same user when working with OpenStack, you’ve come across the problem where one environment variable from an old sourcing pollutes the current environment. I’ve been hit by that enough times that I wrote a small script to clear the environment.

I call it clear_os_env

unset OS_AUTH_TYPE
unset OS_AUTH_URL
unset OS_CACERT
unset OS_COMPUTE_API_VERSION
unset OS_DEFAULT_DOMAIN
unset OS_DOMAIN_ID
unset OS_DOMAIN_NAME
unset OS_IDENTITY_API_VERSION
unset OS_IDENTITY_PROVIDER
unset OS_IDENTITY_PROVIDER_URL
unset OS_IMAGE_API_VERSION
unset OS_NETWORK_API_VERSION
unset OS_OBJECT_API_VERSION
unset OS_PASSWORD
unset OS_PROJECT_DOMAIN_ID
unset OS_PROJECT_DOMAIN_NAME
unset OS_PROJECT_ID
unset OS_PROJECT_NAME
unset OS_REGION_NAME
unset OS_SERVICE_ENDPOINT
unset OS_SERVICE_PROVIDER_ENDPOINT
unset OS_SERVICE_TOKEN
unset OS_TENANT_ID
unset OS_TENANT_NAME
unset OS_TOKEN
unset OS_TRUST_ID
unset OS_URL
unset OS_USERNAME
unset OS_USER_DOMAIN_ID
unset OS_USER_DOMAIN_NAME
unset OS_USER_ID
unset OS_VOLUME_API_VERSION

Source this prior to sourcing any keystone.rc file, and you should have cleared out the old variables, regardless of how vigilant the author of the new source file was about clearing old variables. This includes some old variables that should no longer be used, like OS_SERVICE_TOKEN.
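
For example, a session that switches clouds might look like this (the rc file names are just placeholders):

$ source ~/clear_os_env
$ source ~/keystonerc_admin
$ openstack token issue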

Pulp Smash Introduction

Pulp Smash is a functional test suite for Pulp. It’s used by the Pulp developers and Pulp QE team on a daily basis. It’s implemented as a GPL licensed pure Python library, and getting started is as simple as installing Python and executing the following:

pip install pulp-smash
python -m pulp_smash  # follow the instructions

The video below is an introduction to Pulp Smash. It demonstrates how to install Pulp, demonstrates how to install, configure and run Pulp Smash, shows where to find more information, and covers some additional topics. It’s designed to be viewed full-screen.

Video: “Pulp Smash Introduction” (https://player.vimeo.com/video/172759696)

Skype with video support on Fedora 23 (the case of the “green webcam”)
Your webcam may show up “green” in Skype for Linux. But there is a fix for that! Here is the step-by-step.
To download Skype for Fedora, click here:
http://www.skype.com/intl/en-us/get-skype/on-your-computer/linux/downloading.fedora
Install the rpm file by double-clicking it and providing your password. After that, you will need to edit the Skype launcher so that webcam support works. I suggest the following sequence, which starts in the terminal:
a) open a terminal and type su;
b) after entering the password, type nautilus;
c) in the window that opens, navigate to /usr/share/applications (you can press "Ctrl + L" if you want to paste that into the address bar and reach this path more easily). There, look for the Skype application (launcher), right-click it, then left-click "Properties". In the command line field, enter:
- For 32-bit Fedora: env LD_PRELOAD=/usr/lib/libv4l/v4l2compat.so skype
- For 64-bit Fedora: env LD_PRELOAD=/usr/lib/libv4l/v4l2convert.so skype
Open Skype and you will see that webcam support is working perfectly. There is also a native application for managing your webcam image that already ships with Fedora (Cheese).
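
If you prefer to do step (c) entirely from the terminal, you can edit the launcher's Exec line directly instead of going through Nautilus (a sketch; it assumes the launcher is /usr/share/applications/skype.desktop, that its Exec line starts with "Exec=skype", and a 64-bit install):

# sed -i 's|^Exec=skype|Exec=env LD_PRELOAD=/usr/lib/libv4l/v4l2convert.so skype|' /usr/share/applications/skype.desktop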

NOTE: if installing Skype by double-clicking does not work, it is easy: open a terminal, type su (enter your root password), navigate to the downloads folder (cd /home/yourlogin/Downloads), and type
rpm -ivh skype-4.3.0.37-fedora.i586.rpm (this is the version available for download at the time this was written).

Cheers! And stay FREE.
Fresh DNF for RHEL 7 and CentOS 7

DNF has been in EPEL for more than a year, but unfortunately it was still the old DNF-0.6.4 version. Over that time a lot of great features were implemented in DNF and plenty of bugs were fixed. DNF (especially its libraries) could not be updated in the EPEL repository because of EPEL's policy. Now we have prepared a fresh DNF-1.1.9 for RHEL 7 and CentOS 7 users in our COPR repository. Note that this is still an experimental preview not supported by Red Hat.

In order to get DNF-1.1.9 on RHEL 7 or CentOS 7, run:

# cat <<EOF > /etc/yum.repos.d/dnf-stack-el7.repo
[dnf-stack-el7]
name=Copr repo for dnf-stack-el7 owned by @rpm-software-management
baseurl=https://copr-be.cloud.fedoraproject.org/results/@rpm-software-management/dnf-stack-el7/epel-7-$basearch/
skip_if_unavailable=True
gpgcheck=1
gpgkey=https://copr-be.cloud.fedoraproject.org/results/@rpm-software-management/dnf-stack-el7/pubkey.gpg
enabled=1
enabled_metadata=1
EOF
# yum install dnf
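
Once it is installed you can confirm which version you are running:

# dnf --version    # the first line of output should report 1.1.9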

Enjoy the newest DNF ;)

ANNOUNCE: libosinfo 0.3.1 released

I am happy to announce a new release of libosinfo, version 0.3.1 is now available, signed with key DAF3 A6FD B26B 6291 2D0E 8E3F BE86 EBB4 1510 4FDF (4096R). All historical releases are available from the project download page.

Changes in this release include:

  • Require glib2 >= 2.36
  • Replace GSimpleAsyncResult usage with GTask
  • Fix VPATH based builds
  • Don’t include autogenerated enum files in dist
  • Fix build with older GCC versions
  • Add/improve/fix data for
    • Debian
    • SLES/SLED
    • OpenSUSE
    • FreeBSD
    • Windows
    • RHEL
    • Ubuntu
  • Update README content
  • Fix string comparison for bootable media detection
  • Fix linker flags for OS-X & solaris
  • Fix darwin detection code
  • Fix multiple memory leaks

Thanks to everyone who contributed towards this release.

A special note to downstream vendors/distributors.

The next major release of libosinfo will include a major change in the way libosinfo is released and distributed. The current single release will be replaced with three independently released artefacts:

  • libosinfo – this will continue to provide the libosinfo shared library and most associated command line tools
  • osinfo-db – this will contain only the database XML files and RNG schema, no code at all.
  • osinfo-db-tools – a set of command line tools for managing deployment of osinfo-db archives for vendors & users.

The libosinfo and osinfo-db-tools releases will remain fairly infrequent, as they are today. The osinfo-db releases will be done very frequently, with automated releases made available no more than 1 day after updated DB content is submitted to the project.

End Of Life for Fedora 22 is July 19, 2016!

According to the Fedora lifecycle, support for the second-to-last Fedora version (N-1) ends one month after a new version (N+1) is released. Support for Fedora 22 therefore ends on July 19, 2016.

From that date on there will be no further updates for Fedora 22, and all Fedora 22 users are strongly advised to switch to Fedora 23 or 24.

All systems go
New status good: Everything seems to be working, for services: Fedora Infrastructure Cloud, COPR Build System
Fedora 22: End Of Life, 2016 July 19

With the recent release of Fedora 24, Fedora 22 will officially enter End Of Life (EOL) status on July 19th, 2016. After July 19th, all packages in the Fedora 22 repositories will no longer receive security, bugfix, or enhancement updates, and no new packages will be added to the Fedora 22 collection.

Upgrading to Fedora 23 or Fedora 24 before July 19th 2016 is highly recommended for all users still running Fedora 22.

Looking back at Fedora 22

Fedora 22 was released in May 2015, and one of the more notable changes at release time was that yum was replaced by dnf as the default package manager in Fedora.

Fedora 22 Workstation screenshot

About the Fedora Release Cycle

The Fedora Project provides updates for a particular release up until a month after the second subsequent version of Fedora is released. For example, updates for Fedora 23 will continue until one month after the release of Fedora 25, and Fedora 24 will continue to be supported up until one month after the release of Fedora 26.

The Fedora Project wiki contains more detailed information about the entire Fedora Release Life Cycle, from development to release, and the post-release support period.

Major service disruption
New status major: Network outage going on, being worked on, for services: COPR Build System, Fedora Infrastructure Cloud
Major service disruption
Service 'Fedora Infrastructure Cloud' now has status: major: Cloud IS DOWN
Major service disruption
Service 'COPR Build System' now has status: major: Cloud IS DOWN

June 30, 2016

Hacked!
Laptop in chains

With our school’s graduation ceremony last night, the school year is now officially finished. This year will definitely go down in my memory as the year that the students got the best of me… twice!

IP-gate
To give some background on the first “hack,” our current network uses a flat IP network with IP subnets used for each different set of machines (for organizational purposes). We don’t use IP-based security for obvious reasons, but we do use the subnets for deciding internet speed. IP addresses (fixed except for the guest subnet) are given out using DHCP, and each of the subnets except the guest subnet gets decent speed.

When I set up this system ten years ago, I was well aware of the obvious drawback: any person could set a static IP address on any subnet they chose, and, given our lack of managed switches (at the time we had none, though things are changing), there wasn’t much of anything I could do about it. On the flip side, the worst that could happen is that these users would get faster internet, hardly the end of the world.

It took ten years, but, finally, someone figured it out. One of our more intelligent students decided that his IP address of 10.10.10.113 didn’t make a whole lot of sense, given that the gateway is 10.10.1.1. He set his IP to 10.10.1.113, and, voilà, his internet speed shot through the roof!

Naturally, he shared his findings with his friends, who managed to keep it under the radar until one of the friends decided to see how well BitTorrent would work with the school internet. What none of these students realized is that the 10.10.1.* subnet was for servers, and, oddly enough, none of our servers uses BitTorrent. The traffic stuck out like a sore thumb, and I finally caught on.

My first step was to blacklist all unrecognized MAC addresses using the server subnet. The next step was more difficult. Now that the cat was out of the bag and everyone knew how to get faster internet, I needed a way to block anybody not using the IP they’d been assigned through our DHCP server. Obviously, there is a correct way of doing this, but that seems to be using 802.1x, and we’re just not there yet. My quick and dirty solution was to copy the dhcp configuration file containing all the host and IP information to our firewall, and then generate a list of iptables rules that only allow traffic through if the IP address matches the expected MAC address.
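
Roughly, the idea looks like this (a sketch of the approach rather than the actual script; the dhcpd.conf path and LAN interface name are assumptions):

#!/bin/bash
# Emit one ACCEPT rule per MAC/IP pair found in dhcpd.conf,
# then drop any other forwarded traffic coming in on the LAN interface.
awk '/hardware ethernet/ {mac=$3; sub(/;/,"",mac)}
     /fixed-address/     {ip=$2;  sub(/;/,"",ip); print mac, ip}' /etc/dhcp/dhcpd.conf |
while read -r mac ip; do
    iptables -A FORWARD -i eth1 -s "$ip" -m mac --mac-source "$mac" -j ACCEPT
done
iptables -A FORWARD -i eth1 -j DROP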

The problem with this solution is that it doesn’t account for the fact that spoofing MAC addresses is actually relatively simple, so it looks like one of my summer projects is going to be a complete revamp of our network. I’m hoping I can configure our FreeIPA server to also operate as the backend for a RADIUS server so we can implement 802.1x security.

In this case, the consequences of the “hack” for us were pretty insignificant. Students got some extra bandwidth for a while. The students who changed their IP addresses also didn’t suffer any major consequences. Their devices were blacklisted from the internet until they came to speak with me, and then were put on the guest subnet. All of the students were in their final year, so they were only stuck on the guest subnet for the last month or so.

The most obvious lesson I learned from “IP-gate” is that security through obscurity works great…until someone turns on the light. And when that happens, you’d better have a plan.

The Grade-changing Scandal
This was a far messier situation. One of our teachers allowed a student to access their computer to set up a video for class. On the computer, the teacher had saved their login credentials for LESSON, our web-based marking system. While the teacher was distracted, the student used the opportunity to find the teacher’s password, and then shared the password with different members of the class. Throughout the next few days, the class average for that teacher’s subjects rose at a remarkable rate.

Three days later, one of the students finally told the principal what had happened, and the principal called me. What followed was a day of tying together evidence from multiple sources to work out who changed what and when.

What the students weren’t aware of was that LESSON logs everything at the assignment level, so I could see which IP addresses changed which assignments. If the IP was an internal school address, I could also see which user changed the assignment. One of the students used their laptop (registered on the network, so I knew who it was) to change some marks, then logged in from a lab computer (so once again, I knew who it was), and then finally logged in from home.

The students who logged in from home were harder to track, at least until they did something foolish, like logging in as themselves to verify that the marks had actually changed ten seconds after logging out as the teacher.

We also do daily backups of the LESSON database that we keep for a full year, so it was a piece of cake to restore all of the marks back to their original scores.

Obviously though, this went much further than the IP-spoofing going on in “IP-gate.” This wasn’t just some kids wanting faster internet; this was a case of flagrant academic dishonesty.

In the end, we came up with the following consequences:

  • The students who masterminded the break-in received a zero for the subject for the term
  • The students who we caught changing the marks received zeros for any assignment of theirs that had a changed mark
  • The students who we knew were aware their marks had been changed received three Saturday detentions (they have to sit in complete silence for four hours on a Saturday)
  • The students we only suspected were aware their marks had been changed received one Saturday detention, though these students were allowed to appeal, and most who did had their Saturday detention reversed

One of the things I’ve learned from this is that there’s never too much audit information. LESSON is going to be changed to record not just who changes each assignment, but who changes each mark, and there will be a history of every changed mark so that teachers can see when marks are changed.

Apart from this, I would be curious as to what others think about the consequences for these two “hacks.” Were we too lenient on the first? Too harsh on the second? What should we have done differently? And what should we do differently going forward?

Laptop computer locked with chain and padlock by Santeri Viinamäki. Used under a CC BY-SA 4.0 license.


Pretty Print directory of .json files

I had a bunch of compressed json files that I needed to pretty print to make them more readable. This little snippet will create a new pretty printed json file prefixed with pp:

ls *.json | xargs -I {} sh -c "cat {} | python -mjson.tool > pp{}"
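
The same thing can be written as a plain shell loop, which also copes with file names containing spaces (a small variation on the snippet above):

for f in *.json; do python -m json.tool "$f" > "pp$f"; done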

Instead of having to look at files that look like this:

{ "attributes": [ { "name": "type", "value": "PKT" }, { "name": "arch", "value": "x86_64,x86" }, { "name": "name", "value": "Awesome OS" } ], "dependentProductIds": [], "href": "/products/00", "id": "00", "multiplier": 1, "name": "Awesome OS", "productContent": [ { "content": { "arches": null, "contentUrl": "/content/6/$releasever/$basearch/debug", "gpgUrl": "file:///etc/pki/rpm-gpg/RPM-GPG-KEY-awesome-os", "id": "FFFF", "label": "awesome-os-debug-rpms", "metadataExpire": 86400, "modifiedProductIds": [ "0A" ], "name": "Awesome OS (Debug RPMs)", "releaseVer": null, "requiredTags": "awesome-os-server", "type": "yum", "vendor": "Candlepin" }, "enabled": false } ] }

You get a bunch of files that look like this:

{
    "attributes": [
        {
            "name": "type",
            "value": "PKT"
        },
        {
            "name": "arch",
            "value": "x86_64,x86"
        },
        {
            "name": "name",
            "value": "Awesome OS"
        }
    ],
    "dependentProductIds": [],
    "href": "/products/00",
    "id": "00",
    "multiplier": 1,
    "name": "Awesome OS",
    "productContent": [
        {
            "content": {
                "arches": null,
                "contentUrl": "/content/6/$releasever/$basearch/debug",
                "gpgUrl": "file:///etc/pki/rpm-gpg/RPM-GPG-KEY-awesome-os",
                "id": "FFFF",
                "label": "awesome-os-debug-rpms",
                "metadataExpire": 86400,
                "modifiedProductIds": [
                    "0A"
                ],
                "name": "Awesome OS (Debug RPMs)",
                "releaseVer": null,
                "requiredTags": "awesome-os-server",
                "type": "yum",
                "vendor": "Candlepin"
            },
            "enabled": false
        }
    ]
}

Blender and Flatpak

I recently wrote about having made an XDG App build of Blender.

Since then, XDG App got renamed to Flatpak.

As the command-line tool was also renamed, I figured I'd write a new post with updated instructions.

So first, install the Freedesktop.org runtime:

$ wget https://sdk.gnome.org/keys/gnome-sdk.gpg
$ flatpak --user remote-add --gpg-import=./gnome-sdk.gpg gnome https://sdk.gnome.org/repo/
$ flatpak --user install gnome org.freedesktop.Platform 1.4

Next, install Blender from my repository:

$ flatpak --user remote-add --no-gpg-verify bochecha https://www.daitauha.fr/static/flatpak/repo-apps/
$ flatpak --user install bochecha org.blender.app

At this point, you should be able to run Blender from the command line:

$ flatpak run org.blender.app

It should also work if you try running it from the GNOME Shell application picker.

As before, do let me know how it works for you, and especially if it doesn't.

When is a kernel bug not a kernel bug?
Think of this scenario: You're sitting at your shiny Fedora install and notice a kernel update is available. You get all excited, update it through dnf or Gnome Software, or whatever you use, reboot and then things stop working. "STUPID KERNEL UPDATE WHY DID YOU BREAK MY MACHINE" you might say. Clearly it's at fault, so you dutifully file a bug against the kernel (you do that instead of just complaining, right?). Then you get told it isn't a kernel problem, and you probably think the developers are crazy. How can a kernel update that doesn't work NOT be a kernel problem?

This scenario happens quite often. To be sure, a good portion of the issues people run into with kernel updates are clearly kernel bugs. However, there is a whole set of situations where it seems that way but really it isn't. So what is at fault? Lots of stuff. How? Let's talk about the boot sequence a bit.

Booting: a primer


Booting a computer is a terribly hacky thing. If you want a really deep understanding of how it works, you should probably talk to Peter Jones[1]. For the purposes of this discussion, we're going to skip all the weird and crufty stuff that happens before grub is started and just call it black magic.

Essentially there are 3 main pieces of software that are responsible for getting your machine from power-on to whatever state userspace is supposed to be in. Those are grub, the kernel, and the initramfs. Grub loads the kernel and initramfs into memory, then hands off control to the kernel. The kernel does the remainder of the hardware and low-level subsystem init, uncompresses the initramfs and jumps into the userspace code contained within. The initramfs bootstraps userspace as it sees fit, mounting the rootfs and switching control to that to finish up the boot sequence. Seems simple.

The initramfs


So what is this "initramfs"? In technical terms, it's a weird version of a CPIO archive that contains a subset of userspace binaries needed to get you to the rootfs. I say weird because it can also have CPU microcode tacked onto the front of it, which the kernel strips off before unpacking it and applies during the early microcode update. This is a good thing, but it's also kind of odd.

The binaries contained within the initramfs are typically your init process (systemd), system libraries, kernel modules needed for your hardware (though not all of them), firmware files, udev, dbus, etc. It's almost equivalent to the bare minimum you can get to a prompt with. If you want to inspect the contents for yourself, the lsinitrd command is very handy.
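
For example, to list what the running kernel's initramfs contains:

# lsinitrd | less

(With no arguments, lsinitrd defaults to the initramfs of the currently running kernel; pass a path such as /boot/initramfs-<version>.img to inspect a different one.)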

There are actually a couple of different 'flavors' of initramfs as well. The initramfs found in the install images is a generic initramfs that has content which should work on the widest variety of machines possible, and can be used as a rescue mechanism. It tends to be large though, which is why after an install the initramfs is switched to HostOnly mode. That means it is specific to the machine it is created on. The tool that creates the initramfs is called dracut, and if you're interested in how it works I would suggest reading the documentation.

The problems


OK, so now that we have the components involved, let's get to the actual problems that look like kernel bugs but aren't.

Cannot mount rootfs


One of the more common issues we see after an update is that the kernel cannot mount the rootfs, which results in the system panicking. How does this happen? Actually, there are a number of different ways. A few are:

* The initramfs wasn't included in the grub config file for unknown reasons and therefore wasn't loaded.
* The initramfs was corrupted on install.
* The kernel command line specified in the grub config file didn't include the proper rootfs arguments.

All of those happen, and none of them are the kernel's fault. Fortunately, they tend to be fairly easy to repair but it is certainly misleading to a user when they see the kernel panic.

A different update breaks your kernel update


We've established that the initramfs is a collection of binaries from the distro. It's worth clarifying that these binaries are pulled into the initramfs from what is already installed on the system. Why is that important? Because it leads to the biggest source of confusion when we say the kernel isn't at fault.

Fedora tends to update fast and frequently across the entire package set. We aren't really a rolling release, but even within a release our updates are somewhat of a firehose. That leads to situations where packages can, and often do, update independently across a given timeframe. In fact, the only time we test a package collection as a whole is around a release milestone (keep reading for more on this). So let's look at how this plays out in terms of a kernel update.

Say you're happily running a kernel from the GA release. A week goes by and you decided to update, which brings in a slew of packages, but no kernel update (rinse and repeat this N times). Finally, a kernel update is released. The kernel is installed, and the initramfs is built from the set of binaries that are on the system at the time of the install. Then you reboot and suddenly everything is broken.

In our theoretical example, let's assume there were lvm updates in the timeframe between release and your kernel update. Now, the GA kernel is using the initramfs that was generated at install time of the GA. It continues to do so forever. The initramfs is never regenerated automatically for a given kernel after it is created during the kernel install transaction. That means you've been using the lvm component shipped with GA, even though a newer version is available on the rootfs.

Again, theoretically say that lvm update contained a bug that made it not see particular volumes, like your root volume. When the new kernel is installed, the initramfs will suck in this new lvm with the bug. Then you reboot and suddenly lvm cannot see your root volume. Except it is never that obvious and it just looks like a kernel problem. Compounding that issue, everything works when you boot the old kernel. Why? Because the old kernel initramfs is still using the old lvm version contained within it, which doesn't have the bug.

This problem isn't specific to lvm at all. We've seen issues with lvm, mdraid, systemd, selinux, and more. However, because of the nature of updates and the initramfs creation, it only triggers when that new kernel is booted. This winds up taking quite a bit of time to figure out, with a lot of resistance (understandably) from users that insist it is a kernel problem.
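
If you do end up in this situation, the usual workaround (once the broken userspace package has been fixed) is to boot a working kernel and regenerate the initramfs for the new kernel by hand, so that it picks up the fixed binaries. Roughly:

# dracut -f /boot/initramfs-<version>.img <version>

where <version> is the full kernel version string of the newly installed kernel (as listed by rpm -q kernel).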

Solution: ideas wanted


Unfortunately, we don't really have a great solution to any of these, particularly the one mentioned immediately above. People have suggested regenerating the initramfs after every update transaction, but that actually makes the problem worse. It takes something known to be working and suddenly introduces the possibility that it breaks.

Another solution that has been suggested is to keep the userspace components in the initramfs fixed, and only update to include newer firmware and modules. This sounds somewhat appealing at first, but there are a few issues with it. The first is that the interaction between the kernel and userspace isn't always disjoint. In rare cases, we might actually need a newer userspace component (such as the xorg drivers) to work properly with a kernel rebase. Today that is handled via RPM Requires, and fixing the initramfs contents cannot take that into account. Other times there may be changes within the userspace components themselves that mean something in the initramfs cannot interact with an update on the rootfs. That problem also exists in the current setup as well, but switching from today's known scenarios to a completely different setup while still having that problem doesn't sound like a good idea.

A more robust solution would be to stop shipping updates in the manner in which they are shipped in Fedora. Namely, treat them more like "service packs" or micro-releases that could be tested as a whole. Indeed, Fedora Atomic Host very much operates like this with a two week release cadence. However, that isn't prevalent across all of our Editions (yet). It also means individual package maintainers are impacted in their workflow. That might not be a bad thing in the long run, but a change of that proportion requires time, discussion, and significant planning to accomplish. It also needs to take into account urgent security fixes. All of that is something I think should be done, but none of it guarantees we solve these kinds of "kernel-ish" problems.

So should we all despair and throw up our hands and just live with it? I don't think so, no. I believe the distro as a whole will eventually help here, and in the meantime hopefully posts like this provide a little more clarity around how things work and why they may be broken. At the very least, hopefully we can use this to educate people and make the "no, this isn't a kernel problem" discussions a bit easier for everyone.


[1] It should be noted that Peter might not actually want to talk to you about it. It may bring up repressed memories of kludges and he is probably busy doing other things.
Investigating Python

I have been trying to implement private projects on Pagure. While doing that I was struggling with the design of a certain function, and I constantly had to switch between the shell, the editor and, at times, the browser.

I am not saying that is a bad thing or a good thing, but it led me to look for a debugger that I thought might ease the task of finding what is going wrong and where, and it actually helped.

I used a Python debugger called pudb. It looks like Turbo C, but it’s a lot more useful. It can be used in one of two ways:

  1. You can directly debug a script: pudb <your_script_name>.py
  2. When working with big projects you may only need to debug a certain function; in that case you can just put import pudb; pu.db in the code (see the sketch below)
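
A minimal sketch of the second approach (the function and values here are made up; pudb.set_trace() is the explicit entry point for breaking into the debugger):

import pudb

def resolve_access(project, user):
    # Drop into the pudb TUI right here; step with 'n', inspect variables
    # in the side panes, and set further breakpoints as needed.
    pudb.set_trace()
    return user in project.get("members", [])

resolve_access({"members": ["alice"]}, "alice")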

 

The most beautiful thing is that it just pops up an IDE out of nowhere. It gives a deep insight into what the code is doing, how it is doing it, what stack it is maintaining, and what the various values of the variables are.

You can always set breakpoints so that you can investigate the code; you actually get to play detective. This is an important point when developing in an open source project, since there are a lot of functions doing a lot of Hocus Pocus.

This is one of those tools that might even help you understand the code base better; it really helped me design the code better.

This is the script that I am trying to debug; the screen looks like this, and ‘n’ lets you go to the next line, which can be investigated using the stacks shown.

Selection_017

These are the few windows that you can navigate to and see what is going on.

Selection_018

Not only this but you can jump between different modules and you can set breakpoints.

Selection_019

 

This might help you get some cool insight about the project.

Happy Hacking !


ANNOUNCE: virt-viewer 4.0 release

I am happy to announce a new bugfix release of virt-viewer 4.0 (gpg), including experimental Windows installers for Win x86 MSI (gpg) and Win x64 MSI (gpg). Signatures are created with key DAF3 A6FD B26B 6291 2D0E 8E3F BE86 EBB4 1510 4FDF (4096R)

All historical releases are available from:

http://virt-manager.org/download/

Changes in this release include:

  • Drop support for gtk2 builds
  • Require spice-gtk >= 0.31
  • Require glib2 >= 2.38
  • Require gtk3 >= 3.10
  • Require libvirt-glib >= 0.1.8
  • Increase minimum window size to 320×200 instead of 50×50
  • Remove use of GSLice
  • Don’t show usbredir button if not connected yet
  • Only compute monitor mapping in full screen
  • Don’t ignore usb-filter in spice vv-file
  • Port to use GtkApplication API
  • Don’t leave window open after connection failure
  • Validate symbols from max glib/gdk versions
  • Don’t use GtkStock
  • Don’t use gtk_widget-modify_{fg,bg} APIs
  • Drop use of built-in eventloop in favour of libvirt-glib
  • Don’t open X display while parsing command line
  • Fix window title
  • Use GResource for building ui files into binary
  • Fix crash with invalid spice monitor mapping
  • Add dialog to show file transfer progress and allow cancelling
  • Remove unused nsis installer support
  • Include adwaita icon theme in msi builds
  • Add more menu mnemonics
  • Fix support for windows consoles to allow I/O redirection
  • Add support for ovirt sso-token in vv-file
  • Fix crash with zooming window while not connected
  • Replace custom auto drawer widget with GtkRevealer
  • Add appdata file for gnome software
  • Misc other bug fixes
  • Refresh translations

Thanks to everyone who contributed towards this release.

Fedora Ambassadors: Communicating about Design

This week is busy and continues to keep the pace of previous weeks. A lot has happened this week in the Fedora Project and I’ve taken on a few new tasks too. In addition to existing work on Google Summer of Code, Community Operations, Marketing, and more, I wanted to take some time this week to focus on CommOps Ticket #71. This ticket originally focused on improving accessibility of design resources for Fedora Ambassadors. However, after an interesting conversation with Máirín Duffy on the Design Team workflow, I discovered the availability was not the main issue. Instead, it seemed like communicating was an area needing focus.

Communicating between Ambassadors and Design

From our conversation, I learned that there was a disconnect between the Ambassadors and the Design team. As a sponsored Ambassador myself, I had never seen anywhere documenting the steps or process I should take to ask for art assets when needed for an event. There were also things I had not considered about what goes into the printing and production process for items too. Every region of the world seems to do things a little differently!

With the information I learned from our conversation in a CommOps meeting, I penned up a first draft of what the communication process should look like between Ambassadors and the Design team. The page is not official yet, and I posted a bit ago to the Design Team mailing list requesting feedback on the page. Hopefully, if the information passes approval from the Design Team, we can work on socializing this information with all Ambassadors across the four regions of the world. The end goal of this is to make it easier on both the Ambassadors and the Design Team by doing the following…

  • Making it clear what to do as an Ambassador for requesting art assets / printed items
  • Reducing strain / load on Design Team from repetitive situations / “common questions”
  • Creating a faster and more efficient workflow for Ambassadors organizing events and Designers creating art and deliverables

Long-term, though…

In this discussion, we acknowledged a wiki page is not a long-term solution to this problem. There are now initiatives in the project to help bring greater unity and cohesion between different sub-projects. CommOps is definitely one of the biggest players in this. The future formation of FOSCo will specifically help with communication between groups like Ambassadors, Design, and Marketing. Fedora Hubs will also contribute to making this process easier by having improved methods of communicating key information like this.


Communication by Lorenzo Stella from the Noun Project.

The post Fedora Ambassadors: Communicating about Design appeared first on Justin W. Flory's Blog.

Fedora Design Suite considered “best of the basics”

Do-it-yourself site MakeUseOf recently highlighted the Fedora Design Suite in their article “6 Linux Distros Designed for Artists, Musicians and Editors”. They also called Fedora Design Suite the “best of the basics”.

Design Suite Highlights

“Fedora Design Suite does a great job of introducing you to graphic design via its extensive list of tutorials, which is accessible from the main Applications menu. As for bundled software, Entangle is a fantastic app that lets you control a digital camera from your computer.”

You can read the full article here.

The post Fedora Design Suite considered “best of the basics” appeared first on Fedora Community Blog.

What makes up the Fedora kernel?

Every Fedora system runs a kernel. Many pieces of code come together to make this a reality.

Each release of the Fedora kernel starts with a baseline release from the upstream community. This is often called a ‘vanilla’ kernel. The upstream kernel is the standard. The goal is to have as much code upstream as possible. This makes it easier for bug fixes and API updates to happen as well as having more people review the code. In an ideal world, Fedora would be able to take the kernel straight from kernel.org and send that out to all users.

Realistically, using the vanilla kernel isn’t complete enough for Fedora. Some features Fedora users want may not be available. The Fedora kernel that users actually receive contains a number of patches on top of the vanilla kernel. These patches are considered ‘out of tree’. Many of these patches will not remain out of tree for very long. If patches are available to fix an issue, they may be pulled in to the Fedora tree so the fix can go out to users faster. When the kernel is rebased to a new version, the patches will be removed if they are in the new version.

Some patches remain in the Fedora kernel tree for an extended period of time. A good example of patches that fall into this category are the secure boot patches. These patches provide a feature Fedora wants to support even though the upstream community has not yet accepted them. It takes effort to keep these patches up to date so Fedora tries to minimize the number of patches that are carried without being accepted by an upstream kernel maintainer.

Generally, the best way to get a patch included in the Fedora kernel is to send it to the Linux Kernel Mailing List (LKML) first and then ask for it to be included in Fedora. If a patch has been accepted by a maintainer it stands a very high chance of being included in the Fedora kernel tree. Patches that come from places like github which have not been submitted to LKML are unlikely to be taken into the tree. It’s important to send the patches to LKML first to ensure Fedora is carrying the correct patches in its tree. Without the community review, Fedora could end up carrying patches which are buggy and cause problems.

The Fedora kernel contains code from many places. All of it is necessary to give the best experience possible.

June 29, 2016

Fast Tracing with GDB

Even though GDB is a traditional debugger, it provides support for dynamic fast user-space tracing. Tracing in simple terms is super fast data logging from a target application or the kernel. The data is usually a superset of what a user would normally want from debugging but cannot get because of the debugger overhead. The traditional debugging approach can indeed alter the correctness of the application output or alter its behavior. Thus, the need for tracing arises. GDB in fact is one of the first projects which tried to have an integrated approach of debugging and tracing using the same tool. It has been designed in a manner such that sufficient decoupling is maintained – allowing it to expand and be flexible. An example is the use of In-Process Agent (IPA) which is crucial to fast tracing in GDB but is not necessary for TRAP-based normal tracepoints.

GDB’s Tracing Infrastructure

Tracing is performed with the trace and collect commands. The location where the user wants to collect some data is called a tracepoint. It is just a special type of breakpoint, without support for running GDB commands on a tracepoint hit. As the program progresses and passes the tracepoint, data (such as register values, memory values, etc.) gets collected based on certain conditions (if so desired). The data collection is done into a trace buffer when the tracepoint is hit. Later, that data can be examined from the collected trace snapshot using tfind. However, tracing is for now restricted to remote targets (such as gdbserver). Apart from this type of dynamic tracing, there is also support for static tracepoints, in which instrumentation points known as markers are embedded in the target and can be activated or deactivated. The process of installing these static tracepoints is known as probing a marker. Assuming that you have started GDB and your binary is loaded, a sample trace session may look something like this:

(gdb) trace foo
(gdb) actions
Enter actions for tracepoint #1, one per line.
> collect $regs,$locals
> while-stepping 9
  > collect $regs
  > end
> end
(gdb) tstart
[program executes/continues...]
(gdb) tstop

This puts up a tracepoint at foo, collects all register values at tracepoint hit, and then for subsequent 9 instruction executions, collects all register values. We can now analyze the data using tfind or tdump.

(gdb) tfind 0
Found trace frame 0, tracepoint 1
54    bar    = (bar & 0xFF000000) >> 24;

(gdb) tdump
Data collected at tracepoint 1, trace frame 0:
rax    0x2000028 33554472
rbx    0x0 0
rcx    0x33402e76b0 220120118960
rdx    0x1 1
rsi    0x33405bca30 220123089456
rdi    0x2000028 33554472
rbp    0x7fffffffdca0 0x7fffffffdca0
rsp    0x7fffffffdca0 0x7fffffffdca0
.
.
rip    0x4006f1 0x4006f1 <foo+7>
[and so on...]

(gdb) tfind 4
Found trace frame 4, tracepoint 1
0x0000000000400700 55    r1 = (bar & 0x00F00000) >> 20;

(gdb) p $rip
$1 = (void (*)()) 0x400700 <foo+22>

So one can observe data collected from different trace frames in this manner, and even write it out to a separate file if required. Going more in depth into how tracing works, let’s look at GDB’s two tracing mechanisms:

Normal Tracepoints

These are the basic, default type of tracepoints. The idea behind their use is similar to breakpoints: GDB replaces the target instruction with a TRAP or any other exception-causing instruction. On x86, this can usually be an int 3, which has a special single-byte opcode – 0xCC – reserved for it. Replacing a target instruction with this single byte ensures that the neighboring instructions are not corrupted. So, during the execution of the process, the CPU hits the int 3, where it halts and the program state is saved. The OS sends a SIGTRAP signal to the process. As GDB is attached to or is running the process, it receives a SIGCHLD as a notification that something happened with a child. It does a wait(), which tells it that the process has received a SIGTRAP. Thus the SIGTRAP never reaches the process, as GDB intercepts it. The original instruction is first restored, or executed out-of-line for non-stop multi-threaded debugging. GDB transfers control to the trace collection code, which does the data collection part upon evaluating any condition set. The data is stored in a trace buffer. Then, the original instruction is replaced again with the tracepoint and normal execution continues. This is all fine and good, but there is a catch – the TRAP mechanism alters the flow of the application and control is passed to the OS, which leads to some delay and a compromise in speed. But even with that, because of a very restrictive conditional tracing design, and the better interaction of interrupt-driven approaches with instruction caches, normal interrupt-based tracing in GDB is a robust technique. A faster solution would indeed be a pure user-space approach, where everything is done at the application level.

Fast Tracepoints

Owing to the limitations stated above, a fast tracepoint approach was developed. This special type of tracepoint uses a dynamic tracing approach. Instead of using the breakpoint approach, GDB uses a mix of IPA and remote target (gdbserver) to replace the target instruction with a 5 byte jump to a special section in memory called a jump-pad. This jump-pad, also known as a trampoline, first pushes all registers to stack (saving the program state). Then, it calls the collector  function for trace data collection, it executes the displaced instruction out-of-line, and jumps back to the original instruction stream. I will probably write something more about how trampolines work and some techniques used in dynamic instrumentation frameworks like Dyninst in a subsequent post later on.

gdb-working

Fast tracepoints can be used with command ftrace, almost exactly like with the trace command. A special command in the following format is sent to the In-Process Agent by gdbserver as,

FastTrace:<tracepoint_object><jump_pad>

where <tracepoint object> is the object containing bytecode for conditional tracing, address, type, action etc. and <jump pad> is the 8-byte long head of the jump pad location in the memory. The IPA prepares all that and if all goes well, responds to such a  query by,

OK<target_address><jump_pad><fjump_size><fjump>

where <target address> is the address where the tracepoint is put in the inferior, <jump_pad> is the updated address of the jump pad head and the <fjump> and <fjump_size> are the jump instruction sequence and its size copied to the command buffer, sent back by IPA. The remote target (gdbserver) then modifies the memory of the target process. Much more fine grained information about fast tracepoint installation is available in the GDB documentation. This is a very pure user-space approach to tracing. However, there is a catch – the target instruction to be replaced should be at least 5 bytes long for this to work as the jump is itself 5 byte long (on Intel x86). This means that fast tracepoints using GDB cannot be put everywhere. How code is modified when patching a 5 bytes instruction is a discussion of another time. This is probably the fastest way yet to perform dynamic tracing and is a good candidate for such work.

The GDB traces can be saved with tsave, and with the -ctf switch they may also be exported to CTF. I have not tried this, but hopefully it should at least open the traces with TraceCompass for further analysis. GDB’s fast tracepoint mechanism is quite fast I must say – but in terms of usability, LTTng is a far better and more advanced option. GDB however allows dynamic insertion of tracepoints, and the tracing features are well integrated into your friendly neighborhood debugger. Happy Tracing!


Notes on PXE booting with Fedora

The typical method of installing Fedora on a desktop distribution is via some physical media (CD/DVD once upon a time, USB sticks these days). Fedora also supports PXE boot installation. I ended up doing a PXE install for some recent hardware that was shipped to me as that was the best supported method. The Fedora instructions are good but I still ran into a few hiccups. These are my notes which might be useful for others (or be wrong, YMMV). This was also a UEFI only setup.

A very hand-wavy explanation of what happens with the PXE Boot protocol is that the booting computer does a DHCP request to get an IP address and in addition to the IP address the booting computer gets back information about a server and what file to get off the server. If your home network setup is like mine, there is a router which serves as the DHCP server for all computers. Setting up another DHCP server on the network is a recipe for a bad time. The PXE boot protocol does include support for having a proxy DHCP server for cases like this and there is software to do so. I'm lazy and didn't want to set that up for something which would only be temporary. The option I chose was to run an ethernet cable directly from the booting machine to another machine acting as a server. I used the 192.168.0.X as my network space and set it up on the server

# ifconfig <interface name> 192.168.0.2

and setup a dhcpd.conf:

subnet 192.168.0.0 netmask 255.255.255.0 {
 default-lease-time 600;
 max-lease-time 7200;
 ddns-update-style none;

 option routers 192.168.0.1;
 option domain-name-servers 192.168.0.1;
 option subnet-mask 255.255.255.0;
 next-server 192.168.0.2;

 host my-booting-machine {
 hardware ethernet 00:01:73:02:3B:9C;
 fixed-address 192.168.0.123;
 }

 filename "uefi/shim.efi";
}

I have a really limited networking background but the important points are setting the MAC address for your target booting system correctly, making sure next-server is set to the TFTP server (same machine as DHCP in my case) and not munging up an existing IP name space.

The tftp server mostly takes care of itself. I had to unblock tftp in the firewall

# firewall-cmd --add-service=tftp
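
To make the firewall change persistent and to start the daemons themselves (unit names assuming the ISC DHCP server and the tftp-server package, which provide dhcpd.service and tftp.socket):

# firewall-cmd --permanent --add-service=tftp
# systemctl start dhcpd tftp.socket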

When something doesn't work (I say when and not if because I'm cynical), you can enable some logging and run some tests. The suggested command in the Fedora install instructions journalctl --unit dhcpd --since -2m --follow is very helpful to see if the DHCP request is making it to your server. I used that plus some wireshark to first discover I needed to use a different port on the machine for PXE booting and then that I typoed the MAC address in the DHCP configuration file.

If your DHCP is working but TFTP is failing, you can do a short test on another computer

$ tftp
(to) <ip addr of server>
tftp> get
(files) <name of file in your tftp path e.g. uefi/shim.efi>

If everything is set up correctly, you should be able to grab the file. If not, you can check the journal on the server to see if it is throwing any errors.

Make sure you use the grub files specified. These are set up to do the network install and download from the internet. This sounds obvious but was yet another case of me not reading the fine manual properly and wondering why things aren't working.

UEFI virt roms now in official Fedora repos
Kamil got to it first, but just a note that UEFI roms for x86 and aarch64 virt are now shipped in the standard Fedora repos, where previously the recommended place to grab them was an external nightly repo. Kamil has updated the UEFI+QEMU wiki page to reflect this change.

On up to date Fedora 23+ these roms will be installed automatically with the relevant qemu packages, and libvirt is properly configured to advertise the rom files to applications, so enabling this with tools like virt-manager is available out of the box.
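
For example, with the roms and the libvirt metadata in place, a UEFI guest can be requested straight from the command line (the ISO path here is just a placeholder):

$ virt-install --name uefi-test --memory 2048 --disk size=10 \
    --cdrom /path/to/Fedora-Workstation-Live-x86_64-24.iso --boot uefi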

For the curious, the reason we can now ship these binaries in Fedora is because the problematic EDK2 'FatPkg' code, which had a Fedora incompatible license, was replaced with an implementation with a less restrictive (and more Fedora friendly) license.
Profiling in python

While working on FMN's new architecture I have been wanting to profile the application a little bit, to see where it spends most of its time.

I knew about the classic cProfile built into Python, but it didn't quite fit my needs since I wanted to profile a very specific part of my code, preferably without refactoring it just so that I could use cProfile.

Searching for a solution using cProfile (or something else), I ran into the PyCon presentation by A. Jesse Jiryu Davis entitled 'Python performance profiling: The guts and the glory'. It is really quite an interesting talk, and if you have not seen it, I would encourage you to watch it (on YouTube).

This talk presents yappi, standing for Yet Another Python Profiling Implementation and written by Sümer Cip, together with some code making it easy to use and to write the output in a format compatible with callgrind (allowing us to use KCacheGrind to visualize the results).

To give you an example, this is how it looked before (without profiling):

t = time.time()
results = fmn.lib.recipients(PREFS, msg, valid_paths, CONFIG)
log.debug("results retrieved in: %0.2fs", time.time() - t)

And this is the same code, integrated with yappi

import yappi
yappi.set_clock_type('cpu')
t = time.time()
yappi.start(builtins=True)
results = fmn.lib.recipients(PREFS, msg, valid_paths, CONFIG)
stats = yappi.get_func_stats()
stats.save('output_callgrind.out', type='callgrind')
log.debug("results retrieved in: %0.2fs", time.time() - t)

As you can see, all it takes is 5 lines of code to profile the function fmn.lib.recipients and dump the stats in a callgrind format.
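
Opening the result is then just a matter of pointing KCacheGrind at the file (assuming KCacheGrind is installed):

$ kcachegrind output_callgrind.out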

And this is how the output looks in KCacheGrind :) kcachegrind_fmn.png

GSoC - Journey So Far ( Badges, Milestones and more..)

Two days ago, I woke up to a mail from Google saying that I had passed the mid-term evaluations of GSoC and could continue working towards my final evaluation. "What a wonderful way to kick-start a day," I thought.

Google Summer of Code Mid Term E-Mail

Image : E-mail from Google Summer of Code


Working on the statistics tool was an amazing experience. You can browse my previous posts for a very detailed idea of what I've been working on. Apart from all the code written, I also got an opportunity to communicate with a lot of amazing people who are part of the Fedora Community, as well as get bootstrapped into the fedora-infrastructure team (and got an awesome badge for it).

Getting sponsored into the fi-apprentice group allows one to access the Fedora Infrastructure machines and setup in read-only mode (via SSH) and understand how things are done. However, write access is only given to those in the sysadmin group, who are generally the l33t people in #fedora-admin on Freenode ;)

Apart from that, I got the opportunity to attend a lot of CommOps Hack sessions and IRC Meetings where we discussed and tackled various pending tickets from the CommOps Trac. We are currently working on Onboarding Badges Series for various Fedora groups. Onboarding badges are generally awarded to all those who complete a set of specific tasks (pre-requisites) and get sponsored into the respective Fedora group. One such badge is to be born very soon - take a look at the CommOps Onboarding ticket here.

Life-Cycle of a badge:

Getting a new badge ready is not a very easy task. A badge is born when someone files a badge idea in the Fedora Badges Trac. The person who files a ticket is expected to describe the badge as accurately as possible - including the description, name and particulars. After that is done, a person needs to triage the badge and suggest changes to it (if required). The triager is also expected to fill in additional details about the badge so that the YAML can be made to automate the badge awarding process. The next step is to write YAML definitions for the badge and attach initial concept artwork of the badge. This is reviewed by the fedora-design team and is either approved or further hacked upon. After the approval, the badge is all set to be shipped. QR codes might be printed to manually award the badge, especially when it is an event badge.

Having talked about badges, these are the ones I was awarded during my GSoC period:

Image : Coding period badges (and counting ..)

Badges are a great way to track your progress in the community. They are also super useful for new contributors, as they can treat badges as goals to work towards. Read bee's blog post about it here.

To keep a check on myself, I have compiled all my data over here. This repo has all the things I have done inside the community and also has SVG graphs that holds the metrics of it. Hoping to have a great summer ahead.

Useful Links for new contributors:

You can also find me hanging out in #fedora-commops on Freenode. Feel free to drop-in and say hello :)

Talk recap: The friendship of OpenStack and Ansible

The 2016 Red Hat Summit is underway in San Francisco this week and I delivered a talk with Robyn Bergeron earlier today. Our talk, When flexibility met simplicity: The friendship of OpenStack and Ansible, explained how Ansible can reduce the complexity of OpenStack environments without sacrificing the flexibility that private clouds offer.

The talk started at the same time as lunch began and the Partner Pavilion first opened, so we had some stiff competition for attendees’ attention. However, the live demo worked without issues and we had some good questions when the talk was finished.

This post will cover some of the main points from the talk and I’ll share some links for the talk itself and some of the playbooks we ran during the live demo.

IT is complex and difficult

Getting resources for projects at many companies is challenging. OpenStack makes this a little easier by delivering compute, network, and storage resources on demand. However, OpenStack’s flexibility is a double-edged sword. It makes it very easy to obtain virtual machines, but it can be challenging to install and configure.

Ansible reduces some of that complexity without sacrificing flexibility. Ansible comes with plenty of pre-written modules that manage an OpenStack cloud at multiple levels for multiple types of users. Consumers, operators, and deployers can save time and reduce errors by using these modules and providing the parameters that fit their environment.

Ansible and OpenStack

Ansible and OpenStack are both open source projects that are heavily based on Python. Many of the same dependencies needed for Ansible are needed for OpenStack, so there is very little additional software required. Ansible tasks are written in YAML and the user only needs to pass some simple parameters to an existing module to get something done.

Operators are in a unique position since they can use Ansible to perform typical IT tasks, like creating projects and users. They can also assign fine-grained permissions to users with roles via reusable and extensible playbooks. Deployers can use projects like OpenStack-Ansible to deploy a production-ready OpenStack cloud.

Let’s build something

In the talk, we went through a scenario for a live demo. In the scenario, the marketing team needed a new website for a new campaign. The IT department needed to create a project and user for them, and then the marketing team needed to build a server. This required some additional tasks, such as adding ssh keys, creating a security group (with rules) and adding a new private network.

The files from the live demo are up on GitHub:

In the operator-prep.yml, we created a project and added a user to the project. That user was given the admin role so that the marketing team could have full access to their own project.
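
A rough sketch of what that kind of operator playbook can look like with Ansible's OpenStack modules (this is not the actual demo file; the cloud, project, and user names are placeholders):

- hosts: localhost
  connection: local
  tasks:
    - name: Create the marketing project
      os_project:
        cloud: mycloud
        name: marketing
        description: Project for the marketing campaign
        state: present

    - name: Create a user in the marketing project
      os_user:
        cloud: mycloud
        name: marketeer
        password: "{{ marketing_password }}"
        default_project: marketing
        state: present

    - name: Give the user the admin role on the project
      os_user_role:
        cloud: mycloud
        user: marketeer
        role: admin
        project: marketing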

From there, we went through the tasks as if we were a member of the marketing team. The marketing.yml playbook went through all of the tasks to prepare for building an instance, actually building the instance, and then adding that instance to the dynamic inventory in Ansible. That playbook also verified the instance was up and performed additional configuration of the virtual machine itself — all in the same playbook.

What’s next?

Robyn shared lots of ways to get involved in the Ansible community. AnsibleFest 2016 is rapidly approaching and the OpenStack Summit in Barcelona is happening this October.

Downloads

The presentation is available in a few formats:

The post Talk recap: The friendship of OpenStack and Ansible appeared first on major.io.

June 28, 2016

Container technologies in Fedora: systemd-nspawn

Welcome to the “Container technologies in Fedora” series! This is the first article in a series of articles that will explain how you can use the various container technologies available in Fedora. This first article will deal with systemd-nspawn.

What is a container?

A container is a user-space instance which can be used to run a program or an operating system in isolation from the system hosting the container (called the host system). The idea is very similar to a chroot or a virtual machine. The processes running in a container are managed by the same kernel as the host operating system, but they are isolated from the host file system, and from the other processes.

What is systemd-nspawn?

The systemd project considers container technologies as something that should fundamentally be part of the desktop and that should integrate with the rest of the user’s systems. To this end, systemd provides systemd-nspawn, a tool which is able to create containers using various Linux technologies. It also provides some container management tools.

In many ways, systemd-nspawn is similar to chroot, but is much more powerful. It virtualizes the file system, process tree, and inter-process communication of the guest system. Much of its appeal lies in the fact that it provides a number of tools, such as machinectl, for managing containers. Containers run by systemd-nspawn will integrate with the systemd components running on the host system. As an example, journal entries can be logged from a container in the host system’s journal.

In Fedora 24, systemd-nspawn has been split out from the systemd package, so you’ll need to install the systemd-container package. As usual, you can do that with a dnf install systemd-container.

Creating the container

Creating a container with systemd-nspawn is easy. Let’s say you have an application made for Debian, and it doesn’t run well anywhere else. That’s not a problem, we can make a container! To set up a container with the latest version of Debian (at this point in time, Jessie), you need to pick a directory to set up your system in. I’ll be using ~/DebianJessie for now.

Once the directory has been created, you need to run debootstrap, which you can install from the Fedora repositories. For Debian Jessie, you run the following command to initialize a Debian file system.

$ debootstrap --arch=amd64 stable ~/DebianJessie

This assumes your architecture is x86_64. If it isn’t, you must change amd64 to the name of your architecture. You can find your machine’s architecture with uname -m.

Once your root directory is set up, you will start your container with the following command.

$ systemd-nspawn -bD ~/DebianJessie

You’ll be up and running within seconds. You’ll notice something as soon as you try to log in: you can’t use any accounts on your system. This is because systemd-nspawn virtualizes users. The fix is simple: remove -b from the previous command. You’ll boot directly to the root shell in the container. From there, you can just use passwd to set a password for root, or you can use adduser to add a new user. As soon as you’re done with that, go ahead and put the -b flag back. You’ll boot to the familiar login console and you log in with the credentials you set.

All of this applies for any distribution you would want to run in the container, but you need to create the system using the correct package manager. For Fedora, you would use DNF instead of debootstrap. To set up a minimal Fedora system, you can run the following command, replacing the absolute path with wherever you want the container to be.

$ sudo dnf --releasever=24 --installroot=/absolute/path/ install systemd passwd dnf fedora-release

Debian Jessie in systemd-nspawn

Setting up the network

You’ll notice an issue if you attempt to start a service that binds to a port currently in use on your host system. Your container is using the same network interface. Luckily, systemd-nspawn provides several ways to achieve separate networking from the host machine.

Local networking

The first method uses the --private-network flag, which only creates a loopback device by default. This is ideal for environments where you don’t need networking, such as build systems and other continuous integration systems.

Multiple networking interfaces

If you have multiple network devices, you can give one to the container with the --network-interface flag. To give eno1 to my container, I would add the flag --network-interface=eno1. While an interface is assigned to a container, the host can’t use it at the same time. When the container is completely shut down, it will be available to the host again.

Sharing network interfaces

For those of us who don’t have spare network devices, there are other options for providing access to the container. One of those is the --port flag. This forwards a port on the container to the host. The format is protocol:host:container, where protocol is either tcp or udp, host is a valid port number on the host, and container is a valid port on the container. You can omit the protocol and specify only host:container. I often use something similar to --port=2222:22.

You can enable complete, host-only networking with the --network-veth flag, which creates a virtual Ethernet interface between the host and the container. You can also bridge two connections with --network-bridge.
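
Putting those options together, here is a sketch that boots the container with host-only networking and forwards SSH to port 2222 on the host (note that --port only works together with one of the private networking options):

$ systemd-nspawn -bD ~/DebianJessie --network-veth --port=tcp:2222:22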

Using systemd components

If the system in your container has D-Bus, you can use systemd’s provided utilities to control and monitor your container. Debian doesn’t include dbus in the base install. If you want to use it with Debian Jessie, you’ll want to run apt install dbus.

machinectl

To easily manage containers, systemd provides the machinectl utility. Using machinectl, you can log in to a container with machinectl login name, check the status with machinectl status name, reboot with machinectl reboot name, or power it off with machinectl poweroff name.
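
For example, with the Debian container from earlier (the machine name defaults to the directory name):

$ machinectl list
$ machinectl login DebianJessie
$ machinectl poweroff DebianJessie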

Other systemd commands

Most systemd commands, such as journalctl, systemd-analyze, and systemctl, support containers with the --machine option. For example, if you want to see the journals of a container named “foobar”, you can use journalctl --machine=foobar. You can also see the status of a service running in this container with systemctl --machine=foobar status service.

Status of the systemd-nspawn container Debian Jessie

Working with SELinux

If you’re running with SELinux enforcing (the default in Fedora), you’ll need to set the SELinux context for your container. To do that, you need to run the following two commands on the host system.

$ semanage fcontext -a -t svirt_sandbox_file_t "/path/to/container(/.*)?"
$ restorecon -R /path/to/container/

Make sure you replace “/path/to/container” with the path to your container. For my container, “DebianJessie”, I would run the following:

$ semanage fcontext -a -t svirt_sandbox_file_t "/home/johnmh/DebianJessie(/.*)?"
$ restorecon -R /home/johnmh/DebianJessie/
Fedora Flock 2016

I’ve been working on a shirt design for this year’s Fedora Flock in Krakow, Poland and figured that I’d share what I’ve put together! I’m also including some of my earlier attempts at the design as well to show my thought process as well. Ps. for those who may not be familiar with landmarks and iconic images of Krakow (and yes, I too am one of you too… much research was needed!) here’s a list of some of the imagery that I tied to incorporate in the designs.

  1. the symbol of Kraków is the Wawel dragon
  2. Wawel Castle is a significant landmark
  3. the Kościół Mariacki on the main square is distinctive because it features a pair of towers of uneven height.
  4. a significant pattern used for decorating in Poland, called "strój krakowski."

Previous designs (before the final)

Screenshot from 2016-06-23 14-33-09

And here we have, what I am considering to be, the final design!

flock shirt_ design #1.5.5_ basic colors


Fedora 24 upgrade

Fedora 24 was released last week, so of course I had to upgrade my machines. As has become the norm, there weren’t any serious issues, but I hit a few annoyances this time around. The first was due to packages in the RPMFusion repos not being signed. This isn’t Fedora’s fault, as RPMFusion is a completely separate project. And it was temporary: by the time I upgraded my laptop on Sunday night, the packages had all been signed.

Several packages had to be dropped by using the --allowerasing argument to dnf. Mostly these were packages installed from RPMFusion, but there were a couple of Fedora packages as well.
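
For anyone following along, a typical dnf upgrade to Fedora 24 looks roughly like the following (a general sketch, not necessarily the exact commands used on these machines):

$ sudo dnf install dnf-plugin-system-upgrade
$ sudo dnf system-upgrade download --releasever=24 --allowerasing
$ sudo dnf system-upgrade reboot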

The biggest annoyance was that post-upgrade, I had no graphical login. I had to explicitly enable and start the display manager service with:

systemctl enable kdm
systemctl start kdm

kdm had previously been enabled on both machines, but the upgrade nuked that in both cases. It looks like I’m not the only person to hit this: https://bugzilla.redhat.com/show_bug.cgi?id=1337546

And now, my traditional meaningless torrent stats!

Here’s my seeding ratios for Fedora 23:

Flavor i686 x86_64
KDE 16.2 35.6
Security 10.3 21.1
Workstation 30.9 46.7
Server 17.5 25.0

The “ratio ratio” as I call it is a comparison of seeding ratios between the two main architectures:

Flavor x86_64:i686
KDE 2.20
Security 2.05
Workstation 1.51
Server 1.43

So what does all of this tell us? Nothing, of course. Just because someone downloads a torrent doesn’t mean they use it. Still, if we pretend that it’s a proxy for usage, all of the seeding ratios are higher than on the last release day. That tells me that Fedora is becoming more popular (yay!). 64-bit architectures are continuing to be a larger portion of the pie, as well.

Now that I’m starting to build a record of these, I can start reporting trends with the Fedora 25 release.

A F24 user story

Honestly, none of the features in the announcement of the Fedora 24 release managed to excite me into upgrading my desktop from an old, out-of-support Fedora. Its main task is editing digital photography, and for some years now a Linux solution has been decent at it.

Still, the devil is in the details! I wanted to switch to the (relatively) recent 2.x release of darktable and maybe play a bit with development versions of GIMP. Obscure tasks from a release notes point of view, but a big use case for me. There was a two-week window from the F24 release to the next big incoming project, plenty of time to fix any small annoyances, right?

Well, not so fast :) The first thing I noticed was GIMP (2.8.16, stable) being close to useless: it can't import RAW images, because the import plugin (UFRaw) segfaults in the process. You can still use the app, but only for JPEG snapshots, not for anything serious.

Of course, there are workarounds: use another app, like darktable (remember, my first reason for the upgrade), for the RAW-to-JPEG conversion and then polish the result with GIMP (something which is part of my workflow for some scenarios). Not so fast, again! I wasn't able to discover the cause, but my darktable in Fedora 24 crashes a lot. A lot more than... say Windows 98 (first edition) on a bad day. Anecdote: for one particular photo, it crashed 4 times in a row (open the app -> select image thumbnail -> press the export button). It worked with a Windows 98 "solution": close everything else, try once more.

Another possible workaround: use GIMP devel, which is available via Flatpak (Flatpak is talked about so much as a feature, but is so not ready for primetime in F24!). No luck either: that GIMP comes with no plugins and no obvious way to install them. And damn! the new GIMP is so fugly...

Perhaps in a couple of months it will be smooth, but for now the upgrade doesn't look like a good investment.

fedora 24
Me, As A Social Media Expert (To Be)

I was appointed as a social media expert at some point in the relatively recent past. The problem was, I was clueless when it came to social media, and was relatively unsure of how and what I should do. I knew I had to manage social media accounts, but from that point on I can honestly say I played it by ear and did not know what I was doing. In order to help all of you out there in the same situation, I would like to share with you the things I have learned about social media in this process.

What’s The Deal With Social Media?

Even though social media has been around for some time, people are still very skeptical about using it to manage the marketing of a law office. For many owners of law offices and senior partners, social media seems like something only for younger people, and not serious enough to be used as a way to manage a law office's marketing. Of course, the very fact that social media has been around for this long shows that it is not going anywhere any time soon. This may be exactly the right opening for law offices to use it to their advantage: by opening more profiles on social media, they can reach a client base they could otherwise only dream of.

Expanding Through The Use Of Social Media

Using social media can be an advantage in the marketing of your law office. Especially since a lot of other law offices have failed to use social media, this means you will in a way reach a unique client base. If you would like to grow your business, perhaps you should try promoting your law office on social media instead of going for conventional ways of advertising in the newspapers or on TV.

Managing Your Social Media

When it comes to managing marketing, opening a few profiles on the social networks is certainly the easiest approach. Not only is it basically inexpensive (apart from some soft costs), you will also be able to manage it yourself. Moreover, you will be able to spread your own opinions and reach your customers and potential customers. You will also be able to issue a public statement that is completely accurate to the message you want it to convey.

DIY – Social Media Experts!

I would recommend that you at least handle the social media marketing yourself. Later on, if there is a need for it, you can appoint a team that will work steadily on safeguarding the image that you have on social media, and which will allow clients and potential clients to get in touch with you at any given time.

The post Me, As A Social Media Expert (To Be) appeared first on Road to Fudconlatam.

PHP version 7.0 in Fedora 25

FESCo has approved, for Fedora 25, the upgrade from PHP 5.6 to PHP 7.0.

 

For the record, this is the result of a lot of work, started more than a year ago:

And since then, each minor release has been published in the repository on the upstream announcement day.

Since yesterday, PHP version 7.0.8 is the version available in Fedora rawhide. It will be used for PHP stack QA.
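
Once the new packages are installed, a quick way to check which PHP version you are running and which extensions are loaded:

$ php --version
$ php --modules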

Notice the removed extensions and packages:

  • php-ereg
  • php-mssql
  • php-mysql
  • php-pecl-jsonc (but php-json is back)
  • php-pecl-mongo (php-pecl-mongodb is under review)
  • php-pecl-xhprof
  • php-pecl-mysqlnd-ms
  • php-pecl-mysqlnd-qc
  • php-xcache

Some others will probably be removed later by their owner. For now, all compatible extensions have been updated: amqp, apcu, apfd, event, fann, geoip, gmagick, http, lorde_lz4, igbinary, json_post, libsodium, libvirt, lzf, mailparse, memcache, memcached, msgpack, oauth, pq, propro, raphf, redis, rrd, selinux, smbclient, solr2, ssdeep, ssh2, twig, uuid, xattr, xdebug, xmldiff, yac, yaml, zip, zmq.

Now, we have to manage and fix all the problems detected by Koschei in the php group.

And of course, I have already started working on PHP 7.1 which will probably be proposed for Fedora 26.

So things happen here first, in the remi repository, which is used as upstream for Fedora and later for RHEL and CentOS.

Great thanks to my employer, and to everyone using my packages, and helping me to make this possible.

June 27, 2016

What is the Fedora Modularity project and how do you get involved ?
The Fedora Modularity Project is an effort to fix several problems that all distributions face. One of them is the disconnect between Fedora's release cycle and the release cycle of larger Fedora components like, for example, GNOME, KDE or even the kernel. Those components obviously don't follow the same lifecycle as Fedora; Fedora can't always wait for major components to be released upstream, but on the other hand it doesn't want to ship outdated software.
An earlier attempt to work around this disconnect was the Fedora Rings, with a central core 'base design', a concentric ring #2 around it for 'environments and stacks' and a ring #3 for applications. It wasn't possible to have different release cycles for packages in ring #2, as dependencies wouldn't allow that most of the time.

Another problem that the Modularity project is attempting to fix is the need for parallel installations of different versions of components without having to use compatibility packages. The reason for this is that users want the latest set of components and they also want to keep their older applications running even though they require older components.

Fedora Modularity takes the 'Fedora Rings' approach to the next level. Instead of having complete rings, we're now using something like 'ring segments' around a core module, where the segments on the same ring are independent of each other whereas segments on outer rings depend on certain inner segments.

There's a Wiki page describing how to get involved with Modularity. I'd strongly suggest reading those onboarding docs, including the links on that page. As we're using agile software development methods for Modularity with two-week sprints, it is very easy to get involved even if you're unsure whether this really is something you'd like to spend more of your time on. Just pick a task that you'll be able to finish during one of the sprints from the 'New' column of our Taiga board, move it to 'In progress' and assign it to yourself. Also make sure to get involved in our ongoing discussions in the #fedora-modularization channel on Freenode IRC.
At the end of each sprint each of us creates a short demo about the work that was done during that sprint. These videos are available on YouTube and show for example how to install the modularity environment by checking out our git repositories, how to set up and configure product definition-center, how to use fm-tools for modules and so on.

As it is difficult to follow the URLs in the videos, here's a short writeup about the git repositories and which git branches you need to get started:
cd
export MODULARITY_DIR="$HOME/modularity"
sudo rm -rf ~/modularity_workdir 2>/dev/null
# Set up the pungi / pdc modularity environment
# Create a new working directory first and chdir into it
mkdir -p "$MODULARITY_DIR"
cd "$MODULARITY_DIR"
#check out the pungi-modularity git repository
git clone https://pagure.io/pungi-modularity.git
#check out the modularity-prototype branch of Lubos's pungi git repository
git clone --branch modularity-prototype http://pagure.io/forks/lkocman/pungi.git pungi
#check out the modularity branch of Lubos's productmd git repository
git clone --branch modularity http://github.com/lkocman/productmd.git productmd
#fm-metadata got renamed to modulemd, check out the git repository
git clone http://pagure.io/modulemd.git modulemd

#set up some environment variables so that python finds the files
export MODULARITY_DIR="$HOME/modularity"   # must match the directory created above
export PYTHONPATH="$MODULARITY_DIR/productmd:$MODULARITY_DIR/pungi:$MODULARITY_DIR/modulemd"
export PATH="$MODULARITY_DIR/pungi/bin:$PATH"

#now do some work
#create a directory to store the compose, I've used ~/modularity_workdir
sudo mkdir ~/modularity_workdir
sudo chown $USER.$USER ~/modularity_workdir
cd $MODULARITY_DIR/pungi/bin
# get all the packages required by the packages referenced 
# in ../../pungi-modularity/pungi-inputs/core.yaml
./pungi-gather-prototype \
  --arch x86_64 \
  --config ../../pungi-modularity/pungi-inputs/core.yaml \
  --target-dir ~/modularity_workdir \
  --source-repo-from-path /home/24/Everything/x86_64/os/ \
  --source-repo-from-path /home/24/Everything/source/tree/ \
  --debug
# get all the packages required by the packages referenced 
# in ../../pungi-modularity/pungi-inputs/shells.yaml
./pungi-gather-prototype \
  --arch x86_64 \
  --config ../../pungi-modularity/pungi-inputs/shells.yaml \
  --target-dir ~/modularity_workdir \
  --source-repo-from-path /home/24/Everything/x86_64/os/ \
  --source-repo-from-path /home/24/Everything/source/tree/ \
  --debug
#create repositories with those RPMs
./pungi-createrepo-prototype \
  --arch x86_64  \
  --static-content-manifest ~/modularity_workdir/manifest_core-x86_64-*/  \
  --target-dir ~/modularity_workdir \
  --source-repo-from-path /home/24/Everything/x86_64/os/ \
  --source-repo-from-path /home/24/Everything/source/tree/ \
  --debug
./pungi-createrepo-prototype \
  --arch x86_64  \
  --static-content-manifest ~/modularity_workdir/manifest_shells-x86_64-*/ \
  --target-dir ~/modularity_workdir \
  --source-repo-from-path /home/24/Everything/x86_64/os/ \
  --source-repo-from-path /home/24/Everything/source/tree/ \
  --debug
#and finally do a compose
pungi-compose-prototype \
  --release fedora-24 \
  --variants-file "$MODULARITY_DIR"/pungi-modularity/pungi-inputs/variants-fm.xml \
  --arch x86_64 \
  --target-dir ~/modularity_workdir \
  --debug


After the last step you have several json files in ~/modularity_workdir/compose*/compose/metadata that describe the exact package versions used for this module and that will later be used by the Orchestrator to create koji buildroots with these packages and then build the module components and the module itself.

[Howto] Writing an Ansible module for a REST API

Ansible comes along with a great set of modules. But maybe your favorite tool is not covered yet and you need to develop your own module. This guide shows you how to write an Ansible module when you have a REST API to speak to.

Background: Ansible modules

Ansible is a great tool to automate almost everything in an IT environment. One of the huge benefits of Ansible is its so-called modules: they provide a way to address automation tasks in the native language of the problem. For example, given a user needs to be created: this is usually done by calling certain commands on the shell. In that case the automation developer has to think about which command line tool needs to be used and which parameters and options need to be provided, and the result is most likely not idempotent. And it's hard to run tests ("checks") with such an approach.

Enter Ansible user modules: with them the automation developer only has to provide the data needed for the actual problem like the user name, group name, etc. There is no need to remember the user management tool of the target platform or to look up parameters:

$ ansible server -m user -a "name=abc group=wheel" -b

Ansible comes along with hundreds of modules. But what if your favorite task or tool is not supported by any module? Then you have to write your own Ansible module. If your tool offers a REST API, there are a few things to know which make it much easier to get your module running fine with Ansible. These few things are outlined below.

REST APIs and Python libraries in Ansible modules

According to Wikipedia, REST is:

… the software architectural style of the World Wide Web.

In short, it's a way to write, provide and access an API via the usual HTTP tools and libraries (Apache web server, curl, you name it), and it is very common in everything related to the WWW.

To access a REST API via an Ansible module, there are a few things to note. Ansible modules are usually written in Python. The library of choice to access URLs, and thus REST APIs, in Python is usually urllib. However, the library is not the easiest to use and there are some security topics to keep in mind when it is used. For these reasons, alternative libraries like Python requests came up in the past and are pretty common.

However, using an external library in an Ansible module would add an extra dependency, thus the Ansible developers added their own library inside Ansible to access URLs: ansible.module_utils.urls. This one is already shipped with Ansible – the code can be found at lib/ansible/module_utils/urls.py – and it covers the shortcomings and security concerns of urllib. If you submit a module to Ansible calling REST APIs the Ansible developers usually require that you use the inbuilt library.

Unfortunately, currently the documentation on the Ansible url library is sparse at best. If you need information about it, look at other modules like the Github, Kubernetes or a10 modules. To cover that documentation gap I will try to cover the most important basics in the following lines – at least as far as I know.

Creating REST calls in an Ansible module

To access the Ansible urls library right in your modules, it needs to be imported in the same way as the basic library is imported in the module:

from ansible.module_utils.basic import *
from ansible.module_utils.urls import *

The main function call to access a URL via this library is open_url. It can take multiple parameters:

def open_url(url, data=None, headers=None, method=None, use_proxy=True,
             force=False, last_mod_time=None, timeout=10, validate_certs=True,
             url_username=None, url_password=None, http_agent=None,
             force_basic_auth=False, follow_redirects='urllib2'):

The parameters in detail are:

  • url: the actual URL, the communication endpoint of your REST API
  • data: the payload for the URL request, for example a JSON structure
  • headers: additional headers, often this includes the content-type of the data stream
  • method: a URL call can be of various methods: GET, DELETE, PUT, etc.
  • use_proxy: if a proxy is to be used or not
  • force: force an update even if a 304 indicates that nothing has changed (I think…)
  • last_mod_time: the time stamp to add to the header in case we get a 304
  • timeout: set a timeout
  • validate_certs: if certificates should be validated or not; important for test setups where you have self signed certificates
  • url_username: the user name to authenticate
  • url_password: the password for the above listed username
  • http_agent: if you want to set the HTTP agent
  • force_basic_auth: force the usage of basic authentication
  • follow_redirects: determine how redirects are handled

For example, to fire a simple GET at a given source like Google, most parameters are not needed and it would look like:

open_url('https://www.google.com',method="GET")

A more sophisticated example is to push actual information to a REST API. For example, if you want to search for the domain example on a Satellite server, you need to change the method to PUT, add a data structure for the actual search string ({"search":"example"}) and add a corresponding content type as header information ({'Content-Type':'application/json'}). Also, a username and password must be provided. Given that we access a test system here, certificate validation needs to be turned off as well. The resulting call looks like this:

open_url('https://satellite-server.example.com/api/v2/domains',
         method="PUT",
         url_username="admin", url_password="abcd",
         data=json.dumps({"search":"example"}),
         force_basic_auth=True, validate_certs=False,
         headers={'Content-Type':'application/json'})

Beware that the data json structure needs to be processed by json.dumps. The result of the query can be formatted as json and further used as a json structure:

resp = open_url(...)
resp_json = json.loads(resp.read())
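
If you want to sanity-check the endpoint and credentials outside of Ansible first, the same request can be issued with curl (using the hypothetical host and credentials from the example above):

$ curl -k -u admin:abcd -X PUT -H 'Content-Type: application/json' \
    -d '{"search":"example"}' https://satellite-server.example.com/api/v2/domains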

Full example

In the following example, we query a Satellite server to find a so-called environment ID for two given parameters, an organization ID and an environment name. To create a REST call for this task in a module, multiple separate steps have to be taken: first, create the actual URL endpoint. This usually consists of the server name as a variable and the API endpoint as the flexible part which differs in each REST call.

server_name = 'https://satellite.example.com'
api_endpoint = '/katello/api/v2/environments/'
my_url = server_name + api_endpoint

Besides the actual URL, the payload must be pieced together and the headers need to be set according to the content type of the payload – here json:

headers = {'Content-Type':'application/json'}
payload = {"organization_id":orga_id,"name":env_name}

Other content types depend on the REST API itself and on what the developer prefers. JSON is widely accepted as a good way to go for REST calls.

Next, we set the user and password and launch the call. The return data from the call are saved in a variable to analyze later on.

user = 'abc'
pwd = 'def'
resp = open_url(my_url,
                method="GET",
                headers=headers,
                url_username=user,
                url_password=pwd,
                force_basic_auth=True,
                data=json.dumps(payload))

Last but not least we transform the return value into a json construct and analyze it: if the return value does not contain any data – that means the value for the key total is zero – we want the module to exit with an error. Something went wrong, and the automation administrator needs to know that. The module calls the built-in error function module.fail_json. But if the total is not zero, we get out the actual environment ID we were looking for with this REST call from the beginning – it is deeply hidden in the json structure, btw.

resp_json = json.loads(resp.read())
if resp_json["total"] == 0:
    module.fail_json(msg="Environment %s not found." % env_name)
env_id = resp_json["results"][0]["id"]

Summary

It is fairly easy to write Ansible modules to access REST APIs. The most important part to know is that an internal, Ansible provided library should be used, instead of the better known urllib or requests library. Also, the actual library documentation is still pretty limited, but that gap is partially filled by the above post.


Filed under: Ansible, Business, Cloud, Debian & Ubuntu, Fedora & RHEL, Google, HowTo, Linux, Microsoft, RPM, Shell, SUSE, Technology
Zodbot… upgraded

Kneel before the new Zodbot!

We have upgraded our beloved evil super villain IRC bot on freenode from an old version of supybot-gribble to a new shiny version of limnoria ( https://github.com/ProgVal/Limnoria ).  This doesn’t change much in the interface, but it does mean we are using something that is maintained and gets updates and is a good deal more secure. If you notice problems please do let us know with a Fedora Infrastructure ticket.

Also, as one of the maintainers of supybot-gribble in Fedora and EPEL, I will be retiring that in favor of the limnoria package very soon. It should now be available in the updates repos for your upgrading pleasure.

Ursa: You are master of all you survey.

General Zod: [bored] So I was yesterday. And the day before.

Shortcuts - online tool help.
This website comes with an incredible number of shortcuts.
You can see below the icons for each application.
So when you have time, just take a look ...
The future of security
The Red Hat Summit is happening this week in San Francisco. It's a big deal if you're part of the Red Hat universe, which I am. I'm giving the Red Hat security roadmap talk this year. The topic has me thinking about the future of security quite a lot. It's easy to think about this in the context of an organization like Red Hat: we have a lot of resources, and there are a lot of really interesting things happening. Everything from container security, to operating system security, to middleware security. My talk will end up on YouTube at some point and I'll link to it, but I also keep thinking about the bigger picture. Where will security be in the next 5, 10, 15 years?

Will ransomware still be a thing in ten years? Will bitcoin still be around? What about flash? How will open source adapt to all the changes? Will we even call them containers?

The better question here is "what do we want security to look like?"

If we look at some of the problems that always make the news, stolen personal information, password leaks, ransomware, hacking. These aren't new problems, most are almost as old as the Internet. The question is really, can we fix any of these problems? The answer might be "no". Some problems aren't fixable, crime is an example of this. When you have unfixable problems the goal is to control the problem, not prevent it.

How do we control security?

I think we're headed down this path today. It's still slow going and there are a lot of old habits that will die hard. Most decent security organizations aren't focused on pure prevention anymore, they understand that security is process and people, it's all about having nice policies and good staff. If you have those things you can start to work on controlling some aspects of what's happening. If you want users to behave you have to make it easy for them to do the right thing. If you don't want them opening email attachments, make it easy to not use email attachments.

There are still a lot of people who think it's enough to tell people not to do something, or yell at them if they behave in a way that is quite honestly expected. People don't like getting yelled at, they don't like having to go out of their way to do anything, they will always pick the option that is easiest.

Back to the point though. What will the future of security look like? I think the future of security is people. Technology is great, but all our fancy technology is there to solve problems that are in the past. If we want to solve the problems of the future, we need good people to first understand those problems; then we can understand how to solve them. This is of course easier said than done, but sometimes just understanding the problem is the hard part.

Are you a people? Do you have ideas how to make things better? Tell me: @joshbressers
UEFI for QEMU now in Fedora repositories

I haven’t seen any announcement, but I noticed that the Fedora repositories now contain the edk2-ovmf package. That is the package that is necessary to emulate UEFI in QEMU/KVM virtual machines. It seems all licensing issues have finally been resolved and now you can easily run UEFI systems in your virtual machines!

I have updated the Using_UEFI_with_QEMU wiki page accordingly.
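
For a quick manual test, something along these lines should work; note that the firmware paths below are assumptions based on where the edk2-ovmf package installs its files, so check the wiki page if they differ on your system:

$ sudo dnf install edk2-ovmf
$ cp /usr/share/edk2/ovmf/OVMF_VARS.fd /tmp/my_vars.fd
$ qemu-system-x86_64 -m 2048 \
    -drive if=pflash,format=raw,readonly=on,file=/usr/share/edk2/ovmf/OVMF_CODE.fd \
    -drive if=pflash,format=raw,file=/tmp/my_vars.fd \
    -cdrom /path/to/installer.iso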

Enjoy.


June 26, 2016

Issue 98, HyperKitty, Fedora-Apps

This week I focused on iterating on the feedback I received for the previous issue and worked on the help widget for the hubs. I also found something that caught my interest: improving the design of the Fedora Apps website. Lastly, I conducted a heuristic evaluation of HyperKitty to evaluate the design choices for the website.

Issue 98

This was the first task I worked on this week. The issue expresses a need for the user to have the same functionality of #help in IRC, as a widget in the hub.

Following is a high-level overview of the flow: the user asks for help in the IRC channel, which might be specified in a semi-structured format. The JSON has to be populated and sent to the back-end. Next, it gets pushed to the help widget, where it is converted into a format which is understandable to the user.

Some use cases that I can think of are:

  1. How would the user specify a request in the IRC channel? (Tagging may be one solution)
  2. How will we display the message in the help widget?
  3. What all would we want to show in the help widget?
  4. How to configure the help widget?
  5. Further, I wanted to keep in mind the developer constraints as well.

Paper Mockups

Final Design

In the final design I followed the Fedora guidelines and design standards for this project. The final design is provided below, and feedback on the design can be provided here

Fedora-apps

This is a project that I took on on my own, as I wanted to make the Fedora Apps front page easily understandable. Currently the categorization of projects in the graph is incorrect. Furthermore, the graph is not easily readable.

For this purpose, Mo and I had a discussion to identify the end users of this website. We concluded that our primary users would include system administrators, developers and users of the open source community. To improve the information on this page, we thought to clearly indicate the software category and software name. Furthermore, we decided to show some additional information about the software when the user clicks it. Following these discussions, I started gathering information for categorizing the apps accurately based on the software name, category and their description.

HyperKitty Heuristics Evaluation

Last week I also did a heuristics evaluation of HyperKitty, which is a Django-based archiver for the Mailman suite that allows users to start new threads, reply to mails and mark them as favorites. I focused on analysing the website with regards to the principles that we have been taught in class. I will be covering the heuristics in a separate blog post.

My list of must-see sessions at Red Hat Summit 2016

The Red Hat Summit starts this week in San Francisco, and a few folks asked me about the sessions that they shouldn’t miss. The schedule can be overwhelming for first timers and it can be difficult at times to discern the technical sessions from the ones that are more sales-oriented.

If you’re in San Francisco, and you want to learn a bit more about using Ansible to manage OpenStack environments, come to the session that I am co-presenting with Robyn Bergeron: When flexibility met simplicity: The friendship of OpenStack and Ansible.

Confirmed good sessions

Here’s a list of sessions that I’ve seen in the past and I still highly recommend:

Dan is also doing a lab for container security called “A practical introduction to container security” that should be really interesting if you enjoy his regular sessions.

Sessions I want to see this year

If you want to talk OpenStack, Ansible, or Rackspace at the Summit, send me something on Twitter. Have a great time!

The post My list of must-see sessions at Red Hat Summit 2016 appeared first on major.io.

June 25, 2016

New badge: Red Hat Summit 2016 !
You visited Fedora at the Red Hat Summit in 2016!
New FMN architecture and tests


Introduction

FMN is the FedMsg Notification service. It allows any contributor (or actually, anyone with a FAS account) to tune which notifications they want to receive and how.

For example it allows saying things like:

  • Send me a notification on IRC for every package I maintain that has successfully built on koji
  • Send me a notification by email for every request made in pkgdb to a package I maintain
  • Send me a notification by IRC when a new version of a package I maintain is found

How it works

The principle is that anyone can log in to the web UI of FMN. There, they can create filters on a specific backend (mainly email or IRC) and add rules to each filter. These rules must either be validated or invalidated for the notification to be sent.

Then the FMN backend listens to all the messages sent on Fedora's fedmsg and for each message received, goes through all the rules in all the filters to figure out who wants to be notified about this action and how.

The challenge

Today, computing who wants to be notified and how takes about 6 to 12 seconds per message and is really CPU intensive. This means that when we have an operation sending a few thousand messages on the bus (for example, mass-branching, or a packager who maintains a lot of packages orphaning them), the queue of messages grows and it can take hours or days for a notification to be delivered, which could be problematic in some cases.

The architecture

This is the current architecture of FMN:

|                        +--------\
|                   read |  prefs | write
|                  +---->|  DB    |<--------+
|                  |     \--------+         |
|        +-----+---+---+            +---+---+---+---+   +----+
|        |     |fmn.lib|            |   |fmn.lib|   |   |user|
v        |     +-------+            |   +-------+   |   +--+-+
fedmsg+->|consumer     |            |central webapp |<-----+
+        +-----+  +---+|            +---------------+
|        |email|  |irc||
|        +-+---+--+-+-++
|          |        |
|          |        |
v          v        v

As you can see, it is not clear where the CPU intensive part is, and that's because it is in fact integrated in the fedmsg consumer. This design, while making things easier, brings the downside of making it practically impossible to scale when we have an event producing lots of messages. We multi-threaded the application as much as we could, but we quickly reached the limit of the GIL.

To try improving on this situation, we reworked the architecture of the backend as follows:

                                                     +-------------+
                                              Read   |             |   Write
                                              +------+  prefs DB   +<------+
                                              |      |             |       |
   +                                          |      +-------------+       |
   |                                          |                            |   +------------------+   +--------+
   |                                          |                            |   |    |fmn.lib|     |   |        |
   |                                          v                            |   |    +-------+     |<--+  User  |
   |                                    +----------+                       +---+                  |   |        |
   |                                    |   fmn.lib|                           |  Central WebApp  |   +--------+
   |                                    |          |                           +------------------+
   |                             +----->|  Worker  +--------+
   |                             |      |          |        |
fedmsg                           |      +----------+        |
   |                             |                          |
   |                             |      +----------+        |
   |   +------------------+      |      |   fmn.lib|        |       +--------------------+
   |   | fedmsg consumer  |      |      |          |        |       | Backend            |
   +-->|                  +------------>|  Worker  +--------------->|                    |
   |   |                  |      |      |          |        |       +-----+   +---+  +---+
   |   +------------------+      |      +----------+        |       |email|   |IRC|  |SSE|
   |                             |                          |       +--+--+---+-+-+--+-+-+
   |                             |      +----------+        |          |        |      |
   |                             |      |   fmn.lib|        |          |        |      |
   |                             |      |          |        |          |        |      |
   |                             +----->|  Worker  +--------+          |        |      |
   |                         RabbitMQ   |          |    RabbitMQ       |        |      |
   |                                    +----------+                   |        |      |
   |                                                                   v        v      v
   |
   |
   |
   v

The idea is that the fedmsg consumer listens to Fedora's fedmsg and puts the messages in a queue. These messages are then picked from the queue by multiple workers that do the CPU intensive work and put their results in another queue. The results are then picked from this second queue by a backend process that does the actual notification (sending the email or the IRC message).

We also included an SSE component to the backend, which is something we want to do for fedora-hubs but this still needs to be written.

Testing the new architecture

The new architecture looks fine on paper, but one would wonder how it performs in real life and with real data.

In order to test it, we wrote two scripts (one for the current architecture and one for the new) sending messages via fedmsg or putting messages in the queue that the workers listen to, thereby mimicking the behavior of the fedmsg consumer. Then we ran different tests.

The machine

The machine on which the tests were run is:

  • CPU: Intel i5 760 @ 2.8GHz (quad-core)
  • RAM: 16G DDR2 (1333 MHz)
  • Disk: SanDisk SDSSDA12 (120G)
  • OS: RHEL 7.2, up to date
  • Dataset: 15,000 (15K) messages

The results

The current architecture

The current architecture only allows running one test: send 15K fedmsg messages, let the fedmsg consumer process them, and monitor how long it takes to digest them.

Test #0 - fedmsg based
  Lasted for 9:05:23.313368
  Maxed at:  14995
  Avg processing: 0.458672376874 msg/s

The new architecture

Since the new architecture is able to scale, we performed different tests with it, using 2 workers, then 4 workers, then 6 workers and finally 8 workers. This gives us an idea of whether the scaling is linear and how much improvement we get by adding more workers.

Test #1 - 2 workers - 1 backend
  Lasted for 4:32:48.870010
  Maxed at:  13470
  Avg processing: 0.824487297215 msg/s
Test #2 - 4 workers - 1 backend
  Lasted for 3:18:10.030542
  Maxed at:  13447
  Avg processing: 1.1342276217 msg/s
Test #3 - 6 workers - 1 backend
  Lasted for 3:06:02.881912
  Maxed at:  13392
  Avg processing: 1.20500359971 msg/s
Test #4 - 8 workers - 1 backend
  Lasted for 3:14:11.669631
  Maxed at:  13351
  Avg processing: 1.15160928467 msg/s

Conclusions

Looking at the results of the tests, the new architecture clearly handles its load better and faster. However, the progress isn't as linear as we would like. My feeling is that retrieving information from the cache (here Redis) gets slower at some point, possibly also because of the central lock we tell Redis to use.

As time permits, I will try to investigate this further to see if we can still gain some speed.

EMEA Sponsorship Program for Flock 2016

In the past we have had a tradition of sponsoring EMEA contributors that would like to attend Flock but are not going to receive funding as speakers.

The allocated budget for this year was only $800, initially. The adjusted budget is even less. As we wish to assist as many contributors in need as possible, we are going to try and raise that budget. However, no promises made at this point.

This program is intended for contributors that currently reside in EMEA. Other regions may feel free to come up with similar programs for their contributors as they see fit. Priority will be given to contributors that have recently maintained a record of activity and also have some kind of work agenda for the conference (e.g. meet with fellow contributors, actively contribute on site).

In any case, the budget we are going to end up with will be limited. Most likely we will be able to offer only -partial- subsidies. Do not apply if you are not okay with that. If you are able to receive funding from other sources (e.g. your employer), please do so.

To request sponsorship, file a ticket on the EMEA Trac, stating the reason(s) for applying and what you expect to accomplish by attending the conference. The deadline is July 5, 23:59:59 UTC+0. All tickets will later be evaluated by FAmSCo, same as last time.

We will get back to you once we have more information regarding the budget. Please spread the news to your fellow Fedorians.

Thank you and good luck!

June 24, 2016

Another bit of ARM server hardware: SoftIron Overdrive


https://shop.softiron.co.uk/product/overdrive-1000/

The data sheet is here but in brief, quad core AMD Seattle with 8 GB of RAM (expandable to 64 GB). Approximately equivalent to the still missing AMD Cello developer board.


GSoC 2016 Weekly Rundown: Assembling the orchestra

This week is the Google Summer of Code 2016 midterm evaluation week. Over the past month since the program started, I've learned more about the technology I'm working with, implemented it within my infrastructure, and moved closer to completing my proposal. My original project proposal details how I am working with Ansible to bring improved automation for WordPress platforms within Fedora, particularly the Fedora Community Blog and the Fedora Magazine.

Understanding background

My project proposal originated from a discussion based on an observation about managing the Fedora Magazine. Fedora’s infrastructure is entirely automated in some form, often times using Ansible playbooks to “conduct” the Fedora orchestra of services, applications, and servers. However, all the WordPress platforms within Fedora are absent from this automated setup. This has to do with the original context of setting up the platforms.

However, now that automation is present in so much of the Infrastructure through a variety of tasks and roles, it makes sense to merge the two existing WordPress platforms in Fedora into the automation. This was the grounds for my proposal back in March, and I’ve made progress towards learning a completely new technology and learning it by example.

Initial research

From the beginning, I've used two resources as guides and instructions for GSoC 2016. "Ansible For DevOps", a book by Jeff Geerling, has played a significant part in helping bootstrap me with Ansible and its ins and outs. I'm about halfway through the book so far, and it has helped profoundly with learning the technology. Special thanks to Alex Wacker for introducing me to the book!

The second resource is, as one would expect, the Ansible documentation. The documentation for Ansible is complete and fully explanatory. Usually if there is an Ansible-specific concept I am struggling with learning, or finding a module for accomplishing a task, the Ansible documentation helps point me in the right direction quickly.

Research into practice

After making some strides through the book and the documentation, I began turning the different concepts into practical playbooks for my own personal infrastructure. I run a handful of machines for different purposes, ranging from my Minecraft server, a ZNC bouncer, some PHP forum websites, and more. Ever since I began using headless Linux servers, I’ve never explored automation too deeply. Every time I set up a new machine or a service, I would configure it all manually, file by file.

First playbook

After reading more about Ansible, I began seeing ways I could try automating things in my "normal" setup. This helped give me a way to ease myself into Ansible without overwhelming myself with tasks that were too large. I created repositories on Pagure for my personal playbooks and Minecraft playbooks. The very first one I wrote was my "first 30 minutes" on a new machine. This playbook sets up a RHEL / CentOS 7 machine with basic security measures and a few personal preferences ready to go. It's nothing fancy, but it was a satisfying moment to run it in my Vagrant machine and see it do all of my usual tasks on a new machine instantly.

For more information on using Ansible in a Vagrant testing environment, check out my blog post about it below.

Setting up Vagrant for testing Ansible


Moving to Minecraft

After writing the first playbook, I moved on to focusing on some other areas I could try automating to improve my "Ansible chops". Managing my Minecraft server network is one place where I recognized I could improve automation. I spend a lot of time repeating the same sort of tasks, and having an automated way to do these tasks would make sense.

I started writing playbooks for adding and restarting Minecraft servers based on the popular open source server software, Spigot. Writing these playbooks helped introduce me to different core modules in Ansible, like lineinfile, template, copy, get_url, and more.
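
As a rough illustration of the kind of steps those modules cover (the host group, URL, and paths below are made up for the example, not taken from the actual playbooks):

$ ansible minecraft -b -m get_url -a "url=https://example.com/spigot.jar dest=/opt/minecraft/spigot.jar"
$ ansible minecraft -b -m lineinfile -a "dest=/opt/minecraft/server.properties regexp='^motd=' line='motd=Welcome'"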

I have also been using sites like ServerFault to find answers for any starting questions I have. Some of the changes between Ansible 1.x and 2.x caused some hiccups in one case for me.

Using Infrastructure resources

After getting a better feel for the basics, I started focusing less on my infrastructure and more on the project proposal. One of the key differences from me writing playbooks, roles, and tasks for my infrastructure is that there are already countless Ansible resources available from Fedora Infrastructure. For example, to create a WordPress playbook for Fedora Infrastructure, I would want to use the mariadb_server role for setting up a database for the site. Doing that in my playbook (or writing a separate role for it just for WordPress) would increase the difficulty of maintaining the playbooks and make it inconvenient for other members of Fedora Infrastructure.

Creating a deliverable

In my personal Ansible repository, I have begun constructing the deliverable product for the end of the summer. So far, I have a playbook that creates a basic, single-site WordPress installation. The intention for the final deliverable is to have a playbook for creating a “base” installation of a WordPress network, and then any other tasks for creating extra sites added to the network. This will make sure that any WordPress sites in Fedora are running the same core version, receive the same updates, and are consistent in administration.

I also intend to write documentation for standing up a WordPress site in Fedora based on my deliverable product. Fortunately, there is already a guide on writing a new SOP, so after talking with my mentor, Patrick Uiterwijk, on documentation expectations and needs next week, I will be referring back to this document as a guide for writing my own.

Reflection on GSoC 2016 so far

I was hoping to have advanced farther by this point, but due to learning bumps and other tasks, I wasn't able to move at the pace I had hoped. However, since starting GSoC 2016, I've made some personal observations about the project and how I can improve.

  • Despite being behind from where I wanted to be, I feel I am at a point where I am mostly on track and able to work towards completing my project proposal on schedule.
  • I recognize communication on my progress has not been handled well, and I am making plans to make sure shorter, more frequent updates are happening at a consistent and regular basis. This includes a consistent, weekly (if not twice every week) blog post about my findings, progress, commits, and more.
  • After talking with Patrick this week, we are going to begin doing more frequent check-ins about where I am in the project and making sure I am on track for where I should be.

Excerpt from GSoC 2016 evaluation form

As one last bit, I thought it would be helpful to share my answers from Google’s official midterm evaluation form from the experience section.

“What is your favorite part of participating in GSoC?”

“Participating in GSoC gave me a means to continue contributing to an open source community I was still getting involved in. I began contributing to Fedora in September 2015, and up until the point when I applied for GSoC, I had anticipated having to give up my activity levels of contributing to open source while I maintained a job over the summer. GSoC enabled me to remain active and engaged with the Fedora Project community and it has kept me involved with Fedora.

The Fedora Project is also a strong user of Ansible, which is what my project proposal mostly deals with. My proposal gives me a lot of experience and the opportunity to learn new technology that not only allows me to complete my proposal, but also understand different levels and depths of contributing to the project far beyond the end of the summer. With the skills I am learning, I am being enabled as a contributor for the present and the future. To me, this is exciting as the area that I am contributing in has always been one that’s interested to me, and this project is jump-starting me with the skills and abilities needed to be a successful contributor in the future.

GSoC is also actively teaching me lessons about time management and overcoming the challenges of working remotely (which I will detail in the next question). I believe the experience I am getting now from participating in GSoC allows me to improve as an open source developer and contributor and to learn important skills about working remotely with others on shared projects.”

“What is the most challenging part of participating in GSoC?”

“The hardest part for me was (is) learning how to work remotely. In the past, when I was contributing at school, I had resources available to me: people nearby I could reach out to for assistance, places I could go to focus, and a more consistent schedule. Working from home has required me to reach out for help, either by improving how well I can search for something or by reaching out to others in the project community about how to accomplish an objective.

There are also different responsibilities at home, and creating a focused, constructive space for project work is an extremely important part of helping me accomplish it. Learning to be consistent in my own work and to set my own deadlines is a large part of what I’m working on now. Learning to set and follow personal goals for the project was a hard lesson at first, but finding that balance quickly is something that is helping me move forward.”

The post GSoC 2016 Weekly Rundown: Assembling the orchestra appeared first on Justin W. Flory's Blog.

LVM Thin Provisioning

The previous blog post [1] I wrote about LVM described the foundations this great technology is based on. But as I already mentioned there, the real purpose of that post was to provide the basic "common ground" for this one: a blog post focusing on LVM Thin Provisioning, a really great technology that gets maybe 10% of the focus and glory it deserves. So what is this amazing thing? Continue reading to find out!

Explore Flatpak in Fedora 24

One of the main features of Fedora 24 Workstation is better support for Flatpak — a new, distribution-agnostic format for packaging and distributing Linux desktop apps. The two main goals of Flatpak are creating a single installation file that can be distributed to users across distributions, and running apps as isolated from the rest of the system as possible. Probably the biggest practical benefit for users is that you can run any app no matter what version of Fedora you’re using.

We covered the Flatpak release announcement a few days ago here on Fedora Magazine, but if you’ve never heard of Flatpak before, you may have heard of xdg-app, which was the development name for this technology. It was recently renamed to Flatpak to reflect the fact that it’s finally ready for broader usage. Besides Fedora, Flatpak is already available in Arch, Debian (experimental), Mageia, and openSUSE (still as xdg-app). There are also personal repositories with Flatpak for Debian stable and Ubuntu.

How does it work?

An application runs in a sandbox and has all of its dependencies present within the sandbox. The app can use a so-called runtime, which is a well-defined environment that provides the app with the most common components of the platform it is built on. There are currently three runtimes available (GNOME, KDE, FreeDesktop). Dependencies that are missing from the runtime need to be bundled with the app. This way, the app can run pretty much independently of the underlying system.
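
If you are curious which apps and runtimes are already present on your system, flatpak can list them. A quick sketch, assuming the --app and --runtime filters behave as in the flatpak releases shipped with Fedora 24:

$ flatpak list --app       # installed applications
$ flatpak list --runtime   # installed runtimes, e.g. org.gnome.Platform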

The sandbox is currently not completely sealed off. Most apps need to communicate with the rest of the system (loading and saving files, sending notifications, talking to the sound and video servers, and so on), and currently there have to be holes in the sandbox to make this possible. But the Flatpak developers are already working on an API that will allow controlled access outside the sandbox and put the user in charge of it. So in the future, an app won’t be able to access your data or hardware without you allowing it.

What apps are available for Flatpak?

GNOME has backed the project from the beginning, so it also provides the longest list of applications available as Flatpaks. You can install 17 GNOME apps in stable versions, which means 3.20. These are not very useful for users of Fedora 24 because GNOME 3.20 is included in the release, but they may come in handy for Fedora 23 users, who are on GNOME 3.18. GNOME also offers an even longer list (23) of apps in nightly versions, which are going to become GNOME 3.22. So if you’d like to try what’s being brewed for Fedora 25, the GNOME nightly repository for Flatpak offers an easy way to do it.

The other major desktop project – KDE – also offers apps packaged for Flatpak. It’s currently 13 apps. These are also nightly builds.

The Document Foundation provides LibreOffice for Flatpak. It’s version 5.2, so again a newer version than you can find in Fedora 24 (5.1).

The official Flatpak website lists other applications that are available in nightly versions: Darktable, GIMP, GTK+3 GIMP, Inkscape, and MyPaint. One app that is not listed on the Flatpak website but is available as a Flatpak is Pitivi.

How to install and run apps?

In Fedora 24, Flatpak is only partly supported in GNOME Software. Software will update already installed apps and runtimes, or remove them if you’d like. But to add a repository and install an app, you need to go to the command line and use the flatpak command. If you don’t have it installed, just run sudo dnf install flatpak.

Installing a Flatpak app requires a couple of commands, but the respective websites give you step-by-step instructions, so it’s fairly easy even for those who are not very familiar with the command line.

Here is an example of how to install a nightly version of a GNOME app:

Download the signing key and add the repository:

$ wget https://sdk.gnome.org/nightly/keys/nightly.gpg
$ flatpak remote-add --gpg-import=nightly.gpg gnome-nightly-apps https://sdk.gnome.org/nightly/repo-apps/

List available apps in the repository:

$ flatpak remote-ls gnome-nightly-apps --app

Install e.g. gedit:

$ flatpak install gnome-nightly-apps org.gnome.gedit master

This should create a standard launcher for the app, but you can also start it from the command line:

$ flatpak run org.gnome.gedit

Note that Flatpak allows you to install apps without being a privileged user; just add --user to the commands. Everything then lives in your home directory.
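
For example, the gedit installation above could be done entirely as an unprivileged user. A sketch, assuming the flag is accepted by the remote-add and install sub-commands as in the flatpak releases shipped with Fedora 24:

$ flatpak remote-add --user --gpg-import=nightly.gpg gnome-nightly-apps https://sdk.gnome.org/nightly/repo-apps/
$ flatpak install --user gnome-nightly-apps org.gnome.gedit master
$ flatpak run org.gnome.gedit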

Installing apps from the command line is quite easy, but it’s not the best user experience. In Fedora 25, Flatpak will be fully integrated into GNOME Software: you will just download a .flatpak file, double-click it, and Software will take care of the rest.

PHP version 5.5.37, 5.6.23 and 7.0.8

RPMs of PHP version 7.0.8 are available in the remi-php70 repository for Fedora and Enterprise Linux (RHEL, CentOS).

RPMs of PHP version 5.6.23 are available in the remi repository for Fedora ≥ 21 and in the remi-php56 repository for Fedora and Enterprise Linux.

RPMs of PHP version 5.5.37 are available in the remi repository for Fedora 20 and in the remi-php55 repository for Enterprise Linux.

PHP version 5.4 has reached its end of life and is no longer maintained by the PHP project. Given the very large number of downloads by users of my repository, this version is still available in the remi repository for Enterprise Linux (RHEL, CentOS...) and includes the security fixes from version 5.5.37. Upgrading to a maintained version is strongly recommended.

These versions are also available as Software Collections.

These versions fix several security bugs, so updating is strongly recommended.

Version announcements:

The 5.5.27 release was the last planned release containing regular bug fixes. All subsequent releases contain only security-relevant fixes, for a term of one year (until July 2016).

Installation: use the Configuration Wizard and choose your version and installation mode.

Replacement of default PHP by version 7.0 installation (simplest):

yum-config-manager --enable remi-php70
yum update
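
A quick check afterwards should confirm that the default PHP has been replaced (a hypothetical session; the exact version string depends on the build):

php --version   # expected to report PHP 7.0.8 from the remi-php70 repository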

Parallel installation of version 7.0 as Software Collection (x86_64 only):

yum install php70
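
The Software Collection does not replace the system PHP, so you run it explicitly through scl-utils. A minimal usage sketch, assuming the php70 collection registers with scl as remi's SCL packages normally do (the same pattern applies to php56 and php55):

scl enable php70 'php --version'   # run a single command with PHP 7.0 on the PATH
scl enable php70 bash              # or open a shell with the collection enabled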

Replacement of default PHP by version 5.6 installation (simplest):

yum-config-manager --enable remi-php56
yum update

Parallel installation of version 5.6 as Software Collection (x86_64 only):

yum install php56

Replacement of default PHP by version 5.5 installation (simplest):

yum-config-manager --enable remi-php55
yum update

Parallel installation of version 5.5 as Software Collection (x86_64 only):

yum --enablerepo=remi install php55

And soon in the official updates:

To be noted:

  • EL7 RPMs are built using RHEL 7.2
  • EL6 RPMs are built using RHEL 6.8
  • A lot of new extensions are also available; see the PECL extension RPM status page

For more information, read:

Base packages (php)

Software Collections (php55 / php56 / php70)

FriendsFeaturesFreedomFirstForever
Fedorans,

When I packed up my entire life and moved to fair Rollywood this past January to be closer to Red Hat Tower and dive head-first into this role as the first ever Fedora Community Action and Impact Lead, it was an absolute dream for me. Every day, I'm working with some of the most talented, motivated, brilliant hackers, designers, and volunteers who build and ship the operating system at the core of my digital existence--not to mention many of the enterprises and much of the infrastructure around the world. The scale of the problems, and the speed at which we are solving them, is still absolutely mind-blowing to me.

I love my job.
I love my team.
I love the Fedora Community.
I love Red Hat.
Wholeheartedly.
Thank you, all of you.

Never did I imagine there would ever be another opportunity that could sway me from this path, which I flipped my life upside down to pursue... but then I got the call...

Things are moving quickly, and I don't have all the details yet, but it looks like I'm going to be the first ever Open Source Community Manager for the next President of the United States, Hillary Clinton.

I have to be at campaign HQ in Brooklyn NY ready to hit the ground running on Monday.

I will be spending my last 24 hours as a Red Hatter flipping my entire life upside down, again, and I wanted to give the community one last window to ping me with any outstanding business or questions, and direct you to the appropriate channels thereafter.

After tomorrow, I'm going to be packing up and moving to Brooklyn to work 8 days a week until roughly Thanksgiving ;)

In my absence, here is a list of points of contact for FCL business (cc'd above) until my replacement arrives.

For all things related to:

  • Fedora Council, Matthew Miller: mattdm@fedoraproject.org
  • Fedora Budget, Events, and ambassadors, Joe Brockmeier: jzb@redhat.com
  • Fedora Engineering/Infrastructure, Paul Frields: pfrields@fedoraproject.org
  • Community Operations/CommBlog/Mktg/Misc, the Community Operations Team: commops@lists.fedoraproject.org

I know the timing is less than ideal, with Red Hat Summit next week and Flock in August, but I believe this position will bring the most visibility to the work and principles that Red Hat, Fedora, and the FOSS community stand for.

I find strength, purpose, and peace in these words of George Bernard Shaw:

"...I am of the opinion that my life belongs to the whole community, and as long as I live it is my privilege to do for it whatever I can. ...Life is no "brief candle" for me. It is a sort of splendid torch which I have got hold of for the moment, and I want to make it burn as brightly as possible before handing it on to future generations."

This is one of those "once in a lifetime" moments in our history, and if the Free and Open Source Software Movement can be part of that, then I'm willing to answer the call.

Happy Hacking,
--RemyD.

June 23, 2016

The Creative Process

This week I took on designs that were much more, shall I say, involved, but that’s a good thing! Starting out the week, I worked on an “infographic,” which is not something I had designed before, but it allowed me to work on an aspect of graphic design that, I think, gets a bit overlooked by creatives: layout. “Isn’t your job just to make things look good?” Yes… and no. As a designer, the aesthetic of my work is of great importance, but part of what I also need to look at is functionality: the “is-it-readable” and “is-it-understandable” factors.

So, I figured I would post a little something about just how much thought and organization I put into a design! With simpler, less specific designs I can sometimes get away with just diving right into Inkscape, but with things like the infographic logic model and t-shirt designs I worked on this week, there were a number of requirements, color requirements/restrictions, and layout constraints, so much more planning had to be done. Typically I hit four(ish) main steps in my own “creative process”:

  1. research/ notes
  2. sketching
  3. vector sketching/ layout
  4. designing

I’m including an example of how I applied these steps in the logic model design (something similar for the t-shirt design might be posted too, depending on time!).

  1. research/ notes
    I research the design I’m about to do (looking at ticket information, browsing similar designs for inspiration, researching the topic, etc.) and compile the information into some form of notes or a chart… I’m that kid who color-codes her binders for school to match the textbooks. I really like organization, if that isn’t already clear!
    On the right is the rough draft posted in the design ticket, and on the left I transferred that information into a word doc so that I would be able to just copy/paste the information into the logic model at the end.
    [image: notes step]
  2. sketching
    This is the sketch I made for myself; it’s very rough, but it at least gets my mind thinking about logical locations for things (though throughout the design process this is bound to change many times!). Apologies for the portrait orientation.
    [image: sketch step]
  3. vector sketching/ layout
    In this project, I used another logic model design to help me think through possible layouts. I usually arrange generic squares/rectangles on the artboard to get a better feel for where things fit best.
    On the left is the logic model that I was using as inspiration, and on the right is my “blocked out” artboard.
    [image: layout step]
  4. designing
    This step is pretty self-explanatory. At this point, the layout should be relatively final and almost 100% functional. So far, I only have two iterations of my design, but typically I will have between three and five iterations of a similar design by the time I apply feedback from others and myself.
    Here’s what we’ve got so far with this! It’s pretty close to being complete.
    [image: logic model, ticket #432, revision 1]