Fedora People

Using the Connfa open source conference management software set: the CMS

Posted by Ankur Sinha "FranciscoD" on June 22, 2018 10:36 PM

The Connfa CMS server is a PHP application that uses a MySQL database. To begin with, the documentation here is quite good: http://connfa.com/documentation/. However, as is usually the case, it takes a few tweaks to deploy it. These steps are, therefore, Fedora 28 specific.

On Fedora, one needs to use php71 from remi's repository: https://blog.remirepo.net/pages/Config-en

sudo dnf install php71 composer php71-php-mysqlnd mariadb httpd mariadb-server php71-php-xml
sudo systemctl start php71-php-fpm
sudo systemctl start httpd
sudo systemctl start mysqld
module load php71
php --version

This is because the tool doesn't work with the latest version of PHP that's in Fedora 28. The tool also has a bug or two, so I used this fork, which appears to have a bugfix:

https://github.com/d-i-t-a/connfa-integration-server/tree/develop

Then, one must set up MariaDB as explained here: https://fedoraproject.org/wiki/MariaDB. Note that MariaDB is an open source, drop-in replacement for MySQL.

One must also create a database as explained in the wiki page, and a user that connfa can use, which must match the values in the env file.
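For example, a minimal MariaDB setup might look like this (the database name, user, and password are placeholders and must match whatever you put in the env file):

$ mysql -u root -p
MariaDB [(none)]> CREATE DATABASE connfa CHARACTER SET utf8;
MariaDB [(none)]> CREATE USER 'connfa'@'localhost' IDENTIFIED BY 'changeme';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON connfa.* TO 'connfa'@'localhost';
MariaDB [(none)]> FLUSH PRIVILEGES;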

On Fedora, I enabled the UserDir httpd extension, and placed the connfa-integration-server in ~/public_html/, since I didn't want to work as root in /var/www/html all the time. httpd will need to be started and enabled:

sudo systemctl start httpd; sudo systemctl enable httpd
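Enabling UserDir is usually a matter of editing /etc/httpd/conf.d/userdir.conf; a minimal sketch of the relevant lines is below (adjust as needed, and make sure httpd can traverse your home directory, e.g. chmod 711 ~):

# /etc/httpd/conf.d/userdir.conf (excerpt)
<IfModule mod_userdir.c>
    # UserDir disabled
    UserDir public_html
</IfModule>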

On Fedora, SELinux must be told to allow access to UserDirs:

sudo setsebool -P httpd_read_user_content 1
sudo setsebool -P httpd_unified 1
sudo setsebool -P httpd_can_network_connect_db 1
sudo setsebool -P httpd_can_network_connect 1

One can follow the steps from here next: http://connfa.com/integration-server/server-requirements

I also ran composer update to update the PHP bits.

Then, update the env file and so on as explained here: http://connfa.com/integration-server/install/

php artisan key:generate #sets the keys in the env file
php artisan password:change --name=admin --password=connfa18

The site should be accessible at http://localhost/~<username>/public/login/. The username is admin@test.com here.

The API is accessible at: http://localhost/~<username>/public/api/v2/cns-2018/checkUpdates

I did have quite a few issues with permissions, but then I'm neither a web developer nor a server administrator, so my skills in that department are rather limited.

I'll look at the Android application next, and hopefully, I'll be able to sync it up with the CMS server.

PHP version 7.1.19 and 7.2.7

Posted by Remi Collet on June 22, 2018 08:22 PM

RPM of PHP version 7.2.7 are available in remi repository for Fedora 28 and in remi-php72 repository for Fedora 25-27 and Enterprise Linux  6 (RHEL, CentOS).

RPM of PHP version 7.1.19 are available in remi repository for Fedora 26-27 and in remi-php71 repository for Fedora 25 and Enterprise Linux (RHEL, CentOS).

No security fix this month, so no update for versions 5.6.36 and 7.0.30.

PHP version 5.5 has reached its end of life and is no longer maintained by the PHP project.

These versions are also available as Software Collections in the remi-safe repository.

Version announcements:

Installation: use the Configuration Wizard and choose your version and installation mode.

Replacement of default PHP by version 7.2 installation (simplest):

yum-config-manager --enable remi-php72
yum update php\*

Parallel installation of version 7.2 as Software Collection (x86_64 only):

yum install php72

Replacement of default PHP by version 7.1 installation (simplest):

yum-config-manager --enable remi-php71
yum update

Parallel installation of version 7.1 as Software Collection (x86_64 only):

yum install php71

And soon in the official updates:

To be noted:

  • EL7 rpms are built using RHEL-7.5
  • EL6 rpms are built using RHEL-6.9
  • a lot of new extensions are also available, see the PECL extension RPM status page

Information, read:

Base packages (php)

Software Collections (php56 / php70 / php71 / php72)

Breaking out of the Slack walled garden

Posted by James Just James on June 22, 2018 05:50 PM
I’m old school cool. Real hackers chat on open, distributed platforms. Most technical discussion can be found on the Freenode IRC network. It’s not perfect, but the advantages clearly outweigh the drawbacks. Recently, I needed to join an existing large “community” on the centralized, proprietary walled garden that is the Slack network. The Problem: Connecting to the Slack server requires that you use either the proprietary client or their proprietary web app.

Thomson 8-bit computers, a history

Posted by Bastien Nocera on June 22, 2018 03:06 PM
In March 1986, my dad was in the market for a Thomson TO7/70. I have the circled classified ads in “Téo” issue 1 to prove that :)



TO7/70 with its chiclet keyboard and optical pen, courtesy of MO5.com

The “Plan Informatique pour Tous” was in full swing, and Thomson were supplying schools with micro-computers. My dad, as a primary school teacher, needed to know how to operate those computers, and eventually teach them to kids.

The first thing he showed us when he got the computer, on the living room TV, was a game called “Panic” or “Panique” where you controlled a missile, protecting a town from flying saucers that flew across the screen from either side, faster and faster as the game went on. I still haven't been able to locate this game again.

A couple of years later, the TO7/70 was replaced by a TO9, with a floppy disk, and my dad used that computer to write an educational software about top-down additions, as part of a training program run by the teachers schools (“Écoles Normales” renamed to “IUFM“ in 1990).

After months of nagging, and some spring cleaning, he found the listings of his educational software, which I've liberated, with his permission. I'm currently still working out how to generate floppy disks that are usable directly in emulators. But here's an early screenshot.


Later on, my dad got an IBM PC compatible, an Olivetti PC/1, on which I'd play a clone of Asteroids for hours, but that's another story. The TO9 got passed down to me, and after spending a full summer doing planning for my hot-dog and chips van business (I was 10 or 11, and I had weird hobbies already), and entering every game from the “102 Programmes pour...” series of books, the TO9 got put to the side at Christmas, replaced by a Sega Master System, using that same handy SCART connector on the Thomson monitor.

But how does this concern you? Well, I've worked with RetroManCave on a Minitel episode not too long ago, and he agreed to do a history of the Thomson micro-computers. I did a fair bit of the research and fact-checking, as well as some needed repairs to the (prototype!) hardware I managed to find for the occasion. The result is this first look at the history of Thomson.

[Embedded video: https://www.youtube.com/embed/UPxQR2r6iVA]


Finally, if you fancy diving into the Thomson computers, there will be an episode coming shortly about the MO5E hardware, and some games worth running on it, on the same YouTube channel.

I'm currently working on bringing the “Teo” TO8D emulator to Flathub, for Linux users. When that's ready, grab some games from the DCMOTO archival site, and have some fun!

I'll also be posting some nitty gritty details about Thomson repairs on my Micro Repairs Twitter feed for the more technically inclined among you.

LibreOffice color selector as GTK widgets

Posted by Caolán McNamara on June 22, 2018 02:21 PM
Here's what the native GTK widget mode for the color picker looks like at the moment under Wayland. A GtkMenuButton displaying a color preview of the currently selected color and a GtkPopover containing the color selection widgetry.
<iframe allow="autoplay; encrypted-media" allowfullscreen="allowfullscreen" frameborder="0" height="315" src="https://www.youtube.com/embed/ZiisWMArKDw" width="560"></iframe>
Under X, however, the GtkPopover is constrained to its parent's size, so there we fall back to using a GtkWindow to contain the widgetry.

Use LVM to Upgrade Fedora

Posted by Fedora Magazine on June 22, 2018 08:00 AM

Most users find it simple to upgrade from one Fedora release to the next with the standard process. However, there are inevitably many special cases that Fedora can also handle. This article shows one way to upgrade using DNF along with Logical Volume Management (LVM) to keep a bootable backup in case of problems. This example upgrades a Fedora 26 system to Fedora 28.

The process shown here is more complex than the standard upgrade process. You should have a strong grasp of how LVM works before you use this process. Without proper skill and care, you could lose data and/or be forced to reinstall your system! If you don’t know what you’re doing, it is highly recommended you stick to the supported upgrade methods only.

Preparing the system

Before you start, ensure your existing system is fully updated.

$ sudo dnf update
$ sudo systemctl reboot # or GUI equivalent

Check that your root filesystem is mounted via LVM.

$ df /
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/vg_sdg-f26 20511312 14879816 4566536 77% /

$ sudo lvs
LV        VG           Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
f22       vg_sdg -wi-ao---- 15.00g
f24_64    vg_sdg -wi-ao---- 20.00g
f26       vg_sdg -wi-ao---- 20.00g
home      vg_sdg -wi-ao---- 100.00g
mockcache vg_sdg -wi-ao---- 10.00g
swap      vg_sdg -wi-ao---- 4.00g
test      vg_sdg -wi-a----- 1.00g
vg_vm     vg_sdg -wi-ao---- 20.00g

If you used the defaults when you installed Fedora, you may find the root filesystem is mounted on a LV named root. The name of your volume group will likely be different. Look at the total size of the root volume. In the example, the root filesystem is named f26 and is 20G in size.

Next, ensure you have enough free space in LVM.

$ sudo vgs
VG #PV #LV #SN Attr VSize VFree
vg_sdg 1 8 0 wz--n- 232.39g 42.39g

This system has enough free space to allocate a 20G logical volume for the upgraded Fedora 28 root. If you used the default install, there will be no free space in LVM. Managing LVM in general is beyond the scope of this article, but here are some possibilities:

1. /home on its own LV, and lots of free space in /home

You can log out of the GUI and switch to a text console, logging in as root. Then you can unmount /home, and use lvreduce -r to resize and reallocate the /home LV. You might also boot from a Live image (so as not to use /home) and use the gparted GUI utility.

2. Most of the LVM space allocated to a root LV, with lots of free space in the filesystem

You can boot from a Live image and use the gparted GUI utility to reduce the root LV. Consider moving /home to its own filesystem at this point, but that is beyond the scope of this article.

3. Most of the file systems are full, but you have LVs you no longer need

You can delete the unneeded LVs, freeing space in the volume group for this operation.
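For example, the unused test volume shown in the listing above could be removed like this (double-check the name before deleting anything):

$ sudo lvremove vg_sdg/test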

Creating a backup

First, allocate a new LV for the upgraded system. Make sure to use the correct name for your system’s volume group (VG). In this example it’s vg_sdg.

$ sudo lvcreate -L20G -n f28 vg_sdg
Logical volume "f28" created.

Next, make a snapshot of your current root filesystem. This example creates a snapshot volume named f26_s.

$ sync
$ sudo lvcreate -s -L1G -n f26_s vg_sdg/f26
Using default stripesize 64.00 KiB.
Logical volume "f26_s" created.

The snapshot can now be copied to the new LV. Make sure you have the destination correct when you substitute your own volume names. If you are not careful you could delete data irrevocably. Also, make sure you are copying from the snapshot of your root, not your live root.

$ sudo dd if=/dev/vg_sdg/f26_s of=/dev/vg_sdg/f28 bs=256k
81920+0 records in
81920+0 records out
21474836480 bytes (21 GB, 20 GiB) copied, 149.179 s, 144 MB/s

Give the new filesystem copy a unique UUID. This is not strictly necessary, but UUIDs are supposed to be unique, so this avoids future confusion. Here is how for an ext4 root filesystem:

$ sudo e2fsck -f /dev/vg_sdg/f28
$ sudo tune2fs -U random /dev/vg_sdg/f28

Then remove the snapshot volume which is no longer needed:

$ sudo lvremove vg_sdg/f26_s
Do you really want to remove active logical volume vg_sdg/f26_s? [y/n]: y
Logical volume "f26_s" successfully removed

You may wish to make a snapshot of /home at this point if you have it mounted separately. Sometimes, upgraded applications make changes that are incompatible with a much older Fedora version. If needed, edit the /etc/fstab file on the old root filesystem to mount the snapshot on /home. Remember that when the snapshot is full, it will disappear! You may also wish to make a normal backup of /home for good measure.
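A snapshot of /home can be created the same way as the root snapshot above; the size here is only an example and just needs to hold the changes made while the snapshot exists:

$ sync
$ sudo lvcreate -s -L4G -n home_s vg_sdg/home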

Configuring to use the new root

First, mount the new LV and backup your existing GRUB settings:

$ sudo mkdir /mnt/f28
$ sudo mount /dev/vg_sdg/f28 /mnt/f28
$ sudo mkdir /mnt/f28/f26
$ cd /boot/grub2
$ sudo cp -p grub.cfg grub.cfg.old

Edit grub.cfg and add this before the first menuentry, unless you already have it:

menuentry 'Old boot menu' {
        configfile /grub2/grub.cfg.old
}

Edit grub.cfg and change the default menuentry to activate and mount the new root filesystem. Change this line:

linux16 /vmlinuz-4.16.11-100.fc26.x86_64 root=/dev/mapper/vg_sdg-f26 ro rd.lvm.lv=vg_sdg/f26 rd.lvm.lv=vg_sdg/swap rhgb quiet LANG=en_US.UTF-8

So that it reads like this. Remember to use the correct names for your system’s VG and LV entries!

linux16 /vmlinuz-4.16.11-100.fc26.x86_64 root=/dev/mapper/vg_sdg-f28 ro rd.lvm.lv=vg_sdg/f28 rd.lvm.lv=vg_sdg/swap rhgb quiet LANG=en_US.UTF-8

Edit /mnt/f28/etc/default/grub and change the default root LV activated at boot:

GRUB_CMDLINE_LINUX="rd.lvm.lv=vg_sdg/f28 rd.lvm.lv=vg_sdg/swap rhgb quiet"

Edit /mnt/f28/etc/fstab to change the mounting of the root filesystem from the old volume:

/dev/mapper/vg_sdg-f26 / ext4 defaults 1 1

to the new one:

/dev/mapper/vg_sdg-f28 / ext4 defaults 1 1

Then, add a read-only mount of the old system for reference purposes:

/dev/mapper/vg_sdg-f26 /f26 ext4 ro,nodev,noexec 0 0

If your root filesystem is mounted by UUID, you will need to change this, since the copy now has a different UUID. One approach is to label the new filesystem and mount it by label. Here is how if your root is an ext4 filesystem:

$ sudo e2label /dev/vg_sdg/f28 F28

Now edit /mnt/f28/etc/fstab to use the label. Change the mount line for the root file system so it reads like this:

LABEL=F28 / ext4 defaults 1 1

Rebooting and upgrading

Reboot, and your system will be using the new root filesystem. It is still Fedora 26, but a copy with a new LV name, and ready for dnf system-upgrade! If anything goes wrong, use the Old Boot Menu to boot back into your working system, which this procedure avoids touching.

$ sudo systemctl reboot # or GUI equivalent
...
$ df / /f26
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/vg_sdg-f28 20511312 14903196 4543156 77% /
/dev/mapper/vg_sdg-f26 20511312 14866412 4579940 77% /f26

You may wish to verify that using the Old Boot Menu does indeed get you back to having root mounted on the old root filesystem.

Now follow the instructions at this wiki page. If anything goes wrong with the system upgrade, you have a working system to boot back into.

Future ideas

The steps to create a new LV and copy a snapshot of root to it could be automated with a generic script.  It needs only the name of the new LV, since the size and device of the existing root are easy to determine. For example, one would be able to type this command:

$ sudo copyfs / f28

Supplying the mount-point to copy makes it clearer what is happening, and copying other mount-points like /home could be useful.
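Here is a rough, untested sketch of what such a script could look like, assuming an LVM layout like the one above (no error handling or safety checks included):

#!/bin/bash
# copyfs: copy the filesystem mounted at $1 to a new LV named $2
set -e
MNT="$1"; NEWLV="$2"
DEV=$(df --output=source "$MNT" | tail -n1)           # e.g. /dev/mapper/vg_sdg-f26
VG=$(lvs --noheadings -o vg_name "$DEV" | tr -d ' ')
LV=$(lvs --noheadings -o lv_name "$DEV" | tr -d ' ')
SIZE=$(lvs --noheadings -o lv_size --units b "$DEV" | tr -d ' ')
lvcreate -L "$SIZE" -n "$NEWLV" "$VG"                 # new LV, same size as the source
sync
lvcreate -s -L1G -n "${LV}_s" "$VG/$LV"               # temporary snapshot of the source
dd if="/dev/$VG/${LV}_s" of="/dev/$VG/$NEWLV" bs=256k # copy the snapshot, not the live root
lvremove -y "$VG/${LV}_s"                             # drop the snapshot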

What's a kernel devel package anyway

Posted by Laura Abbott on June 21, 2018 06:00 PM

One of the first concepts you learn when building open source software is the existence of -devel packages. You have package foo to provide some functionality and foo-devel for building other programs with the foo functionality. The kernel follows this pattern in its own special kernel way for building external modules.

First a little bit about how a module is built. A module is really just a fancy ELF file compiled and linked with the right options. It has .text, .data, and other kernel-specific sections. Some parts of the build environment also get embedded in modules. Modules are also just a socially acceptable way to run arbitrary code in kernel mode. Modules are loaded via a system call (either by fd or an mmapped address). The individual sections (.text, .data, etc.) get placed based on the ELF header. The kernel does some basic checks on the ELF header to make sure it's not complete crap (loading for an incorrect arch, etc.) but can also do some more complicated verification. Each module gets a version magic embedded in the ELF file. This needs to match the running kernel but can be overridden with a force option. There's also CONFIG_MODVERSIONS which will generate a CRC over functions and exported symbols to make sure they match the kernel that was built. If the CRC in the module and kernel don't match, the module loading will fail.
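For example, you can compare a module's version magic with the running kernel like this (the module path here is hypothetical):

$ modinfo -F vermagic ./mymodule.ko
$ uname -r

The kernel release in the vermagic string should match the output of uname -r, or loading will fail unless forced.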

Now consider an out-of-tree module. The upstream Linux kernel doesn't provide an ABI guarantee. In order to build an external module, you need to use the same tree that was used to build the kernel. You might be able to get away with using a different base but it's not guaranteed to work. These requirements are well documented. Actually packaging the entire build tree would be large and unnecessary. Fedora ends up packaging a subset of the build tree:

  • Kconfigs and Makefiles
  • header files, both generic and architecture specific
  • Some userspace binaries built at make modules_prepare time
  • The kernel symbol map
  • Module.symvers
  • A few linker files for some arches

Annoyingly, because each distribution does something different, all of this has to be done manually. This also means we find bugs when there are new dependencies that need to be packaged. I really wish we could just get away with building the module dependencies at runtime, but that doesn't work with the requirements.
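For reference, the usual way to build an out-of-tree module against that packaged subset (installed via the kernel-devel package) looks roughly like this, run from the module's source directory:

$ sudo dnf install kernel-devel-$(uname -r)
$ make -C /lib/modules/$(uname -r)/build M=$PWD modules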

Flatpak – a history

Posted by Alexander Larsson on June 20, 2018 02:23 PM

I’ve been working on Flatpak for almost 4 years now, and 1.0 is getting closer. I think it might be interesting at this point to take a retrospective look at the history of Flatpak.

Early history

<figure class="wp-caption alignnone" style="width: 386px"><figcaption class="wp-caption-text">Ancient Egyptian Flatpak</figcaption></figure>

The earliest history goes back to the summer of 2007. I had played a bit with an application image system called Klik, which had some interesting ideas. However, I was not really satisfied with some technical details. One day at the beach I got an interesting idea for a hack that could improve this.

Fast forward to August 2007, when I released Glick in the wild, based on these ideas. The name is sort of a pun on the old KDE/Gnome first-letter naming scheme, although neither Klik nor Glick is really desktop-specific.

Glick was a single-file-image system. It predates usable kernel container APIs, so it uses fuse and some weird hacks. It doesn't integrate with the desktop in any way, and applications have to decide what to bundle, falling back to system libraries for the non-bundled things. This means it's not terribly robust, but it is completely stand-alone and needs nothing installed on the host system.

Around 2011 the initial support for kernel namespaces had landed and started being useful. Using these I could avoid some of the hacks that my earlier experiment used. So, I got interested in bundling again and released Glick 2 based on this.

Glick 2 requires some software to be installed on the host, which allows it to integrate better with the system. For example, you can “install” bundles by putting the file in a known location, and doing this allows some level of desktop integration. Glick 2 also uses SHA1 checksums to try to automatically de-duplicate files shared between applications. Here we can see an early version of the ideas that make up OSTree.

Bundling using namespaces was a lot more robust than the previous hacks, but it still relied on the system for the core libraries that the application doesn't bundle. So an app would sometimes work on one distro, but not another.

Around this time I posted a blog post about how I thought application bundling combined with read-only OS images could make a really good model for an OS. This idea is very similar to what Project Atomic / Silverblue are doing now.

Containers, Portals and Runtimes

A few years later, around 2013, the kernel support for containers was starting to shape up, and Docker hit the market. I did a lot of work on early Docker, like porting it away from aufs in order to run on RHEL.

Around this time I also attended the Gnome Developer Experience hackfest in Brussels, where one of the topics was application deployment and sandboxing. From the discussions there (and my previous experiences) a lot of the core ideas of Flatpak, like runtimes, sandboxing and portals, originated.

In 2014 the first version (then called xdg-app) was released. The current Flatpak is a lot more polished, but the initial version of xdg-app is still very much recognizable today.

xdg-app used OSTree to download, store and de-duplicate applications. It uses kernel namespaces (via a helper called xdg-app-helper) to do unprivileged containers. It has a split between applications and runtimes, which allows applications to be portable between distros in a very robust fashion, while still limiting the duplication between applications and allowing security updates. There is also integration with the desktop (icons, desktop files, mimetypes, etc), and some very early portal code can be seen.

The great renaming

<figure class="wp-caption alignnone" style="width: 256px"><figcaption class="wp-caption-text">Modern Flatpak</figcaption></figure>

The name xdg-app was just something I picked for the first commit without much consideration, and it was not very good. However, names are hard, and we spent a lot of time trying to come up with another, eventually settling on “Flatpak” (with the above logo). The 0.6.0 release in May 2016 was the first with the new name.

The 0.6 release was also the first that split out the unprivileged container launcher (xdg-app-helper) into its own project, now known as BubbleWrap, hosted by Project Atomic.

Soon thereafter we had the first release of xdg-desktop-portal which is the host-side implementation of the portal idea, allowing sandboxed applications to safely break out of the sandbox in a controlled fashion.

Version 0.8.0, released in December 2016, was the first long-term stable release, which was included in Debian Stretch and RHEL 7. Since then we have had another stable release series called 0.10.x.

We want apps!

Flatpak was always a decentralized system, in that anyone can host their own applications and be on equal terms with everyone else. However, while this is an important feature, it leads to a poor initial experience, both for users (hard to find apps) and for developers (who need to maintain their own repository).

To solve this we started the Flathub project, which is a single repository where you can find most apps. In the last year it has gone from a minimal viable product building its first app to something with more than 300 apps and a diverse group of developers.

Onwards and upwards!

<figure class="wp-caption alignnone" style="width: 800px"><figcaption class="wp-caption-text">Future Flatpak</figcaption></figure>

No software is ever finished, or bug-free, but we have had a list of core things that we wanted to have before calling Flatpak 1.0, and that list is now empty. So, I’m planning to release a release-candidate (called 0.99.1) later this week.

Then 1.0 will be released later this summer.

Welcome to Fedora CoreOS

Posted by Fedora Magazine on June 20, 2018 02:02 PM
This is a message from Fedora Project Leader Matthew Miller.

Hi everyone. If you saw my talk at DevConf.cz this year, you’ll remember I discussed the Fedora / Red Hat relationship, and specifically how Fedora has historically worked with new technologies that come our way through acquisitions made by our primary sponsor. Little did I know, but at that very moment, something huge was in the works, and when my plane landed back in Boston my phone blew up with messages about CoreOS joining Red Hat.

That’s obviously gigantic news, directly relevant to Fedora, since we are the project Red Hat depends on for operating-system level integration and innovation. Now, most of the news is about Kubernetes, OpenShift, Tectonic, and Quay — but there’s also Container Linux (the operating system formerly known just as “CoreOS”). At Red Hat Summit, the company announced and clarified a bunch of things around product and corporate plans. Now, it’s time for us to figure out how we can welcome and include the Container Linux community in the circle of Fedora Friends.

What does this mean for the Fedora community?

Good things! Container Linux is exciting, innovative, and has a passionate user and developer community. The people who built it are awesome and well-aligned with the Fedora community foundations.

The “Fedora Editions” strategy intentionally makes space for exploring emerging areas in operating system distributions. CoreOS will help us push that even further and bring new ways of doing things to the project as a whole.

What does this mean for Container Linux users?

More good things! I know this is kind of scary. Fedora CoreOS is going to be built from Fedora content rather than in the way it’s made now. It won’t necessarily be made in the same way we make Fedora OS deliverables today, though. No matter what, we absolutely want the CoreOS user experience of “container cluster host OS that keeps itself up-to-date and you just don’t worry about it.” Again, technical details are a discussion for elsewhere, but the goal is for existing Container Linux users to be as happy as — or happier than! — you are with the OS today.

And here’s the super-important thing: Fedora really is a community-driven project, and this means that you can get involved and directly influence how the future Fedora CoreOS works to meet your needs. If you’re interested and need help getting involved, don’t hesitate to talk to me, to the Join Fedora team, or to the developers and community people already working on the project.

What does this mean for Fedora Atomic Host and other deliverables?

This isn’t the place for technical details — see “what next?” at the bottom of this message for more. I expect that over the next year or so, Fedora Atomic Host will be replaced by a new thing combining the best from Container Linux and Project Atomic. This new thing will be “Fedora CoreOS” and serve as the upstream to Red Hat CoreOS.

Hey, so… “Fedora Core”!

Everything’s a circle, right? But, this has nothing to do with the Red Hat vs. external split that was Fedora Core and Extras back in the day. We absolutely do not want to regress to that kind of community divide. “Core” just happens to be a pretty catchy name component for an OS that fits the “small, focused base” concept. This concept is powerful and useful for today’s information technology and computing world, and we want to give it proper focus in Fedora.

Okay, so, what next?

Visit the new website at https://coreos.fedoraproject.org. The project is just getting started, so there’s not much there yet, but we do have an initial FAQ. If you have questions that aren’t answered, or just want to get involved, join in discussion on the new Discourse board, the dev mailing list at coreos@lists.fedoraproject.org, and on Freenode IRC in #fedora-coreos.

Flatpak in detail, part 2

Posted by Matthias Clasen on June 20, 2018 03:44 AM

The first post in this series looked at runtimes and extensions. Here, we’ll look at how flatpak keeps the applications and runtimes on your system organized, with installations, repositories, branches, commits and deployments.

Installations and repositories

An installation is a place on your filesystem where flatpak can install apps and runtimes. By default, flatpak has a system-wide installation in /var/lib/flatpak, and a user installation in $HOME/.local/share/flatpak.

It is possible to define additional system-wide installations by placing a key file in /etc/flatpak/installations.d. For example, this can be used to keep apps on a portable drive.

Part of the data that flatpak keeps for each installation is a list of remotes. A remote is a reference to an ostree repository that is available somewhere on the network.

Each installation also has its own local ostree repository (for example, the system-wide installation has its repo in /var/lib/flatpak/repo). You can explore the contents of these repositories using the ostree utility:

$ ostree --repo=$HOME/.local/share/flatpak/repo/ refs
gnome-nightly:appstream/x86_64
flathub:runtime/org.freedesktop.Platform.Locale/x86_64/1.6
flathub:app/de.wolfvollprecht.UberWriter/x86_64/stable
...

Branches and versions

Similar to git, ostree organizes the data in a repository in commits, which are grouped in branches. Commits are identified by a hash and branches are identified by a name.

While ostree does not care about the format of a branch name, flatpak uses branch names of the form $KIND/$ID/$ARCH/$BRANCH to uniquely identify branches.

Here are some examples:

app/org.inkscape.Inkscape/x86_64/stable
runtime/org.gnome.Platform/x86_64/master

Most of the time, it is clear from the context whether an app or a runtime is being named, and only one architecture is relevant. For this case, flatpak allows a shorthand notation for branch names omitting the $KIND and $ARCH parts: $ID//$BRANCH.

In this notation, the above examples shrink to:

org.inkscape.Inkscape//stable
org.gnome.Platform//master

Deployments

Installing an app or runtime really consists of two steps: first, flatpak caches that data in the local repo of the installation, and then it deploys it, which means it creates a check-out of the branch from the local repo. The check-outs are organized in a folder structure that reflects the branch name organization.

For example, Inkscape will be checked-out in $HOME/.local/share/flatpak/app/org.inkscape.Inkscape/x86_64/stable/$COMMIT, where $COMMIT is the hash of the commit that is being deployed.

It is possible to have multiple commits from the same branch deployed, but one of them is considered active and will be used by default. Flatpak maintains a symlink in the check-out directory that points at the active commit.

It is also possible to have multiple branches of an app or runtime deployed at the same time; the directory structure of checkouts is designed to allow that. One of the branches is considered current. Flatpak maintains a symlink at the toplevel of the checkout that points at the current checkout.

Flatpak can run an app from any deployed commit, regardless of whether it is active or current. To run a particular commit, you can use the --commit option of flatpak run.
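For example (the commit hash here is a placeholder for one of your deployed commits):

$ flatpak run --commit=4e7a52... org.inkscape.Inkscape//stable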

The relevance of being active and current is that flatpak exports some data (in particular, desktop files) from the active commit of the current branch, by symlinking it into ~/.local/share/flatpak/exports, where for example gnome-shell will find it and allow you to run the app from the overview.

Note: Even though it is perfectly ok to have multiple versions of the same app installed, running more than one at the same time will typically not work, since the different versions will claim the app ID as their unique bus name on the session bus. A way around this limitation is to explicitly give one of the versions a different ID, for example, by appending a “.nightly” suffix.

Application data

One last aspect of filesystem organization to mention here is that every app that is run with flatpak gets some filesystem space to use for permanent storage. This space is in $HOME/.var/app/$ID, and it has subdirectories called cache, config and data. At runtime, flatpak sets the XDG_CACHE_HOME, XDG_CONFIG_HOME and XDG_DATA_HOME environment variables to point at these directories.

For example, the persistent data from the inkscape flatpak can be found in $HOME/.var/app/org.inkscape.Inkscape.
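Listing that directory shows the three subdirectories mentioned above:

$ ls ~/.var/app/org.inkscape.Inkscape
cache  config  data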

Summary

Flatpak installations may look a bit intimidating with their deep directory tree, but they have a well-defined structure, and this post hopefully helps to explain the various components.

Fedora IoT and Home Assistant

Posted by Fabian Affolter on June 19, 2018 08:28 PM

As announced, Fedora IoT is still pretty close to Fedora Atomic, but I was curious how it “looks and feels”. OK, more “feels” than “looks”, as there is not much to see besides the prompt. My encounters with different ARM hardware and Fedora in the past were not always successful, so I decided to take a Raspberry Pi 3 Model B instead of one out of the Orange Pi family.

The first step is to get a nightly image build of Fedora IoT as described in the Getting started documentation.

# wget https://ftp-stud.hs-esslingen.de/pub/Mirrors/alt.fedoraproject.org/iot/20180618.0/IoT/aarch64/images/Fedora-IoT-28-20180618.0.aarch64.raw.xz

Just a side note: with the ARM installer available in Rawhide, the

Error: mount /dev/mmcblk0p4 /tmp/root failed

error is gone. The same might apply to Fedora 28 as well, but it was present in Fedora 27.

Let the ARM installer create the SD card.

# arm-image-installer --image=Fedora-IoT-28-20180618.0.aarch64.raw.xz --target=rpi3 --media=/dev/mmcblk0 --resizefs

=====================================================
= Selected Image:
= Fedora-IoT-28-20180618.0.aarch64.raw.xz
= Selected Media : /dev/mmcblk0
= U-Boot Target : rpi3
= Root partition will be resized
=====================================================

*****************************************************
*****************************************************
******** WARNING! ALL DATA WILL BE DESTROYED ********
*****************************************************
*****************************************************

Type 'YES' to proceed, anything else to exit now

= Proceed? YES
= Writing:
= Fedora-IoT-28-20180618.0.aarch64.raw.xz
= To: /dev/mmcblk0 ....
0+403205 records in
0+403205 records out
4294967296 bytes (4.3 GB, 4.0 GiB) copied, 352.665 s, 12.2 MB/s
= Writing image complete!
= Resizing /dev/mmcblk0 ....
e2fsck 1.44.2 (14-May-2018)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/mmcblk0p3: 35398/187680 files (0.7% non-contiguous), 307012/749568 blocks
resize2fs 1.44.2 (14-May-2018)
The filesystem is already 749568 (4k) blocks long. Nothing to do!

= Raspberry Pi 3 Uboot is already in place, no changes needed.

= Installation Complete! Insert into the rpi3 and boot.

After booting the Raspberry Pi, create a new user. Now you are able to use SSH to log in. The current build of Fedora IoT ships a recent kernel:

# uname -r
4.16.15-300.fc28.aarch64

Of course, the deployment is up-to-date. That's one advantage of a nightly build if you are using it the next morning 😉 .

# rpm-ostree status
State: idle; auto updates disabled
Deployments:
● ostree://fedora-iot:fedora/28/aarch64/iot
Version: 28.20180617.0 (2018-06-17 10:36:57)
Commit: a3aee4deaa6887bfc3088ad891dfa22b4a729e802905c5135c676e440124784a
GPGSignature: Valid signature by 128CF232A9371991C8A65695E08E7E629DB62FB1

Instead of Docker, podman is used to deal with all container-related tasks. To get the Home Assistant image, run the command below:

# podman pull homeassistant/aarch64-homeassistant

As you can see, it's pretty much the same as using the docker command-line tool.
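For example, you can list the downloaded image just as you would with Docker:

# podman images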

Create a directory to store the configuration.

# mkdir -p /opt/home-assistant

And start the Home Assistant after you have adjusted the host’s firewall to match your needs.

# podman run -it --rm --name="home-assistant" \
    --network=bridge --publish=8123:8123 \
    -v /opt/home-assistant:/config:Z \
    -v /etc/localtime:/etc/localtime:ro \
    homeassistant/aarch64-homeassistant

Et voilà, http://IP_ADDRESS_FEDORA_IOT_HOST:8123 is serving the Home Assistant frontend.

I will skip the autostart part, as this is already covered in the blog post about Project Atomic and Home Assistant. There is not much news in this blog post besides podman. Pulling images was problematic at first because the download often stopped somewhere over 95% (no, never at 20% or 60%); in my case a local registry solved the issue for now.

Mastering en Bash ~ Scripting, Part One

Posted by Alvaro Castillo on June 19, 2018 06:55 PM

We begin the final stretch!

About a month ago we started looking at how to work with Bash using basic commands. We can now say that we know more than before, and we can take the step into scripting. What is "scripting"? For us it is the ability to write a plain text file in order to automate certain tasks on the system. And a script is the name this file receives.
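A minimal example of such a script (purely illustrative):

#!/bin/bash
# A trivial script: greet the current user and print today's date.
echo "Hello, $(whoami)! Today is $(date +%F)."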

When we talk about automating tasks, we mean that instead of...

Fedora 28 : The Powertop tool from Intel.

Posted by mythcat on June 19, 2018 08:26 AM
Powertop is a tool provided by Intel to enable various power-saving modes in userspace, the kernel and hardware.
Using this tool it is possible to monitor processes and show which of them are utilizing the CPU and waking it from its idle states.
The tool allows you to identify applications with particularly high power demands.
The name of the powertop.x86_64 package comes from its description: power consumption monitor.
Let's install and set up this tool on the Fedora 28 Linux distro:
[root@desk mythcat]# dnf install powertop.x86_64 
Last metadata expiration check: 0:32:08 ago on Tue 19 Jun 2018 10:19:39 AM EEST.
Dependencies resolved.
================================================================================
Package Arch Version Repository Size
================================================================================
Installing:
powertop x86_64 2.9-8.fc28 updates 239 k

Transaction Summary
================================================================================
Install 1 Package

Total download size: 239 k
Installed size: 650 k
Is this ok [y/N]: y
Downloading Packages:
powertop-2.9-8.fc28.x86_64.rpm 112 kB/s | 239 kB 00:02
--------------------------------------------------------------------------------
Total 71 kB/s | 239 kB 00:03
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
Preparing : 1/1
Installing : powertop-2.9-8.fc28.x86_64 1/1
Running scriptlet: powertop-2.9-8.fc28.x86_64 1/1
Verifying : powertop-2.9-8.fc28.x86_64 1/1

Installed:
powertop.x86_64 2.9-8.fc28

Complete!
You need root privileges to run this tool:
[mythcat@desk ~]$ powertop --calibrate --debug
PowerTOP v2.9 must be run with root privileges.
exiting...
The help output of this tool:
[root@desk mythcat]# powertop --help

Usage: powertop [OPTIONS]

--auto-tune sets all tunable options to their GOOD setting
-c, --calibrate runs powertop in calibration mode
-C, --csv[=filename] generate a csv report
--debug run in "debug" mode
--extech[=devnode] uses an Extech Power Analyzer for measurements
-r, --html[=filename] generate a html report
-i, --iteration[=iterations] number of times to run each test
-q, --quiet suppress stderr output
-s, --sample[=seconds] interval for power consumption measurement
-t, --time[=seconds] generate a report for 'x' seconds
-w, --workload[=workload] file to execute for workload
-V, --version print version information
-h, --help print this help menu

For more help please refer to the 'man 8 powertop'
To run the program and enable its background services, use the commands below:
[root@desk mythcat]# systemctl start powertop.service
[root@desk mythcat]# systemctl enable powertop.service
Created symlink /etc/systemd/system/multi-user.target.wants/powertop.service → /usr/lib/systemd/system/powertop.service.
This is my output from powertop --calibrate:
[root@desk mythcat]# powertop --calibrate --debug
modprobe cpufreq_stats failedLoaded 0 prior measurements
RAPL device for cpu 0
RAPL Using PowerCap Sysfs : Domain Mask d
RAPL device for cpu 0
RAPL Using PowerCap Sysfs : Domain Mask d
Devfreq not enabled
glob returned GLOB_ABORTED
Starting PowerTOP power estimate calibration
Calibrating idle
System is not available
System is not available
Calibrating: disk usage
Calibrating backlight
Calibrating idle
System is not available
System is not available
Calibrating: CPU usage on 1 threads
Calibrating: CPU usage on 4 threads
Calibrating: CPU wakeup power consumption
Calibrating: CPU wakeup power consumption
Calibrating: CPU wakeup power consumption
Calibrating USB devices
.... device /sys/bus/usb/devices/2-1/power/control
.... device /sys/bus/usb/devices/usb1/power/control
.... device /sys/bus/usb/devices/1-1.3/power/control
.... device /sys/bus/usb/devices/1-1/power/control
.... device /sys/bus/usb/devices/usb2/power/control
Calibrating radio devices
Finishing PowerTOP power estimate calibration
Parameters after calibration:


Parameter state
----------------------------------
Value Name
0.50 alsa-codec-power (12)
0.00 backlight (4)
0.00 backlight-boost-100 (8)
0.00 backlight-boost-40 (6)
0.00 backlight-boost-80 (7)
0.00 backlight-power (5)
2.33 base power (70)
1.56 cpu-consumption (3)
39.50 cpu-wakeups (2)
0.00 disk-operations (73)
0.20 disk-operations-hard (72)
0.00 enp2s0-link-100 (21)
0.00 enp2s0-link-1000 (22)
0.00 enp2s0-link-high (23)
0.00 enp2s0-packets (24)
0.00 enp2s0-powerunsave (20)
0.00 enp2s0-up (19)
0.56 gpu-operations (71)
0.00 runtime-0000:00:00.0 (34)
0.00 runtime-0000:00:02.0 (37)
0.00 runtime-0000:00:16.0 (44)
0.00 runtime-0000:00:1a.0 (35)
0.00 runtime-0000:00:1b.0 (45)
0.00 runtime-0000:00:1c.0 (41)
0.00 runtime-0000:00:1c.2 (38)
0.00 runtime-0000:00:1c.3 (31)
0.00 runtime-0000:00:1d.0 (36)
0.00 runtime-0000:00:1e.0 (32)
0.00 runtime-0000:00:1f.0 (42)
0.00 runtime-0000:00:1f.2 (39)
0.00 runtime-0000:00:1f.3 (33)
0.00 runtime-0000:02:00.0 (43)
0.00 runtime-0000:03:00.0 (40)
0.00 runtime-Fixed MDIO bus.0 (57)
0.00 runtime-INT0800:00 (52)
0.00 runtime-PNP0103:00 (49)
0.00 runtime-PNP0800:00 (48)
0.00 runtime-PNP0C04:00 (51)
0.00 runtime-PNP0C0C:00 (58)
0.00 runtime-alarmtimer (54)
0.00 runtime-coretemp.0 (55)
0.00 runtime-gpio_ich.1.auto (47)
0.00 runtime-i2c-0 (67)
0.00 runtime-i2c-1 (63)
0.00 runtime-i2c-2 (66)
0.00 runtime-i2c-3 (62)
0.00 runtime-i2c-4 (65)
0.00 runtime-i2c-5 (69)
0.00 runtime-i2c-6 (64)
0.00 runtime-i2c-7 (68)
0.00 runtime-i8042 (60)
0.00 runtime-iTCO_wdt.0.auto (61)
0.00 runtime-microcode (53)
0.00 runtime-pcspkr (56)
0.00 runtime-platform-framebuffer.0 (50)
0.00 runtime-reg-dummy (46)
0.00 runtime-serial8250 (59)
0.10 usb-device-1d6b-0002 (10)
0.10 usb-device-248a-8366 (11)
0.10 usb-device-8087-0024 (9)
0.00 virbr0-link-100 (15)
0.00 virbr0-link-1000 (16)
0.00 virbr0-link-high (17)
0.00 virbr0-nic-link-100 (27)
0.00 virbr0-nic-link-1000 (28)
0.00 virbr0-nic-link-high (29)
0.00 virbr0-nic-packets (30)
0.00 virbr0-nic-powerunsave (26)
0.00 virbr0-nic-up (25)
0.00 virbr0-packets (18)
0.00 virbr0-powerunsave (14)
0.00 virbr0-up (13)
0.10 xwakes (74)

Score: 0.0 ( 0.0)
Guess: 2.3
Actual: 0.0
----------------------------------
Learning debugging enabled


Parameter state
----------------------------------
Value Name
0.50 alsa-codec-power (12)
0.00 backlight (4)
0.00 backlight-boost-100 (8)
0.00 backlight-boost-40 (6)
0.00 backlight-boost-80 (7)
0.00 backlight-power (5)
2.33 base power (70)
1.56 cpu-consumption (3)
39.50 cpu-wakeups (2)
0.00 disk-operations (73)
0.20 disk-operations-hard (72)
0.00 enp2s0-link-100 (21)
0.00 enp2s0-link-1000 (22)
0.00 enp2s0-link-high (23)
0.00 enp2s0-packets (24)
0.00 enp2s0-powerunsave (20)
0.00 enp2s0-up (19)
0.56 gpu-operations (71)
0.00 runtime-0000:00:00.0 (34)
0.00 runtime-0000:00:02.0 (37)
0.00 runtime-0000:00:16.0 (44)
0.00 runtime-0000:00:1a.0 (35)
0.00 runtime-0000:00:1b.0 (45)
0.00 runtime-0000:00:1c.0 (41)
0.00 runtime-0000:00:1c.2 (38)
0.00 runtime-0000:00:1c.3 (31)
0.00 runtime-0000:00:1d.0 (36)
0.00 runtime-0000:00:1e.0 (32)
0.00 runtime-0000:00:1f.0 (42)
0.00 runtime-0000:00:1f.2 (39)
0.00 runtime-0000:00:1f.3 (33)
0.00 runtime-0000:02:00.0 (43)
0.00 runtime-0000:03:00.0 (40)
0.00 runtime-Fixed MDIO bus.0 (57)
0.00 runtime-INT0800:00 (52)
0.00 runtime-PNP0103:00 (49)
0.00 runtime-PNP0800:00 (48)
0.00 runtime-PNP0C04:00 (51)
0.00 runtime-PNP0C0C:00 (58)
0.00 runtime-alarmtimer (54)
0.00 runtime-coretemp.0 (55)
0.00 runtime-gpio_ich.1.auto (47)
0.00 runtime-i2c-0 (67)
0.00 runtime-i2c-1 (63)
0.00 runtime-i2c-2 (66)
0.00 runtime-i2c-3 (62)
0.00 runtime-i2c-4 (65)
0.00 runtime-i2c-5 (69)
0.00 runtime-i2c-6 (64)
0.00 runtime-i2c-7 (68)
0.00 runtime-i8042 (60)
0.00 runtime-iTCO_wdt.0.auto (61)
0.00 runtime-microcode (53)
0.00 runtime-pcspkr (56)
0.00 runtime-platform-framebuffer.0 (50)
0.00 runtime-reg-dummy (46)
0.00 runtime-serial8250 (59)
0.10 usb-device-1d6b-0002 (10)
0.10 usb-device-248a-8366 (11)
0.10 usb-device-8087-0024 (9)
0.00 virbr0-link-100 (15)
0.00 virbr0-link-1000 (16)
0.00 virbr0-link-high (17)
0.00 virbr0-nic-link-100 (27)
0.00 virbr0-nic-link-1000 (28)
0.00 virbr0-nic-link-high (29)
0.00 virbr0-nic-packets (30)
0.00 virbr0-nic-powerunsave (26)
0.00 virbr0-nic-up (25)
0.00 virbr0-packets (18)
0.00 virbr0-powerunsave (14)
0.00 virbr0-up (13)
0.10 xwakes (74)

Score: 0.0 ( 0.0)
Guess: 2.3
Actual: 0.0
----------------------------------
Now we can set all tunable options to their GOOD setting:
[root@desk mythcat]# powertop --auto-tune
modprobe cpufreq_stats failedLoaded 0 prior measurements
RAPL device for cpu 0
RAPL Using PowerCap Sysfs : Domain Mask d
RAPL device for cpu 0
RAPL Using PowerCap Sysfs : Domain Mask d
Devfreq not enabled
glob returned GLOB_ABORTED
Leaving PowerTOP
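The help output above also shows how to capture a report; for example, to write an HTML report over a 60-second measurement:

[root@desk mythcat]# powertop --html=powertop-report.html --time=60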

One month of GSoC with The Fedora Project: Amitosh

Posted by Fedora Community Blog on June 18, 2018 11:13 PM
  • Fedora Account: amitosh
  • IRC: amitosh (found in #fedora, #fedora-dotnet, #fedora-summer-coding, #fedora-commops)
  • Fedora Wiki User Page: amitosh

It’s been a month since I have been working on my Google Summer of Code project with The Fedora Project. And in this month, I have been working on “Improving the back-end of the Fedora App”. This month has been particularly interesting – with challenges to solve and stuff to learn, it has been a very satisfying experience so far.

Month at a glance

We have successfully completed my first evaluation and here is the list of things that I have worked on in this month:

Week 1 – Porting, Upgrading, and Patching

Week 1 was all about patching and upgrading our SDKs and toolkits to the latest and greatest versions. The process started about a month before but wasn’t completed. We migrated the entire codebase to Ionic 3 and TypeScript. There was a lot of refactoring and code reorganization in this week, and the end result was a clean, structured and documented codebase with all broken facilities restored.

Week 2 – Offline caching

Week 2 was all about bringing offline capabilities to the app. The app now caches the content from Fedora Magazine, FedoCal and Fedora Social. Every time we load the app, we refresh the cache from the API endpoints in the background. We no longer block the user from interacting with the app.

Week 3 – Testing

We were missing out in the testing front. In this week I worked on improving the unit testing and integration testing components of the app. We are using the popular Karma test runner and the Jasmine test library to implement unit tests. For integration tests, we are using Protractor with Selenium Framework.

Week 4 – More tests, Localization and Package Search

Week 4 was a continuation of the testing work from Week 3. I implemented tests that covered all the services of our app and some UI tests. We also brought in a new feature: package search. It allows you to search for packages right from the app. Along with the package description, it shows the dnf command to install the package, a link to the upstream website, and other information.

We also started work on localizing the app. The views now respect the system locale and format the content accordingly. We have also completed the groundwork for further localization (and perhaps translation) of the app.

Contributions

Here is the list of all pull requests I made in this month:

  1. Disable splash screen autohide (#47)
  2. Sort posts by their dates (#49)
  3. Add type annotations where necessary (#57)
  4. Response caching for Magazine, Social, Calendar and Ask (#61)
  5. Show locale-aware date strings (#64)
  6. Add config for unit testing (#70)
  7. Add integration test setup (#71)
  8. Package search (#72)
  9. Introduce unit tests for providers (#78)
  10. Show posts from official Fedora facebook page and Twitter handle (#79)
  11. Unbreak HTML layout for first evaluation build (#85)

What’s happening

In the coming 4 weeks, we will be working on integrating the UI work done by @thelittlewonder with the app backend. Some of the planned features are: system calendar integration, a more robust offline storage implementation, bookmarking, and integration with FMN to show notifications from Fedora infrastructure apps.

Please send in your feedback at amitosh@fedoraproject.org

The post One month of GSoC with The Fedora Project: Amitosh appeared first on Fedora Community Blog.

Fedora 28 : Godot game engine.

Posted by mythcat on June 18, 2018 06:07 PM
Today I tested the new version of the Godot game engine, version 3.0.3.
You can download it from official webpage.
I used the 64 bit version.
After download and unzip you can run the binary file in your user terminal.
The Godot game engine starts with a GUI.
In the right area of the screen, in the Scene tab, there are two objects: Spatial and Camera.
You need to select or add a WorldEnvironment node.
Take a look at the Inspector and use it with a New Environment.
Run this with the play icon.
If all is right, you will see something like this:

Create a new folder in your project and name it Export.
Copy the mscorlib.dll into this folder.
Go to the main menu, select Project - Export, and you will see the GUI for exporting your game.
Press the Add button to select the Linux/X11 output:
Select Linux and press the Export Project button.
Go to the Export folder and run your game from your Linux terminal.
This game engine works well with Fedora 28 and the export runs without errors.

[Week 5] GSoC Status Report for Fedora App: Abhishek

Posted by Fedora Community Blog on June 18, 2018 11:24 AM

This is Status Report for Fedora App filled by participants on a weekly basis.

Status Report for Abhishek Sharma (thelittlewonder)

  • Fedora Account: thelittlewonder
  • IRC: thelittlewonder (found in #fedora-summer-coding, #fedora-india, #fedora-design)
  • Fedora User Wiki Page

Tasks Completed

New Magazine View

We worked on a brand-new magazine view which presents the date and number of comments of a post upfront along with the featured image, quite similar to the Fedora Magazine Website. The idea was to keep things consistent and provide a similar experience to the Magazine Users.

Pull Request – #82

Sorting the Magazine Articles

We added a new feature that allows the user to sort the fetched magazine articles by date and number of comments. You can check out the most talked-about or the latest articles in a few taps now.

Pull Request – #82

More Empty States Design

We realised that not all empty states had been taken care of during the last week’s sprint, so we worked on the design of a few other empty and error states. These things may sound very small, but they ultimately help to render a reliable user experience.

What’s Happening

Ask view

We have commenced work on the ask view of the application. We needed to figure out how to sort the questions according to votes, views and answers, but the Askbot API will do that for us, so we are sorted.

Look into SoundCloud API

In the upcoming weeks, we are thinking of implementing the Fedora Podcast in the app so that users can listen to it on the go. We need to look into the SoundCloud API and think about how to proceed with a possible integration.

That’s all for this week 👋
Please send in your feedback at : guywhodesigns[at]gmail[dot]com

The post [Week 5] GSoC Status Report for Fedora App: Abhishek appeared first on Fedora Community Blog.

Anaconda improvements in Fedora 28

Posted by Fedora Magazine on June 18, 2018 08:00 AM

Fedora 28 was released last month, and the major update brought with it a raft of new features for the Fedora Installer (Anaconda).  Like Fedora, Anaconda is a dynamic software project with new features and updates every release. Some changes are user visible, while others happen under the hood — making Anaconda more robust and prepared for future improvements.

User & Root configuration on Fedora Workstation

When installing Fedora Workstation from the Live media, the user and root configuration screens are no longer in the installer. Setting up users is now only done in the Initial Setup screens after installation.

The progress hub on a Fedora 28 Workstation live installation.

The progress hub on a Fedora 28 Workstation live installation.

The back story is that the Fedora Workstation working group aimed to reduce the number of screens users see during installation. Primarily, this included screens that let a user set options twice: both in Anaconda and in the Gnome Initial Setup tool upon first boot. The working group considered various options, such as Anaconda reporting which screens have been visited by the user and then hiding them in Gnome Initial Setup. In the end they opted for always skipping the user and root configuration screens in Anaconda and just configuring a user with sudo rights in Gnome Initial Setup.

Because of this the respective screen (user creation) shows up just once (in Gnome Initial Setup), making the installation experience more consistent.

It’s also worth noting that this change only affects the Fedora Workstation live image. All other images, including the Fedora Workstation netinst image and other live images, are unaffected.

Anaconda on DBus

Last year we announced the commencement of our next major initiative — modularizing Anaconda. The main idea is to split the code into several modules that will communicate over DBus. This will provide better stability, extensibility and testability of Anaconda.

Fedora 28 is the first release where Anaconda operates via DBus. At startup, Anaconda starts its private message bus and ten simple modules. For now, the modules just hold data that are provided by a kickstart file and modified by the UI. The UI uses the data to drive installation. This means that you can use DBus to monitor current settings, but you should use the UI to change them.

You can easily explore the current Anaconda DBus API with the live version of Fedora Workstation 28. Just keep in mind that the API is still unstable, so it might change in the future.

To do so, boot the live image and install the D-Feet application:

sudo dnf install d-feet

Start the installer and get an address of the Anaconda message bus:

cat /var/run/anaconda/bus.address

Start D-Feet, choose the option ‘Connect to other Bus’ and copy the first part of the Anaconda bus address to the text field (see the picture below). Click on the ‘Connect’ button. The application will open a new tab and show you a list of available DBus services. Now you can view the interfaces, methods, signals and properties of Anaconda DBus modules and interact with them.

Connecting to the Anaconda DBUS session.

Connecting to the Anaconda DBUS session.

The Anaconda DBUS API as visible in D-Feet.

The Anaconda DBUS API as visible in D-Feet.

Blivet 3.0 and Pykickstart 3.0

Fedora 28 provides version 3 of blivet and Pykickstart, and Anaconda uses the updated versions too.  While this is not really visible from end user perspective, changes like this are important to assure a robust and maintainable future for the Anaconda installer.

The main change in Pykickstart 3 is the switch from the deprecated optparse module to argparse for kickstart parsing. This not only brings all the features argparse has, it was also one of the prerequisites for having automatically generated kickstart documentation on Read the Docs.

Blivet 3 is a less radical update, but it includes significant API improvements and cleanups. Some installer-related code that was still sitting in Blivet was finally moved to Anaconda.

Migrating from authconfig to authselect

The authconfig tool is deprecated and replaced with authselect in Fedora 28, so Anaconda deprecated the kickstart command authconfig and introduced a new command: authselect. You can still use the authconfig command, but Anaconda will install and run the authselect-compat tool instead.
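For illustration, here is a hedged sketch of how the new command can be used; the sssd profile and the with-mkhomedir feature are only examples, and the kickstart form assumes the arguments are handed straight to the authselect tool on the target system:

# In a kickstart file (assumption: arguments are passed to authselect as-is)
authselect select sssd with-mkhomedir
# The equivalent command run manually on an installed Fedora 28 system
sudo authselect select sssd with-mkhomedir --force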

Enabled hibernation

Previously, hibernation didn’t work after installation because of a missing kernel option, so it had to be set up manually. Starting with Fedora 28, Anaconda adds the kernel option ‘resume’ with a path to the largest available swap device by default on x86 architectures.
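To verify this on an installed x86 system, you can check the kernel command line and compare it with the available swap devices (standard tools, nothing Anaconda-specific):

grep -o 'resume=[^ ]*' /proc/cmdline   # the resume= option added by Anaconda
swapon --show                          # the swap devices it could point to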

Reducing Initial Setup dependencies

The Initial Setup tool is basically a lightweight launcher for arbitrary configuration screens from Anaconda. And while Anaconda often runs from a dedicated installation image, Initial Setup always runs directly on the installed system. This also means all the dependencies of Initial Setup end up on the user's system, and unless they are uninstalled, they take up space more or less forever.

The situation is even more dire on ARM, where users generally just dd a Fedora image to a memory card or to internal storage on the ARM board, and Initial Setup basically acts as the installer, customizing the otherwise identical image for the given user. In this case, Initial Setup dependencies directly dictate how small the Fedora image can be.

In Fedora 28, the new anaconda-install-env-deps metapackage depends on all installation-time-only dependencies. The anaconda-install-env-deps package is always installed on installation images (netinst, live), but is not an Initial Setup dependency, and should thus prevent all the unnecessary packages from being pulled into the installed system. There is also a nice side effect of finally consolidating all the install-time-only dependencies in the Anaconda spec file.
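If you are curious what ends up in that set, you can query the metapackage's dependencies on a Fedora 28 system (the package name is taken from above; the command is plain dnf):

dnf repoquery --requires anaconda-install-env-deps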

 

Episode 101 - Our unregulated future is here to stay

Posted by Open Source Security Podcast on June 17, 2018 11:56 PM
Josh and Kurt talk about Bird scooters: the implications of the scooters for cities, Segways, and bicycles, and how these vehicles interact with pedestrians on roads and trails. It's an example of humans not wanting to follow the rules and generally making the situation annoying for everyone. It's the old security story of new technology without clear rules. The show ends with some horrifying numbers about how bad things get before people really care.



Show Notes


Pre-release Fedora Scientific Vagrant Boxes

Posted by Amit Saha on June 17, 2018 02:00 PM

I am very excited to share that some time back the Fedora project gave the go-ahead to my idea of making Fedora Scientific available as Vagrant boxes, starting with Fedora 29. This basically means (I think) that using Fedora Scientific in a virtual machine is even easier. Instead of downloading the ISO and going through the installation process, you can now basically do:

  • Download Fedora Scientific Vagrant box
  • Initialize VM
  • vagrant up

Trying it out before Fedora 29 release

As of a few days back, Fedora 29 rawhide vagrant boxes for Fedora Scientific are now being published. Thanks to release engineering for moving this forward.

Mac OS X Hosts - VirtualBox

Here's what I did using VirtualBox on an OS X host. First, install vagrant. Then, from a terminal:

# Add the box
$ vagrant box add https://kojipkgs.fedoraproject.org//packages/Fedora-Scientific-Vagrant/Rawhide/20180613.n.0/images/Fedora-Scientific-Vagrant-Rawhide-20180613.n.0.x86_64.vagrant-virtualbox.box  --name Fedora-Scientific-Rawhide
==> box: Box file was not detected as metadata. Adding it directly...
==> box: Adding box 'Fedora-Scientific-Rawhide' (v0) for provider: 
    box: Downloading: https://kojipkgs.fedoraproject.org//packages/Fedora-Scientific-Vagrant/Rawhide/20180613.n.0/images/Fedora-Scientific-Vagrant-Rawhide-20180613.n.0.x86_64.vagrant-virtualbox.box
==> box: Box download is resuming from prior download progress
==> box: Successfully added box 'Fedora-Scientific-Rawhide' (v0) for 'virtualbox'!

..

Now that the box has been downloaded, we initialize a new VM:

$ vagrant init Fedora-Scientific-Rawhide
A `Vagrantfile` has been placed in this directory. You are now
ready to `vagrant up` your first virtual environment! Please read
the comments in the Vagrantfile as well as documentation on
`vagrantup.com` for more information on using Vagrant.
MacBook-Air:Downloads amit$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Importing base box 'Fedora-Scientific-Rawhide'...
Progress: 70%
Progress: 90%
..


==> default: Machine booted and ready!
..

If you see the above message, we are ready to start using Fedora Scientific:

$ vagrant ssh

The above will drop us into a terminal session on our Fedora Scientific VM. To be able to use graphical programs, change the command as follows (note that I also needed to install XQuartz on my host to display graphical programs from the VM):

$ vagrant ssh -- -X

By default, the virtual machine is given 512 MB of memory, which is not enough to do anything useful. To change that, open the Vagrantfile that was created by the vagrant init step above. In that file, look for the block of code starting with config.vm.provider "virtualbox" and change the block to be:

config.vm.provider "virtualbox" do |vb|
     # Display the VirtualBox GUI when booting the machine
     #   vb.gui = true
     #
     # Customize the amount of memory on the VM:
     vb.memory = "1024"
  end

The key line above is vb.memory = "1024", which gives our virtual machine 1024 MB. If you can spare more RAM, adjust the value accordingly. Once done, run:

$ vagrant reload

This will recreate the virtual machine.

Windows hosts - VirtualBox

To be done. (If you end up doing it, please let me know; see the link at the bottom of this post.)

Linux hosts - libvirt

To be done. (If you end up doing it, please let me know; see the link at the bottom of this post.)
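Until someone writes this up properly, here is an untested sketch for a Linux host with libvirt. It assumes a matching box is published with the same naming pattern as the VirtualBox one above (ending in vagrant-libvirt.box; check kojipkgs for the exact compose) and that the vagrant-libvirt plugin is installed:

sudo dnf install vagrant vagrant-libvirt
vagrant box add <URL-to-Fedora-Scientific-Vagrant-Rawhide-...-vagrant-libvirt.box> --name Fedora-Scientific-Rawhide
vagrant init Fedora-Scientific-Rawhide
vagrant up --provider=libvirt
vagrant ssh -- -X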

Linux hosts - VirtualBox

To be done. (If you end up doing it, please let me know; see the link at the bottom of this post.)

Explore

Now that we have Fedora Scientific setup, head over to the docs to explore what's in Fedora Scientific.

A LAMP server on Fedora 28

Posted by Ivan Fernandez Cid on June 16, 2018 09:20 PM
Setting up a LAMP server on Fedora is a fairly simple task. Here is how to do it. First, install the web server (Apache httpd) the easy way. (Note: run all of these commands as root, either via "su" or with "sudo" before each command if you installed Fedora 28 or later.)

dnf groupinstall "Web Server"

Then install the MariaDB (MySQL) server:

dnf install mariadb-server
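The post stops there; as a follow-up (not part of the original post), the usual next steps on Fedora are to start and enable the services, open the firewall for HTTP, and secure the fresh MariaDB installation:

systemctl enable --now httpd mariadb
firewall-cmd --permanent --add-service=http
firewall-cmd --reload
mysql_secure_installation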

I will represent Borsalinux-fr and Fedora at RMLL 2018 in Strasbourg (July 7-12)

Posted by Charles-Antoine Couret on June 16, 2018 06:00 AM

For the third time I will attend the Rencontres Mondiales du Logiciel Libre (Libre Software Meeting), a major annual French-speaking event about free software. After the Montpellier (2014) and Beauvais (2015) editions, I will be in Strasbourg this year for the whole week, from Saturday July 7 to Thursday July 12. Usually I only attended over the weekend.

I will represent Borsalinux-fr to promote Fedora with a booth in the free software village. I will also give a talk on Fedora's contributions to the free software ecosystem, an updated version of the one already given at JM2L last November. The text has already been partially published here; the talk will be an opportunity to cover the progress made on many of these topics over the past few months.

The talk will take place on Thursday, July 11 at 11:00 in room ATRIUM - AT9. It should last about an hour. Come along!

Otherwise, I will be available at the Borsalinux-fr booth in the free software village. If you have questions, need help, have feedback or just want to say hello, don't hesitate. That's what I'm there for. Goodies will also be available.

Fedora 28: ARM programming and testing

Posted by mythcat on June 15, 2018 08:34 PM
This is a simple tutorial about ARM programming and QEMU:
The test.c program is this:
volatile unsigned int * const UART0DR = (unsigned int *)0x101f1000;

void print_uart0(const char *s) {
while(*s != '\0') { /* Loop until end of string */
*UART0DR = (unsigned int)(*s); /* Transmit char */
s++; /* Next char */
}
}

void c_entry() {
print_uart0("Hello world!\n");
}

Using the volatile keyword is necessary to instruct the compiler that the memory pointed to may change outside the program's control, so accesses to it must not be optimized away.
The unsigned int type enforces 32-bit read and write access.
Note that on a real system on chip, unlike in this simple QEMU model, the Transmit FIFO Full flag must be checked in the UARTFR register before writing to the UARTDR register.
Create the startup.s assembler file:
.global _Reset
_Reset:
LDR sp, =stack_top
BL c_entry
B .
Create the script linker named test.ld:
ENTRY(_Reset)
SECTIONS
{
. = 0x10000;
.startup . : { startup.o(.text) }
.text : { *(.text) }
.data : { *(.data) }
.bss : { *(.bss COMMON) }
. = ALIGN(8);
. = . + 0x1000; /* 4kB of stack memory */
stack_top = .;
}
The next step is to install the arm-none-eabi tools (x86_64 host packages):
[root@desk arm-source]# dnf install arm-none-eabi-gcc-cs-c++.x86_64 
Last metadata expiration check: 1:54:04 ago on Fri 15 Jun 2018 06:55:54 PM EEST.
Package arm-none-eabi-gcc-cs-c++-1:7.1.0-5.fc27.x86_64 is already installed, skipping.
Dependencies resolved.
Nothing to do.
Complete!
[root@desk arm-source]# dnf install arm-none-eabi-gdb.x86_64
Last metadata expiration check: 1:54:48 ago on Fri 15 Jun 2018 06:55:54 PM EEST.
Package arm-none-eabi-gdb-7.6.2-4.fc24.x86_64 is already installed, skipping.
Dependencies resolved.
Nothing to do.
Complete!
[mythcat@desk arm-source]$ ll
total 12
-rw-rw-r--. 1 mythcat mythcat 60 Jun 15 20:28 startup.s
-rw-rw-r--. 1 mythcat mythcat 288 Jun 15 20:26 test.c
-rw-rw-r--. 1 mythcat mythcat 223 Jun 15 20:29 test.ld
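The post does not show how test.bin is produced. A typical build sequence for the versatilepb board (ARM926EJ-S core) with the toolchain installed above looks like this:

arm-none-eabi-gcc -c -mcpu=arm926ej-s -g test.c -o test.o
arm-none-eabi-as -mcpu=arm926ej-s -g startup.s -o startup.o
arm-none-eabi-ld -T test.ld test.o startup.o -o test.elf
arm-none-eabi-objcopy -O binary test.elf test.bin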
Let's test this with QEMU (use Ctrl+A followed by X to stop QEMU):
[mythcat@desk arm-source]$ qemu-system-arm -M versatilepb -m 64M -nographic -kernel test.bin
pulseaudio: set_sink_input_volume() failed
pulseaudio: Reason: Invalid argument
pulseaudio: set_sink_input_mute() failed
pulseaudio: Reason: Invalid argument
Hello world!
QEMU: Terminated

Fedora election results, June 2018

Posted by Charles-Antoine Couret on June 15, 2018 06:55 PM

As I reported recently, Fedora held elections to partially renew the membership of its Mindshare and Council bodies as well as FESCo.

As always, the ballot is a range vote. Each candidate can be given a number of points, with a maximum equal to the number of candidates and a minimum of 0. This makes it possible to express approval of one candidate and disapproval of another without ambiguity. Nothing prevents voting for two candidates with the same value.

The results for the Council are (only the first is elected):

  # votes |  name
- --------+----------------------
     164  | Till Maas (till)
- --------+----------------------
     115  | Nick Bebout (nb)

For reference, the maximum possible score was 2 * 110 (for 110 voters), i.e. 220.

The results for FESCo are (only the first four are elected):

  # votes |  name
- --------+----------------------
     290  | Till Maas (till/tyll)
     286  | Stephen Gallagher (sgallagh)
     254  | Randy Barlow (bowlofeggs)
     234  | Petr Šabata (contyk/psabata)
- --------+----------------------
     231  | Justin Forbes (jforbes)

For reference, the maximum possible score was 5 * 86 (for 86 voters), i.e. 430.

The results for Mindshare are (only the first is elected):

  # votes |  name
- --------+----------------------
     205  | Sumantro Mukherjee (sumantrom)
- --------+----------------------
     188  | Nick Bebout (nb)
     114  | Itamar Peixoto (itamarjp)

For reference, the maximum possible score was 3 * 107 (for 107 voters), i.e. 321.

We can note that overall the number of voters was similar for each ballot, around 86-110 voters, which is somewhat fewer than last time (150-175 on average). The scores are also rather spread out, with a few candidates often well ahead in each ballot.

Congratulations to the participants and to those elected, and all the best for the Fedora project.

Fedora Classroom Session: Fedora QA 101/102

Posted by Fedora Magazine on June 15, 2018 08:00 AM

Fedora Classroom sessions continue next week with a session on Fedora QA. The general schedule for sessions appears on the wiki. You can also find resources and recordings from previous sessions there. Here are details about this week’s session on Tuesday, June 19 at 1600 UTC. That link allows you to convert the time to your timezone.

Topic: Fedora QA 101/102

As the Fedora QA wiki page explains, this project covers testing of the software that makes up Fedora. The team’s goal is to continually improve the quality of Fedora releases and updates. You can find more information on the activities of the QA team on their wiki page.

This is a combined classroom session covering topics from Fedora QA 101 and 102, listed below:

Fedora QA 101
  1. Working with Fedora
  2. Installation of VMs
  3. Installation and configuration of Fedora
  4. Overview of the Fedora release cycle
  5. Account setup
  6. Finding and writing test cases
  7. Websites used for testing
  8. Version control and bug tracking
  9. Testing methodologies, strategies, and categories
Fedora QA 102
  1.  Screen sharing and walk-through of release validation
  2. Screen sharing and walk-through of updates testing
  3. Testing cloud images with Amazon EC2
  4. How to write test cases for packages
  5. Proposing and hosting your own test days

Instructors

Sumantro Mukherjee works at Red Hat and contributes to numerous open source projects in his free time. He also loves to contribute to Fedora QA and takes pleasure in helping new joiners contribute. Furthermore, Sumantro represents the Asia Pacific region in Fedora Ambassadors Steering Committee (FAmSCo). You can get in touch with him via his Fedora project e-mail or on IRC. Sumantro also goes by the nickname sumantrom on Freenode.

Geoffrey Marr, also known by his IRC name as coremodule, is a Red Hat employee and Fedora contributor with a background in Linux and cloud technologies. While working, he spends his time lurking the Fedora QA wiki and test pages. Away from work, he enjoys RaspberryPi projects, especially those focusing on software-defined radio.

Joining the session

This session takes place on Blue Jeans. The following information will help you join the session:

  • URL: https://bluejeans.com/8569015043/
  • Meeting ID (for Desktop App): 8569015043

We hope you attend, learn from, and enjoy this session. Also, if you have any feedback about the sessions, have ideas for a new one or want to host a session, feel free to comment on this post or edit the Classroom wiki page.

The questions you really want FSFE to answer

Posted by Daniel Pocock on June 15, 2018 07:28 AM

As the last man standing as a fellowship representative in FSFE, I propose to give a report at the community meeting at RMLL.

I'm keen to get feedback from the wider community as well, including former fellows, volunteers and anybody else who has come into contact with FSFE.

It is important for me to understand the topics you want me to cover as so many things have happened in free software and in FSFE in recent times.

last man standing

Some of the things people already asked me about:

  • the status of the fellowship and the membership status of fellows
  • use of non-free software and cloud services in FSFE, deviating from the philosophy that people associate with the FSF / FSFE family
  • measuring both the impact and cost of campaigns, to see if we get value for money (a high level view of expenditure is here)

What are the issues you would like me to address? Please feel free to email me privately or publicly. If I don't have answers immediately I would seek to get them for you as I prepare my report. Without your support and feedback, I don't have a mandate to pursue these issues on your behalf so if you have any concerns, please reply.

Your fellowship representative

Introducing: greenboot

Posted by Lorbus on June 14, 2018 06:51 PM

A Generic Health Check Framework for systemd

Not too long ago, I applied to Google Summer of Code for the student scholarship position together with a Fedora project idea proposed by Peter Robinson, the principal IoT architect at Red Hat, named Fedora IoT: Atomic Host Upgrade Daemon. As you may be guessing by now, I was very fortunate and the proposal was accepted! The coding phase started on the 14th of May, and in this blog post I'll try to give a little insight into my first month working on the project.

Swimming in Cold Waters

aka Bash in my Head

On the first day my mentors Peter, Dusty Mabe, Jonathan Lebon and I (as well as a few others!) had our first Video Chat Meeting to discuss Design and Milestones of the GSoC project.

The proposed Upgrade Daemon should frequently check for available updates, apply them, restart the system, check the system health, and in case of failures, reboot into the old, previously working version of the operating system. It should also enable the user to define their custom, highly system-specific health check routines that are run as part of it in order to determine the system health status.

While I had been using (and absolutely fallen in love with) Fedora Atomic Host for a while, I had only a very slight idea of what was about to come.
A program as described in our proposal touches on quite a number of moving parts in the Linux world: GRUB, systemd, and OSTree (libostree & rpm-ostree), which is the base for Project Atomic (a project labeled ‘Git for Operating Systems’ by its architect Colin Walters).

Much of the needed functionality already exists in various places; I just had to tie it together in a sensible manner. This may sound simple to some, but for me it was A LOT of information to digest and to read up on.
OSTree recently gained auto-update functionality, so a few days into GSoC we agreed that reusing systemd for much of the missing parts was a good way forward:

On boot up systemd would run check services grouped under a health check target. Whether the target is reached determines the system’s health status. Another systemd service then does something on success/something else on failure.

On atomic systems it could call rpm-ostree rollback --reboot on failure, downgrading the system to its previous version.
This approach is also abstract and versatile enough to be useful on non-OSTree based systems, essentially making it a ‘Generic Health Check Framework for systemd’.

For the sake of moving quickly, I decided to prototype using bash scripts instead of C like I had originally planned (do not worry, this is still planned for future releases). And while I hadn’t produced a single working LOC in the first week, at times flicking through man pages like a mad man, I now finally had a clear idea of how to approach this whole thing. And even better, I almost immediately came up with a pretty cool name for it.

Putting on the Boots

On systems running Fedora 28, you can install, run and check the output of the current alpha version of greenboot like this (you could additionally install greenboot-reboot to reboot on RED status):

dnf copr enable lorbus/greenboot
dnf install greenboot greenboot-notifications
systemctl enable greenboot.target
systemctl start greenboot
journalctl -u greenboot.target
journalctl -u greenboot
journalctl -t greenboot.sh

The following directory structure is created to place your custom scripts in:

/etc
  /greenboot.d
    /check
      /required
      /wanted
    /green
    /red

Customize Health Checking Behaviour

You now have multiple options to customize greenboot’s health checking behaviour:

  • Put scripts representing health checks that MUST NOT FAIL in order to reach a GREEN boot status into /etc/greenboot.d/check/required (a minimal example script is shown after this list).
  • Put scripts representing health checks that MAY FAIL into /etc/greenboot.d/check/wanted.
  • Create oneshot health check service units that MUST NOT FAIL like the following and put them into /etc/systemd/system:
[Unit]
Description=Custom Required Health Check
Before=greenboot.target
[Service]
Type=oneshot
ExecStart=/path/to/service
[Install]
RequiredBy=greenboot.target
  • Create oneshot health check service units that MAY FAIL like the following and put them into /etc/systemd/system:
[Unit]
Description=Custom Wanted Health Check
Before=greenboot.target
[Service]
Type=oneshot
ExecStart=/path/to/service
[Install]
WantedBy=greenboot.target
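As a concrete illustration of the script-based checks above, here is a minimal sketch of a required check (my own example, not shipped with greenboot); it assumes greenboot treats a non-zero exit status as a failed check. Drop it into /etc/greenboot.d/check/required/ and make it executable:

#!/bin/bash
# Minimal example check: fail the boot health check if DNS resolution
# does not work. A non-zero exit status signals failure (assumption, see above).
set -euo pipefail
getent hosts fedoraproject.org > /dev/null
echo "DNS resolution OK"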

Customize GREEN Status Behaviour

  • Put scripts representing procedures you want to run after a GREEN boot status has been reached into /etc/greenboot.d/green.

Customize RED Status Behaviour

  • Put scripts representing procedures you want to run after a RED boot status has been reached into /etc/greenboot.d/red.

What’s next?

While the prototype is working reasonably well now, there are still quite a few things on the ToDo list for greenboot:

  • Create reasonable default health checks and red/green procedures
  • Support/document hardware watchdog services as health checks
  • Create Documentation
  • CI on Fedora Atomic
  • Convert source to C
  • Create protocol for tracking number of boot attempts to avoid endless reboot loops (this will probably not be part of greenboot itself)

In conclusion I can say this past month has been an amazing experience! For the first time I got to work side by side with professionals. I have already learned so much and it just keeps on getting better!

I would like to thank

  • Google for making it possible for me to work on Open Source,
  • my mentors Peter, Dusty and Jonathan for their amazing help, patience and reviews,
  • the Fedora Atomic Community for their great attitude, helpfulness and creating a friendly and productive atmosphere, and
  • Everyone who has already weighed in on greenboot or shared it online!

Feedback

Feel free to comment and give feedback on:

Locks in the classroom – 2018

Posted by Jonathan Dieter on June 13, 2018 04:28 PM

For the sixth year now, our grade nine students have been doing 3D modeling using Blender. We ran late this year, but the final locks were finished a couple of weeks ago, and they’re finally ready for publishing. As this is my last year in the school, this will most likely be the last of this series of posts. So, with no further delay, here are the top models from each of the three grade nine classes (click on the pictures for Full HD renders).

First up is a lock on a cash-laden safe

Lock by Najib – CC BY-SA (source)

Simple and pleasant to look at

Lock by Joelle – CC BY (source)

This next one is nicely integrated into the background

Lock by FadySP – CC BY-SA (source)

Another safe, but why is my picture in it?

Lock by Univirus – CC BY-SA (source)

Excellent choices in his textures

Lock by Abi Haidar – CC BY (source)

I think this padlock is wearing camo

Lock by Buhler – CC BY-SA (source)

I like the color choices in this lock

Lock by Joanne – CC BY-SA (source)

The attention to detail in this is impressive!

Lock by S. Moon – CC BY (source)

The next question is… Why?

Lock by Diab – CC BY (source)

Excellent use of physics to make the chain hang over the edge of the table

Lock by Abi Hachem – CC0 (source)

And, finally, a video of a swinging lock that makes excellent use of Blender’s physics engine!


Lock by Hadi – CC BY (source)

FESCo Election: Interview with Till Maas (till)

Posted by Till Maas on June 13, 2018 09:45 AM

This is a part of the FESCo Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Thursday, June 7th and closes promptly at 23:59:59 UTC on Wednesday, June 13th, 2018.

Interview with Till Maas (till)

  • Fedora Account: till
  • IRC: tyll (found in #fedora-releng #fedora #fedora-devel #fedora-admin #fedora-apps #fedora-social #fedora-de #epel)
  • Fedora User Wiki Page

Questions

Describe some of the important technical issues you foresee affecting the Fedora community. What insight do you bring to these issues?

Processes in the Fedora community need to scale better and need more support from automation. After we decommissioned PackageDB, several previously automated processes broke or were replaced by complicated manual processes. Similarly, when we tried updates gating, the process for packagers became a lot more complicated or impossible, since the tooling did not support them. In the same way that we strive to create a great user experience for the users of our distribution, we also need to make sure that we enable our packagers and other developers/contributors to focus on the tasks that need a human brain, and provide them with useful information and feedback.

To streamline the process of giving update feedback I created the fedora-easy-karma tool. I also (semi-)automated several tasks for release engineering to clean up old packages. Getting rid of old packages and information is another important task to make sure the baggage does not slow us down, and another area that I am working on.

What objectives or goals should FESCo focus on to help keep Fedora on the cutting edge of open source development?

We have awesome developers and other contributors who make Fedora great. As FESCo we should make sure that we enable them to do their best, remove barriers, and allow an agile development process. Packagers should not be blocked by non-responsive maintainers, broken packages or complicated processes to clean up non-conforming packages. In my opinion we should move to a more collaborative package ownership model where everyone accepts that there might sometimes be extra work to clean something up, but where we remove a lot of friction and menial tasks with automation.

What are the areas of the distribution and our processes that, in your opinion, need improvement the most? Do you have any ideas how FESCo would be able to help in those “trouble spots”?

To handle all the work we need more contributors. A great source is the pool of new packagers seeking sponsors. All potential new contributors should get timely feedback to ensure that they feel welcome and get the help they need. Also, there seems to be too much outdated information and too many non-conforming packages, which slow active people down.

FESCo can help best by backing up the people that do the work and make it easy for people who would like to automatically adjust packages at a larger scale or to remove old information.

Flatpak in detail

Posted by Matthias Clasen on June 13, 2018 09:19 AM

At this point, Flatpak is a mature system for deploying and running desktop applications. It has accumulated quite some sophistication over time, which can make it appear more complicated than it is.

In this post, I’ll try to look in depth at some of the core concepts behind Flatpak, namely runtimes and extensions.

In the beginning: bundles

At its very core, the idea behind Flatpak is to bundle applications with their dependencies, and ship them as a self-contained unit.  There are good reasons that bundling is attractive for application developers:

  • There is a much bigger chance that the app will run on an arbitrary end-user system, which may have different versions of libraries, different themes, or a different kernel
  • You are not relying on all the different update mechanisms and policies of Linux distributions
  • Distribution updates to your dependencies will not break your app behind your back
  • You can test the same code that your users run

Best of both worlds: runtimes

In the age-old debate between bundlers and packagers, there are good arguments on both sides. The usual arguments against bundling are:

  • Code duplication. If a library gets hit by a security issue you have to fix it in all the apps that bundle it
  • Wastefulness. If every app ships an entire library stack, this blows up the required bandwidth for downloads and the required disk space for installing them

With this in mind, Flatpak early on introduced the concept of runtimes. The idea behind runtimes is that many desktop applications use a deep library stack, but it is often a similar set of libraries. Therefore, it makes sense to take these common library stacks and distribute them separately as “GNOME runtime” or “KDE runtime”, and have apps declare in their metadata which runtime they need.

It then becomes the responsibility of the flatpak tooling to assemble the application's filesystem tree with the runtime's filesystem tree when it creates the sandbox environment that the app runs in.

To avoid conflicts, Flatpak requires that the application's filesystem tree is rooted in /app, while runtimes have a traditional /usr tree.

Splitting off runtimes preserves most of the benefits that I outlined for bundles, while greatly reducing code duplication and letting us update libraries independently of applications.

Of course, it also brings back some of the risks of modularity: updating the libraries independently carries, once again, the risk of breaking the applications that use the runtime. So the team maintaining a runtime has to be very careful to avoid introducing problematic changes or incompatibilities.

Going further: extensions

As I said, shipping runtimes separately saves a lot of bandwidth, since the runtime has to be downloaded only once for all the applications that share it. But a runtime is still a pretty massive download, and contains a lot of things that may not be useful most of the time or are just optional.

A good example of this is translations. It is not uncommon for desktop apps to be translated into 50 locales. But the average user will only ever use a single one of these. In traditional packaging, this is sometimes addressed by breaking translations out as “lang packs” that can be installed separately.

Another example is debug information. You don’t need symbols and other debug information unless you encounter a crash and want to submit a meaningful stacktrace. In traditional packaging, this is addressed by splitting off “debuginfo” packages that can be installed when needed.

Flatpak provides a mechanism  to address these use cases.  Runtimes (and applications too) can declare extension points, which are designated locations in their filesystem tree where additional runtimes can be mounted. These additional runtimes are called extensions. When constructing a sandbox for running an app, flatpak tooling will look for matching extensions and mount them at the right place.

Flatpak is not a generic solution, but tailored towards the use case of desktop applications, and it tries to do the right thing out of the box: flatpak-builder automatically breaks out .Locale and .Debug extensions when building apps or runtimes, and when installing things, flatpak installs the matching .Locale extension. But it goes beyond that and only installs the subset of it that is relevant for the current locale, thereby recreating the space-saving effect of lang packs.

Extensions: infinite variations

The extension mechanism is flexible enough to cover not just locales and debuginfo, but all sorts of other optional components that applications might need. To give just some examples:

  • OpenGL drivers that match the GPU
  • Other hardware-specific APIs like VA-API
  • Media codecs
  • Widget themes

All of these can be provided as extensions. Flatpak has the smarts built-in to know whether a given OpenGL driver extension matches the hardware or whether a given theme extension matches the current desktop theme, and it will automatically install and use matching extensions.
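If you want to see this on your own system, a couple of flatpak commands make runtimes and extensions visible. The runtime ID below is only an example; substitute whatever flatpak list shows on your machine:

flatpak list --runtime                            # installed runtimes and extensions
flatpak info --show-metadata org.gnome.Platform   # extension points declared by a runtime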

At last: the host OS

The examples in the previous paragraph are realized as extensions because the shared objects or theme components need to match the runtime they are used with.

But some things just don’t change very much over time, and don’t need exact matching against the runtime to be used by applications. Examples in this category are fonts, icons or certificates.

Flatpak makes these components from the host OS available in the sandbox, by mounting them below /run/host/ in the sandbox, and appending /run/host/share to the XDG_DATA_DIRS environment variable.

Summary

Flatpak does a lot of hard work behind the scenes to ensure that the apps it runs find an environment that looks similar to a traditional Linux desktop, by combining the application, its runtime, optional extensions and host components.

The Flatpak documentation provides more information about working with Flatpak as an application developer.

Fedora 28 on Raspberry Pi 3 B+

Posted by Fedora Magazine on June 13, 2018 08:00 AM

The Raspberry Pi model 3 B+ (RPi 3 B+) is the latest available in the Raspberry Pi series, released in mid-March 2018. The RPi 3 B+ has some nice features and improvements over the previous RPi 3 B. They include a faster 1.4 GHz processor clock speed, Gigabit Ethernet, dual-band 2.4 GHz and 5 GHz wifi support, and Bluetooth 4.2.

Fedora 28 was released soon after the RPi 3 B+. The good news is it supports the RPi 3 B+, as well as the RPi Model B versions 2 and 3. Images are available to download for both ARMv7 (32-bit) and aarch64 (64-bit). This article will show you how to get wifi running on the RPi 3 B+.

Writing image to SD card

First, download a Fedora 28 image here. For this example, download the Fedora 28 aarch64 Workstation image and then write it to the SD card:

$ wget https://dl.fedoraproject.org/pub/fedora-secondary/releases/28/Workstation/aarch64/images/Fedora-Workstation-28-1.1.aarch64.raw.xz

Now, find out which device node the SD card shows up as by running dmesg:

$ dmesg
...
[30046.093242] sd 4:0:0:0: [sdb] 62333952 512-byte logical blocks: (31.9 GB/29.7 GiB)
...

Here the card shows up as /dev/sdb. This may be different on your system. Now, to write the image to the SD card, use the dd command:

$ xzcat Fedora-Workstation-28-1.1.aarch64.raw.xz | sudo dd status=progress bs=4M of=/dev/sdb

10737418240 bytes (11 GB, 10 GiB) copied, 1200.19 s, 8.9 MB/s

Now you’ve successfully written the image to the SD card.

Next, in order to get the built-in wifi working, you must download some missing firmware files and make them available in the directory /usr/lib/firmware/brcm/.

First, mount the root filesystem and add these files so that they will be available during boot. Use the lsblk command to find out where the root filesystem is available:

$ lsblk
...
sdb    8:16 1 29.7G 0 disk 
├─sdb1 8:17 1 200M  0 part 
├─sdb2 8:18 1 1G    0 part 
├─sdb3 8:19 1 1G    0 part 
├─sdb4 8:20 1 1K    0 part 
└─sdb5 8:21 1 7.8G  0 part

In this case the root filesystem is in sdb5, so mount it and add the missing firmware files:

$ sudo mkdir /mnt/foo && sudo mount /dev/sdb5 /mnt/foo/
$ sudo curl https://fedora.roving-it.com/brcmfmac43455-sdio.txt -o /mnt/foo/usr/lib/firmware/brcm/brcmfmac43455-sdio.txt
$ sudo curl https://fedora.roving-it.com/brcmfmac43455-sdio.clm_blob -o /mnt/foo/usr/lib/firmware/brcm/brcmfmac43455-sdio.clm_blob
$ sudo umount /mnt/foo

It is also possible to add the firmware files after the first boot. That method requires the Raspberry Pi 3 B+ to connect to the internet via Ethernet. After you download the firmware files, reboot the Raspberry Pi 3 B+ so the new firmware is loaded.
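In that case, the commands are the same as above, just run on the Pi itself (the URLs and paths are taken from the earlier curl commands):

sudo curl https://fedora.roving-it.com/brcmfmac43455-sdio.txt -o /usr/lib/firmware/brcm/brcmfmac43455-sdio.txt
sudo curl https://fedora.roving-it.com/brcmfmac43455-sdio.clm_blob -o /usr/lib/firmware/brcm/brcmfmac43455-sdio.clm_blob
sudo reboot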

Booting and initial configuration

Insert the prepared SD card into the Raspberry Pi 3 B+ connected to a monitor. Boot starts as soon as the RPi 3 B+ has a 5-volt, 2.5-amp power supply. The monitor displays the boot process output.

During first boot, the system takes you to the initial UI setup page where you configure user details. It also lists available networks to get connected via wifi. After these steps, you’ll see the regular GNOME Desktop.

Connecting to wifi using CLI

This section is useful in case you boot the Server image where there’s no graphical interface.

$ nmcli device wifi list
IN-USE  SSID          MODE   CHAN  RATE        SIGNAL  BARS  SECURITY
        SSID_2.5_GHz  Infra  10    195 Mbit/s  100     ▂▄▆█  WPA2
        SSID_5_GHz    Infra  149   405 Mbit/s  72      ▂▄▆_  WPA2
$ nmcli device wifi connect SSID_5_GHz password $PASSWORD

$ nmcli device wifi list
IN-USE  SSID          MODE   CHAN  RATE        SIGNAL  BARS  SECURITY
        SSID_2.5_GHz  Infra  10    195 Mbit/s  100     ▂▄▆█  WPA2
*       SSID_5_GHz    Infra  149   405 Mbit/s  72      ▂▄▆_  WPA2

Notice that the Raspberry Pi 3 B+ connects to 5 GHz wifi.

Known Issues

Fedora 28 works perfectly fine on Raspberry Pi 3 B+, but there are some issues to keep in mind.

  • The aarch64 Server image doesn't have the NetworkManager-wifi package pre-installed. Install this package first before trying to connect to the wifi network. Other images already have this package.
  • Sometimes having a keyboard and mouse connected to the Raspberry Pi 3 B+ can cause the boot process to get stuck. In that case, remove them and re-insert them after the boot process completes.

Getting help

Raspberry Pi 3 B+ is one of the coolest and most popular single board computer devices available with Fedora 28 support. Give it a try and use it to run your favorite IoT application!

There is a FAQ available at the Fedora ARM wiki page which may help answer some of your questions. If it doesn’t, reach out for help or suggestions in the following places:

libinput and its device quirks files

Posted by Peter Hutterer on June 13, 2018 06:16 AM

This post does not describe a configuration system. If that's all you care about, read this post here and go be angry at someone else. Anyway, with that out of the way let's get started.

For a long time, libinput has supported model quirks (first added in Apr 2015). These model quirks are bitflags applied to some devices so we can enable special behaviours in the code. Model flags can be very specific ("this is a Lenovo x230 Touchpad") or generic ("This is a trackball") and it just depends on what the specific behaviour is that we need. The x230 touchpad for example has a custom pointer acceleration but trackballs are marked so they get some config options mice don't have/need.

In addition to model tags we also have custom attributes. These are free-form and provide information that we cannot get from the kernel. These too can be specific ("this model needs a pressure threshold of N") or generic ("bluetooth keyboards are external keyboards").

Overall, it's a good system. Most users never have to care that we even have this. The whole point is that any device-specific quirks need to be merged only once for each model, then everyone with the same device gets to benefit on the next update.

Originally quirks were hardcoded but this required rebuilding libinput for any changes. So we moved this to utilise the udev hwdb. For the trivial work of fetching udev properties we got a lot of flexibility in how we can match against devices. For example, an entry may look like this:


libinput:name:*AlpsPS/2 ALPS GlidePoint:dmi:*svnDellInc.:pnLatitudeE6220:*
LIBINPUT_ATTR_PRESSURE_RANGE=100:90
The above uses a name match and a dmi modalias match to apply a property for the touchpad on the Dell Latitude E6220. The exact match format is defined by a bunch of udev rules that ship as part of libinput.

Using the udev hwdb made the quirk storage a plaintext file that can be updated independently of libinput, including local overrides for testing things before merging them upstream. Having said that, it's definitely not public API and can change even between stable branch updates as properties are renamed or rescoped to fit the behaviour more accurately. For example, a model-specific tag may be renamed to a behaviour-specific tag as we find more devices affected by the same issue.

The main issue with the quirks now is that we keep accumulating more and more of them and I'm starting to hit limits with the udev hwdb match behaviour. The hwdb is great for single matches but not so great for cascading matches where one match may overwrite another match. The hwdb match system is largely implementation-defined so it's not always predictable which match rule wins out in the end.

Second, debugging the udev hwdb is not at all trivial. It's a bit like git - once you're used to it it's just fine but until then the air turns yellow with all the swearing being excreted by the unsuspecting user.

So long story short, libinput 1.12 will replace the hwdb model quirks database with a set of .ini files. The model quirks will be installed in /usr/share/libinput/ or whatever prefix your distribution prefers instead. It's a bunch of files with fairly simplistic instructions, each [section] has a set of MatchFoo=Bar directives and the ModelFoo=bar or AttrFoo=bar tags. See this file for an example. If all MatchFoo directives apply to a device, the Model and Attr tags are applied. Matching works in inter- and intra-file sequential order so the last section in a file overrides the first section of that file and the highest-sorting file overrides the lowest-sorting file. Otherwise the tags are accumulated, so if two files match on the same device with different tags, both tags are applied. So far, so unexciting.

Sometimes it's necessary to install a temporary local quirk until upstream libinput is updated or the distribution updates its package. For this, the /etc/libinput/local-overrides.quirks file is read in as well (if it exists). Note though that the config files are considered internal API, so any local overrides may stop working on the next libinput update. Should've upstreamed that quirk, eh?
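To give a flavour of the format, here is a sketch of what such a local override could look like. The key names follow the Match/Model/Attr scheme described above, but treat them as illustrative; libinput list-quirks and the documentation are the authoritative reference:

# /etc/libinput/local-overrides.quirks (illustrative example only)
[Touchpad pressure override]
MatchUdevType=touchpad
MatchName=*SynPS/2 Synaptics TouchPad
MatchDMIModalias=dmi:*svnLENOVO:*pvrThinkPadX230*
AttrPressureRange=150:120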

These files give us the same functionality as the hwdb - we can drop in extra files without recompiling. They're more human-readable than a hwdb match and it's a lot easier to add extra match conditions to it. And we can extend the file format at will. But the biggest advantage is that we can quite easily write debugging tools to figure out why something works or doesn't work. The libinput list-quirks tool shows what tags apply to a device and using the --verbose flag shows you all the files and sections and how they apply or don't apply to your device.

As usual, the libinput documentation has details.

Cockpit 170

Posted by Cockpit Project on June 13, 2018 12:00 AM

Cockpit is the modern Linux admin interface. We release regularly. Here are the release notes from version 170.

Software Updates: Layout rework

The Software Updates page has improved mobile browser support.

Updates Mobile

On desktop browsers, the updates list avoids line breaks in the version column. Versions become truncated when there’s not enough room to display the complete string. Hovering over the shortened string shows the complete version.

Updates Truncating Version

oVirt: Use authenticated libvirt connection by default

The oVirt version of the Machines page now uses a TLS-enabled libvirt connection URI instead of qemu:///. This enables libvirt operations that require authentication. One common example is sending an NMI.

A machines-ovirt.config file with a VIRSH_CONNECTION_URI option may override the connection address.

Thanks to Sharon Gratch for this improvement!

Try it out

Cockpit 170 is available now:

[Week 4] GSoC Status Report for Fedora App: Amitosh

Posted by Fedora Community Blog on June 12, 2018 11:57 PM

Google Summer of Code 2018

This is Status Report for Fedora App filled by participants on a weekly basis.

Status Report for Amitosh Swain Mahapatra (amitosh)

  • Fedora Account: amitosh
  • IRC: amitosh (found in #fedora, #fedora-dotnet, #fedora-summer-coding, #fedora-commops)
  • Fedora Wiki User Page: amitosh

Tasks Completed

More test cases

Last week we worked on the basic unit testing and integration testing configurations, using the popular Karma test runner and the Jasmine test library to implement unit tests. This week we expanded our unit test coverage to cover all the services we use in our app. In the course of developing these test cases, we caught a few bugs that crept in during the initial migration and fixed them.

Here is the pull request:  #78

Package search

You can now search for packages right from the app. It lists the appropriate package name to use with dnf for a given piece of software. Along with the package description, it also shows a link to the upstream website, the Fedora group that maintains the package, and its sub-packages; if the package is a library, it also presents the associated development headers.

Here is the pull request:  #72

i18n and l10n

We have improved i18n and l10n of the app. The views now respect the system locale and format the content accordingly. In further iterations, we will look at the translation and other aspects, and make sure that further work in this field is possible without a lot of barriers.

Here is the pull request:  #64

What’s Happening

Improving the Fedora magazine section

We are introducing read tracking and bookmark features for the Fedora Magazine section in the app. We will be improving the offline caching system to store the data more efficiently.

Improving FedoCal integration

@littlewonder has finished his work on revamping the FedoCal UI. Now we are working to incorporate the new design and features into the FedoCal backend.

Please send in your feedback at amitosh@fedoraproject.org

The post [Week 4] GSoC Status Report for Fedora App: Amitosh appeared first on Fedora Community Blog.

Managing and administering users and groups in Linux

Posted by Alvaro Castillo on June 12, 2018 08:10 PM

Some concepts

Linux, like other UNIX-based operating systems, inherits one of the milestones that marked a turning point in the history of computing: being multi-user. This means it allows several accounts belonging to different users to be logged into the system at the same time, each running any number of processes belonging to that user. Previously, if you wanted to use the system and another person wanted to use it too, you could not, since everyone had only a single...
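As a small illustration of that multi-user idea (my addition, since the post is cut off above), two commands make it visible on any Linux box:

who                         # sessions currently logged in
ps -u "$USER" -o pid,comm   # processes owned by one of those users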

Fingerprint reader support, the second coming

Posted by Bastien Nocera on June 12, 2018 06:00 PM
Fingerprint readers are more and more common on Windows laptops, and hardware makers would really like to not have to make a separate SKU without the fingerprint reader just for Linux, if that fingerprint reader is unsupported there.

The original makers of those fingerprint readers just need to send patches to the libfprint Bugzilla, I hear you say, and the problem's solved!

But it turns out it's pretty difficult to write those new drivers, and those patches, without an insight on how the internals of libfprint work, and what all those internal, undocumented APIs mean.

Most of the drivers already present in libfprint are the results of reverse engineering, which means that none of them is a best-of-breed example of a driver, with all the unknown values and magic numbers.

Let's try to fix all this!

Step 1: fail faster

When you're writing a driver, the last thing you want is to have to wait for your compilation to fail. We ported libfprint to meson and shaved off a significant amount of time from a successful compilation. We also reduced the number of places where new drivers need to be declared to be added to the compilation.
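For anyone who wants to try that out, the build is now the standard meson workflow; the repository URL is the one implied by the GitLab migration mentioned below, so double-check it before cloning:

git clone https://gitlab.freedesktop.org/libfprint/libfprint.git
cd libfprint
meson builddir
ninja -C builddir
ninja -C builddir test   # optional: run the test suite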

Step 2: make it clearer

While doxygen is nice because it requires very little scaffolding to generate API documentation, the output is also not up to the level we expect. We ported the documentation to gtk-doc, which has a more readable page layout, easy support for cross-references, and gives us more control over how introductory paragraphs are laid out. See the before and after for yourselves.

Step 3: fail elsewhere

You created your patch locally, tested it out, and it's ready to go! But you don't know about git-bz, and you ended up attaching a patch file which you uploaded. Except you uploaded the wrong patch. Or the patch with the right name but from the wrong directory. Or you know git-bz but used the wrong commit id and uploaded another unrelated patch. This is all a bit too much.

We migrated our bugs and repository for both libfprint and fprintd to Freedesktop.org's GitLab. Merge Requests are automatically built, discussions are easier to follow!

Step 4: show it to me

Now that we have spiffy documentation, unified bug, patches and sources under one roof, we need to modernise our website. We used GitLab's CI/CD integration to generate our website from sources, including creating API documentation and listing supported devices from git master, to reduce the need to search the sources for that information.

Step 5: simplify

This process has started, but isn't finished yet. We're slowly splitting up the internal API between "internal internal" (what the library uses to work internally) and "internal for drivers" which we eventually hope to document to make writing drivers easier. This is partially done, but will need a lot more work in the coming months.

TL;DR: We migrated libfprint to meson, gtk-doc, GitLab, added a CI, and are writing docs for driver authors, everything's on the website!

Bodhi 3.8.1 released

Posted by Bodhi on June 12, 2018 05:01 PM

Bug

  • Fix two incompatibilities with Python 3.7 (#2436 and #2438).

Contributor

Thanks to Miro Hrončok for fixing these issues.

GLPI version 9.3

Posted by Remi Collet on June 12, 2018 03:03 PM

GLPI (Free IT and asset management software) version 9.3~RC2 is available. RPM are available in remi-glpi93 repository for Fedora ≥ 25 and Enterprise Linux ≥ 6.

Read official announcement: Second Release Candidate of GLPI 9.3

Warning: this is not yet a stable version; do not use it in production.

Warning: this version requires MySQL ≥ 5.6 or MariaDB ≥ 10, and so will not work with the default version on RHEL / CentOS. I recommend using the rh-mariadb102 Software Collection.

Not all plugin projects have released a compatible version yet.

Available in the repository:

  • glpi-9.3.0~RC2-1
  • glpi-data-injection-2.5.2-1
  • glpi-reports-1.11.3-1
  • glpi-webservices-1.8.0-1

Warning: for security reasons, the installation wizard is only accessible from the server where GLPI is installed. See the configuration file (/etc/httpd/conf.d/glpi.conf) to temporarily allow more clients.

With RHEL or CentOS :

First, you need to install a version of PHP 5.6 or later (7.x is recommended), following the Wizard instructions (installation as a Single version). Then:

yum-config-manager --enable remi-glpi93
yum install glpi

You can also read more detailed instructions in my installation notes.

With Fedora :

dnf config-manager --set-enabled remi-glpi93
dnf install glpi

You are welcome to try this version in a dedicated test environment, give your feedback, and post your questions and bugs on:

 

Happiness Packets and Fedora GSoC 2018

Posted by Fedora Community Blog on June 12, 2018 08:30 AM

I was selected to work with Fedora on the Fedora Happiness Packets for GSoC 2018! A shout-out to Jona and Bee for helping me with the proposal and initial PRs!

About me

Hi there! My name is Anna. I go by the username Algogator on IRC and elsewhere.

  • I study computer science at the University of Texas at Arlington.
  • Python is my favorite language. Been using it for everything for the past 6 years.
  • Huge open source fan. I started a Firefox club at my university. Currently president of the Python user group at UTA (PyMavs).

What I’ll be working on and why

Happiness Packets is an open source platform to spread gratitude and appreciation among contributors in the community. For Fedora Appreciation Week 2018, having a Fedora-themed Happiness Packets site will encourage people to send positive feedback to their peers (anonymously if they like) and make it easier to do so. I'll mainly be working on integrating fedmsg (to award a Fedora Badge for sending a message) and adding FAS authentication to the Django project.

I picked this project since I have coded web applications in Python (Flask) before, worked on implementing an SSO solution (SAML) to authenticate with ADFS, and created a Bonusly replica while working at my previous company.

Gratitude is a muscle. Open source communities work because of the time and effort put in by volunteers and a heartfelt thank you or pat on the back can go a long way.

Community bonding period

During the community bonding period, I spent some time collecting these adorable Fedora badges to get my feet wet. It’s a fun way to learn about the community and set up your account before you start contributing. I recommend getting the Involvement and Crypto Badger badges first. And voilà!

You’ve taken your first step into a larger world

Joining the CommOps team

I started working my way through the Join Community Operations list.

Introduced myself on IRC and the CommOps and Diversity mailing lists. Attended the weekly meetings. And made a user page on the wiki.

In the initial call with Jona and Bee, Jona went over the various Fedora sub-projects and what they do. There are different ways you can contribute to Fedora based on your skill set.

Research

I spent most of the time googling, saving and organizing links that I thought would be useful.

fedmsg

From the fedmsg website:

“fedmsg (Federated Message Bus) is a library built on ZeroMQ using the PyZMQ Python bindings. fedmsg aims to make it easy to connect services together using ZeroMQ publishers and subscribers.”
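A quick way to get a feel for fedmsg before wiring it into the Django project is to watch the public bus from a Fedora machine; this assumes the fedmsg command line tools are installed (dnf install fedmsg):

fedmsg-tail --really-pretty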

Auth / OpenID Connect

The post Happiness Packets and Fedora GSoC 2018 appeared first on Fedora Community Blog.

[F29] Take part in the test day dedicated to the Linux 4.17 kernel

Posted by Charles-Antoine Couret on June 12, 2018 06:00 AM

Today, Tuesday June 12, is a day dedicated to a specific set of tests: the Linux 4.17 kernel. During the development cycle, the quality assurance team dedicates a few days to certain components or new features in order to report as many problems as possible about them.

It also provides a list of specific tests to run. You just have to follow them, compare your result with the expected one, and report it.

What does this test day consist of?

The Linux kernel is the heart of the Fedora system (and of other GNU/Linux distributions). It is the component that links software and hardware. It is what allows processes to work together on the same computer and to use the peripherals (through drivers) available on each machine.

It is therefore a critical component, and it is necessary to make sure it works. Note also that the Fedora kernel maintainers have decided to hold a systematic test day after each new kernel release!

Today's tests cover:

  • Running the default automated tests and the performance tests;
  • Checking that the machine boots correctly;
  • Checking that the hardware is properly supported (display, keyboards, mice, printers, scanners, USB, graphics card, sound card, webcam, wired and wireless network, etc.).

How can you take part?

You can go to the test day page to see the available tests and report your results. The wiki page summarizes how the day is organized.

If you need help while running the tests, feel free to drop by IRC for a hand on the #fedora-test-day and #fedora-fr channels (in English and French respectively) on the Freenode server.

If you find a bug, report it on Bugzilla. If you don't know how, don't hesitate to consult the corresponding documentation.

Finally, even though a specific day is dedicated to these tests, you can still run them a few days later without any problem! The results will remain broadly relevant.

All systems go

Posted by Fedora Infrastructure Status on June 11, 2018 03:21 PM
Service 'Package maintainers git repositories' now has status: good: Everything seems to be working.

Major service disruption

Posted by Fedora Infrastructure Status on June 11, 2018 02:52 PM
Service 'Package maintainers git repositories' now has status: major: There is an ongoing issue with src.fp.o that we are working through

New badge: Fedora Podcast Interviewee !

Posted by Fedora Badges on June 11, 2018 02:47 PM
Fedora Podcast Interviewee: You've been interviewed on the Fedora Podcast

New badge: LinuxCon Beijing 2018 !

Posted by Fedora Badges on June 11, 2018 02:36 PM
LinuxCon Beijing 2018: You joined Fedora-related events at LinuxCon Beijing 2018

Khaled Monsoor: How Do You Fedora?

Posted by Fedora Magazine on June 11, 2018 08:00 AM

We recently interviewed Khaled Monsoor on how he uses Fedora. This is part of a series that profiles Fedora users and how they use Fedora to get things done. Contact us on the feedback form to tell us about someone you think we should interview, or to express interest in being interviewed.

Who is Khaled Monsoor?

Khaled Monsoor was born and raised in Dhaka, the capital of Bangladesh. After graduating with a degree in computer science and engineering, he worked in several different business sectors. Monsoor started a master's in bioinformatics, but decided not to pursue it.

Monsoor currently works at Augmedix Inc., a Silicon Valley healthcare startup, as a research engineer. He started using Linux in 2002 and got involved with the Fedora Project in 2005. He believes balancing the demands of a full-time job and family is the biggest challenge to contributing to open source projects.

His heroes are the Thundercats, Captain Planet and MacGyver. Khaled’s favorite movies are The Matrix and Interstellar. “During my youth, The Matrix [shook] my whole concept of reality. Is what we see and feel really real, or just sort of simulation or just a test? That sort of thing. In Interstellar, it’s the twisted human lives with advanced technology mesmerized me. I think the father in me cried, like a baby, with Matthew McConaughey in the hospital meeting scene.”

Monsoor also enjoys photography. He likes to use his Nikon D7100, but admits the best camera is the one you have on hand. “I like magnificent nature pictures that takes me to that place, travel & street pictures of same spirit, and honest portraits that makes a close feelings of that person’s real life.”

More of Monsoor’s photos can be found on his Flickr page.

The Fedora Community

Monsoor’s first interaction with the Fedora Community left him impressed with its energy and passion about aesthetics. While there are many Linux distributions to choose from, he chose Fedora due to its “system stability, community support and Konsole terminal.” He would like to see more attention paid to Fedora’s overall visual aesthetics.

What Hardware?

For work, Monsoor uses Fedora 27 on an HP ProBook 470 G3 laptop. The laptop is equipped with an Intel Core i7-6500U processor and 16 GB of RAM. It has a hybrid graphics solution using Intel and AMD GPUs, though the hybrid setup is not very useful due to driver issues in Linux. Monsoor replaced the 500 GB hard drive with a 128 GB SSD to boost performance. “The big 17″ display is a huge plus for software development.”

Monsoor has repetitive strain injury (RSI) pain in his wrists, so he uses a special mouse. “I use an Anker vertical wireless mouse to ease the stress on my wrist.”

Khaled Monsoor's Computer Setup

What Software?

Monsoor prefers KDE Plasma for his desktop. He also makes use of Kate, Konsole, KCalc, the Dolphin file manager and Kdenlive. For non-KDE software he uses GIMP, Pinta, Shotwell, Hyper Terminal, VS Code, PostgreSQL, Firefox and LibreOffice.

When asked about why he prefers KDE he said, “I’m not sure, exactly. Possibly, it gives me a feeling of control. In charge of something very capable, waiting for directions and just works. Not forced over-simplistic, or trying to hide the complexities it handles, rather gives a grip on them. Or, just that its name begins with the same character (K) as my name.”

He prefers Konsole because “it shares the philosophy of KDE. Stable, capable and highly-configurable, just what power users needs. Not too dumb-looking, not too nerdy.”

2018 May Elections to FESCo: Interviews

Posted by Fedora Community Blog on June 11, 2018 07:38 AM
Fedora Elections - All interviews published

The Fedora FESCo Elections are here and it’s time to vote! All interviews with the FESCo candidates have been published.

The 2018 May cycle of Elections is in full swing. Voting officially began on Thursday, June 7th, and ends on Wednesday, June 13th at 23:59 UTC. Voting takes place on the Voting application website. As part of the Elections coverage on the Community Blog, the candidates running for seats published their interviews and established their platforms here. Are you getting ready to vote and looking for this information? You can find the full list of candidates and links to their interviews below.

Candidate Interviews

Vote!

Remember, the voting period ends this upcoming Wednesday, so make sure you get your votes in before the end of the Election. You can vote on the Voting application.

The post 2018 May Elections to FESCo: Interviews appeared first on Fedora Community Blog.

[Week 4] GSoC Status Report for Fedora App: Abhishek

Posted by Fedora Community Blog on June 11, 2018 07:37 AM

This is the status report for the Fedora App, filled in by participants on a weekly basis.

Status Report for Abhishek Sharma (thelittlewonder)

  • Fedora Account: thelittlewonder
  • IRC: thelittlewonder (found in #fedora-summer-coding, #fedora-india, #fedora-design)
  • Fedora User Wiki Page

Tasks Completed

New Calendar View

Fedora Calendar

We implemented a new design in the calendar section of the app, which lets you see all the upcoming and past events of a team’s calendar. You can add these to your favourite calendar app directly from the Calendar Tab.

Máirín Duffy (mizmo) suggested in one of the design meetings that we implement other features, such as adding a meeting location (so that clicking it directly opens the IRC client on your device) and showing the minutes of past meetings. These features are not there yet, but you can expect them in later releases.

Pull Request: #74

Search in Calendars

Search in Calendar

Users can now search the list of calendars to find their own. Previously there was a select drop-down menu; now we have a much simpler interface.

Another cool feature that should be out in the next few weeks is the ability to subscribe to calendars by starring them.

Pull Request: #74

Empty States

We normally design for a populated interface, where everything in the layout looks well arranged. What we often ignore is what will be displayed when the layout is empty. An empty state is what you design for when there is no content to show or something goes wrong. So this week, we spent some time on the designs for the empty states. You can read more about empty states here: https://uxplanet.org/empty-state-mobile-app-nice-to-have-essential-f11c29f01f3

What’s Happening

Fedora Magazine

We are planning to introduce read tracking and bookmark features for the Fedora Magazine section in the app. So over the next week, we will be revamping the design to incorporate these features and align its style with the rest of the app.

Ask Fedora

We will also be working on the new design for the Ask Fedora Section of the app. We will be segregating the questions into most recent, most popular and most voted so that users can access them on the go.

That’s all I have for this week. 👋

Please send your feedback to guywhodesigns@gmail.com

The post [Week 4] GSoC Status Report for Fedora App: Abhishek appeared first on Fedora Community Blog.

Home Theatre!

Posted by farhaan on June 11, 2018 03:03 AM

Due to a lot of turmoil in my life recently, I had to move in with a friend. Abhinav is an old friend and college mate; we have hacked on a lot of software and hardware projects together, but this one is one of the coolest hacks of all time, and since we are flatmates now it solved a lot of issues. We also had his brother Abhishek around, which made the hack even more fun.

The whole idea began with the thought of putting the old laptops we have to use as servers; we just asked ourselves how to make the best of the machines we own. He had already done a few setups, but we eventually landed on building an HTPC, which stands for Home Theatre PC or media centre: basically a one-stop shop for everything we need, movies, TV shows and music. We came up with a nice arrangement that requires a few things. The hardware we have:

  1. Dell Studio 1558
  2. Raspberry Pi 3
  3. And a TV to watch these on 😉

When we started configuring this setup we had the desktop version of Ubuntu 18.04 installed, but we figured out that it was slowing down the machine, so we switched to the Ubuntu Server edition. This was a learning experience because I had never installed a server version of an operating system. I always used to wonder what kind of interface these versions give you. Well, without any doubt, it is simply a command-line utility for everything, from partitioning to network configuration.

Once the server was installed, we just had to turn it into a machine that could support our needs, which basically meant installing a few packages.

We landed on something called the Atomic Toolkit. A big shoutout to the team for developing this amazing installer, which has an ncurses-like interface and can run anywhere. Using this toolkit we installed and configured CouchPotato, Emby and Headphones.


This was more than enough; we could automate a lot of things with this kind of setup, from Silicon Valley to Mr. Robot. CouchPotato helps us get the best quality videos, and Emby gives us a nice dashboard that shows all the content we have.

I don’t use Headphones much because I love another music application, but Headphones being a one-stop shop is not a bad thing either. All this was done on the Dell Studio machine we had; we also gave it a static IP so we would know which address to hit.
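For anyone curious, here is a minimal sketch of how a static address can be pinned with netplan on Ubuntu 18.04 Server (the interface name, addresses and file name below are placeholders rather than our actual configuration):

# Write a hypothetical netplan config and apply it; replace enp0s25
# and the addresses with values that match your own network
sudo tee /etc/netplan/01-static.yaml > /dev/null <<'EOF'
network:
  version: 2
  ethernets:
    enp0s25:
      dhcp4: no
      addresses: [192.168.1.50/24]
      gateway4: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1]
EOF
sudo netplan apply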

Our server was up, running and configured. Now we needed a client for it. We have a TV, but that TV is not smart enough, so we used a Raspberry Pi 3 and attached it to the TV using the HDMI port.

We installed OSMC on the Raspberry Pi and configured it to use Emby and listen to the Emby server; once we booted it up, it was very straightforward. This made our TV look good and a little smart, and it opened the way to thousands of movies, music and podcasts. Although I don’t know whether setting up this system was more fun, or whether watching those movies will be.

 

Episode 100 - You're bad at buying security, we can help!

Posted by Open Source Security Podcast on June 11, 2018 01:30 AM
Josh and Kurt talk about how to be a smart security buyer. Guest Steve Mayzak walks us through how the buying process works, as well as giving out a ton of great advice. Even if you're experienced with buying security technology, you should give this a listen.



Show Notes


New in nbdkit

Posted by Richard W.M. Jones on June 10, 2018 06:17 PM

nbdkit is our toolkit for writing flexible Network Block Device servers with both normal and esoteric plugins as data sources. You can use it to pull data into qemu or other hypervisors, or with libguestfs.

Since I last mentioned nbdkit ([1] and [2]) there have been a few new developments:

  • There are now stable (1.2) and development (1.3) branches. Most of the features discussed below are only available on the development branch at this time.
  • If you use bash, you can now tab-complete many nbdkit commands, including automatically getting a list of flags, plugins and plugin parameters.
  • The highly optimized and multithreaded file plugin now supports hole-punching (trim/discard) thanks to Eric Blake’s work.
  • Eric also wrote new filters: a log filter for enhanced logging, and blocksize, nozero and fua filters for modifying requests from clients.
  • I wrote some new plugins too: The ext2 plugin allows you to read and write files within ext2 filesystems (my response to this very long qemu-devel discussion). The zero and random plugins are largely for testing awkward corner cases in NBD clients and the nbdkit code itself.
  • Eric added support for zeroing to non-C plugins.
  • Pino Toscano added the nbdkit_realpath utility function for plugins.
  • For programmers writing plugins, Eric Blake has done a lot of work on the plugin API. If your plugins use:
    #define NBDKIT_API_VERSION 2
    

    then the plugin functions all have flags exposing FUA (flush) commands from the client. Of course the old plugins are still supported both as source and binaries.

  • The default NBD protocol served in ≥ 1.3 changed to “newstyle”, but you can continue to serve “oldstyle” by adding the -o option.

nbdkit can be downloaded from here and older versions (probably not including the features above) are available in all popular Linux distributions.
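As a quick hedged sketch of the basics (the image path is made up, and the explicit file= parameter follows the 1.2-era file plugin syntax), serving a local disk image and then inspecting it over NBD with qemu-img might look like this:

# Serve a local disk image with the file plugin, then query it as an NBD client;
# nbdkit listens on the standard port 10809 and forks into the background by default
nbdkit file file=/var/tmp/disk.img
qemu-img info nbd:localhost:10809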

Automatic login of a user with a locked session (sddm)

Posted by Didier Fabert (tartare) on June 10, 2018 08:25 AM

Automatic user login

You need to edit the sddm configuration file to add the automatic-login entries for a user, in the Autologin section:

  • Session: specify the user session (plasma, gnome, etc.). The list of available sessions (installed on your machine) can be found in the /usr/share/xsessions/ directory
    ls /usr/share/xsessions/
  • Relogin: specify whether sddm should log the user back in when they log out
  • User: the login of the user to log in automatically
File /etc/sddm.conf
...
[Autologin]
Relogin=false
Session=plasma.desktop
User=tartarefr
...

Locking the session at startup

To lock a user's session at startup, add the following to the ~/.profile file (creating it if it does not exist)

File ~/.profile
export DESKTOP_LOCKED=yes

The corresponding command line (for copy/paste)

echo "export DESKTOP_LOCKED=yes" >> ~/.profile
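As a sketch of one way such a flag could be consumed (this script is hypothetical and not part of the original configuration), a small autostart script can check the variable and ask logind to lock the freshly started session:

#!/bin/sh
# Hypothetical autostart script: lock the calling session when
# ~/.profile exported DESKTOP_LOCKED=yes
if [ "$DESKTOP_LOCKED" = "yes" ]; then
    loginctl lock-session
fi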