Fedora People

Installed nvidia driver

Posted by Yuan Yijun on April 21, 2018 04:53 AM

Since Fedora 28, Wayland has been crashing for me (Dell m4700 with an external monitor), so I took some time to try the nvidia GPU driver. At first I looked at https://www.if-not-true-then-false.com/2015/fedora-nvidia-guide/ which shows what a successful install looks like: the "About" page will show the nvidia card name. Then, since it is easier to install from RPM Fusion, I followed https://rpmfusion.org/Howto/NVIDIA. The document is concise but helpful. For example, when it says "Secure Boot" has issues, it really is best turned off in the BIOS. For another example, when it says "Wayland" has issues and something must be installed from Copr, that is indeed the case. The "grubby" command to update the kernel command line is helpful too.

I followed another article, https://gorka.eguileor.com/vbox-vmware-in-secureboot-linux/, to sign the modules: first create a key, then register the key with UEFI. I had never done this before. I cannot find the keyring ".system_keyring", but /proc/keys shows something else.

[root@m4700 ~]# cat /proc/keys |grep trust
086fc70c I------     2 perm 1f0b0000     0     0 keyring   .builtin_trusted_keys: 1
3a3ece32 I------     1 perm 1f0f0000     0     0 keyring   .secondary_trusted_keys: 6
[root@m4700 ~]# keyctl list %:.secondary_trusted_keys
6 keys in keyring:
141543180: ---lswrv     0     0 keyring: .builtin_trusted_keys
519723937: ---lswrv     0     0 asymmetric: Fedora Secure Boot CA: fde32599c2d61db1bf5807335d7b20e4cd963b42
425639263: ---lswrv     0     0 asymmetric: m4700: e9b8a7dceb32dbad7203e4f126927614ef8d749f
366180860: ---lswrv     0     0 asymmetric: vbox sining key: 71bb96ac139d95a88e376b3549f18b5b6e7a6731
511921353: ---lswrv     0     0 asymmetric: Microsoft Corporation UEFI CA 2011: 13adbf4309bd82709c8cd54f316ed522988a1bd4
711259049: ---lswrv     0     0 asymmetric: Microsoft Windows Production PCA 2011: a92902398e16c49778cd90f99e4f9ae17c55af53
[root@m4700 ~]# keyctl list %:.builtin_trusted_keys
1 key in keyring:
914482814: ---lswrv     0     0 asymmetric: Fedora kernel signing key: 2285ad1ee22995954b75925873d046bfaab640a0


Run "akmods" to build modules. There used to have an error "systemd-modules-load" but after all modules are signed, reboot and the modules seem to load properly.

# rpm -qa | grep ^kmod | xargs rpm -ql | grep .ko$ | xargs -l /usr/src/kernels/$(uname -r)/scripts/sign-file sha256 ./MOK.priv ./MOK.der

I don't know how much will break after a new kernel is installed. Do I need to build the modules again, sign them and update grub?
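
Presumably the same steps just repeat for the new kernel once akmods has rebuilt the modules for it. A rough, unverified sketch, with <new-kver> standing in for the newly installed kernel version and the MOK key pair kept in the current directory as above:

# akmods --kernels <new-kver>
# rpm -qa | grep ^kmod | xargs rpm -ql | grep .ko$ | xargs -l /usr/src/kernels/<new-kver>/scripts/sign-file sha256 ./MOK.priv ./MOK.der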

Fedora Infrastructure Hackathon (day 1-5)

Posted by Stephen Smoogen on April 20, 2018 09:35 PM
From 2018-04-09 to 2018-04-13, most of the Fedora Infrastructure team was in Fredericksburg, Virginia, working face to face on various issues. I already covered my trip to Fredericksburg on the 08th, so this is a followup blog to cover what happened. Each day had a pretty predictable cycle to it, starting with waking up around 06:30 and getting a shower and breakfast downstairs. The hotel was near Quantico, which is used by various government agencies for training, so I got to see a lot of people suiting up every morning. Around 07:30, various coworkers from different time zones would start stumbling in... some because it was way too late in the day for them to be getting up, and others because it was way too early. Everyone would get a cup or two of coffee in them, and Paul would show up to herd us towards the cars. [Sometimes it took two or three attempts as someone would straggle away to try and get another 40 winks.] Then we would drive over to the University of Mary Washington extension campus.

I wanted to give an enormous shout-out to the staff there: people checked in on us every day to see if we had any problems, and worked around our weird schedules. They also helped get our firewall items fixed; the campus is fairly locked down for guests, but they made it so our area had an exception for the week so that ssh would work.

Once we got situated in the room, we would work through the day's problems. Monday was documentation, Tuesday was reassigning tasks, Wednesday was working through AWX rollouts, and Thursday was trying to get Bodhi working with OpenShift. Friday we headed home via our different methods. [I took a train, though not this one... this was the CSX shipping train which came through before ours.]

Most of the work I did during this was on tasks to get people enabled and working. I helped get Dusty and Sinny into a group which could log into various Atomic staging systems to see what logs and builds were doing. I worked with Paul Frields on writing service level expectations, which I will be covering in more detail in next week's blogs. I talked with Brian Stinson and Jim Perrin about CentOS/EPEL build tools and plans.


Finally, I worked with Matthew Miller on statistics needs and will be looking to work with CoreOS people in the future on how to update the way we collect data. As with any face to face meeting, it was mostly about getting personal feedback on what is working and what isn't. I have a better idea of things needed in the future for the Fedora Apprentice group (my blogs for 2 weeks from now), Service Level Expectations, and EPEL (3 to 4 weeks from now).

Fedora Infrastructure Hackfest 2018 Recap

Posted by Ricky Elrod on April 20, 2018 02:35 PM

Last week was the 2018 Fedora Infrastructure Hackfest which took place in Fredericksburg, Virginia. I think everyone who attended found it to be a very productive week of hacking.

We started the hackfest off by brainstorming new documentation for packagers and those interested in becoming a packager. We decided to use asciidoc and put the documentation in a new git repo on Pagure. Right now it uses Pelican and Asciidoctor for generating html versions of the source files, but there was talk about eventually transitioning to Antora at some point. The point of the new documentation is that many of our workflows have changed with the move of packages/dist-git into Pagure. For example, it's now possible to send pull-requests on packages and ask maintainers to merge them. So the documentation needed to be updated to reflect all of these changes, and it seemed like an ideal time to revamp and rework the documents.

We decommissioned darkserver and summershum, two apps that had been nigh-neglected for quite some time.

We worked on Rawhide gating -- that is, making it so that Rawhide updates must pass through Bodhi. The big idea here is that we want to be able to let automated tests run over Rawhide updates before they are pushed out to Rawhide users, in order to make the Rawhide experience more stable. While Randy and Patrick did most of the Bodhi updates to make this happen, I began working on changes to the bodhi CLI. I am currently waiting until they have something for me to test my changes against.

While that was going on, I also deployed the redirect that moves the Fedora Jenkins setup into the CentOS infrastructure. It is now available here.

I also worked out a bug with the modernpaste paste-deactivation script I wrote for us to deactivate pastes when necessary. Recently, I redeployed modernpaste to Fedora 27 VMs, but the newer Python version caused an issue with the imports I used. Ultimately, I had to add a missing import and all was good again.

We talked about setting up Ansible AWX and came up with a plan and tried deploying it on a VM. I did most of the grunt work of setting up the VM, and Kevin and I worked together on the initial AWX attempts. Unfortunately, we ran into a fair number of what seem to be upstream bugs. In the interim, in an effort to at least get something up, we changed our deploy (e.g. by making it use a local database container instead of our external database VM), and eventually got it functional enough to play around with. Patrick started working on integrating OpenID Connect authentication into it, with the goal of users being able to log into our AWX with their FAS credentials. AWX seems like it would be nice in that we'd have more fine-grained control over which playbooks can be run by whom, however the pain of setting it up so far has been fairly off-putting. It will be interesting to play around with for a while longer and see what the team thinks.

My last day at the hackfest, Thursday, was mainly discussion about our OpenShift instances. We discussed moving more apps into OpenShift, plans to upgrade our instances to the latest releases, and so on. I signed on to move two more apps into OpenShift in the near future. Patrick gave us a demo of how it looks to use our current Ansible setup to deploy an OpenShift app. Over Thursday and Friday, he and Randy worked on moving Bodhi to OpenShift, but ultimately, due to several bugs in the Bodhi release, we reverted back to the old setup earlier this week for the time being.

This is what I worked on. There were some other things going on as well. RHEL 7.5 is now synced in our repositories and ready for us to upgrade to it after freeze, along with OpenShift 3.9. Kevin worked on new priorities and templates for our Infrastructure ticket tracker.

Oh, and on Thursday night, I sang two karaoke songs, winning a $25 giftcard to the venue we were at for dinner/drinks that night. I ended up giving it to one of the other singers, since I don't live in Virginia, and will likely never be back at the venue.

Use Peek to take quick GIF screencasts

Posted by Fedora Magazine on April 20, 2018 08:00 AM

Are you looking for an easy tool for taking quick screencasts as animated GIFs? Take a look at Peek — a simple application that makes taking GIF screencasts a breeze. While there are other ways in Fedora to take screencasts, like the functionality built into the desktop, and the EasyScreenCast extension, Peek adds the ability to produce GIFs.

an animated GIF created with the peek screenshot tool

Peek provides a simple interface for creating quick screencasts. In addition to outputting animated GIFs, it can also produce other video formats such as WebM. Configuration options also allow you to automatically downscale recorded videos. Unlike some of the older screencasting tools, Peek works under Wayland.

Installation

The easiest way to install Peek on Fedora is as a Flatpak from the Flathub repository. First, follow the directions in this article to set up Flathub as a third-party software source. Note, however, that Flathub is not officially affiliated with the Fedora Project, and may contain software that does not adhere to some of the Fedora Packaging Guidelines.

Once Flathub is enabled as a third-party source, simply search and install Peek from the Software app in Fedora Workstation.
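
If you prefer the command line, something along these lines should also work once the Flathub remote is configured (the application ID used here, com.uploadedlobster.peek, is to the best of my knowledge the one Peek uses on Flathub):

flatpak install flathub com.uploadedlobster.peek
flatpak run com.uploadedlobster.peek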

Using Peek

Peek can only record a defined part of your screen; there are no options to record a whole monitor or the whole desktop. When opening the application, the user is presented with a window with a fully transparent body. Simply resize the window to define the area to screencast, and press the Record button at the top of the window.

a screenshot of a peek window

A countdown will inform you when the recording starts, and when finished, simply press the Stop button in the top right of the window.

 

 

How to start developing on Java in Fedora

Posted by Fedora Magazine on April 20, 2018 08:00 AM

Java is one of the most popular programming languages in the world. It is widely used to develop IoT appliances, Android apps, web applications, and enterprise applications. This article provides a quick guide to installing and configuring your workstation to use OpenJDK.

Installing the compiler and tools

Installing the compiler, or Java Development Kit (JDK), is easy to do in Fedora. At the time of this article, versions 8 and 9 are available. Simply open a terminal and enter:

sudo dnf install java-1.8.0-openjdk-devel

This will install the JDK for version 8. For version 9, enter:

sudo dnf install java-9-openjdk-devel

For the developer who requires additional tools and libraries such as Ant and Maven, the Java Development group is available. To install the suite, enter:

sudo dnf group install "Java Development"

To verify the compiler is installed, run:

javac -version

The output shows the compiler version and looks like this:

javac 1.8.0_162
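
If you end up with both JDK versions installed, you can switch which installation the java and javac commands point to using the alternatives system. A quick sketch, run with root privileges; each command presents a numbered menu of the installed alternatives to choose from:

sudo alternatives --config java
sudo alternatives --config javac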

Compiling applications

You can use any basic text editor such as nano, vim, or gedit to write applications. This example provides a simple “Hello Fedora” program.

Open your favorite text editor and enter the following:

public class HelloFedora {
    public static void main(String[] args) {
        System.out.println("Hello Fedora!");
    }
}

Save the file as HelloFedora.java. In the terminal change to the directory containing the file and do:

javac HelloFedora.java

The compiler will complain if it runs into any syntax errors. Otherwise it will simply display the shell prompt beneath.

You should now have a file called HelloFedora.class, which contains the compiled bytecode. Run it with the following command:

java HelloFedora

And the output will display:

Hello Fedora!

Installing an Integrated Development Environment (IDE)

Some programs may be more complex, and an IDE can make things flow more smoothly. There are quite a few IDEs available for Java programmers.

However, one of the most popular open-source IDEs, written mainly in Java itself, is Eclipse. Eclipse is available in the official repositories. To install it, run this command:

sudo dnf install eclipse-jdt

When the installation is complete, a shortcut for Eclipse appears in the desktop menu.

For more information on how to use Eclipse, consult the User Guide available on their website.

Browser plugin

If you’re developing web applets and need a plugin for your browser, IcedTea-Web is available. Like OpenJDK, it is open source and easy to install in Fedora. Run this command:

sudo dnf install icedtea-web

As of Firefox 52, the web plugin no longer works. For details visit the Mozilla support site at https://support.mozilla.org/en-US/kb/npapi-plugins?as=u&utm_source=inproduct.

Congratulations, your Java development environment is ready to use.

Writing Chuck – Joke As A Service

Posted by farhaan on April 20, 2018 07:42 AM

Recently I got really interested in learning Go, and to be honest I found it to be a beautiful language. I personally feel that it offers the performance boost you expect from a static language, together with the easy prototyping, get-things-done philosophy of a dynamic language.

The real inspiration to learn Go was the amazing number of tools written in it, and the ease with which these tools perform even though they seem quite heavy; one of the good examples is Docker. So I thought I would write some utility for fun. I have been using fortune, a Linux utility which gives random quotes from a database, and I thought I would write something similar, but with jokes. Keeping this in mind, I searched around for what I could do and landed on jokes about Chuck Norris, or as we say, facts about him. I ended up on chucknorris.io; they have an API which can return different jokes about Chuck, and there it was, my opportunity to put something together, and I chose Go for it.

JSON PARSING

The initial version of the utility I put together was quite simple: it would make a GET request, stream the data, put it in the given format and display the joke. But even with this implementation I learnt a lot of things. The most prominent one was how a variable is exported in Go, i.e. how it can be made available across scopes, and how to parse the JSON from a received response to store the useful information in a variable.

<script src="https://gist.github.com/farhaanbukhsh/c428cd6a66e05d964c34ee1cea88ce61.js"></script>

Now, the mistake I was making in the above code was declaring the fields of the struct with lowercase letters. This caused a problem because, although the values get stored in the struct, I can't use them outside the function I declared it in. It took me a while to figure this out, and it was really nice to learn about it. Along the way I also learnt how to make a GET request, parse the JSON and use the returned values.

Let's walk through the code. The initial part is a struct with a few fields; the Category field is a slice of strings, which can hold as many elements as it receives. The interesting part is the way you can specify which key from the received JSON gets stored in which field of the struct: the json:"categories" tag is the way to do it.

In the rest of the code I am making a GET request to the given URL; if it returns a response it goes into res, and if it returns an error it is handled through err. The key part here is how marshaling and unmarshaling of JSON take place.

This is basically folding and un-folding JSON. Once that is done and the values are stored, retrieving a value is just a matter of dot notation. There is one more interesting part: if you look closely, we pass &joke, which, if you have a C background, you will recognize as passing the memory address; pass by reference is what you are looking at.
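
For illustration, here is a minimal sketch of that pattern. It is not the exact code from the gist; the endpoint URL and the field names other than categories (such as value for the joke text) are my assumptions about the API response.

package main

import (
	"encoding/json"
	"fmt"
	"io/ioutil"
	"net/http"
)

// Joke maps the fields we care about from the API response.
// The struct tags tell encoding/json which JSON keys to read;
// the field names must be exported (capitalized) to be usable
// outside this package and by the json package itself.
type Joke struct {
	Category []string `json:"categories"`
	Value    string   `json:"value"` // assumed key for the joke text
}

func main() {
	// Assumed endpoint for a random joke.
	res, err := http.Get("https://api.chucknorris.io/jokes/random")
	if err != nil {
		panic(err)
	}
	defer res.Body.Close()

	body, err := ioutil.ReadAll(res.Body)
	if err != nil {
		panic(err)
	}

	var joke Joke
	// Unmarshal fills the struct; &joke passes its address so it can be modified.
	if err := json.Unmarshal(body, &joke); err != nil {
		panic(err)
	}
	fmt.Println(joke.Value)
}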

This was working good and I was quite happy with it but there were two problems I faced:

  1. The response used to take a while to return the jokes
  2. It doesn't work without an internet connection

So I showed it to Sayan and he suggested building a joke caching mechanism. This would solve both problems: since jokes would be stored locally on the file system, fetching them would take less time, and there would be no dependency on the internet except at the time you are caching the jokes.

So I designed the utility in such a way that you can cache as many jokes as you want: you just run chuck --index=10 and this will cache 10 jokes for you and store them in a database. A random joke is then selected from those jokes and shown to you.

I learnt to use flag in Go and also how to integrate a sqlite3 database into the utility. The best learning was handling files. My logic was that any time you cache you should get a fresh set of jokes, so when you cache I completely delete the database and create a new one for the user. To do this I need to check if the database already exists and, if it does, remove it. I ended up looking for the answer on how to do that in Go; there are a bunch of built-in APIs which help you do that, but they were misleading for me. There are os.Stat, os.IsExist and os.IsNotExist. What I understood was that os.Stat would give me the status of the file, while the other two would tell me whether the file exists or not; to my surprise, things don't work like that. IsExist and IsNotExist are two different error wrappers, and guess what, not IsExist is not the same as IsNotExist. Good luck wrapping your head around it. I eventually ended up answering this on Stack Overflow.
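
A minimal sketch of the kind of check described above, with a hypothetical dbPath location: os.Stat reports on the file, and os.IsNotExist only classifies the error it returns.

package main

import (
	"log"
	"os"
)

// removeIfExists deletes the file at path if it exists, and treats
// "file does not exist" as success rather than an error.
func removeIfExists(path string) error {
	if _, err := os.Stat(path); err != nil {
		if os.IsNotExist(err) {
			return nil // nothing to remove
		}
		return err // stat failed for some other reason
	}
	return os.Remove(path)
}

func main() {
	// dbPath is a hypothetical location for the jokes database.
	dbPath := "chuck.db"
	if err := removeIfExists(dbPath); err != nil {
		log.Fatal(err)
	}
}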

After a few iterations of using it on my own and fixing a few bugs, the utility is ready, except that it is missing test cases, which I will integrate soon. This has helped me learn a lot of Go, and now I have something fun to suggest to people. I am open to contributions and hope you will enjoy this utility as much as I do.

Here is a link to chuck!

Give it a try and till then Happy Hacking and Write in GO! 

Featured Image: https://gopherize.me/

Sniffing Unix Domain Sockets

Posted by Robbie Harwood on April 20, 2018 04:00 AM

A quick post because I can't resist the "sniffing" joke.

If you do a lot of network traffic analysis, you've probably used wireshark, tcpdump, and other tools of that nature. However, they only work well for network traffic; try to use them with Unix domain sockets and you're out of luck, since they don't understand them.

Unless you proxy the traffic. Like so:

First, let the process open whatever socket you care about.

Then, move it out of the way and have socat listen in its stead:

mv DEFAULT.socket hidden.socket
socat UDP-LISTEN:6000,reuseaddr,fork UNIX-CONNECT:hidden.socket

This sets up socat to take anything from UDP port 6000 and apply it to the (now hidden) socket, with it none the wiser.

Then we plug tcpdump in to listen on this port:

tcpdump -ni lo -s0 -f 'udp port 6000' -w /tmp/out.pcap

And set up the proxy entry where the socket was expecting to be:

socat UNIX-LISTEN:DEFAULT.socket,fork UDP-CONNECT:127.0.0.1:6000
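
With both socat commands in place, traffic to the original socket path now flows through UDP port 6000 where tcpdump can see it, and the resulting capture can be read back afterwards, for example with:

tcpdump -r /tmp/out.pcap

or by opening /tmp/out.pcap in wireshark.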

Sidecar container for Python exceptions

Posted by ABRT team on April 19, 2018 12:37 PM

In recent blog posts container-exception-logger (CEL) and catching unhandled Python exceptions we’ve introduced new tools for containers. Adding the tools to your Fedora container allows you to catch Python exceptions in the container and get logs of them to the host.

If, however, your containers do not run an RPM-based Linux distribution, or you don't want to rebuild your image, you can use a sidecar container to get the above-mentioned tools into your container.

Sidecar container

The sidecar container should contain the Python hooks and the CEL executable. You can use this script, which downloads the Python hooks and CEL from GitHub and builds the CEL executable from source.

#!/bin/sh

RAWGITHUB="https://raw.githubusercontent.com/abrt/abrt/master/src/hooks"

# Python2 hooks
mkdir -p abrt-hooks/py2
wget -qN "$RAWGITHUB/abrt_container.pth" -P abrt-hooks/py2
wget -qN "$RAWGITHUB/abrt_exception_handler_container.py" -P abrt-hooks/py2

# Python3 hooks
mkdir -p abrt-hooks/py3
wget -qN "$RAWGITHUB/abrt3_container.pth" -P abrt-hooks/py3
wget -qN "$RAWGITHUB/abrt_exception_handler3_container.py" -P abrt-hooks/py3

# Binary container-exception-logger
mkdir -p abrt-hooks/bin
BASEDIR=$(pwd)
TMPDIR=$(mktemp -d)
pushd $TMPDIR
git clone https://github.com/abrt/container-exception-logger.git abrt-cel ; cd abrt-cel
make
cp src/container-exception-logger $BASEDIR/abrt-hooks/bin
popd

echo "Done"

An example Dockerfile for the sidecar container with the Python hooks and CEL:

FROM alpine:latest

RUN mkdir -p /abrt/bin /tmp/abrt-hooks

# Copy hook files
# abrt-hooks/
# ├── bin
# │  └── container-exception-logger
# ├── py2
# │  ├── abrt_container.pth
# │  └── abrt_exception_handler_container.py
# ├── py3
# │  ├── abrt3_container.pth
# │  └── abrt_exception_handler3_container.py
# └── usercustomize.py
COPY abrt-hooks /tmp/abrt-hooks

# Copy script
# files/
# └── usr
#    └── local
#       └── bin
#          └── abrt-cp.sh
COPY files /

VOLUME ["/abrt"]
RUN chmod -R 777 /abrt /tmp/abrt-hooks
CMD [ "/bin/sh", "/usr/local/bin/abrt-cp.sh" ]

The sidecar container includes a simple script that is run when the container starts. The script just copies files to a shared volume and then goes into a sleep loop.

# abrt-cp.sh
#!/bin/sh
cp -R /tmp/abrt-hooks/* /abrt
while true; do sleep 300; done

Another important part of the sidecar container is the usercustomize module. The module takes care of adding the directories with the Python hooks to sys.path.

# usercustomize.py
import os
import site
from sys import version_info

hook_path = os.path.dirname(os.path.abspath(__file__))
if version_info[0] == 2:
    site.addsitedir(os.path.join(hook_path, "py2"))
else:
    site.addsitedir(os.path.join(hook_path, "py3"))

Setting up the container

First, we get the current values of the PATH and PYTHONPATH variables in our main container. The PYTHONPATH variable is used to import the usercustomize module; PATH is needed for the CEL executable.

$ oc exec <pod> -it -- sh -c 'echo PATH: $PATH; echo PYTHONPATH: $PYTHONPATH'
PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PYTHONPATH:

Once we know the values, we can append the paths of the Python hooks and the CEL executable to them. It is possible to skip this step and edit the variables manually in the deployment config.

$ oc env -c <container> dc/<name> PYTHONPATH=/abrt \
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/abrt/bin
deploymentconfig "<name>" updated

In the deployment config, we add the sidecar container and share the volume between the two containers. We can also edit the PATH and PYTHONPATH variables here.

$ oc edit dc/<name>;
spec:
  containers:
  - name: main_container
    env:
    - name: PYTHONPATH
      value: /abrt
    - name: PATH
      value: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/abrt/bin
    ...
    volumeMount:
    - name: abrt-volume
      path: /abrt
  ...
  - name: sidecar
    ...
    volumeMount:
    - name: abrt-volume
      path: /abrt
  ...
  volumes:
  - emptyDir: {}
    name: abrt-volume
  ...

Once the deployment finishes, we can check whether the path to the hooks was added to site directories.

$ python2 -m site
sys.path = [
    '/',
    '/abrt',
    '/abrt/py2',
    '/usr/lib/python2.7',
    '/usr/lib/python2.7/plat-x86_64-linux-gnu',
    '/usr/lib/python2.7/lib-tk',
    '/usr/lib/python2.7/lib-old',
    '/usr/lib/python2.7/lib-dynload',
    '/usr/local/lib/python2.7/dist-packages',
    '/usr/lib/python2.7/dist-packages',
]
USER_BASE: '/.local' (doesn't exist)
USER_SITE: '/.local/lib/python2.7/site-packages' (doesn't exist)
ENABLE_USER_SITE: True

Now that we see the path was added correctly, we can test whether the hooks work. To do that, we will cause an exception in our main container.

$ python2 -c 0/0
Traceback (most recent call last):
  File "<string>", line 1, in <module>
ZeroDivisionError: integer division or modulo by zero

If we check the journal log on the host we should be able to see the error message.

$ journalctl -xe
Apr 3 14:15:92 work dockerd-current[1175]: container-exception-logger - {
"executable": "python -c ...",
"backtrace": "<string>:1:<module>:ZeroDivisionError: integer division or modulo by zero
Traceback (most recent call last):
File \"<string>\", line 1, in <module>
ZeroDivisionError: integer division or modulo by zero
Local variables in innermost frame:
__builtins__: <module '__builtin__' (built-in)>
__name__: '__main__'
__doc__: None
__package__: None",
"pid": 36,
"reason": "<string>:1:<module>:ZeroDivisionError: integer division or modulo by zero",
"time": "6535897932",
"type": "Python"}

Mastering HTML5

Posted by Alvaro Castillo on April 19, 2018 06:20 AM

What is HTML?

HTML stands for HyperText Markup Language. It is an interpreted language (not a programming language) that is mainly characterized by the use of "tags". These let you generate and organize a series of elements that will be rendered within a web page.

How it works

The code you write will be interpreted by a web browser such as Google Chrome, Mozilla Firefox, ...

[F28] Take part in the test day dedicated to upgrades

Posted by Charles-Antoine Couret on April 19, 2018 06:00 AM

Today, Thursday April 19, is a day dedicated to a specific test: upgrading Fedora. During the development cycle, the QA team dedicates a few days to certain components or new features in order to surface as many problems as possible on the subject.

It also provides a list of specific tests to run. You just have to follow them, compare your result to the expected result, and report it.

What does this test involve?

We are close to the release of the final Fedora 28 edition. For that launch to be a success, we need to make sure the upgrade mechanism works correctly; that is, that your Fedora 26 or 27 becomes a Fedora 28 without reinstallation, keeping your documents, your settings and your programs. A very big update, in short.

Today's tests cover:

  • Upgrading from Fedora 26 or 27, with or without an encrypted system;
  • The same as above, but with KDE as the environment, or any Spin;
  • The same with the Server edition instead of Workstation;
  • Using GNOME Software instead of dnf.

Indeed, for some time now Fedora has offered the possibility of upgrading graphically with GNOME Software or on the command line with dnf. In both cases the download happens while you use your computer normally; once it is ready, the installation takes place during a reboot.

For those who want to enjoy F28 before its official release, take the opportunity to run this test, so that the experience benefits everyone. :-)

How can you take part?

You can go to the test day page to list the available tests and report your results. The wiki page summarizes how the day is organized.

If you need help while running the tests, don't hesitate to drop by IRC to get a hand on the #fedora-test-days and #fedora-fr channels (in English and French respectively) on the Freenode server.

If you find a bug, it needs to be reported on Bugzilla. If you don't know how, feel free to consult the corresponding documentation.

Moreover, even though a specific day is dedicated to these tests, it is perfectly possible to run them a few days later! The results will still be largely relevant.

Fedora meetup at Pune – March 2018

Posted by Fedora Community Blog on April 18, 2018 05:09 PM

It had been a long time since we had a meetup in Pune, Maharashtra, India, so we decided to get started again. Details about this meetup are available on the Fedora wiki page.

Planning for the meetup started a month in advance. Initially Ompragash proposed creating a meetup.com account for Fedora Pune to raise awareness, but we later dropped this plan, since it is not just a Fedora Pune level topic but applicable to all Fedora events.

The event started on time.

  • 15 attendees were present, including college students.
  • Ompragash welcomed all the attendees and we had a short introduction from everyone. He delivered his first talk, “Telling the Fedora Story”, in which he explained how the Fedora Project started.
  • Parag explained different ways to contribute to Fedora in his talk. He kept answering queries in between; it was an engaging session.

Fedora meetup at Pune - March 2018

After Parag’s talk, we had a snacks break. Thanks to Ompragash for arranging the snacks. I found the snacks break especially good for networking.

  • I gave my talk on Fedora 28 changes. Since this was the post-break session, I tried to keep it a two-way discussion rather than one-way.

One of the ideas behind organizing this meetup around basic topics was to get started and to gather more ideas from the attendees themselves about topics for future Fedora meetups. We were also thinking of involving participants in organizing future meetups.

We had a very good discussion at the end of the session about topics for future Fedora meetups. Together we came up with the following list.

  1. FAS account creation (15 min)
  2. IRC communication (30 min)
  3. Patch creation and submission (we can demo and submit to a package maintainer) (30 min)
  4. Earning Fedora badges
  5. How to use Bugzilla?
  6. How to become a design team member?

We plan to cover these topics in future meetups. Overall, we had a good meetup and I am happy it has started again.

The post Fedora meetup at Pune – March 2018 appeared first on Fedora Community Blog.

Fedora 28 Beta – dnf system-upgrade

Posted by Gwyn Ciesla on April 18, 2018 12:38 PM

Yesterday, as is my usual custom, I upgraded my laptop from Fedora 27 to Fedora 28 Beta. I used DNF system upgrade, as I have in the past. The only hiccup was entirely my own fault. I forgot that while I’ve set GNOME to ignore the lid closing to prevent suspension, that setting doesn’t apply during system upgrade. 🙂 Oops.
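
For anyone who hasn't done it before, the usual DNF system upgrade flow looks roughly like this (a sketch based on the documented plugin workflow; adjust --releasever for the target release):

sudo dnf upgrade --refresh
sudo dnf install dnf-plugin-system-upgrade
sudo dnf system-upgrade download --releasever=28
sudo dnf system-upgrade reboot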

I used DNF to remove duplicate rpms, reinstalled the new kernel and libwbclient, corrected GNOME’s right-click behaviour, and all is well.

Cockpit 166

Posted by Cockpit Project on April 18, 2018 10:30 AM

Cockpit is the modern Linux admin interface. We release regularly. Here are the release notes from version 166.

Kubernetes: Add creation of Virtual Machines

The KubeVirt Virtual Machines page now offers a “Create” button for defining a new virtual machine. The JSON definition can be provided with a file upload, with drag & drop, or by copy & pasting. See it in action:

<iframe allowfullscreen="" frameborder="0" height="720" src="https://www.youtube.com/embed/J07dW5VZJtg?rel=0" width="960"></iframe>

Thanks to suomiy for this improvement!

Realms: Automatically set up Kerberos keytab for Cockpit web server

When joining a FreeIPA domain, Cockpit now automatically sets up Kerberos Single Sign-On for its web server. With that, users with a valid Kerberos ticket (from kinit) will be logged into Cockpit without having to go through the login page.

Numbers now get formatted correctly for the selected language

Numbers like disk sizes or network bandwidth usages are now shown with the correct decimal separator for the current browser language.

Try it out

Cockpit 166 is available now.

Rust 💙 GNOME Hackfest: Day 1

Posted by Alberto Ruiz on April 18, 2018 08:36 AM

Hello everyone,

This is a report on the first day of the Rust 💙 GNOME Hackfest that we are having in Madrid at the moment. During the first day we had a round of introductions and started outlining the state of the art.

<figure class="wp-caption aligncenter" data-shortcode="caption" id="attachment_816" style="width: 4160px">IMG_20180418_095328<figcaption class="wp-caption-text">A great view of Madrid’s skyline from OpenShine’s offices</figcaption></figure>

At the moment most of the focus is around the gobject_gen! macro, the one that allows us to inherit GObjects. We already have basic inheritance support, private structure, signals and methods. The main outstanding issues are:

  • Properties support, which danigm is working on
  • Improve compiler errors for problems within the macro, which antonyo and federico worked on
  • Interface implementation and declaration
  • Cover more Rust<->GObject/GLib/C type conversions

I’ve been focusing mostly on improving the support for the private structure of GObject classes. At the moment this is how you do it:

struct MyPrivateStruct {
        val: Cell<u32>,
}

gobject_gen! {
    class Foo {
        type InstancePrivate = MyPrivateStruct;
    }

    pub fn my_method(&self) {
        self.get_priv().val.set(30);
    }
}

Which I find rather cumbersome, I’m aiming at doing something like this:

gobject_gen! {
    class Foo {
        val: Cell<u32>;
    }

    pub fn my_method(&self) {
        self.val.set(30);
    }
}

Which is a bit more intuitive and less verbose. The private struct is generated behind the scenes and the de-referencing is done through the Deref trait. The challenge now is how to allow default values for the private structure. This was done through the Default trait in the current implementation, but I’m not sure this is the best way forward to make it easy to use.

By the way, the hackfest is being kindly hosted by our friends at OpenShine, which besides being free software enthusiasts and supporters, are great at deploying kubernetes, Big Data and analytics solutions.

All systems go

Posted by Fedora Infrastructure Status on April 18, 2018 02:18 AM
Service 'Package Updates Manager' now has status: good: Everything seems to be working.

Major service disruption

Posted by Fedora Infrastructure Status on April 18, 2018 01:56 AM
Service 'Package Updates Manager' now has status: major: bodhi web frontend down

F28 Upgrade Test Day

Posted by alciregi on April 17, 2018 10:02 PM

Thursday, April 19, is the Fedora 28 Upgrade Test Day.

Test days in Fedora are days dedicated to testing some novelty or some particular aspect of the operating system. Nothing hard, and nothing you cannot test on other days. The point is that people involved in development and people willing to help test new and old features (making sure they still work during a development cycle) can join the #fedora-test-day IRC channel and meet other people in order to talk about the topic of the day. These days can be useful for learning something new and for tracking down new bugs, all together.
In order to take part in test days you don’t need to be an expert: just follow the instructions listed on the event wiki page, and report the results.

You can find more info on Upgrade Test Day here: https://communityblog.fedoraproject.org/fedora-28-upgrade-test-day-2018-04-19
And more info on Fedora Test Days in general: https://fedoraproject.org/wiki/QA/Test_Days

Things We did at Fedora Infrastructure Hackathon 2018

Posted by Sinny Kumari on April 17, 2018 07:01 PM


Last week I got an opportunity to be part of the Fedora Infrastructure Hackathon from 9th to 13th April in Fredericksburg, Virginia, USA. It was a very nice experience to work face to face with the folks I usually interact with online. Additionally, as a new team member, it was good to get to know everyone in person.

It was a productive week for all of us; a lot of things were discussed, planned and done. You can find the details on the hackathon wiki page. Being there helped me a lot with improving my understanding of Atomic, our Fedora infrastructure setup, and the projects developed and maintained by the Infra team along with their current state. During the hackathon week, we discussed topics such as Rawhide gating via Bodhi, migrating apps maintained by the Fedora infra team to OpenShift, etc. More details can be read in Kevin’s and Pingou’s posts.

We also discussed and planned where to put multi-arch content for the Atomic Two Week release on the Fedora website. Right now, if you go to the Fedora website for Atomic downloads, there is only x86_64 related information and download links. After discussion, we decided on extending the existing Atomic download page to carry multi-arch content. Atomic has a fast release cadence, and keeping the information for all arches in one place will help us maintain and update the website easily with the latest Two Week release content. Kudos to Ryan, who volunteered to make the UI changes on the website!

I also talked to smooge to get a better understanding of our Fedora infrastructure setup, for example where to look for logs for a particular infra machine, or how to query the packages installed on a builder. This is definitely going to help me with debugging and fixing some of the Atomic related issues.

There were further discussions with Patrick and Randy about having multi-arch images in the Fedora registry, which currently has x86_64 specific images only. We discussed the things that need to be done to enable support for pushing multi-arch images. This will include fixing the prep-docker-brew-branch script to support multi-arch.

Other than that I had lots of conversation and knowledge transfer with Dusty.

  • Dusty shared with me the details of how we do the Atomic Two Week release around a Fedora GA release. This release is done for the latest stable Fedora; for example, the current Fedora Atomic Two Week release is based on F27. Other than that, we also have nightly Atomic composes for pre-release Fedora (F28 at the moment) and Rawhide. Things get tricky when a new Fedora GA release comes into the picture. There are additional steps and scripts involved during this phase to make sure the Atomic Two Week release with the new Fedora GA happens successfully and on time. We will soon have the F28 GA release; it will be interesting to watch all the steps happening and to help when needed.
  • We also discussed having a Fedora Atomic Host release dashboard. Before doing a Fedora Atomic Two Week release, we usually go through various checks to ensure that everything looks good on all supported arches. This is a time-consuming and manual process. Having a dashboard where we collect result statuses from various places (autocloud tests, atomic-host-tests, AMI upload status and so on) for every Two Week release and other pre-release nightly composes will be very useful. It is going to help us always be ready for an Atomic Two Week release.
  • We also looked into how Vagrant images currently get updated after an Atomic Two Week release happens. Right now we do it from the vagrantup web UI. We discussed it, and the plan is to do it through the vagrant-cloud API.

I must say it was a very productive week for me. Being part of this hackathon made things happen quicker than they would have over IRC. Many thanks to Paul Frields, who organized everything during this trip. Also thanks to Red Hat and the Fedora Council for sponsoring this trip. It was very nice meeting you all, and I definitely want to meet you all again!

2018 Fedora Infrastructure Hackathon

Posted by Randy Barlow on April 17, 2018 05:14 PM

Last week I traveled to Fredericksburg, VA, USA with several members of the Fedora community for the 2018 Fedora Infrastructure Hackathon. It was a highly productive week for me and I think for others as well.

On Monday we spent some time talking about all the changes that happened to …

F27-20180415 updated Live isos released

Posted by Ben Williams on April 17, 2018 04:34 PM

The Fedora Respins SIG is pleased to announce the latest release of Updated 27 Live ISOs, carrying the 4.15.16-300 kernel.

This set of updated isos will save about 945 MB of updates after a new install.

Build Directions: https://fedoraproject.org/wiki/Livemedia-creator-_How_to_create_and_use_a_Live_CD

We would also like to thank Fedora QA for running the following tests on our ISOs: https://openqa.fedoraproject.org/tests/overview?distri=fedora&version=27&build=FedoraRespin-27-updates-20180415.0&groupid=1

These can be found at http://tinyurl.com/live-respins. We would also like to thank the following IRC nicks for helping test these isos: alciregi, dowdle, paragan, and Southern_Gentlem.

As always, we are in need of testers to help with our respins. We have a new badge for people who help test. See us in #fedora-respins on Freenode IRC.

Certificate Transparency and HTTPS

Posted by Red Hat Security on April 17, 2018 03:00 PM

Google has announced that on April 30, 2018, Chrome will:

“...require that all TLS server certificates issued after 30 April, 2018 be compliant with the Chromium CT Policy. After this date, when Chrome connects to a site serving a publicly-trusted certificate that is not compliant with the Chromium CT Policy, users will begin seeing a full page interstitial indicating their connection is not CT-compliant. Sub-resources served over https connections that are not CT-compliant will fail to load and will show an error in Chrome DevTools.”

So what exactly does this mean, and why should one care?

What is a CT policy?

CT stands for “Certificate Transparency” and, in simple terms, means that all certificates for websites will need to be registered by the issuing Certificate Authority (CA) in at least two public Certificate Logs.

When a CA issues a certificate, it now must make a public statement in a trusted database (the Certificate Log) that, at a certain date and time, it issued a certificate for some site. The reason is that, for more than a year, many different CAs have issued certificates for sites and names for which they shouldn’t have (like “localhost” or “1.2.3.”), or have issued certificates following fraudulent requests (e.g. people who are not BigBank asking for certificates for bigbank.example.com). By placing all requested certificates into these Certificate Logs, other groups, such as security researchers and companies, can monitor what is being issued and raise red flags as needed (e.g. if you see a certificate issued for your domain which you did not request).

If you do not announce your certificates in these Certificate Logs, the Chrome web browser will generate an error page that the user must click through before going to the page they were trying to load, and if a page contains elements (e.g. from advertising networks) that are served from non CT-compliant domains, they will simply not be loaded.

Why is Google doing this?

Well there are probably several reasons but the main ones are:

  1. As noted, several CAs have been discovered issuing certificates wrongly or fraudulently, putting Internet users at risk. This technical solution will greatly reduce the risk as such wrong or fraudulently issued certificates can be detected quickly.

  2. More importantly, this prepares for a major change coming to the Chrome web browser in July 2018, in which all HTTP websites will be labeled as “INSECURE”, which should significantly drive up the adoption of HTTPS. This adoption will, of course, result in a flood of new certificates which, combined with the oversight provided by Certificate Logs, should help to catch fraudulently or wrongly-obtained certificates.

What should a web server operator do?

The first step is to identify your web properties, both external facing and internal facing. Then it’s simply a matter of determining whether you:

  1. want the certificate for a website to show up in the Certificate Log so that the Chrome web browser does not generate an error (e.g. your public facing web sites will want this), or
  2. absolutely do not want that particular certificate to show up in the Certificate Logs (e.g. a sensitive internal host), and you’re willing to live with Chrome errors.

Depending on how your certificates are issued, and who issued them, you may have some time before this becomes an issue (e.g. if you are using a service that issues short lived certificates you definitely will be affected by this). Also please note that some certificate issuers like Amazon’s AWS Certificate Manager do allow you to choose to opt out of reporting them to the Certificate Logs, a useful feature for certificates being used on systems that are “internal” and you do not want the world to know about.

It should be noted that in the long term, option 2 (not reporting certificates to the Certificate Logs) will become increasingly problematic as it is possible that Google may simply have Chrome block them rather than generate an error. So, with that in mind, now is probably a good time to start determining how your security posture will change when all your HTTPS-based hosts are effectively being enumerated publicly. You will also need to determine what to do with any HTTP web sites, as they will start being labelled as “INSECURE” within the next few months, and you may need to deploy HTTPS for them, again resulting in them potentially showing up in the Certificate Logs.
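
If you want a rough way to check whether a site's current certificate already embeds Signed Certificate Timestamps, you can inspect it with openssl. This is only a sketch; replace example.com with your host, and note that the exact extension name printed ("CT Precertificate SCTs") depends on your openssl version:

echo | openssl s_client -connect example.com:443 2>/dev/null | openssl x509 -noout -text | grep -A3 'CT Precertificate SCTs'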


git rebase --onto - The Simple One-Minute Explanation

Posted by Jeff Sheltren on April 17, 2018 09:24 AM
TL;DR: the command you want is:

git rebase --onto [the new HEAD base] [the old head base - check git log] [the-branch-to-rebase-from-one-base-to-another]

And my main motivation for putting it here is to easily find it again in the future, as I always forget the syntax. (This is a re-post from my old blog on drupalgardens, but it is still helpful.)

Mental model

To make all of this simpler, think of dishes.

You have:

  • Two red dishes on top of two blue dishes
  • One yellow dish

You want:

  • Those two red dishes on top of the one yellow dish

You do:

  • Carefully go with your finger down to the bottom of the two red dishes, which is the first blue dish
  • Take the two red dishes
  • Transfer them over to the one yellow dish

That is what rebase --onto does:

git rebase --onto [yellow dish] [from: first blue dish] [the two red dishes]

Note: the following is meant for an intermediate audience that is familiar with general rebasing in Git.

Longer explanation

It happened! A branch you had based your work on has diverged upstream, but you still have work in progress which you want to preserve. So it looks...
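
As a concrete illustration (with made-up branch names), suppose feature was branched from old-base, and you want those commits replayed onto main instead:

git rebase --onto main old-base feature

After this, the commits that were on feature but not on old-base are reapplied on top of main, and feature points at the result.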

Top Badgers of 2017: Fabio Valentini

Posted by Fedora Community Blog on April 17, 2018 08:30 AM

What is “Top Badgers”?

“Top Badgers” is a special series on the Community Blog. In this series, Luis Roca interviewed the top badge earners of 2017 in the Fedora Project. Not familiar with Fedora Badges? No worries, you can read more about them on the Badges website.

This article features Fabio Valentini (decathorpe), who clocked in at the #3 spot of badges earned in 2017, with 34 badges! As of the writing of this article, Fabio is the #222 all-time badge earner in Fedora.

“You earned a lot of badges this year. For you, what was the most memorable badge and why?”

That would be the “All your $arch are belong to us (Koji Success V)” badge. Partly because I think that the badge title is one of the funniest. On the other hand, getting more than 1000 successful Koji builds required about three successful builds per day until that point in time, and I didn’t even realize that I was that busy building packages.

“Of the badges you earned this year, which one did you think was easiest?”

The easiest badge was probably the “Tadpole” badge, because you get that one for just having an FAS account long enough (three years, in that case). Since I don’t think that should count here, the “Override, you say?” badge was probably the easiest to get.

“Hardest?”

I guess “Like a Rock (Updates-Stable IV)” was the hardest one to achieve, since it involved the most work – preparing an update, testing it locally, building it in Koji, submitting it to Bodhi, gathering feedback, pushing it to stable, etc. takes quite a lot of time.

“What is your advice to either new or recent contributors who want to earn more badges?”

I don’t think gathering badges should be a standalone goal, but for the sake of the argument, there are some low hanging fruit:

  • Building some packages in COPR gets you 2-3 badges.
  • The same goes for building stuff in Koji (both successful and failed builds count, although submitting failing builds just for the sake of getting the badge should be frowned upon).
  • Submitting karma for updates gets you another 4-5 quick badges.
  • Contributing to and voting in the supplemental wallpaper contests is awarded by (at least) 4 badges per year. More if your entry wins. 😉

“Any tips to success?”

Just do the work, and you’ll probably gather more badges than you realize.

“What are you currently working on in the Fedora Project?”

I am the (sole) maintainer of the elementary/Pantheon DE packages in Fedora. I own and co-maintain around 40 golang packages, and I am the maintainer of the syncthing package. I also provide “unofficial official” COPR repositories for stable builds of elementary packages that aren’t quite ready for the official Fedora repos, and for nightly builds of all elementary packages. As part of that I’m working with – and contributing to – the upstream elementary projects.

“What can the community look forward to from you in 2018?”

I hope that I can improve the stability and “smoothness” of the Pantheon DE on Fedora. The stabilization being done upstream for the upcoming “5.0 Juno” release of elementaryOS will probably help me achieve that. I’m also looking forward to update and clean my golang packages with the newly added packaging helpers for golang, which – I hope – will reduce the maintenance burden of those packages (and, as a result, syncthing) quite a bit.

The post Top Badgers of 2017: Fabio Valentini appeared first on Fedora Community Blog.

Running minikube v0.26.0 with CRIO and KVM nesting enabled by default

Posted by Fabian Deutsch on April 17, 2018 08:05 AM

Probably not worth a post, as it’s mentioned in the readme, but CRIO was recently updated in minikube v0.26.0, which now makes it work like a charm.

When updating to 0.26 make sure to update the minikube binary, but also the docker-machine-driver-kvm2 binary.

As in the past, it is possible to switch to CRIO using

$ minikube start --container-runtime=cri-o
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Loading cached images from config file.
$

However, my favorite launch line is:

minikube start --container-runtime=cri-o --network-plugin=cni --bootstrapper=kubeadm --vm-driver=kvm2

This will use CRIO as the container runtime, CNI for networking, and kubeadm for bringing up Kubernetes inside a KVM VM.
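
To double check that the cluster really is running on CRI-O, you can look at the node information; for example, the wide node listing shows the container runtime in use (a quick check, assuming kubectl is already pointed at the minikube cluster):

$ kubectl get nodes -o wide

The CONTAINER-RUNTIME column should report a cri-o:// version rather than docker://.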

Fedora 28 : GoLang first example .

Posted by mythcat on April 17, 2018 06:33 AM
This tutorial is about the GoLang IDE. You can read more about this IDE here - my intro article about it.
I tried to use it with the platform-native GUI library for Go named andlabs/ui.
First, after you install the IDE, you can check the settings in the Settings menu. Using this tool with Fedora 28 is easy.
The first install of andlabs/ui came with this error:
[mythcat@desk ~]$ go get github.com/andlabs/ui
# pkg-config --cflags gtk+-3.0
Package gtk+-3.0 was not found in the pkg-config search path.
Perhaps you should add the directory containing `gtk+-3.0.pc'
to the PKG_CONFIG_PATH environment variable
Package 'gtk+-3.0', required by 'virtual:world', not found
pkg-config: exit status 1
That tells us we need to install some packages from the Fedora repo:
# dnf install gtk3-devel
I used this source code to test andlabs/ui:
package main

import (
    "github.com/andlabs/ui"
)

func main() {
    err := ui.Main(func() {
        input := ui.NewEntry()
        button := ui.NewButton("Greet")
        greeting := ui.NewLabel("")
        box := ui.NewVerticalBox()
        box.Append(ui.NewLabel("Enter your name:"), false)
        box.Append(input, false)
        box.Append(button, false)
        box.Append(greeting, false)
        window := ui.NewWindow("Hello", 200, 100, false)
        window.SetMargined(true)
        window.SetChild(box)
        button.OnClicked(func(*ui.Button) {
            greeting.SetText("Hello, " + input.Text() + "!")
        })
        window.OnClosing(func(*ui.Window) bool {
            ui.Quit()
            return true
        })
        window.Show()
    })
    if err != nil {
        panic(err)
    }
}
The first time it ran well, but after I restarted the IDE I got this error:
# gui_test
/usr/lib/golang/pkg/tool/linux_amd64/link: running gcc failed: exit status 1
gcc: error: /home/mythcat/go/src/github.com/andlabs/ui/libui_linux_amd64.a: No such file or directory

The problem came from the GoLang IDE settings and andlabs/ui.
I removed and reinstalled andlabs/ui, and now it works well.
Running this source code produces a small window with a text entry, a Greet button, and a greeting label.

Fedora 28 Upgrade Test Day 2018-04-19

Posted by Fedora Community Blog on April 16, 2018 08:28 PM

Thursday, 2018-04-19, is the Fedora 28 Upgrade Test Day!
As part of this planned change for Fedora 28, we need your help to test if everything runs smoothly!

Why Upgrade Test Day?

As we approach the Final Release date for Fedora 28, most users will be upgrading to Fedora 28, and this test day will help us understand whether everything is working perfectly. This test day will cover both a GNOME graphical upgrade and an upgrade done using DNF.

We need your help!

All the instructions are on the wiki page, so please read through and come help us test! As always, the event will be in #fedora-test-day on Freenode IRC.

Share this!

Help promote the Test Day and share the article in your own circles! Use any of the buttons below to help spread the word.

The post Fedora 28 Upgrade Test Day 2018-04-19 appeared first on Fedora Community Blog.

The blog moves to v6, on WordPress

Posted by Guillaume Kulakowski on April 16, 2018 08:17 PM

As some of you may have noticed, the blog has just had a makeover: a new look based on a Material Design template, and a migration from Dotclear to WordPress. New look: it really was time, since the previous version of the blog dated from 2012. I first tried to build a design myself, based on Materialize […]

The post "The blog moves to v6, on WordPress" appeared first on Guillaume Kulakowski's blog.

Some Native GTK Dialogs in LibreOffice

Posted by Caolán McNamara on April 16, 2018 03:34 PM
<iframe allowfullscreen="allowfullscreen" class="YOUTUBE-iframe-video" data-thumbnail-src="https://i.ytimg.com/vi/wL3Rt4v11LY/0.jpg" frameborder="0" height="266" src="https://www.youtube.com/embed/wL3Rt4v11LY?feature=player_embedded" width="320"></iframe>
When the GTK3 backend is active in current LibreOffice master (towards 6.1) some of the dialogs are now comprised of fully native GTK dialogs and widgetery. Instead of VCL widgetery themed to look like GTK, they're the real thing.
<iframe allowfullscreen="allowfullscreen" class="YOUTUBE-iframe-video" data-thumbnail-src="https://i.ytimg.com/vi/yOOQ-tOhgjE/0.jpg" frameborder="0" height="266" src="https://www.youtube.com/embed/yOOQ-tOhgjE?feature=player_embedded" width="320"></iframe>

So for these dialogs this means, for example, that the animated effects for radio buttons and checkbuttons work, that the scrolling is overlay scrolling, and that the visual feedback that scrolling has reached its limit, or that available content is outside the scrolling region, is shown. In the above demo, the GtkNotebook is the real thing.
[Video: https://www.youtube.com/watch?v=99v_2rIBL14]
I'm particularly pleased with the special character dialog because it has a custom character grid widget whose accessibility support keeps working even when the grid is reworked as a native GTK widget with custom drawing and interaction callbacks.

GNOME Terminal 3.28.x lands in Fedora

Posted by Debarshi Ray on April 16, 2018 02:18 PM

The following screenshots don’t have the correct colours. Their colour channels got inverted because of this bug.

Brave testers of pre-release Fedora builds might have noticed the absence of updates to GNOME Terminal and VTE during the Fedora 28 development cycle. That’s no longer the case. Kalev submitted gnome-terminal-3.28.1 as part of the larger GNOME 3.28.1 mega-update, and it will make its way into the repositories in time for the Fedora 28 release early next month.

The recent lull in the default Fedora Workstation terminal was not due to the lack of development effort, though. The recent GNOME 3.28 release had a relatively large number of changes in both GNOME Terminal and VTE, and it took some time to update the Fedora-specific patches to work with the new upstream version.

Here are some highlights from the past six months.

Unified preferences dialog

The global and profile preferences were merged into a single preferences dialog. I am very fond of this unified dialog because I have a hard time remembering whether a setting is global or not.

[Screenshot: gnome-terminal-3.28-preferences]

Text settings

The profile-specific settings UI has seen some changes. The bulk of these are in the “Text” tab, which used to be known as “General” in the past.

It’s now possible to adjust the vertical and horizontal spacing between the characters rendered by the terminal for the benefit of those with visual impairments. The blinking of the cursor can be more easily tweaked because the setting is now exposed in the UI. Some people are distracted by a prominently flashing cursor block in the terminal, but still want their thin cursors to flash elsewhere for the sake of discoverability. This should help with that.

[Screenshot: gnome-terminal-3.28-preferences-text]

Last but not least, it's nice to see the profile ID occupy a less prominent position in the UI.

Colours and bold text

There are some subtle improvements to the foreground colour selection for bold text. As a result, the “allow bold text” setting has been deprecated and replaced with “show bold text in bright colors” in the “Colors” tab. Various inconsistencies in the Tango palette were also resolved.

Port to GAction and GMenu

The most significant non-UI change was the port to GAction and GMenuModel. GNOME Terminal no longer uses the deprecated GtkAction and GtkUIManager classes.

Blinking text

VTE now supports blinking text. Try this:

  $ tput blink; echo "blinking text"; tput sgr0


If you don’t like it, then there’s a setting to turn it off.

Overline and undercurl

Similar to underline and strikethrough, VTE now supports overline and undercurl. These can be interesting for spell checkers and software development tools.
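If you want to poke at the new decorations from a shell, a quick sketch like the one below should do it (the escape codes are my assumption, based on the ECMA-48 SGR 53 overline attribute and the 4:3 curly-underline convention VTE follows; other terminals may render them differently):

$ printf '\e[53moverlined text\e[0m\n'
$ printf '\e[4:3mundercurled text\e[0m\n'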

Bodhi 3.6.0 released

Posted by Bodhi on April 16, 2018 01:32 PM

Deprecation

  • bodhi-monitor-composes has been deprecated and will be removed in a future release. Please
    use bodhi composes list instead (#2170).

Dependency changes

  • Pungi 4.1.20 or higher is now required.
  • six is now a required dependency.
  • Skopeo is now a required dependency for Bodhi installations that compose containers.

Features

  • The UI no longer lists a user's updates from retired releases by default (#752).
  • The CLI now supports update severity (#1814).
  • There is now a REST API to find out the status of running or failed composes (#2015).
  • The CLI now has a composes section which is able to query the server to display the status
    of composes (#2016).
  • Bodhi is now able to identify containers in Koji (#2027).
  • Bodhi is now able to compose containers (#2028).
  • There is now a cache_dir setting that can be used to direct Bodhi where to store a
    shelve while generating metadata (9b08f7b).
  • There is now documentation about buildroot overrides (3450073).
  • Bodhi will now include RPM changelogs in e-mails (07b27fe).
  • Bodhi's update e-mails now instruct dnf users to use the --advisory flag
    (9fd56f9).
  • A new wait_for_repo_sig setting will allow Bodhi to work with signed repodata
    (eea4039).

Bugs

  • Bodhi will not reopen VERIFIED or CLOSED bugs anymore
    (#1091, #1349, #2168).
  • Bugzilla tickets will no longer get too much text inserted into their fixedin field
    (#1430).
  • The CLI --close-bugs flag now works correctly (#1818).
  • Fix ACL lookup for Module Packages (2251).
  • Captcha errors are now correctly noted on cookies instead of the session, which was incompatible
    with Cornice 3 (900e80a).
  • The prefer_ssl setting now properly works (9f55c7d).

Development improvements

  • Uniqueness on a release's branch column was dropped, since container releases will likely use the
    same branch name as RPM releases (2216).
  • Bodhi now learns the Pungi output dir directly from Pungi (dbc337e).
  • The composer now uses a semaphore to keep track of how many concurrent composes are running
    (66f995e).
  • CI tests are now also run against Fedora 28 (#2215).
  • Bodhi is now up to 98% line test coverage, from 95% in the 3.5.0 release.
  • It is now possible to run the same tests that CI runs in the Vagrant environment by running
    devel/run_tests.sh.
  • The Bodhi CLI now supports Python 3 with 100% test coverage.
  • The Bodhi server also now supports Python 3, but only has 78% test coverage with Python 3 as many
    tests need to be converted to pass on Python 3, thus it is not yet recommended to run Bodhi server
    on Python 3 even though it is theoretically possible.

Contributors

The following developers contributed patches to Bodhi 3.6.0:

  • Lumir Balhar
  • Patrick Uiterwijk
  • Mattia Verga
  • Clément Verna
  • Pierre-Yves Chibon
  • Jan Kaluza
  • Randy Barlow

Fedora Atomic Workstation: Developer tools

Posted by Matthias Clasen on April 16, 2018 10:28 AM

A while ago, I wrote about using GNOME Builder for GTK+ work on my Fedora Atomic Workstation. I’ve done this with some success since then. I am using the nightly builds of GNOME Builder from the sdk.gnome.org flatpak repository, since I like to try the latest improvements.

As these things go, sometimes I hit a bug. Recently, I ran into a memory leak that caused GNOME Builder to crash and burn. This was happening just as I was trying to take some screenshots for a blog post. So, what to do?

I figured that I can go back to using the commandline, without giving up the flatpak environment that I’m used to now, by using flatpak-builder, which is a commandline tool to build flatpak applications. In my opinion, it should come out-of-the-box with the Atomic Workstation image, just like other container tools. But  that is not the case right now, so I used the convenient workaround of package layering:

$ rpm-ostree install flatpak-builder

flatpak-builder uses a json manifest that describes what and how to build. GTK+ is shipping manifests for the demo apps in its source tree already, for example this one:

https://gitlab.gnome.org/GNOME/gtk/blob/master/build-aux/flatpak/org.gtk.WidgetFactory.json

These manifests are used in the GNOME gitlab instance to build testable flatpaks for merge requests, as can be seen here:

https://gitlab.gnome.org/GNOME/gtk/-/jobs/24276/artifacts/browse

This is pretty amazing as a way to let interested parties (designers, translators, everybody) test suggested changes without having to go through a prolonged and painful build process of ever-changing dependencies (the jhbuild experience). You can read more about it in Carlos‘ and Jordan’s posts.

For me, it means that I can just use one of these manifests as input to flatpak-builder to build GTK+:

$ flatpak-builder build \
   build-aux/flatpak/org.gtk.WidgetFactory.json

This produces a local build in the build/ directory, and I can now run commands in a flatpak sandbox that is populated with the build results like this:

$ flatpak-builder --run build \
   build-aux/flatpak/org.gtk.WidgetFactory.json \
   gtk4-widget-factory

A few caveats are in order when you are using flatpak-builder for development:

flatpak-builder will complain if the build/ directory already exists, so for repeated building, you should add the --force-clean option.
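So for repeated builds the command I end up running looks roughly like this (same manifest as above, just with the extra flag):

$ flatpak-builder --force-clean build \
   build-aux/flatpak/org.gtk.WidgetFactory.json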

The manifest we are using here refers to the main GTK+ git repository, and will create a clean checkout from there, ignoring local changes in your checkout. To work around this, you can replace the https url pointing at the git repository with a file: url pointing at your checkout:

 "url": "file:///home/mclasen/Sources/gtk"

You still have to remember to create a local commit for all the changes you want to go into the build. I have suggested that flatpak-builder should support a different kind of source to make this a little easier.

Once you have the basic setup working, things should be familiar. You can get a shell in the build sandbox by using ‘sh’ as the command:

$ flatpak-builder --run build \
  build-aux/flatpak/org.gtk.WidgetFactory.json \
  sh

flatpak-builder knows to use the sdk as runtime when setting up the sandbox, so tools like gdb are available to you. And the sandbox has access to the display server, so you can run graphical apps without problems.

In the end, I got my screenshots of the font chooser, and this setup should keep me going until GNOME Builder is back on track.

4 cool new projects to try in COPR for April

Posted by Fedora Magazine on April 16, 2018 08:00 AM

COPR is a collection of personal repositories for software that isn’t carried in Fedora. Some software doesn’t conform to standards that allow easy packaging. Or it may not meet other Fedora standards, despite being free and open source. COPR can offer these projects outside the Fedora set of packages. Software in COPR isn’t supported by Fedora infrastructure or signed by the project. However, it can be a neat way to try new or experimental software.

Here’s a set of new and interesting projects in COPR.

Anki

Anki is a program that helps you learn and remember things using spaced repetition. You can create cards and organize them into decks, or download existing decks. A card has a question on one side and an answer on the other. It may also include images, video or audio. How well you answer each card determines how often you see that particular card in the future.

While Anki is already in Fedora, this repo provides a newer version.

Installation instructions

The repo currently provides Anki for Fedora 27, 28, and Rawhide. To install Anki, use these commands:

sudo dnf copr enable thomasfedb/anki
sudo dnf install anki

Fd

Fd is a command-line utility that’s a simple and slightly faster alternative to find. It can execute commands on found items in parallel. Fd also uses colorized terminal output and ignores hidden files and patterns specified in .gitignore by default.
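A few typical invocations give a feel for it (a quick sketch using flags from fd's documented interface, not a full tour):

$ fd pattern              # recursive search for "pattern" below the current directory
$ fd -e md                # list files with the .md extension
$ fd -e py -x wc -l       # run wc -l on every .py file, in parallel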

Installation instructions

The repo currently provides fd for Fedora 26, 27, 28, and Rawhide. To install fd, use these commands:

sudo dnf copr enable keefle/fd
sudo dnf install fd

KeePass

KeePass is a password manager. It holds all passwords in one end-to-end encrypted database locked with a master key or key file. The passwords can be organized into groups and generated by the program’s built-in generator. Among its other features is Auto-Type, which can provide a username and password to selected forms.

While KeePass is already in Fedora, this repo provides the newest version.

Installation instructions

The repo currently provides KeePass for Fedora 26 and 27. To install KeePass, use these commands:

sudo dnf copr enable mavit/keepass
sudo dnf install keepass

jo

Jo is a command-line utility that transforms input to JSON strings or arrays. It features a simple syntax and recognizes booleans, strings and numbers. In addition, jo supports nesting and can nest its own output as well.
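As a small sketch of that syntax, an invocation like the following turns a few key=value arguments into a JSON object, with numbers and booleans recognized automatically (exact output formatting may vary slightly between jo versions):

$ jo name=fedora release=28 stable=true
{"name":"fedora","release":28,"stable":true}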

Installation instructions

The repo currently provides jo for Fedora 26, 27, and Rawhide, and for EPEL 6 and 7. To install jo, use these commands:

sudo dnf copr enable ganto/jo
sudo dnf install jo

[F28] Take part in the Anaconda Test Day

Posted by Charles-Antoine Couret on April 16, 2018 07:30 AM

Today, Monday April 16, is a day dedicated to one specific test: Anaconda. During the development cycle, the quality assurance team dedicates a few days to certain components or new features in order to report as many problems as possible on the subject.

It also provides a list of specific tests to perform. All you have to do is follow them, compare your result with the expected result, and report it.

What does this test consist of?

Anaconda has been, since the beginning of Fedora, the program that installs it on your machine. Over the years, Anaconda has undergone many changes, both in its internal architecture and in its user interface.

For Fedora 28, Anaconda begins a deep technical overhaul to make it more modular. The goal is for Anaconda's components to communicate with each other over DBus, and for modules to be enabled, disabled or even added depending on the Fedora edition (or any customization of it), so that only what is necessary is offered. The addition of modules will happen over several Fedora cycles.

In addition, Fedora 28 reworks the split of responsibilities between Anaconda and gnome-initial-setup, to avoid needlessly asking the user the same question twice.

It is therefore a critical component, and it is necessary to make sure it works by the day of the official release.

The purpose of this test day is, of course, to validate all of these changes and make sure the result is reliable and functional.

How can you take part?

You can go to the test page to list the available tests and report your results. The wiki page summarizes how the day is organized.

If you need help while running the tests, feel free to drop by IRC to get a hand on the #fedora-test-day and #fedora-fr channels (in English and in French, respectively) on the Freenode server.

If you hit a bug, it needs to be reported on Bugzilla. If you do not know how, feel free to consult the corresponding documentation.

Also, even though a single day is dedicated to these tests, it is perfectly possible to run them a few days later! The results will still be largely relevant.

Episode 92 - Chat with Rami Saas the CEO of WhiteSource

Posted by Open Source Security Podcast on April 15, 2018 11:00 PM
Josh and Kurt talk to Rami Saas, the CEO of WhiteSource about 3rd party open source security as well as open source licensing.
[Audio: https://html5-player.libsyn.com/embed/episode/id/6483430/]

Show Notes


Fedora Infrastructure hackfest 2018

Posted by Kevin Fenzi on April 15, 2018 09:33 PM

Last week I had the pleasure of attending the 2018 Infrastructure Hackfest in Fredericksburg, VA. It was a very productive week and very nice to meet up face to face with a lot of folks I work with mostly over IRC and email.

Travel went pretty well for me (direct flights, 4-5 hours each way) and the hotel worked out nicely. I liked that the hotel had a big table (with power!) in the corner of the lobby for us to use in the evenings for more hacking. Our day workspace was a classroom at a nearby grad college. Aside from some firewall issues Monday morning (they were blocking everything but 80/443), it worked pretty well too. Lots of tables we could move around, and whiteboards/projector.

Monday we went over current package maintainer workflows and looked for improvements we could land. Pagure 4.0 (out very soon now) should let us fix the silly ‘get a pagure token for this and put it in a config file’ workflow. We also got bodhi to have a ‘waive’ button to waive test results from the web ui, along with showing what exact tests were missing or failing (so we can get rid of a hacky shell script on the wiki for this purpose). After workflows, we started on package maintainer docs rework. We wanted to move them out of the wiki and clean them up and reorg them. First we tried to identify all wiki pages that are related here and make sure we got all the popular ones, then we brainstormed some personas (new to packaging, upstream maintainer wanting to just package their project, someone who doesn’t have time to follow process and just wants the steps to do an update, someone who wants to know how it all works, etc), then we wrote up a bunch of things these people would want to do and tried to organize new docs for it. It’s not really yet in a state to get wide feedback, but hopefully soon it will be.

Tuesday we talked about reassigning some apps that were maintained by someone who moved to other work, then retired a few apps that have been limping along that we no longer want to spend time on: darkserver (which has actually been broken for the last year or so), and summershum (which collected checksums of source file uploads, and which we never quite figured out a use for). Next up was rawhide gating. We went over the proposal sent to the devel list and got more detailed about what exact things we would need to do for it. It turns out that just by adding some things to the update object we can handle the case of side tags, so the work won't be as big as we thought it would be.

Wednesday was spent setting up an AWX instance. This is the upstream version of Ansible Tower. We hoped that using it would give us some good advantages. Unfortunately we hit a number of issues. If we tried to install with an external db the install failed, so we had to move back to a db in a container next to the rest; then we got pretty far on configuring authentication and hit a bug preventing SAML from working; then finally we hit an issue where we couldn't get our private repo mounted in the right container to allow us access to our secrets. There will be more discussion on this in the coming weeks and we can decide if the effort is worth it or if we should just keep on going the way we had been. As a side note, we looked more at the loopabull setup we have. It has only one demo job in it, but it seems ready to add more to as soon as we would like. Look for us to use that more soon too.

Thursday was more work on rawhide gating and some discussion around openshift. Patrick took the projector and he and Randy got bodhi’s web frontend completely moved over to our openshift (but prod is still not pointing at these new instances until we are ready). We also did some more work on release-monitoring.org (anitya) and got it much further along. There were a number of tweaks and improvements to our openshift playbooks too.

Friday we just met at the hotel as everyone was traveling away, so we just worked on finishing some things up.

A few things that got done also over multiple days or between other issues:

  • jenkins.fedorainfracloud.org is now redirecting to centos-ci. This is the Jenkins instance used by various pagure projects to test their code. We had been working on moving it, but hadn't gotten to it. Now it's all done except for a more permanent redirect.
  • Various internal hardware/budgeting/ordering things were sorted out
  • Tracked down and fixed our old cloud when updates caused it to stop processing copr builders right.
  • Initial work was started on CAIAPI and the small flask based web frontend in front of it (This will replace fas2).
  • Priorities were added to the infrastructure issue tracker to help set expectations on what issues are waiting on what.
  • A new template was added to the infrastructure issue tracker to help us know what issues need to be done by and what they impact.
  • Some thoughts for making the weekly meetings more interesting or more accessible to the developers on the team were brainstormed.
  • We got RHEL 7.5 and openshift 3.9 all set to upgrade to after the upcoming freeze is over.

All in all, a very productive week of hacking on things. Many thanks to Paul Frields (stickster) for organizing everything and the Council/OSAS/Fedora Engineering for sponsoring things.

 

Into The Unknown - My Departure from RedHat

Posted by Mo Morsi on April 15, 2018 08:19 PM

In May 2006, a young starry-eyed intern walked into the large corporate lobby of RedHat's Centennial Campus in Raleigh, NC, beginning what would be a 12-year journey full of ups and downs, breakthroughs and setbacks, and many, many memories. Flash forward to April 2018, when the "intern-turned-hardened-software-engineer" filed his resignation and ended his tenure at RedHat to venture into the risky but exciting world of self-employment / entrepreneurship... In case you were wondering, that former intern / Software Engineer is myself, and after nearly 12 years at RedHat, I finished my last day of employment on Friday, April 13th, 2018.

Overall RedHat has been a great experience. I was able to work on many ground-breaking products and technologies, with many very talented individuals from across the spectrum and globe, in a manner that facilitated maximum professional and personal growth. It wasn't all sunshine and lollipops though; there were many setbacks, including many cancelled projects and dead-ends. That being said, I felt I was always able to speak my mind without fear of repercussion, and always strived to work on those items that mattered the most and had the furthest-reaching impact.

Some (but certainly not all) of those items included:

  • The highly publicized, but now defunct, RHX project
  • The oVirt virtualization management (cloud) platform, where I was on the original development team, and helped build the first prototypes & implementation
  • The RedHat Ruby stack, which was a battle to get off the ground (given the prevalence of the Java and Python ecosystems, continuing to this day). This is one of the items I am most proud of: we addressed the needs of both the RedHat/Fedora and Ruby communities, building the necessary bridge logic to employ Ruby and Rails solutions in many enterprise production environments. This continues to this day as the team continuously stays on top of upstream Ruby developments and provides robust support and solutions for downstream use
  • The RedHat CloudForms projects, on which I worked over several iterations, again including initial prototypes and standards, as well as ManageIQ integration.
  • ReFS reverse engineering and parser. The last major research topic that I explored during my tenure at RedHat, this was a very fun project where I built upon the sparse information about the filesystem internals that's out there, and was able to deduce the logic up to the point of being able to read directory lists and file contents and metadata out of a formatted filesystem. While there is plenty of work to go on this front, I'm confident that the published writeups are an excellent launching point for additional research as well as the development of formal tools to extract data from the filesystem.

My plans for the immediate future are to take a short break, then file to form an LLC and explore options under that umbrella. Of particular interest are crypto-currencies, specifically Ripple. I've recently begun developing an integrated wallet, ledger and market explorer, and statistical analysis framework called Wipple, which I'm planning to continue working on and, if all goes according to plan, generating some revenue from. There is a lot of ??? between here and there, but that's the name of the game!

Until then, I'd like to thank everyone who helped me do my thing at RedHat, from landing my initial internships and the full-time position after that, to showing me the ropes, and not shooting me down when I put myself out there to work on and promote our innovative solutions and technologies!

So long

Vulkan now fully functional on ASUS X550ZE

Posted by Luya Tshimbalanga on April 15, 2018 07:18 PM
South Island (Hainan) and Sea Island (Kaveri) functional with RADV


Running Fedora 28 Design Suite post-beta brought a nice surprise: Vulkan with RADV is fully functional on both the South Island (Hainan) and Sea Island (Kaveri) cards in the ASUS X550ZE laptop. The amdgpu driver is needed to enable the feat, in combination with the boot parameters (amdgpu.cik_support=1 radeon.cik_support=0 amdgpu.si_support=1 radeon.si_support=0).
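To make those parameters stick across reboots, one option is a grubby call along these lines (a sketch; double-check the parameter names against the amdgpu and radeon module documentation for your kernel):

$ sudo grubby --update-kernel=ALL \
    --args="amdgpu.cik_support=1 radeon.cik_support=0 amdgpu.si_support=1 radeon.si_support=0"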

Vulkan smoketest running on RADV
Some minor issues still need to be addressed, like occasional glitches. Otherwise the performance is stable enough for daily use.

Fedora 28 Anaconda Test Day 2018-04-16

Posted by Fedora Community Blog on April 15, 2018 06:39 PM

Monday, 2018-04-16 is the Fedora 28 Anaconda Test Day!

Why Anaconda Test Day?

Anaconda will be split into several modules that will communicate over DBus. The goal is to introduce a stable way to interact with Anaconda to support its customization, extensibility, and testability. It will be easier to monitor the installation, maintain an install class or an addon, drop some modules or provide your own UI.

This is just the first part of the move towards a modular (DBus) solution. The whole Anaconda logic will not be moved to modules at once in F28; instead, small parts will be moved to the DBus modules incrementally.
We would like to test to make sure that all the functionality is performing as it should.

It's also pretty easy to join in: all you'll need is Fedora 28 (which you can grab from the wiki page).

We need your help!

All the instructions are on the wiki page, so please read through and come help us test! As always, the event will be in #fedora-test-day on Freenode IRC.

Share this!

Help promote the Test Day and share the article in your own circles! Use any of the buttons below to help spread the word.

The post Fedora 28 Anaconda Test Day 2018-04-16 appeared first on Fedora Community Blog.

x86 ISA Extensions part II: SSE

Posted by Levente Kurusa on April 15, 2018 03:42 PM

Welcome back to this series exploring the many extensions the x86 architecture has seen over the past decades. In this installment of the series, we will be looking at the successor to MMX: Streaming SIMD Extensions, or SSE for short. Most of these instructions are SIMD (as their name implies), which stands for Single Instruction Multiple Data. In brief, SIMD instructions are similar to the ones we’ve covered in the MMX article: an instruction can possibly work on multiple data groups.

SSE was introduced in 1999 with Intel’s Pentium III soon after Intel saw AMD’s “3DNow!” extension (we will cover this extension in a future installment, but right now I lack access to an AMD machine that I could use 🙂). A question arises naturally: SSE wasn’t the first SIMD set that Intel has introduced to the x86 family of processors, so why did Intel create a new extension set? Unfortunately, MMX had two major problems at the time. First, the registers it “introduced” were aliases of previously existing registers (amusingly, this was touted as an advantage for a while because of the easier context switching), this meant that floating points and MMX operations couldn’t coexist. Second, MMX only worked on integers, it had no support for floating points which was an increasingly important aspect of 3D computer graphics. SSE adds dozens of new instructions that operate on an independent register set and a few integer instructions that continue to operate on the old MMX registers.

(A slight note before we start: In this article “SSE” refers to the very first SSE extension introduced by Intel. In future installments of this series, we will explore SSE2, SSE3, SSSE3, SSE4 and SSE4.1, but here we focus on “SSE1”.)

Do you have SSE?

As with all instruction set extensions, there is a chance that your CPU does not have it. The chances are once again pretty slim with SSE, given its age, but it's always interesting to see how one can be sure of one's CPU's support for SSE.

On Linux:

$ cat /proc/cpuinfo | grep -wq sse && echo "SSE available" || echo "SSE not available"

On OS X/macOS:

$ sysctl machdep.cpu.features | grep -wq SSE && echo "SSE available" || echo "SSE not available"

Alternatively, CPUID offers a way to gather this information on bare-metal or in an OS-agnostic way. SSE is indicated by CPUID leaf 1, EDX bit 25:

.text
.globl _is_sse_available
_is_sse_available:
    pushq   %rbx            # cpuid clobbers %rbx, which is callee-saved

    movq    $1, %rax        # CPUID leaf 1: processor info and feature bits
    cpuid
    movq    %rdx, %rax      # feature flags are returned in EDX
    shrq    $25, %rax       # bit 25 of EDX indicates SSE support
    andq    $1, %rax        # return 0 or 1

    popq    %rbx
    ret

Once you are satisfied that your CPU allows for SSE instructions, it’s time to dive in to the specifics of SSE!

Registers

Since SSE introduces actual, new registers (in contrast with its predecessor), I think it's useful to have a quick glance at them. SSE added eight 128-bit registers named %xmm0, %xmm1, ..., %xmm7. (Amusingly, xmm is the reverse of mmx, the name of the MMX registers; I assume this is meant as a pun, but I couldn't find a source confirming it.) In stark contrast with MMX, SSE does not allow for multiple data types. Each XMM register can hold four 32-bit single-precision floating points, while MMX could hold different widths of integers.

%xmm0, %xmm1, ..., %xmm7:
*- - - - - - - - -*- - - - - - - - -*- - - - - - - - -*- - - - - - - - -*
| 32-bit SP float | 32-bit SP float | 32-bit SP float | 32-bit SP float |
*- - - - - - - - -*- - - - - - - - -*- - - - - - - - -*- - - - - - - - -*
|                            128-bit value                              |
*- - - - - - - - -*- - - - - - - - -*- - - - - - - - -*- - - - - - - - -*

In this figure, each line represents a data type that can be in the XMM register with SSE. I've put the "128-bit value" in the figure, since if you only load data into the register and do not issue any floating-point operation, it can potentially be any unstructured data. However, when using floating points, only the four single-precision floating points are supported as data in the register. Unstructured data can potentially cause exceptions to happen.

To control the state of some operations, an additional control and status register is added, dubbed MXCSR. This register cannot be accessed using the mov family of instructions; rather, SSE adds two new instructions that allow the register to be loaded and stored, LDMXCSR & STMXCSR. The figure shows its layout and then explains its usage within the SSE environment.

The MXCSR register

Bits 0-5 in MXCSR are flags that show that a certain type of floating-point exception occurred; they are also sticky, meaning that the user (or the OS) has to reset them manually after an exception, otherwise they'll stay set forever. Bits 7-12 are masking bits; they can be used to stop the CPU from raising an exception when certain conditions pertaining to the specific exception are met, in which case the processor will return a value instead (qNaN, sNaN, definite integer or one of the source operands; see [1] for more details).

For more information on the specific meanings of the registers, look at [1], Chapter 10.2.3.

Instructions

Now that we have covered the registers introduced in the SSE extension, let's have a look at what new instructions Intel has added and their implications. To utilize SSE to its fullest extent, the very first step is to move data into the new XMM registers. SSE offers a couple of instructions for this, of which the following (movaps & movups) are the most common:

# Create memory locations with four single-precision floats each
.align 16
vector0: .float 3.14, 2.71, 1.23, 4.56
scalar0: .long  1234
vector1: .float 3.62, 6.73, 8.41, 9.55

movaps vector0, %xmm0
movups vector1, %xmm1

movaps stands for MOVe Aligned Packed Single Precision Float, and movups stands for the same, but Unaligned. The distinction between aligned and unaligned access is important, and generally developers should aim for aligned access whenever possible for better overall performance.

Now that we have managed to move data into an XMM register, let’s do something with it. A trivial example and one that we explored previously is some simple vector manipulation:

# assuming vector0 and vector1 from the previous snippet

movaps vector0, %xmm0
movups vector1, %xmm1

addps %xmm0, %xmm1 # ADD Packed Single precision float
subps %xmm0, %xmm1 # undo previous operation
maxps %xmm0, %xmm1 

maxps is a very handy instruction: it compares each of the four single-precision floats in the two XMM operands and writes the larger of each pair into the destination register (the source operand can be either a register like %xmm0 or a 128-bit memory location). This instruction alone can save a large chunk of cycles by avoiding a loop and many cmp and branch instructions.

Another interesting aspect of the SSE extensions is cacheability control. The application programmer can now tell the CPU that some memory is "non-temporal", that is, it won't be needed in the near future, so the cache should not be polluted with it, like so:

movntps %xmm0, vector0

The reverse (i.e., if the programmer knows that a certain memory location will be needed in the near future) can also be signaled to the processor using the PREFETCH family of instructions:

Instruction    Pentium III    Pentium 4/Xeon    Temporal?
prefetcht0     L2 or L1       L2                Temporal
prefetcht1     L2             L2                Temporal
prefetcht2     L2             L2                Temporal
prefetchnta    L1             L2                Non-temporal

Conclusion

The next extension we will be looking at will be SSE2, which builds on the foundations of SSE and MMX to deliver better performance. Starting with the next installment, we will introduce benchmarks, too. In the meantime, have a look at a cache of examples in the GitHub repo for the series! Until next time!

References

1: Intel IA-32 Software Development Manual, Chapter 11.5.2: SIMD Floating-Point Exception Conditions

How to install RabbitVCS on Fedora systems

Posted by Luca Ciavatta on April 15, 2018 10:00 AM

Installing RabbitVCS on Fedora is easy, and it provides a collection of graphical tools offering simple and straightforward access to version control systems from the interfaces you already use. RabbitVCS offers Subversion and Git integration from the Nautilus and Thunar file managers and the Gedit text editor. It is free/libre/open-source software written in Python using the PyGObject library, and it's inspired by TortoiseSVN.

 

RabbitVCS. A collection of graphical tools providing simple access to version control systems

If you know TortoiseSVN, the popular extension for Windows File Explorer that works with Subversion repositories, you will feel at home with RabbitVCS, because RabbitVCS is the closest alternative to TortoiseSVN on Linux. It is a graphical front-end for version control systems available on Linux, and it integrates into file managers to provide file context menu access to version control repositories.

Some time ago it was not simple to install RabbitVCS on Fedora systems, but now things are different. You can install it directly from Fedora's repositories, from the distributed tarball, or directly from the GitHub repository. If you install from Fedora's repositories, you'll get the 64-bit packages. And this is amazing: everything works out of the box with one simple CLI command.

From Fedora repository, simply type the following in the terminal:

$ sudo dnf install rabbitvcs*

From the Tarball, first, make sure you have installed all dependencies:


$ sudo dnf install nautilus-python pysvn python-simplejson python-configobj python-devel dbus-python python-dulwich subversion meld

Then download the tarball from GitHub and from the top folder type:


$ sudo python setup.py install

Once that is done, look in the clients folder and read the README file for each client/plugin to learn how they are installed.

From GitHub repository, install dependencies like above, check the complete instructions for developers here and finally clone the repository:


$ git clone https://github.com/rabbitvcs/rabbitvcs.git

And enjoy your simple, medium or developer installation of RabbitVCS on your Fedora system!

The post How to install RabbitVCS on Fedora systems appeared first on cialu.net.

All systems go

Posted by Fedora Infrastructure Status on April 15, 2018 02:12 AM
New status good: Everything seems to be working. for services: Ask Fedora, Ipsilon, Badges, Blockerbugs, Package Updates Manager, Fedora Infrastructure Cloud, COPR Build System, Darkserver, Documentation website, Fedora elections, Account System, Fedora Messaging Bus, Fedora Calendar, FedoraHosted.org Services, Fedora pastebin service, FreeMedia, The Koji Buildsystem, Koschei Continuous Integration, Kerberos, Mailing Lists, Mirror List, Mirror Manager, Fedora Packages App, Fedora People, Package Database, Package maintainers git repositories, Tagger, Fedora websites, Fedora Wiki, Zodbot IRC bot

Major service disruption

Posted by Fedora Infrastructure Status on April 15, 2018 01:50 AM
New status major: Networking issues to some locations from main datacenter for services: Ask Fedora, Ipsilon, Badges, Blockerbugs, Package Updates Manager, Fedora Infrastructure Cloud, COPR Build System, Darkserver, Documentation website, Fedora elections, Account System, Fedora Messaging Bus, Fedora Calendar, FedoraHosted.org Services, Fedora pastebin service, FreeMedia, The Koji Buildsystem, Koschei Continuous Integration, Kerberos, Mailing Lists, Mirror List, Mirror Manager, Fedora Packages App, Fedora People, Package Database, Package maintainers git repositories, Tagger, Fedora websites, Fedora Wiki, Zodbot IRC bot

FLISoL Panamá 2018

Posted by José A. Reyes H. on April 14, 2018 06:00 PM
Join us! FLISoL Panamá 2018

Fedora 28 : The VS Code on Fedora.

Posted by mythcat on April 14, 2018 01:14 PM
Visual Studio Code is an editor for development and includes the features you need for highly productive source code editing. You can use this editor with many Linux distros. Today I tested it with the Fedora 28 distro.
sudo rpm --import https://packages.microsoft.com/keys/microsoft.asc
sudo sh -c 'echo -e "[code]\nname=Visual Studio Code\nbaseurl=https://packages.microsoft.com/yumrepos/vscode\nenabled=1\ngpgcheck=1\ngpgkey=https://packages.microsoft.com/keys/microsoft.asc" > /etc/yum.repos.d/vscode.repo'
Then use dnf to check and install this editor.
# dnf check-update
# dnf install code
The next step is to install extensions for Python, Golang, PHP, C# and more.
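Extensions can also be installed from the command line; as a sketch, the Python one looks like this (the extension ID is taken from the VS Code Marketplace, and the other languages follow the same pattern):

$ code --install-extension ms-python.python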

Unpack container image using docker-py

Posted by Tomas Tomecek on April 14, 2018 08:50 AM

This is just a quick blog post. I’ve seen a bunch of questions on docker-py’s issue tracker about how to extract a container image using docker-py and get_archive (a.k.a. docker export).

Here’s what I’m doing:

import logging
import os
import subprocess

import docker

logger = logging.getLogger(__name__)

container_image = "fedora:27"
path = "/var/tmp/container-image"
docker_client = docker.APIClient(timeout=600)
container = docker_client.create_container(container_image)
try:
    stream, _ = docker_client.get_archive(container, "/")

    # tarfile is hard to use, so pipe the archive through tar(1) instead
    os.mkdir(path, 0o0700)
    logger.debug("about to untar the image")
    p = subprocess.Popen(
        ["tar", "--no-same-owner", "-C", path, "-x"],
        stdin=subprocess.PIPE,
    )
    for x in stream:
        p.stdin.write(x)
    p.stdin.close()
    p.wait()
    if p.returncode:
        raise RuntimeError("Failed to unpack the archive.")
finally:
    docker_client.remove_container(container, force=True)

Pretty easy, right?

A font update

Posted by Matthias Clasen on April 14, 2018 12:37 AM

At the end of march I spent a few days with the Inkscape team, who were so nice to come to the Red Hat Boston office for their hackfest. We discussed many things, from the GTK3 port of Inkscape, to SVG and CSS, but we also spent some time on one of my favorite topics: fonts.

Font Chooser

One thing Tav showed me which I was immediately envious of is the preview of OpenType features that Inkscape has in its font selector.

Clearly, we want something like that in the GTK+ font chooser as well. So, after coming back from the hackfest, I set out to see if I can get this implemented. This is how far I got so far, it is available in GTK+ master.

This really helps understanding which glyphs are affected by a font feature. I would like to add a preview for ligatures as well, but harfbuzz currently does not offer any API to get at the necessary information (understandably — its main focus is applying font features for shaping) and I’m not prepared to parse those font tables myself in GTK+. So, ligatures will have to wait a bit.

Another thing I would like to explore at some point is possible approaches for letting users apply font features to smaller fragments of text, like a headline or a single word. This could be a ‘font tweak’ dialog or panel. If you have suggestions or ideas for this, I’d love to hear them.

At the request of the Inkscape folks, I’ve also explored a backport of the new font chooser capabilities to the 3.22 branch, but since this involves new API, we’re not sure yet which way to go with this.

Font Browser

While doing font work it is always good to have a supply of featureful fonts, so I end up browsing the Google web fonts quite a bit.

Recently, I stumbled over a nice-looking desktop app for doing so, but alas, it wasn’t available as a package, and it is written in rust, which I know very little about (I’m hoping to change that soon, but that’s a topic for another post).

But I’ve mentioned this app on #flatpak, and just a few days later, it appeared on flathub, and thus also in GNOME software on my system, just a click away. So nice of the flathub team!

The new flathub website is awesome, btw. Go check it out.

The best part is that this nice little app is now available not just on my bleeding-edge Fedora Atomic Workstation, but also on Ubuntu, Gentoo and even RHEL, thanks to flatpak.  Something we could only dream of a few years ago.

 

Flock 2018 will be in Dresden, Germany this August 8-11

Posted by Fedora Magazine on April 13, 2018 02:36 PM

Each year the Fedora Project holds a conference for contributors known as Flock. The Flock 2018 conference will be in Dresden, Germany from 8-11 August. The conference is open to all contributors. Here’s a message from the Flock organizing committee about this year’s conference.

We are very grateful to the bidders for this year's Flock. While in the end we could not accept the bids because of administrative, monetary, or calendar issues, we were definitely inspired by and are appreciative of the very hard work that went into the bids. The bids this year were for a diverse range of cities, and each was interesting and inspired the final choice. You're encouraged to let these folks know how much they are appreciated:

Conference Program

We are continuing to tweak the program concept to help the conference keep pace with the evolution of the Fedora Project. Project Leader Matthew Miller started a discussion about this year's content and schedule on the flock-planning mailing list. We encourage you to subscribe to that list and join the conversation.

The call for submissions for talks and workshops will open once the programming conversation has completed and we have picked some tooling. If you’re interested in helping with tooling, please let us know on the flock-planning mailing list.

Registration

We are working on changing the registration system to one more easily supported by our accounting system. Therefore we will be opening registration as soon as those details are confirmed.

The registration this year will again include a small voluntary fee to offset swag and setup costs per attendee. The fee for USA attendees is $25. This fee has been scaled via the Big Mac Index to other countries and geographic areas. This means the fee in each country should roughly be the same level of spending, rather than the exact equivalent in local currency. That makes it easier for people in each area to register.

Barring technical issues, the registration system will again also allow you to contribute some extra money toward funding travel for those who need it. This means anyone can contribute to make it easier for someone else to attend.

Venue

The event will be located at the Radisson Blu Park Hotel & Conference Centre, Dresden Radebeul. We will be posting details on how to reserve a room on the Flock website (see below). Rooms will be 100 EUR per night for double occupancy and 75 EUR per night for single occupancy.

Ongoing Communication

There is a Freenode IRC channel for real-time communication about Flock. We will bridge this to a Telegram channel for those who choose to use this client for easier mobile communications.

We are working on setting up a Flock-attendees-2018 mailing list. Details will be posted when they are available.

Last, but not least, we will begin working on the updated Flock to Fedora website and will post an announcement when it is ready. The website will list several hints for transportation to the event venue. For now, those of you trying to plan air travel should consider flying into either the Dresden Airport or to a neighboring city with easy train access such as Prague, Czech Republic or Berlin, Germany. The train trip from Prague to Dresden is particularly beautiful and highly recommended.

Fedora Podcast 005 — Fedora Magazine

Posted by Fedora Magazine on April 13, 2018 12:44 PM
Paul Frields

Episode 005 of the Fedora Podcast is now available. Episode 005 features Editor of the Fedora Magazine Paul W. Frields. Paul is a former Fedora Project Leader and former manager of the Fedora Engineering team.

In this episode, Paul talks about the Fedora Magazine, the way it works, the process to write an article, and the scope of the magazine in the Fedora Project.

In addition to listening above, Episode 005 is also available on Soundcloud, iTunes, Stitcher, Google Play, Simplecast, Spotify and x3mboy’s site. Full transcripts of Episode 005 are available here. Transcripts are also available for previous episodes.

Subscribe to the podcast

You can subscribe to the podcast in Simplecast, follow the Fedora Podcast on Soundcloud, on iTunes, Stitcher, Google Play, Spotify or periodically check the author’s site on fedorapeople.org.

Acknowledgments

This podcast is made with the following free software: Jitsi, Audacity, and espeak.

The following audio files are also used: Soft echo sweep by bay_area_bob and The Spirit of Nøkken by johnnyguitar01.

Fedora Classic or is it Traditional?

Posted by Till Maas on April 13, 2018 12:04 PM


In FESCo ticket 1878, Matthew, our project leader, suggested finding a different name for the Everything directory, which used to contain all the pieces used to build all the other parts of the Fedora artifacts, because there are now other artifacts that do things differently, such as Fedora Modularity or Fedora Atomic. There are also other artifacts such as Fedora Copr. Therefore it is nowadays only almost Everything. Regardless of the name we choose for this directory, this also made me realize that there is no name for the Fedora artifacts that used to be Everything, or the packages formerly known as Fedora. What would you call it? Is it Fedora Classic? Fedora Traditional? Does this have a negative connotation to you? Does it maybe mean that there will be no Fedora Classic in five years? I am looking forward to your comments.

[F28] Take part in the Linux kernel 4.16 Test Day

Posted by Charles-Antoine Couret on April 13, 2018 06:00 AM

Today, Friday April 13, is a day dedicated to one specific test: the Linux 4.16 kernel. During the development cycle, the quality assurance team dedicates a few days to certain components or new features in order to report as many problems as possible on the subject.

It also provides a list of specific tests to perform. All you have to do is follow them, compare your result with the expected result, and report it.

What does this test consist of?

The Linux kernel is the heart of the Fedora system (and of other GNU/Linux distributions). It is the component that links software to hardware. It is what allows processes to work together on the same computer and to use the peripherals (through drivers) available on each machine.

It is therefore a critical component, and it is necessary to make sure it works.

Today's tests cover:

  • Running the default automated tests and the performance tests;
  • Checking that the machine boots correctly;
  • Checking that the hardware is properly supported (display, keyboards, mice, printers, scanners, USB, graphics card, sound card, webcam, wired and wireless networking, etc.)

How can you take part?

You can go to the test page to list the available tests and report your results. The wiki page summarizes how the day is organized.

If you need help while running the tests, feel free to drop by IRC to get a hand on the #fedora-test-day and #fedora-fr channels (in English and in French, respectively) on the Freenode server.

If you hit a bug, it needs to be reported on Bugzilla. If you do not know how, feel free to consult the corresponding documentation.

Also, even though a single day is dedicated to these tests, it is perfectly possible to run them a few days later! The results will still be largely relevant.