September 25, 2016

Advanced Multimedia on the Linux Command Line

There was a time when Apple macOS was the best platform for handling multimedia (audio, image, video). This may still be true in the GUI space, but Linux offers a much wider range of possibilities when you move to the command line, especially if you want to:

  • Process hundreds or thousands of files at once
  • Same as above, organized in many folders while keeping the folder structure
  • Same as above but with much finer-grained options, including lossless processing that most GUI tools won’t give you

The Open Source community has produced state-of-the-art command-line tools such as ffmpeg, exiftool and others, which I use every day for non-trivial tasks, along with advanced shell scripting. Sure, you can install these tools on Mac or Windows, and you can use almost all of these recipes on those platforms, but Linux is the native platform for these tools, and it is easier to get the environment ready.

These are my personal notes and I encourage you to understand each step of the recipes and adapt them to your workflows. They are organized in Audio, Video and Image+Photo sections.

I use Fedora Linux and I mention the Fedora package names to be installed. You can easily find the same packages on Ubuntu, Debian, Gentoo etc., and use these same recipes.

<section id="audio">

Audio

</section> <section id="audio.showinfo">

Show information (tags, bitrate etc) about a multimedia file

ffprobe file.mp3
ffprobe file.m4v
ffprobe file.mkv
</section> <section id="audio.flac2alac">

Lossless conversion of all FLAC files into more compatible, but still Open Source, ALAC

ls *flac | while read -r f; do
	ffmpeg -i "$f" -acodec alac -vn "${f/%flac/m4a}" < /dev/null;
done
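The output file name in this loop is derived with plain bash pattern substitution, which replaces a trailing `flac` with `m4a`. A quick pure-bash illustration, using a made-up file name:

```shell
# Bash suffix substitution as used in the recipe above.
f="My Song.flac"
out="${f/%flac/m4a}"   # the %-anchored pattern only matches at the end of the string
echo "$out"            # prints: My Song.m4a
```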
</section> <section id="audio.flac2mp3">

Convert all FLAC files into high-quality VBR MP3 (~190kbps)

ls *flac | while read -r f; do
   ffmpeg -i "$f" -qscale:a 2 -vn "${f/%flac/mp3}" < /dev/null;
done
</section> <section id="audio.flac2mp3hierarchy">

Same as above, but preserving a complex directory structure (shown here with the ALAC conversion)

# Create an identical directory structure under a new "alac" folder
find . -type d | while read -r d; do
   mkdir -p "alac/$d"
done

find . -name "*flac" | sort | while read -r f; do
   ffmpeg -i "$f" -acodec alac -vn "alac/${f/%flac/m4a}" < /dev/null;
done
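The tree-mirroring step can be demonstrated in isolation with temporary scratch directories (the artist/album names below are arbitrary):

```shell
# Mirror a directory tree into a new root, as the recipe does with find + mkdir -p.
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/Artist/Album 1" "$src/Artist/Album 2"

# Recreate every directory of $src underneath $dst.
( cd "$src" && find . -type d ) | while read -r d; do
    mkdir -p "$dst/$d"
done

[ -d "$dst/Artist/Album 2" ] && echo "tree mirrored"
```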
</section> <section id="audio.cue2files">

Convert an APE+CUE, FLAC+CUE or WAV+CUE album-in-one-file into one file per track, in ALAC or MP3

If some of your friends have the horrible habit of committing this crime, ripping entire CDs into single files, there is a way to automate the fix. APE is the most difficult case, so that is what I’ll show; FLAC and WAV are shortcuts of this method.

  1. Make a lossless conversion of the APE file into something more manageable, such as WAV:
    ffmpeg -i audio-cd.ape audio-cd.wav
  2. Now the magic: use the metadata in the CUE file to split the single file into separate tracks, renaming them accordingly. You’ll need the shnsplit command, available in the shntool package on Fedora (to install: dnf install shntool):
    shnsplit -t "%n • %p ♫ %t" audio-cd.wav < audio-cd.cue
  3. Now you have a series of nicely named WAV files, one per CD track. Let’s convert them into lossless ALAC using one of the above recipes:
    ls *wav | while read -r f; do
       ffmpeg -i "$f" -acodec alac -vn "${f/%wav/m4a}" < /dev/null;
    done

    This will get you lossless ALAC files converted from the intermediary WAV files. You can also convert them into FLAC or MP3 using one of the other recipes above.

Now the files are ready for your tagger.
</section>

<section id="video">

Video

</section> <section id="video.srt">

Add chapters and soft subtitles from SRT file to M4V/MP4 movie

This is a lossless and fast process: chapters and subtitles are added as tags and streams to the file; the audio and video streams are not re-encoded.

  1. Make sure your SRT file is UTF-8 encoded:
    bash$ file subtitles_file.srt
    subtitles_file.srt: ISO-8859 text, with CRLF line terminators
    

    It is not UTF-8 encoded; it is some ISO-8859 variant, which I need to identify in order to convert it correctly. My example uses a Brazilian Portuguese subtitle file, which I know is ISO-8859-1 (latin1) encoded, because most Latin-script languages use this encoding.

  2. Let’s convert it to UTF-8:
    bash$ iconv -f latin1 -t utf8 subtitles_file.srt > subtitles_file_utf8.srt
    bash$ file subtitles_file_utf8.srt
    subtitles_file_utf8.srt: UTF-8 Unicode text, with CRLF line terminators
    
  3. Check chapters file:
    bash$ cat chapters.txt
    CHAPTER01=00:00:00.000
    CHAPTER01NAME=Chapter 1
    CHAPTER02=00:04:31.605
    CHAPTER02NAME=Chapter 2
    CHAPTER03=00:12:52.063
    CHAPTER03NAME=Chapter 3
    …
    
  4. Now we are ready to add everything to the movie, along with setting the movie name and embedding a cover image so the movie looks nice in your media player’s content list. Note that this process writes the movie file in place and does not create another file, so make a backup of your movie while you are learning:
    MP4Box -ipod \
           -itags 'track=The Movie Name:cover=cover.jpg' \
           -add 'subtitles_file_utf8.srt:lang=por' \
           -chap 'chapters.txt:lang=eng' \
           movie.mp4
    

The MP4Box command is part of GPAC.
OpenSubtitles.org has a large collection of subtitles in many languages, and you can search its database with the IMDB ID of the movie. ChapterDB has the same for chapter files.
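Steps 1 and 2 above can be folded into one self-contained sketch; the `sub.srt` file here is a generated ISO-8859-1 sample standing in for a real subtitle file:

```shell
# Create a small ISO-8859-1 sample line ("Olá mundo") standing in for a subtitle file.
printf 'Ol\xe1 mundo\r\n' > sub.srt
file -b --mime-encoding sub.srt        # typically reports iso-8859-1

# Convert it to UTF-8, as in step 2 of the recipe.
iconv -f latin1 -t utf8 sub.srt > sub_utf8.srt
file -b --mime-encoding sub_utf8.srt   # should now report utf-8
```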

</section> <section id="video.decrypt">

Decrypt and rip a DVD the lossless way

  1. Make sure you have the RPMFusion and the Negativo17 repos configured
  2. Install libdvdcss and vobcopy
    dnf -y install libdvdcss vobcopy
  3. Mount the DVD and rip it; this has to be done as root:
    mount /dev/sr0 /mnt/dvd;
    cd /target/folder;
    vobcopy -m /mnt/dvd .

You’ll get a directory tree with decrypted VOB and BUP files. You can generate an ISO file from them or, much more practically, use HandBrake to convert the DVD titles into MP4/M4V (more compatible with a wide range of devices) or MKV/WEBM files.

</section> <section id="video.slowmotion">

Convert 240fps video into 30fps slow motion, the lossless way

Modern iPhones can record video at 240 or 120fps so that, when played at 30fps, it looks like slow motion. But regular players will play the files at 240 or 120fps, hiding the slow-motion effect.
We’ll need to handle audio and video in different ways: the video FPS change from 240 to 30 is lossless, while the audio stretching is lossy.

# make sure you have the right packages installed
dnf install mkvtoolnix sox gpac faac
#!/bin/bash

# Script by Avi Alkalay
# Freely distributable

f="$1"
ofps=30
noext=${f%.*}
ext=${f##*.}

# Get original video frame rate
ifps=`ffprobe -v error -select_streams v:0 -show_entries stream=r_frame_rate -of default=noprint_wrappers=1:nokey=1 "$f" < /dev/null  | sed -e 's|/1||'`
echo

# exit if not high frame rate
[[ "$ifps" -ne 120 ]] && [[ "$ifps" -ne 240 ]] && exit

fpsRate=$((ifps/ofps))
fpsRateInv=`awk "BEGIN {print $ofps/$ifps}"`

# lossless video conversion into 30fps through repackaging into MKV
mkvmerge -d 0 -A -S -T \
	--default-duration 0:${ofps}fps \
	"$f" -o "v$noext.mkv"

# lossless repack from MKV to MP4
ffmpeg -loglevel quiet -i "v$noext.mkv" -vcodec copy "v$noext.mp4"
echo

# extract subtitles, if original movie has it
ffmpeg -loglevel quiet -i "$f" "s$noext.srt"
echo

# resync subtitles using similar method with mkvmerge
mkvmerge --sync "0:0,${fpsRate}" "s$noext.srt" -o "s$noext.mkv"

# get simple synced SRT file
rm "s$noext.srt"
ffmpeg -i "s$noext.mkv" "s$noext.srt"

# remove undesired formatting from subtitles
sed -i -e 's|<font size="8"><font face="Helvetica">\(.*\)</font></font>|\1|' "s$noext.srt"

# extract audio to WAV format
ffmpeg -loglevel quiet -i "$f" "$noext.wav"

# make audio longer based on ratio of input and output framerates
sox "$noext.wav" "a$noext.wav" speed $fpsRateInv

# lossy stretched audio conversion back into AAC (M4A) 64kbps (because we know the original audio was mono 64kbps)
faac -q 200 -w -s --artist a "a$noext.wav"

# repack stretched audio and video into original file while removing the original audio and video tracks
cp "$f" "${noext}-slow.${ext}"
MP4Box -ipod -rem 1 -rem 2 -rem 3 -add "v$noext.mp4" -add "a$noext.m4a" -add "s$noext.srt" "${noext}-slow.${ext}"

# remove temporary files 
rm -f "$noext.wav" "a$noext.wav" "v$noext.mkv" "v$noext.mp4" "a$noext.m4a" "s$noext.srt" "s$noext.mkv"
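The two frame-rate computations in the script are worth checking in isolation; this stand-alone snippet reproduces them for a 240fps input:

```shell
# Ratio between input and output frame rates, as computed in the script above.
ifps=240
ofps=30
fpsRate=$((ifps/ofps))                          # integer division: 8, used to resync subtitles
fpsRateInv=$(awk "BEGIN {print $ofps/$ifps}")   # 0.125, used to stretch the audio with sox
echo "$fpsRate $fpsRateInv"                     # prints: 8 0.125
```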
</section> <section id="video.1photo">

1 Photo + 1 Song = 1 Movie

If the audio is already AAC-encoded, create an MP4/M4V file:

ffmpeg -loop 1 -framerate 0.2 -i photo.jpg -i song.m4a -shortest -c:v libx264 -tune stillimage -vf scale=960:-1 -c:a copy movie.m4v

The above method creates a very efficient 0.2 frames-per-second (-framerate 0.2) H.264 video from the photo while simply adding the audio losslessly. Such a very-low-frame-rate video may present subtitle sync problems on some players; in that case, simply remove the -framerate 0.2 parameter to get a regular 25fps video at the cost of a bigger file.
The -vf scale=960:-1 parameter tells FFmpeg to resize the image to 960px width and calculate the proportional height. Remove it if you want a video with the same resolution as the photo. A 12-megapixel photo (around 4032×3024) will get you a near-4K video.
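For the 12-megapixel example, the proportional height that scale=960:-1 computes can be checked by hand:

```shell
# Proportional height for a 4032×3024 photo scaled to 960px wide.
w=4032; h=3024; target_w=960
echo $(( h * target_w / w ))   # prints: 720, so the output video is 960×720
```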
If the audio is MP3, create an MKV file:

ffmpeg -loop 1 -framerate 0.2 -i photo.jpg -i song.mp3 -shortest -c:v libx264 -tune stillimage -vf scale=960:-1 -c:a copy movie.mkv

If the audio is not AAC/M4A but you still want an M4V file, convert the audio to AAC 192kbps:

ffmpeg -loop 1 -framerate 0.2 -i photo.jpg -i song.mp3 -shortest -c:v libx264 -tune stillimage -vf scale=960:-1 -c:a aac -strict experimental -b:a 192k movie.m4v

See more about FFmpeg photo resizing.

</section> <section id="image">

Image and Photo

</section> <section id="image.noexif">

Move images with no EXIF header to another folder

mkdir noexif;
exiftool -filename -T -if '(not $datetimeoriginal or ($datetimeoriginal eq "0000:00:00 00:00:00"))' *jpg | xargs -i mv "{}" noexif/
</section> <section id="image.file2exif">

Set EXIF photo create time based on file create time

Warning: use this only if image files have correct creation time on filesystem and if they don’t have an EXIF header.

exiftool -overwrite_original '-DateTimeOriginal< ${FileModifyDate}' *CR2 *JPG *jpg
</section> <section id="image.rotate">

Rotate photos based on EXIF’s Orientation flag, plus make them progressive. Lossless

jhead -autorot -cmd "jpegtran -progressive '&i' > '&o'" -ft *jpg
</section> <section id="image.rename">

Rename photos to a more meaningful filename

This process renames silly, sequential, confusing and meaningless photo file names, as they come from your camera, into a readable, sortable and useful format. Example:

IMG_1234.JPG → 2015.07.24-17.21.33 • Max playing with water【iPhone 6s✚】.jpg

Note that the new file name has the date and time the photo was taken, what’s in the photo, and the camera model that was used.

  1. First keep the original filename, as it came from the camera, in the OriginalFileName tag:
    exiftool -overwrite_original '-OriginalFileName<${FileName}' *CR2 *JPG *jpg
  2. Now rename:
    exiftool '-filename<${DateTimeOriginal} 【${Model}】%.c.%e' -d %Y.%m.%d-%H.%M.%S *CR2 *JPG *jpg
  3. Remove the ‘0’ index if not necessary:
    \ls *JPG *jpg | while read f; do
        nf=`echo "$f" | sed -e 's/0.JPG/.jpg/i'`;
        t=`echo "$f" | sed -e 's/0.JPG/1.jpg/i'`;
        [[ ! -f "$t" ]] && mv "$f" "$nf";
    done
  4. Optional: make lower case extensions:
    \ls *JPG | while read f; do
        nf=`echo "$f" | sed -e 's/JPG/jpg/'`;
        mv "$f" "$nf";
    done
  5. Optional: simplify camera name, for example turn “Canon PowerShot G1 X” into “Canon G1X” and make lower case extension at the same time:
    ls *JPG *jpg | while read f; do
        nf=`echo "$f" | sed -e 's/Canon PowerShot G1 X/Canon G1X/;
          s/iPhone 6s Plus/iPhone 6s✚/;
          s/Canon PowerShot SD990 IS/Canon SD990 IS/;
          s/JPG/jpg/;'`;
        mv "$f" "$nf";
    done

You’ll get file names like 2015.07.24-17.21.33 【Canon 5D Mark II】.jpg. If you took more than one photo in the same second, exiftool will automatically add an index before the extension.
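The sed logic of step 3 can be tried on a bare string before touching real files; the file name below is just the example from this section, and the case-insensitivity flag is dropped for brevity:

```shell
# Drop the "0" index that exiftool appends when there is only one photo in that second.
f="2015.07.24-17.21.33 【Canon G1X】0.JPG"
nf=$(echo "$f" | sed -e 's/0\.JPG$/.jpg/')
echo "$nf"   # prints: 2015.07.24-17.21.33 【Canon G1X】.jpg
```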

</section> <section id="image.semantic">

Even more semantic photo file names based on Subject tag

\ls *【*】* | while read f; do
	s=`exiftool -T -Subject "$f"`;
	nf=`echo "$f" | sed -e "s/ 【/ • $s 【/; s/\:/∶/g;"`;
	mv "$f" "$nf";
done
</section> <section id="image.fullrename">

Full rename: a consolidation of some of the previous commands

exiftool '-filename<${DateTimeOriginal} • ${Subject} 【${Model}】%.c.%e' -d %Y.%m.%d-%H.%M.%S *CR2 *JPG *jpg
</section> <section id="image.creator">

Set photo “Creator” tag based on camera model

  1. First list all cameras that contributed photos to current directory:
    exiftool -T -Model *jpg | sort -u

    Output is the list of camera models in these photos:

    Canon EOS REBEL T5i
    DSC-H100
    iPhone 4
    iPhone 4S
    iPhone 5
    iPhone 6
    iPhone 6s Plus
  2. Now set creator on photo files based on what you know about camera owners:
    CRE="John Doe";    exiftool -overwrite_original -creator="$CRE" -by-line="$CRE" -Artist="$CRE" -if '$Model=~/DSC-H100/'            *.jpg
    CRE="Jane Black";  exiftool -overwrite_original -creator="$CRE" -by-line="$CRE" -Artist="$CRE" -if '$Model=~/Canon EOS REBEL T5i/' *.jpg
    CRE="Mary Doe";    exiftool -overwrite_original -creator="$CRE" -by-line="$CRE" -Artist="$CRE" -if '$Model=~/iPhone 5/'            *.jpg
    CRE="Peter Black"; exiftool -overwrite_original -creator="$CRE" -by-line="$CRE" -Artist="$CRE" -if '$Model=~/iPhone 4S/'           *.jpg
    CRE="Avi Alkalay"; exiftool -overwrite_original -creator="$CRE" -by-line="$CRE" -Artist="$CRE" -if '$Model=~/iPhone 6s Plus/'      *.jpg
</section> <section id="image.faces">

Recursively search people in photos

If you geometrically mark people’s faces and their names in your photos using tools such as Picasa, you can easily search for the photos that contain “Suzan” or “Marcelo” this way:

exiftool -fast -r -T -Directory -FileName -RegionName -if '$RegionName=~/Suzan|Marcelo/' .

-Directory, -FileName and -RegionName specify the fields you want to see in the output. You can remove -RegionName for a cleaner output.
The -r makes the search recursive. This is pretty powerful.

</section> <section id="image.timezone">

Make photos timezone-aware

Your camera tags your photos only with the local time, in the CreateDate and DateTimeOriginal tags. There is another set of tags, GPSDateStamp and GPSTimeStamp, that must contain the UTC time the photos were taken, but your camera won’t help you here. Fortunately, you can derive these values if you know the timezone where the photos were taken. Here are two examples, one for photos taken in timezone -02:00 (Brazil daylight saving time) and one for timezone +09:00 (Japan):

exiftool -overwrite_original '-gpsdatestamp<${CreateDate}-02:00' '-gpstimestamp<${CreateDate}-02:00' *.jpg
exiftool -overwrite_original '-gpsdatestamp<${CreateDate}+09:00' '-gpstimestamp<${CreateDate}+09:00' Japan_Photos_folder

Use exiftool to check results on a modified photo:

exiftool -s -G -time:all -gps:all 2013.10.12-23.45.36-139.jpg
[EXIF]          CreateDate                      : 2013:10:12 23:45:36
[Composite]     GPSDateTime                     : 2013:10:13 01:45:36Z
[EXIF]          GPSDateStamp                    : 2013:10:13
[EXIF]          GPSTimeStamp                    : 01:45:36

This shows that the local time when the photo was taken was 2013:10:12 23:45:36. Using exiftool to set the timezone to -02:00 actually means finding the correct UTC time, which can be seen in GPSDateTime as 2013:10:13 01:45:36Z. The difference between these two tags gives us the timezone, so we can read the photo time as 2013:10:12 23:45:36-02:00.
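The arithmetic can be double-checked with GNU date, rendering the same local timestamp in UTC:

```shell
# Local time 2013-10-12 23:45:36 at UTC-02:00, rendered in UTC (GNU date).
TZ=UTC date -d '2013-10-12 23:45:36 -0200' '+%Y:%m:%d %H:%M:%S'
```

The result matches the GPSDateStamp/GPSTimeStamp pair shown above.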

</section> <section id="image.movesgeotag">

Geotag photos based on time and Moves mobile app records

Moves is an amazing smartphone app that simply records, for yourself (not social, not shared), everywhere you go and every place you visit, 24 hours a day.

  1. Make sure all photos’ CreateDate or DateTimeOriginal tags are correct and precise; achieve this simply by setting the camera clock correctly before taking the pictures.
  2. Log in and export your Moves history.
  3. Geotag the photos, informing ExifTool of the timezone they were taken in, -08:00 (Las Vegas) in this example:
    exiftool -overwrite_original -api GeoMaxExtSecs=86400 -geotag ../moves_export/gpx/yearly/storyline/storyline_2015.gpx '-geotime<${CreateDate}-08:00' Folder_with_photos_from_trip_to_Las_Vegas

Some important notes:

  • It is important to put the entire ‘-geotime’ parameter inside single quotes ('), as I did in the example.
  • The ‘-geotime’ parameter is needed even if the image files are timezone-aware (as per the previous tutorial).
  • The ‘-api GeoMaxExtSecs=86400’ parameter should not be used unless the photos were taken more than 90 minutes away from any movement detected by the GPS.
</section> <section id="image.grid">

Concatenate all images together in one big image

  • In 1 column and 8 rows:
    montage -mode concatenate -tile 1x8 *jpg COMPOSED.JPG
  • In 8 columns and 1 row:
    montage -mode concatenate -tile 8x1 *jpg COMPOSED.JPG
  • In a 4×2 matrix:
    montage -mode concatenate -tile 4x2 *jpg COMPOSED.JPG

The montage command is part of the ImageMagick package.
</section>

Fedora Ambassadors: Measuring Success

Open Source Advocate

I have been a Linux dabbler since 1994, when I first tried SuSE Linux. I became a full-time Linux user when I converted my laptop to Linux in October of 2006. Like many Linux users I sampled many different distributions while choosing the one that best fit my personality. Eventually I settled on Ubuntu with the release of Ubuntu 7.10 (Gutsy Gibbon). Despite choosing Ubuntu, I always saw myself as a Linux and open source advocate first and an Ubuntu advocate second. I respected and valued that Linux and open source allowed people the freedom to make personal choices.

I helped organize the Ubuntu New York Local Team in their drive to become an approved team starting in November of 2008. In January of 2009 my application to become an Ubuntu Member was approved. Between November of 2008 and October of 2012 I helped organize and attended 93 Ubuntu, Linux or FOSS events. This included the first FOSSCON that was held at RIT in June of 2010.

In addition to local events I was involved in the global Ubuntu community as a member of the Ubuntu Beginners Team, Ubuntu LoCo Council and Ubuntu Community Council. I was also fortunate to be sponsored to attend three Ubuntu Developer Summits (UDS). It was during my time serving on the Ubuntu Community Council that I yearned to have more time to get back to what I felt my core mission was: advocacy. I knew that when my term on the Ubuntu Community Council ended in November of 2015, I could refocus my efforts.

Fedora Ambassador

I became a Fedora Ambassador on March 30th of 2009, but prior to December of 2015 I was more focused on Ubuntu-related activities than Fedora. In late October of 2015 I reached out to a long-time friend and FOSS rock star, Remy DeCausmaker. Remy helped me find a few places I could contribute in the Fedora Project. Through these efforts I met Justin Flory, who has an amazing passion for open source and Fedora. Almost a year later, I am very active as a contributor to the Fedora Project as an author and Ambassador. I have published 23 articles on Fedora Magazine, including 19 How Do You Fedora interviews. Thanks to Justin inviting me along, I also attended BrickHack and HackMIT as a Fedora Ambassador. HackMIT involved two six-hour drives, which allowed for a great amount of time to discuss and reflect on being a Fedora Ambassador. One of the topics in the discussion was how to measure the success of an event.

Measuring Success

Over the many years of being an open source advocate I have learned that the method of measuring success can take many different forms. When organizing events for the New York LoCo we measured success by how many people attended the event. When I went to technical conferences, success was measured by the number of CDs distributed. As a speaker I measured success by the number of people who attended the presentation. With Fedora Magazine I look at the number of views and comments for each article.

On the long ride home from HackMIT 2016, Justin and I discussed how to measure the success of our efforts. The Fedora Project has a badge for attending HackMIT 2016, and ten people have earned the badge. When you remove Justin and me, that means 8 out of 1000 participants earned the Fedora HackMIT 2016 badge. What does this mean? I took a closer look at the badge and learned that six of the eight registered their FAS account during the event. Two already had FAS accounts. The numbers lead to several questions:

  • Will the six people who created an account to earn the badge become Fedora Contributors?
  • Will any of the people who did not earn the badge contribute to Fedora?
  • Is the badge a good measure of a successful outreach event?

The first two are good questions. It is difficult to track the first question and impossible to track the second one. The third question is the one that concerns me the most. I think badges are a good way to measure an inreach event, but a poor measure of an outreach effort. I would like to see a better way to measure the success of an event.

Fedora Ambassadors: Mission Statement

The mission of a Fedora Ambassador is clearly stated on the wiki page.

"Ambassadors are the representatives of Fedora. Ambassadors ensure the public understand Fedora's principles and the work that Fedora is doing. Additionally Ambassadors are responsible for helping to grow the contributor base, and to act as a liaison between other FLOSS projects and the Fedora community."

The Fedora Badge granted to attendees does not measure any of these items. I know that I personally handed out 200 fliers about the badge. In doing so I spoke to roughly 80% of the participants and had several good conversations about the Four Foundations. I showed excitement when people were using FOSS in their projects. I answered questions about the best lightweight web server. I answered questions about why I chose Fedora. I expressed excitement when I found an entire team using Ubuntu Linux. All of those interactions embody the spirit of the mission. On the long drive home I posed a few questions as we discussed HackMIT:

  • Was the overall awareness of Fedora increased?
  • Was the overall awareness of Linux increased?
  • Was the overall awareness of FOSS increased?
  • Are the participants more likely to check Fedora out in the future?
  • Are the participants more likely to open source their work?

Answering these questions would require a survey. The survey would have to be relatively short, and it should not require a FAS account or require the person to identify themselves; this would make it more likely that participants complete it. Beyond evaluating a single event, the results for event categories could be combined and compared: take all the answers for hackathon events and compare them to all the answers for maker faire events. With this data it might be possible to know what types of events provide the best opportunity for Ambassadors to make an impact. This would help the Fedora community determine how to best spend limited funds and volunteer hours.

6 months a task warrior

A while back I added a task to my taskwarrior database: evaluate how things were going at 6 months of use. Today is that day. 🙂

A quick recap: about 6 months ago I switched from an ad-hoc list in a vim session and emails in a mailbox to using Taskwarrior for tracking my various tasks (http://taskwarrior.org/).

Some stats:

Category Data
Pending 17
Waiting 20
Recurring 12
Completed 1094
Deleted 18
Total 1161
Annotations 1059
Unique tags 45
Projects 10
Blocked tasks 0
Blocking tasks 0
Data size 2.0 MiB
Undo transactions 3713
Sync backlog transactions 0
Tasks tagged 39.8%
Oldest task 2016-03-24-13:45
Newest task 2016-09-25-10:03
Task used for 6mo
Task added every 3h
Task completed every 4h
Task deleted every 10d
Average time pending 4d
Average desc length 32 characters

Overall I have gotten a pretty good amount of use out of task. I do find it a bit sad that I have only been completing a task every 4 hours while adding one every 3 hours; at that rate things aren’t going to be great after a while. I have been using annotations a lot, which I think is a good thing: looking back at old tasks I can get a better idea of what I did to solve a task, or more context around it (I always try to add links to Bugzilla or Trac or Pagure if there’s a ticket or bug involved).

I’d say I am happier for using task and will continue using it. It’s very nice to be able to see what all is pending and easily add things when people ask you for things and you are otherwise busy. I’d recommend it to anyone looking for a nice way to track tasks.

Clickable Pungi logs

When debugging problems with composes, the logs left behind by all stages of the compose run are tremendously helpful. However, they are rather difficult to read due to the sheer volume. Being exposed to them quite intensively for close to a year helps, but it still is a nasty chore.

The most accessible way to look at the logs is via a web browser on kojipkgs. It's just httpd displaying the raw log files on the disk.

It took me too long to figure out that this could be made much more pleasant than copy-pasting stuff from the wall of text.

How about a user script that would run in Greasemonkey and allow clicking through to different log files or even Koji tasks?

<figure><figcaption>Is this not better?</figcaption></figure>

Turns out it's not that difficult.

Did you know that when Firefox displays a text/plain file, it internally creates an HTML document with all the content in one <pre> tag?

The whole script essentially just runs a search and replace operation on the whole page. We can have a bunch of functions that take the whole content as text and return it slightly modified.

First step will make URLs clickable.

function link_urls(str) {
  let pat = /https?:\/\/(www\.)?[-a-zA-Z0-9@:%._\+~#=]{2,256}\.[a-z]{2,6}\b([-a-zA-Z0-9@:%_\+.~#?&//=]*)/g;
  return str.replace(pat, '<a href="$&">$&</a>');
}

I didn't write the crazy regular expression myself. I got from Stack Overflow.

Next step can make paths to other files in the same compose clickable.

function link_local_files(url, pathname, mount, str) {
  let pat = new RegExp(mount + pathname + '(/[^ ,"\n]+)', 'g');
  return str.replace(pat, function (path, file) {
    return '<a href="' + url + file + '">' + path + '</a>';
  });
}

The last thing left is not particularly general: linking Koji task identifiers.

function link_tasks(taskinfo, str) {
  return str.replace(/\d{8,}/m, '<a href="' + taskinfo + '$&">$&</a>')
            .replace(/(Runroot task failed|'task_id'): (\d{8,})/g,
                     '$1: <a href="' + taskinfo + '$2">$2</a>');
}

Tying all these steps together and passing in the extra arguments is rather trivial but not very generic.

window.onload = function () {
  let origin = window.location.origin;
  let pathname = window.location.pathname.split('/', 4).join('/');
  let url = origin + pathname;
  let taskinfo = 'https://koji.fedoraproject.org/koji/taskinfo?taskID=';
  let mount = '/mnt/koji';

  var content = document.getElementsByTagName('pre')[0];
  var text = content.innerHTML;
  content.innerHTML = link_local_files(
    url, pathname, mount,
    link_tasks(taskinfo, link_urls(text))
  );
}

If you find this useful, feel free to grab the whole script with a header.

A GNU Start

I am thrilled to say that last week I became the newest member of the Fedora Engineering team. I will be working on the applications that help the Fedora community create a fantastic Linux distribution. I’m excited to be joining the team and I look forward to working with everyone!

Previously, I worked on the Pulp project, which is a content management system used in Red Hat Satellite 6. I learned a great deal while working with some excellent engineers on this project.

September 24, 2016

Bodhi 2.2.2 released

This is another in a series of bug fix releases for Bodhi this week. In this release, we've fixed
the following issues:

  • Disallow comment text to be set to the NULL value in the database #949.
  • Fix autopush on updates that predate the 2.2.0 release #950.
  • Don't wait on mashes when there aren't any 68de510c.

Fedora 25: webkitgtk4 update breaks Evolution, Epiphany and others

As part of the update to GNOME 3.22 for Fedora 25, the webkitgtk4 packages are also updated to version 2.14.0-1, which causes Evolution to no longer display mail and Epiphany to no longer render web pages. Potentially, though, all applications that use webkitgtk4 are affected.

If you have already installed the update and are affected by the problem, you can work around it by downgrading the webkitgtk4 packages with

su -c'dnf downgrade webkitgtk4\*'

However, for future updates you must take care that webkitgtk4 is not upgraded to the broken version again. With dnf this can be achieved with the additional parameter "-x", which tells dnf to ignore updates of a given package. In the current case, the dnf update command would look like this:

su -c'dnf update -x webkitgtk4\*'

September 23, 2016

We’re looking for a GNOME developer

We in the Red Hat desktop team are looking for a junior software developer who will work on GNOME. Particularly in printing and document viewing areas of the project.

The location of the position is Brno, Czech Republic, where you’d join a truly international team of desktop developers. It’s a junior position, so candidates fresh out of university, or even still studying, are welcome. We require solid English communication skills and experience with C (and ideally C++, too). A huge plus is experience with GNOME development and participation in the community.

Interested? You can directly apply for the position at jobs.redhat.com or if you have any question, you can write me: eischmann [] redhat [] com.


Blender nightly in Flatpak

Over the past week I started on an experiment: building the git master branch of Blender in Flatpak.

And I decided to go crazy on it, and also build all its dependencies from their respective git master branch.

I've just pushed the result to my flatpak repo, and it seems to work in my limited testing.

As a result, you can now try out the bleeding edge of Blender development safely with Flatpak, and here's how.

First, install the Freedesktop Flatpak runtime:

$ wget https://sdk.gnome.org/keys/gnome-sdk.gpg
$ flatpak remote-add --user --gpg-import=./gnome-sdk.gpg gnome https://sdk.gnome.org/repo/
$ flatpak install --user gnome org.freedesktop.Platform 1.4

Next, install the Blender app from the master branch of my repo:

$ flatpak remote-add --user --no-gpg-verify bochecha https://www.daitauha.fr/static/flatpak/repo-apps/
$ flatpak install --user bochecha org.blender.app master

That's it!

I want to be clear that I will not build this every day (or every night) as a real "nightly". I just don't have the computing resources to do that, and every build is a big hit on my laptop. (Did I mention this includes building Boost from git master? 😅)

However I'll try to rebuild it from time to time, to pick up updates.

Also, I want to note that this is an experiment in pushing the bleeding edge for Blender to the maximum with Flatpak. If upstream Blender eventually provided nightly builds as Flatpak (for which I'd be happy to help them), they probably would compromise on which dependencies to build from stable releases, and which ones to build from their git master branches.

For example, they probably wouldn't use Python from master like I do. Right now that means this build uses the future 3.7 release of Python, even though 3.6 hasn't been released yet. ☻

Another bad idea in this build is Boost from master, which takes ages just to fetch its myriad of git submodules, let alone build it.

But for an experiment in craziness, it works surprisingly well.

Try it out, and let me know how it goes!

What’s new in 389 Directory Server 1.3.5

As a member of the 389 Directory Server (389DS) core team, I am always excited about our new releases. We have some really great features in 1.3.5. However, our changelogs are always large so I want to just touch on a few of my favourites.

389 Directory Server is an LDAPv3 compliant server, used around the world for Identity Management, Authentication, Authorisation and much more. It is the foundation of the FreeIPA project’s server. As a result, it’s not something we often think about or even get excited for: but every day many of us rely on 389DS to be correct, secure and fast behind the scenes.

389 Directory Server version 1.3.5 is available now in the official Fedora 24, Fedora 25, and rawhide repositories.

Tuning database cache size

Database cache tuning is frequently discussed around 389DS as the way to get the best performance from your server. We have overhauled the automatic database tuning code: it now detects the memory available on the system more accurately, splits it better between backends, and makes better decisions when the requested RAM is too much.

For those who manually tune their backend memory usage, we now have better detection of whether your tuning is going to cause stability issues. We issue better warnings, and tell you exactly which parameters you need to alter to correct problems before they happen. By putting the config values you need to alter in the error message, we save time and confusion by directing you, the administrator, to exactly what you need to do to improve your server's health and stability.

We have also eliminated an entire class of issues with database import and re-indexing by automatically tuning the buffer sizes during the process: No more tweaking database cache sizes to import those large databases!
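For readers unfamiliar with how these knobs are set: manual tuning is done by modifying attributes under cn=config with ldapmodify. A sketch follows; the attribute name and entry location are assumptions based on the classic 389DS configuration schema (not taken from this post), and the size is an arbitrary example:

```ldif
# Hypothetical manual tuning: set the global database cache to 512 MB.
# nsslapd-dbcachesize lives on the ldbm database plugin entry.
dn: cn=config,cn=ldbm database,cn=plugins,cn=config
changetype: modify
replace: nsslapd-dbcachesize
nsslapd-dbcachesize: 536870912
```

With the new warnings described above, the server should tell you if a value like this risks stability problems.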

Auditing for attempted changes

We have added a new feature called the auditfail log. Previously, if a change was made, we would log to the audit log who made the change and what they changed. But if someone attempted a change and it failed, we would not log it.

In 1.3.5 this has changed. You can enable the auditfail log in cn=config:

nsslapd-auditfaillog-enabled: on

When a change is attempted and fails, the reason why (e.g. incorrect objectClass, lack of permission) and the data that the user attempted to change are logged. This is great for debugging applications, but also a great win for security, as we can see if someone is attempting to change data they do not have access to.
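For illustration, the setting can be toggled at runtime with ldapmodify. The attribute name comes from the post above; the rest is a sketch assuming standard cn=config editing:

```ldif
# Enable the auditfail log (nsslapd-auditfaillog-enabled, as above).
dn: cn=config
changetype: modify
replace: nsslapd-auditfaillog-enabled
nsslapd-auditfaillog-enabled: on
```

Saved as a file, this could be applied with something like ldapmodify -H ldap://localhost -D "cn=Directory Manager" -W -f enable-auditfail.ldif.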

Hardening and stability

We have been applying static and dynamic analysis tools to 389DS during this development cycle. Combined with our extensive test suites, we have proactively closed many stability bugs (overflows, use after free, double free, segfaults and more) during development. This has made 1.3.5, in my view, the most reliable and secure version of 389DS we have ever released.

Conclusion

389DS 1.3.5 is out now in Fedora 24: if you are running 389DS or FreeIPA, you are already hopefully seeing the benefits of this release!

There are many more changes than this in the 1.3.5 release: to learn more, see our release notes. Our team’s goal has been to eliminate administrative issues (not document, eliminate – never to be seen again!), improve performance and stability, and to provide better, correct defaults in the server. So many of these changes are “out of sight” to users and even administrators; but they are invaluable for improving services like FreeIPA that build upon 389 Directory Server.

New badge: F25 i18n Test Day Participant !
F25 i18n Test Day Participant: You helped test i18n features in Fedora 25! Thanks!
New badge: F24 i18n Test Day Participant !
F24 i18n Test Day Participant: You helped test i18n features in Fedora 24! Thanks!

September 22, 2016

A bug fix release, primarily focusing on mashing issues

Bodhi 2.2.1 is a bug fix release, primarily focusing on mashing issues:

  • Register date locked during mashing #952.
  • UTF-8 encode the updateinfo before writing it to disk #955.
  • Improved logging during updateinfo generation #956.
  • Removed some unused code 07ff664f.
  • Fix some incorrect imports 9dd5bdbc and b1cc12ad.
  • Rely on self.skip_mash to detect when it is ok to skip a mash ad65362e.
Importing a Public SSH Key

Rex was setting up a server and wanted some help.  His hosting provider had set him up with a username and password for authentication. He wanted me to log in to the machine under his account to help out.  I didn’t want him to have to give me his password.  Rex is a smart guy, but he is not a Linux user.  He is certainly not a system administrator.  The system was CentOS.  The process was far more difficult to walk through remotely than I expected.

I use public key cryptography all the time to log in to remote systems.  The OpenSSH client uses a keypair that is stored on my laptop under $HOME/.ssh.  The private key is in $HOME/.ssh/id_rsa and the public one is in $HOME/.ssh/id_rsa.pub.  In order for the ssh command to use this keypair to authenticate me when I try to log in, the key stored in $HOME/.ssh/id_rsa.pub first needs to be copied to the remote machine’s $HOME/.ssh/authorized_keys file.  If the permissions on this file are wrong, or the permissions on the directory $HOME/.ssh are wrong, ssh will refuse my authentication attempt.

Trying to work this out over chat with someone unfamiliar with the process was frustrating.

This is what the final product looks like.

rex@drmcs [~]# ls -la $HOME/.ssh/
total 12
drwx------ 2 rex rex 4096 Sep 21 13:01 ./
drwx------ 9 rex rex 4096 Sep 21 13:28 ../
-rw------- 1 rex rex  421 Sep 21 13:01 authorized_keys

This should be scriptable.

#!/bin/bash
SSH_DIR="$HOME/.ssh"
AUTHN_FILE="$SSH_DIR/authorized_keys"

SSH_KEY="PASTE PUBLIC KEY HERE, ALL ON ONE LINE, THEN REMOVE THE NEXT LINE"
exit 0

mkdir -p "$SSH_DIR"
chmod 700 "$SSH_DIR"
touch "$AUTHN_FILE"
chmod 600 "$AUTHN_FILE"
echo "$SSH_KEY" >> "$AUTHN_FILE"
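To sanity-check the result, the same layout can be recreated and verified with stat. This is a demonstration only; it uses a throwaway mktemp directory rather than the real $HOME:

```shell
#!/bin/bash
# Demonstration: recreate the required ~/.ssh layout in a throwaway
# directory and verify the permissions with stat.
demo=$(mktemp -d)

mkdir -p "$demo/.ssh"
chmod 700 "$demo/.ssh"                   # directory: owner-only rwx
touch "$demo/.ssh/authorized_keys"
chmod 600 "$demo/.ssh/authorized_keys"   # file: owner-only rw

stat -c '%a %n' "$demo/.ssh" "$demo/.ssh/authorized_keys"
```

If either mode differs from 700/600, sshd (with its default strict checks) will refuse the key.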

However, it occurred to me that he really should not even be adding me to his account; instead, he should be creating a separate account for me and only giving me access to that, which would let me look around but not touch. Second attempt:

#!/bin/bash

NEW_USER="NEW USERNAME"
SSH_KEY="PASTE PUBLIC KEY HERE, ALL ON ONE LINE, THEN REMOVE THE NEXT LINE"
exit 0

/usr/sbin/useradd "$NEW_USER"
SSH_DIR="/home/$NEW_USER/.ssh"
AUTHN_FILE="$SSH_DIR/authorized_keys"

mkdir -p "$SSH_DIR"
chmod 700 "$SSH_DIR"
touch "$AUTHN_FILE"
chmod 600 "$AUTHN_FILE"
echo "$SSH_KEY" >> "$AUTHN_FILE"

chown -R "$NEW_USER:$NEW_USER" "$SSH_DIR"

To clean up the account when I am done, Rex can run:

sudo /usr/sbin/userdel -r admiyo

which will not only remove my account, but also the directory /home/admiyo.
If I have left a login session open, he will see:

userdel: user admiyo is currently used by process 3561
#photos happenings

Out of a mojito bar in South Beach with a lobotomised plastic picnic spoon and a crew of control freaks.

3.22 is here

We recently released GNOME 3.22. It will be in Fedora Workstation 25. Go look at the video — it’s awesome!

GNOME Photos has again taken significant strides forward – just like we did six months ago in 3.20. One of the big things that we added this time was sharing. This nicely rounds out our existing online accounts integration, and complements the work we did on editing six months ago.

gnome-photos-sharing

Sharing is an important step towards a more tightly integrated online account experience in GNOME. We have been interested in a desktop-wide sharing service for some time. With Flatpak portals becoming a reality, I hope that the sharing feature in Photos can be spun off into a portal for GNOME.

Thanks to Umang Jain, our GSoC intern this summer for working on sharing.

We overhauled a lot of hairy architectural issues, which will let us have nicer overview grids in the near future. Alessandro created a Flatpak. This means that going forward, you can easily try out the nightly builds of Photos thanks to the Flatpak support in GNOME Software 3.22.

gnome-photos-flatpak2

Thanks to Kalev Lember for the wonderful screenshot.

The future

I think that we are reaching a point where we can recommend Photos to a wider group of users. With editing and sharing in place, we have filled some of the bigger gaps in the user experience that we want to offer. Yes, there are some missing features and rough edges that we are aware of, so we are going to spend the next six months addressing the ones that are most important. You can look at our roadmap for the full picture, but I am going to highlight a few.

Better overview grids (GNOME #690623)

We have been using GtkIconView to display the grid of thumbnails that we call the overview. GtkIconView has been around for a long while, but it has some issues – both visual and performance-related. Therefore, we want to replace it with GtkFlowBox so that (a) the application remains responsive while we are populating the grid, and (b) we can have really pretty visuals.

Eventually, we want this:

photos-photos

Import from device (GNOME #751212)

This is one of the biggest missing features, in my opinion. We really need a way to import content from removable devices and cameras that doesn’t involve mucking around with files and directories.

Petr Stetka has already started working on this, but I am sure he will appreciate any help with this.

More sharing (GNOME #766031)

Last but not least, I definitely like showing off on Facebook and so do you! So I want to add a Facebook share-point and possibly a few more.

Come, join us

If any of this interests you, then feel free to jump right in. We have a curated list of newcomer bugs and a guide for those who are relatively new. If you are an experienced campaigner, you can look at the roadmap for more significant tasks.

For any help, discussions or general chitchat, #photos on GIMPNet is the place to be.


Accessing your Fedora from Windows with RDP
To be able to connect, we first need to install xrdp on Fedora:

# dnf install xrdp -y

Start the service:

# systemctl start xrdp

Enable it at boot:

# systemctl enable xrdp

Adjust the firewall rules:

# firewall-cmd --add-port=3389/tcp --permanent

# firewall-cmd --reload

Once that is done, connect from Windows with an RDP client.
In the Display settings, change the color depth to "True Color (24 bit)",
as shown in the image below.

With that done, we can now connect.

Reference guides for this tip:
https://www.server-world.info/en/note?os=Fedora_24&p=desktop&f=7
https://www.vivaolinux.com.br/dica/Acesso-remoto-ao-Raspbian-com-xrdp/
Logging to Elasticsearch made simple with syslog-ng

Elasticsearch is gaining momentum as the ultimate destination for log messages. There are two major reasons for this:

  • You can store arbitrary name-value pairs coming from structured logging or message parsing.
  • You can use Kibana as a search and visualization interface.

Logging to Elasticsearch the traditional way

Originally, you could only send logs to Elasticsearch via Logstash. The problem with Logstash is that it is quite heavy-weight: it requires Java to run, and most of it is written in Ruby. While the use of Ruby makes it easy to extend Logstash with new features, it uses too many resources to be deployed universally. It is not something to be installed on thousands of servers, virtual machines or containers.

The workaround for this problem is to use the various Beats data shippers, which are lighter on resources. If you also need reliability and scalability, you need buffering as well. For this purpose, you need an intermediate database or message broker: Beats and Logstash support Redis and Apache Kafka.

If you look at the above architecture, you’ll see that you need to learn many different pieces of software to build an efficient, reliable and scalable logging system around Elasticsearch. Each of them has a different purpose, different requirements and a different configuration.

Logging to Elasticsearch made simple

The good news is that syslog-ng can fulfill all of these roles. Most of syslog-ng is written in efficient C code, so it can be installed even in containers without extra resource overhead. It uses PatternDB for message parsing, which uses an efficient Radix-tree based algorithm instead of resource-hungry regular expressions. Of course regexp and a number of other parsers are also available, implemented in efficient C or Rust code. The only part of the pipeline where Java is needed is when the central syslog-ng server sends the log messages to the Elasticsearch server. In other words, only the Elasticsearch destination driver of syslog-ng uses Java, and it uses the official JAR client libraries from Elasticsearch for maximum compatibility.

As syslog-ng has disk-based buffering, you do not need external buffering solutions to enhance scalability and reliability, making your logging infrastructure easier to create and maintain. Disk-based buffering has been available in syslog-ng Premium Edition (the commercial version of syslog-ng) for a long time, and recently also became part of syslog-ng Open Source Edition (OSE) 3.8.1.

How to get started with syslog-ng and Elasticsearch

The syslog-ng application comes with detailed documentation to get you started and help you fine tune your installation.
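As a sketch of what such a setup might look like, here is a hypothetical destination sending logs to a local Elasticsearch node over HTTP. The driver and option names are assumptions based on the elasticsearch2 destination of syslog-ng OSE 3.8; verify them against the documentation before use, and s_sys stands in for a system source defined elsewhere in your configuration:

```
# Hypothetical syslog-ng destination: ship logs to Elasticsearch
# over HTTP using the elasticsearch2 driver (syslog-ng OSE >= 3.8).
destination d_elastic {
  elasticsearch2(
    client-mode("http")
    cluster("syslog-ng")
    index("syslog-${YEAR}.${MONTH}.${DAY}")
    type("messages")
    server("localhost")
    port("9200")
  );
};

log {
  source(s_sys);           # assumed system source, defined elsewhere
  destination(d_elastic);
};
```

The daily index name keeps Kibana queries fast and makes it easy to expire old data.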

If you want to get started with parsing messages – replacing grok – see the following links:

Are you stuck?

If you have any questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by e-mail or even in real time via chat. For a long list of possibilities, check our contact page at https://syslog-ng.org/contact-us/. On Twitter I’m available as @PCzanik.

A new Amazon seller scam

Amazon, so convenient, yet so annoying when things go wrong. This useless seller looks like a new type of scam to me. It seems to go like this:

  1. New seller appears, offering just about everything in Amazon’s catalog.
  2. You don’t notice this when buying, but the shipping window is open-ended (from a few days up to months). However you are optimistic, after all most Amazon orders arrive pretty quickly.
  3. Seller very quickly notifies you that the item has shipped. Great!
  4. Nothing arrives after a few weeks.
  5. You check the feedback, and now it looks terrible.
  6. You notice that the “tracking number” is completely bogus. Just a made-up number and a random shipping company (the seller is apparently based in Shenzhen, but somehow the bogus tracking number comes from Singapore Post?)
  7. You try to cancel the order. However Amazon won’t let you do that, because the item has been dispatched and it’s still in the shipping window (which, remember, doesn’t end for another couple of months).
  8. You contact the seller. Amazon forces sellers to respond within 3 days. This seller does respond! … to every message with the same nonsense autoresponse.
  9. As a result you can’t cancel the order either.
  10. There is no other way to escalate the problem or cancel the order (even though this clearly violates UK law).
  11. Seller now has your money, you have no product, and no way to cancel for another few months.
  12. Profit!

Comments about OARS and CSM age ratings

I’ve had quite a few comments from people stating that using age rating classification values based on American culture is wrong. So far I’ve been using the Common Sense Media research (and various other psychology textbooks) to essentially clean-room implement a content-rating to appropriate age algorithm.

Whilst I do agree that other cultures have different sensitivities (e.g. smoking in Uganda, references to Nazis in Germany) there doesn’t appear to be much research on the suggested age ratings for different categories for those specific countries. Lots of things are outright banned for sale for various reasons (which the populace may completely ignore), but there don’t seem to be many statistics that back up the various anecdotal statements. For instance, are there any US-specific guidelines that say that the age rating for playing a game that involves taking illegal drugs should be 18, rather than the 14 which is inferred from CSM? Or the age rating should be 25+ for any game that features drinking alcohol in Saudi Arabia?

Suggestions (especially references) welcome. Thanks!


September 21, 2016

All systems go
New status good: Everything seems to be working. for services: Fedora Wiki, Fedora People, Zodbot IRC bot, The Koji Buildsystem, Darkserver, Tagger, Package Database, Fedora pastebin service, Blockerbugs, Badges, FedoraHosted.org Services, Mirror Manager, Koschei Continuous Integration, Ipsilon, Mirror List, Fedora Infrastructure Cloud, Account System, Package maintainers git repositories, Fedora websites, Documentation website, COPR Build System, Package Updates Manager, Ask Fedora, Fedora Packages App, FreeMedia, Fedora Messaging Bus, Fedora elections, Mailing Lists, Fedora Calendar
GNOME 3.22 Released

The GNOME Community has just announced the official release of GNOME 3.22.  GNOME 3.22 — which is slated to be used as the desktop environment for Fedora Workstation 25 — provides a multitude of new features, including the updated Files application and comprehensive Flatpak integration with the Software application.

Fedora users that want to try out the new features in GNOME 3.22 can install a pre-release version of Fedora 25, which currently contains a pre-release of GNOME 3.22, but will be updated to include the stable 3.22 release. Alternatively, if you are running Fedora 24, and want to try out individual applications from the GNOME 3.22 release, these can be installed via Flatpak.

Files Application (nautilus)

One of the major applications in the GNOME family that got updates for the 3.22 release was the Files application (nautilus). As previously reported here on the Fedora Magazine, Files has a nifty new batch file renaming ability now baked in.

Batch File renaming in GNOME 3.22


Another neat new feature in Files is updated sorting and view options controls, allowing you to switch between the grid and list view with a single click, and simplification of the zoom and sorting options. These changes were implemented after a round of usability testing by Outreachy intern Gina Dobrescu.

Updated Sorting controls in the Files application


Software Application

software

GNOME Software 3.22

The Software application in 3.22 is also updated, with the landing page showing more application tiles. Star ratings, introduced in a previous release, are now more prominently displayed, and new colour-coded badges indicate if an application is Free Software. Installation of Flatpak applications from Flatpak repositories is now also supported in the Software application.

Keyboard Settings

The keyboard settings in 3.22 are also updated, providing easier ways to search, browse and configure your keyboard settings and shortcuts.

Keyboard Settings in GNOME 3.22


More Information

For more information on what makes up the 3.22 release, check out the official release announcement, and the release notes.

 

There are scheduled downtimes in progress
New status scheduled: planned outage for services: Fedora Wiki, Fedora People, Zodbot IRC bot, The Koji Buildsystem, Darkserver, Tagger, Package Database, Fedora pastebin service, Blockerbugs, Badges, FedoraHosted.org Services, Mirror Manager, Koschei Continuous Integration, Ipsilon, Mirror List, Fedora Infrastructure Cloud, Account System, Package maintainers git repositories, Fedora websites, Documentation website, COPR Build System, Package Updates Manager, Ask Fedora, Fedora Packages App, FreeMedia, Fedora Messaging Bus, Fedora elections, Mailing Lists, Fedora Calendar
Linux application Flowblade version 1.8
This video editor is a multitrack non-linear video editor for Linux, released under the GPL 3 license.
I also tried it on the Fedora Linux distro (Fedora 25 alpha), but it did not work for me.
I could not find anything like python-gi-cairo in Fedora.
I also filed this issue on the GitHub project; maybe it will be fixed.

According to the official webpage, the software comes with:

Features

Editing:

    3 move tools
    3 trim tools
    4 methods to insert / overwrite / append clips on the timeline
    Drag'n'Drop clips on the timeline
    Clip and compositor parenting with other clips
    Max. 9 combined video and audio tracks available

Image compositing:

    6 compositors. Mix, zoom, move and rotate source video with keyframed animation tools
    19 blends. Standard image blend modes like Add, Hardlight and Overlay are available
    40+ pattern wipes.

Image and audio filtering:

    50+ image filters: color correction, image effects, distorts, alpha manipulation, blur, edge detection, motion effects, freeze frame, etc.
    30+ audio filters: keyframed volume mixing, echo, reverb, distort, etc.

Supported editable media types:

    Most common video and audio formats, depends on installed MLT/FFMPEG codecs
    JPEG, PNG, TGA, TIFF graphics file types
    SVG vector graphics
    Numbered frame sequences

Output encoding:

    Most common video and audio formats, depends on installed MLT/FFMPEG codecs
    Users can define rendering by setting FFmpeg args individually
Microsoft aren't forcing Lenovo to block free operating systems
There's a story going round that Lenovo have signed an agreement with Microsoft that prevents installing free operating systems. This is sensationalist, untrue and distracts from a genuine problem.

The background is straightforward. Intel platforms allow the storage to be configured in two different ways - "standard" (normal AHCI on SATA systems, normal NVMe on NVMe systems) or "RAID". "RAID" mode is typically just changing the PCI IDs so that the normal drivers won't bind, ensuring that drivers that support the software RAID mode are used. Intel have not submitted any patches to Linux to support the "RAID" mode.

In this specific case, Lenovo's firmware defaults to "RAID" mode and doesn't allow you to change that. Since Linux has no support for the hardware when configured this way, you can't install Linux (distribution installers will boot, but won't find any storage device to install the OS to).

Why would Lenovo do this? I don't know for sure, but it's potentially related to something I've written about before - recent Intel hardware needs special setup for good power management. The storage driver that Microsoft ship doesn't do that setup. The Intel-provided driver does. "RAID" mode prevents the Microsoft driver from binding and forces the user to use the Intel driver, which means they get the correct power management configuration, battery life is better and the machine doesn't melt.

(Why not offer the option to disable it? A user who does would end up with a machine that doesn't boot, and if they managed to figure that out they'd have worse power management. That increases support costs. For a consumer device, why would you want to? The number of people buying these laptops to run anything other than Windows is miniscule)

Things are somewhat obfuscated due to a statement from a Lenovo rep: "This system has a Signature Edition of Windows 10 Home installed. It is locked per our agreement with Microsoft." It's unclear what this is meant to mean. Microsoft could be insisting that Signature Edition systems ship in "RAID" mode in order to ensure that users get a good power management experience. Or it could be a misunderstanding regarding UEFI Secure Boot - Microsoft do require that Secure Boot be enabled on all Windows 10 systems, but (a) the user must be able to manage the key database and (b) there are several free operating systems that support UEFI Secure Boot and have appropriate signatures. Neither interpretation indicates that there's a deliberate attempt to prevent users from installing their choice of operating system.

The real problem here is that Intel do very little to ensure that free operating systems work well on their consumer hardware - we still have no information from Intel on how to configure systems to ensure good power management, we have no support for storage devices in "RAID" mode and we have no indication that this is going to get better in future. If Intel had provided that support, this issue would never have occurred. Rather than be angry at Lenovo, let's put pressure on Intel to provide support for their hardware.

Fedora / RISC-V stage4 autobuilder is up and running

Bootstrapping Fedora on the new RISC-V architecture continues apace.

I have now written a small autobuilder which picks up new builds from the Fedora Koji build system and attempts to build them in the clean “stage4” environment.

Getting latest packages from Koji ...
Running: 0 (max: 16) Waiting to start: 7
uboot-tools-2016.09.01-1.fc25.src.rpm                       |  11 MB  00:10     
uboot-tools-2016.09.01-1.fc25 build starting
tuned-2.7.1-2.fc25.src.rpm                                  | 136 kB  00:00     
tuned-2.7.1-2.fc25 build starting
rubygem-jgrep-1.4.1-1.fc25.src.rpm                          |  24 kB  00:00     
rubygem-jgrep-1.4.1-1.fc25 build starting
qpid-dispatch-0.6.1-3.fc25.src.rpm                          | 1.3 MB  00:01     
qpid-dispatch-0.6.1-3.fc25 build starting
python-qpid-1.35.0-1.fc25.src.rpm                           | 235 kB  00:01     
python-qpid-1.35.0-1.fc25 build starting
java-1.8.0-openjdk-aarch32-1.8.0.102-4.160812.fc25.src.rpm  |  53 MB  00:54     
java-1.8.0-openjdk-aarch32-1.8.0.102-4.160812.fc25 build starting
NetworkManager-strongswan-1.4.0-1.fc25.src.rpm              | 290 kB  00:00     
NetworkManager-strongswan-1.4.0-1.fc25 build starting
MISSING DEPS: NetworkManager-strongswan-1.4.0-1.fc25 (see
logs/NetworkManager-strongswan/1.4.0-1.fc25/root.log)
   ... etc ...

Given that we don’t have GCC in the stage4 environment yet, almost all of them currently fail due to missing dependencies, but we’re hoping to correct that soon. In the meantime a few packages that have no C dependencies can actually compile. This way we’ll gradually build up the number of packages for Fedora/RISC-V, and that process will accelerate rapidly once we’ve got GCC.

You can browse the built packages and build logs here: https://fedorapeople.org/groups/risc-v/


A new cmocka release version 1.1.0

It took more than a year but finally Jakub and I released a new version of cmocka today. If you don’t know it yet, cmocka is a unit testing framework for C with support for mock objects!

We set the version number to 1.1.0 because we have some new features:

  • Support to catch multiple exceptions
  • Support to verify call ordering (for mocking)
  • Support to pass initial data to test cases
  • A will_return_maybe() function for ignoring mock returns
  • Subtests for groups using TAP output
  • Support to write multiple XML output files if you have several groups in a test
  • and improved documentation

We have some more features we are working on. I hope it will not take such a long time to release them.

Dgplug contributor grant recipient Trishna Guha

I am happy to announce that Trishna Guha is the recipient of a dgplug contributor grant for 2016. She is an upstream contributor in Fedora Cloud SIG, and hacks on Bodhi in her free time. Trishna started her open source journey just a year back during the dgplug summer training 2015, you can read more about her work in a previous blog post. She has also become an active member of the local Pune PyLadies chapter.

The active members of dgplug.org contribute funding every year, which we then use to help out community members as required. For example, we previously used this fund to pay accommodation costs for our women contributors during PyCon. This year we are happy to be able to assist Trishna Guha to attend PyCon India 2016. Her presence and expertise with upstream development will help diversity efforts at various levels. As she is still a college student, we have found that many students are interested in talking to and learning from her. So, if you are coming down to PyCon India this weekend, remember to visit the Red Hat booth and have a chat with Trishna.

GNOME 3.22 core apps

GNOME 3.22 is scheduled to be released today. Along with this release come brand new recommendations for distributions on which applications should be installed by default, and which applications should not. I’ve been steadily working on these since joining the release team earlier this year, and I’m quite pleased with the result.

When a user installs a distribution and boots it for the first time, his or her first impression of the system will be influenced by the quality of the applications that are installed by default. Selecting the right set of default applications is critical to achieving a quality user experience. Installing redundant or overly technical applications by default can leave users confused and frustrated with the distribution. Historically, distributions have selected wildly different sets of default applications. There’s nothing inherently wrong with this, but it’s clear that some distributions have done a much better job of this than others. For instance, a default install of Debian 8 with the GNOME desktop includes two different chess applications, GNOME Chess and XBoard. Debian fails here: these applications are redundant, for starters, and the latter app looks like an ancient Windows 95 application that’s clearly out of place with the rest of the system. It’s pretty clear that nobody is auditing the set of default applications here, as I doubt anyone would have intentionally included XBoard; it turns out that XBoard gets pulled in by Recommends via an obscure chess engine that’s pulled in by another Recommends from GNOME Chess, so I presume this is just an accident that nobody has ever cared to fix. Debian is far from the only offender here; you can find similar issues in most distributions. This is the motivation for providing the new default app recommendations.

Most distributions will probably ignore these, continue to select default apps on their own, and continue to do so badly. However, many distributions also strive to provide a pure, vanilla GNOME experience out-of-the-box. Such distributions are the target audience for the new default app guidelines. Fedora Workstation has already adopted them as the basis for selecting which apps will be present by default, and the result is a cleaner out-of-the-box experience.

Update: I want to be clear that these guidelines are not appropriate for all distros. Most distros are not interested in providing a “pure GNOME experience.” Distros should judge for themselves if these guidelines are relevant to them.

Classifications

The canonical source of these classifications is maintained in JHBuild, but that’s not very readable, so I’ll list them out here. The guidelines are as follows:

  • Applications classified as core are intended to be installed by default. Distributions should only claim to provide a vanilla GNOME experience if all such applications are included out-of-the-box.
  • Applications classified as extra are NOT intended to be installed by default. Distributions should not claim to provide a vanilla GNOME experience if any such applications are included out-of-the-box.
  • Applications classified as Incubator are somewhere in between. Incubator is a classification for applications that are designed to be core apps, but which have not yet reached a high enough level of quality that we can move them to core and recommend they be installed by default. If you’re looking for somewhere to help out in GNOME, the apps marked Incubator would be good places to start.

Core apps

Distributions that want to provide a pure GNOME experience MUST include all of the following apps by default:

  • Archive Manager (File Roller)
  • Boxes
  • Calculator
  • Calendar
  • Characters (gnome-characters, not gucharmap)
  • Cheese
  • Clocks
  • Contacts
  • Disk Usage Analyzer (Baobab)
  • Disks
  • Document Viewer (Evince)
  • Documents
  • Files (Nautilus)
  • Font Viewer
  • Help (Yelp)
  • Image Viewer (Eye of GNOME)
  • Logs (gnome-logs, not gnome-system-log)
  • Maps
  • Photos
  • Screenshot
  • Software
  • System Monitor
  • Terminal
  • Text Editor (gedit)
  • Videos (Totem)
  • Weather
  • Web (Epiphany)

Notice that all core apps have generic names (though it's somewhat debatable whether Cheese qualifies as a generic name, I think it sounds better than alternatives like Photo Booth). They all also (more or less) follow the GNOME Human Interface Guidelines.

The list of core apps is not set in stone. For example, if Photos or Documents eventually learn to provide good file previews, we wouldn’t need Image Viewer or Document Viewer anymore. And now that Files has native support for compressed archives (new in 3.22!), we may not need Archive Manager much longer.

Currently, about half of these applications are arbitrarily marked as “system” applications in Software, and are impossible to remove. We’ve received complaints about this and are mostly agreed that it should be possible to remove all but the most critical core applications (e.g. allowing users to remove Software itself would clearly be problematic). Unfortunately this didn’t get fixed in time for GNOME 3.22, so we will need to work on improving this situation for GNOME 3.24.

Incubator

Distributions that want to provide a pure GNOME experience REALLY SHOULD NOT include any of the following apps by default:

  • Dictionary
  • Music
  • Notes (Bijiben)
  • Passwords and Keys (Seahorse)

We think these apps are generally useful and should be in core; they’re just not good enough yet. Please help us improve them.

These are not the only apps that we would like to include in core, but they are the only ones that both (a) actually exist and (b) have actual releases. Take a look at our designs for core apps if you’re interested in working on something new.

Extra apps

Distributions that want to provide a pure GNOME experience REALLY SHOULD NOT include any of the following apps by default:

  • Accerciser
  • Builder
  • dconf Editor
  • Devhelp
  • Empathy
  • Evolution
  • Hex Editor (ghex)
  • gitg
  • Glade
  • Multi Writer
  • Nemiver
  • Network Tools (gnome-nettool)
  • Polari
  • Sound Recorder
  • To Do
  • Tweak Tool
  • Vinagre

Not listed are Shotwell, Rhythmbox, or other applications hosted on git.gnome.org that are not (or are no longer) part of official GNOME releases. These applications REALLY SHOULD NOT be included either.

Note that the inclusion of applications in core versus extra is not a quality judgment: that's what Incubator is for. Rather, we classify apps as extra when we do not believe they would be beneficial to the out-of-the-box user experience. For instance, even though Evolution is (in my opinion) the highest-quality desktop mail client that exists today, it can be very difficult to configure, the user interface is large and unintuitive, and most users would probably be better served by webmail. Some applications listed here are special-purpose tools that are probably not generally useful to the typical user (like Sound Recorder). Other applications, like Builder, are here because they are developer tools, and developer tools are inherently extremely confusing to nontechnical users. (Update: I originally used Polari instead of Builder as the developer tool example in the previous sentence. It was a bad example.)

Games

What about games? It’s OK to install a couple of the higher-quality GNOME games by default, but none are necessary, and it doesn’t make sense to include too many, since they vary in quality. For instance, Fedora Workstation does not include any games, but Ubuntu installs GNOME Mahjongg, GNOME Mines, and GNOME Sudoku. This is harmless, and it seems like a good list. I might add GNOME Chess, or perhaps GNOME Taquin. I’ve omitted games from the list of extra apps up above, as they’re not my focus here.

Third party applications

It’s OK to include a few third-party, non-GNOME applications by default, but they should be kept to a reasonable minimum. For example Fedora Workstation includes Firefox (instead of Epiphany), Problem Reporting (ABRT), SELinux Troubleshooter, Shotwell (instead of GNOME Photos), Rhythmbox, and LibreOffice Calc, Draw, Impress, and Writer. Note that LibreOffice Base is not included here, because it’s not reasonable to include a database management tool on systems designed for nontechnical users. The LibreOffice start center is also not included, because it’s not an application.

Summing up

Distributions, consider following our recommendations when deciding what should be installed by default. Other distributions should feel encouraged to use these classifications as the basis for downstream package groups. At the very least, distributions should audit their set of default applications and decide for themselves if they are appropriate. A few distributions have some horrendous technical stuff visible in the overview by default; Fedora Workstation shows it does not have to be this way.

GNOME Software and Age Ratings

After all the tarballs for GNOME 3.22, the master branch of gnome-software is now open to new features. Along with the usual cleanups and speedups, one new feature I've been working on is finally merging the age ratings work.


The age ratings are provided by the upstream-supplied OARS metadata in the AppData file (which can be generated easily online) and then an age classification is generated automatically using the advice from the appropriately-named Common Sense Media group. At the moment I’m not doing any country-specific mapping, although something like this will be required to show appropriate ratings when handling topics like alcohol and drugs.

At the moment the only applications with ratings in Fedora 26 will be Steam games, but I've also emailed every maintainer who includes an <update_contact> email address in an AppData file that also identifies the application as a game in its desktop categories. If you ship an application with AppData and you think it should have an age rating, please use the generator and add the extra few lines to your AppData file. At the moment there's no requirement for the extra data, although that might be something we introduce just for games in the future.
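For reference, the extra lines are a <content_rating> element in the AppData XML. A minimal sketch (the attribute ids and values below are illustrative OARS 1.0 examples, not taken from any particular application):

```xml
<!-- Illustrative only: pick ids/values from the OARS generator for your app -->
<content_rating type="oars-1.0">
  <content_attribute id="violence-cartoon">mild</content_attribute>
  <content_attribute id="language-profanity">moderate</content_attribute>
</content_rating>
```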

I don't think many other applications will need the extra application metadata, but if you know of any adult-only applications (e.g. in Fedora there's an application for the sole purpose of downloading p0rn) please let me know and I'll contact the maintainer and ask what they think about the idea. Comments, as always, welcome. Thanks!

Rust meets Fedora

What is Rust?

Rust is a system programming language which runs blazingly fast, and prevents almost all crashes, segfaults, and data races. You might wonder exactly why yet another programming language is useful, since there are already so many of them. This article aims to explain why.

Why Rust?

Imagine a spectrum with safety at one end and control at the other. On one side there's C/C++, which has more control of the hardware it's running on. Therefore it lets the developer optimize performance by exercising finer control over the generated machine code. However, this isn't very safe; it's easier to cause a segfault, or security bugs like Heartbleed.

On the other hand, there are languages like Python, Ruby, and JavaScript where the developer has little control but creates safer code. The code can’t generate a segfault, although it can generate exceptions which are fairly safe and contained.

Somewhere in the middle, there’s Java and a few other languages which are a mixture of these characteristics. They offer some control of the hardware they run on but try to minimize vulnerabilities.

Rust is a bit different, and doesn’t fall in this spectrum. Instead it gives the developer both safety and control.

Specialties of Rust

Rust is a system programming language like C/C++, except that it gives the developer fine-grained control over memory allocations. A garbage collector is not required. It has a minimal runtime, and runs very close to the bare metal. The developer has greater guarantees about the performance of the code. Furthermore, anyone who knows C/C++ can understand and write code for this language.

Rust runs blazingly fast, since it’s a compiled language. It uses LLVM as the compiler backend and can tap into a large suite of optimizations. In many areas it can perform better than C/C++. Like JavaScript, Ruby, and Python, it’s safe by default, meaning it doesn’t cause segfaults, dangling pointers, or null pointers.

Another important feature is the elimination of data races. Nowadays, most computers have multiple cores and run many threads in parallel. However, it's tough for developers to write good parallel code; Rust helps by ruling out data races at compile time. There are two key concepts Rust uses to eliminate data races:

  • Ownership. When a value is assigned or passed to a new variable, it moves there, and the previous variable can no longer use it. There is only one owner of each piece of data.
  • Borrowing. Owned values can be borrowed, allowing them to be used for a certain period of time without transferring ownership.
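As a minimal sketch of both concepts (this example is mine, not from the Fedora documentation):

```rust
fn count_chars(text: &str) -> usize {
    // `text` is only borrowed here; the caller keeps ownership.
    text.chars().count()
}

fn main() {
    let s = String::from("Fedora");

    // Borrowing: lend `s` to the function without giving it away.
    let n = count_chars(&s);
    assert_eq!(n, 6);

    // Ownership: assigning `s` to `t` moves the String into `t`.
    let t = s;
    // Using `s` after the move would not compile:
    // println!("{}", s); // error[E0382]: borrow of moved value: `s`
    println!("{} has {} characters", t, n);
}
```

The compiler enforces both rules statically, which is how Rust eliminates data races without a garbage collector.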

Rust in Fedora 24 and 25

To get started, just install the package:

sudo dnf install rust

Here's a demo program to try. Create a file called helloworld.rs on your system with this content:

fn main() {
    println!("Hello, Rust is running on Fedora 25 Alpha!");
}

Then use rustc to compile the program and run the resulting executable:

rustc helloworld.rs
./helloworld

Contributing to Rust testing

Run the following command to install the latest testing version on Fedora:

sudo dnf --enablerepo=updates-testing --refresh --best install rust

Drop us a mail at test@lists.fedoraproject.org or #fedora-qa on IRC Freenode to get started!



Distinct RBAC Policy Rules

The ever elusive bug 968696 is still out there, due in no small part to the distributed nature of the policy mechanism. One question I asked myself as I chased this beastie is: "how many distinct policy rules do we actually have to implement?" This is an interesting question because, if we can find an automated way to answer it, that can lead to an automated way of transforming the policy rules themselves, and thus to a more unified approach to policy.

The set of policy files used in a Tripleo overcloud have around 1400 rules:

$ find /tmp/policy -name \*.json | xargs wc -l
   73 /tmp/policy/etc/sahara/policy.json
   61 /tmp/policy/etc/glance/policy.json
  138 /tmp/policy/etc/cinder/policy.json
   42 /tmp/policy/etc/gnocchi/policy.json
   20 /tmp/policy/etc/aodh/policy.json
   74 /tmp/policy/etc/ironic/policy.json
  214 /tmp/policy/etc/neutron/policy.json
  257 /tmp/policy/etc/nova/policy.json
  198 /tmp/policy/etc/keystone/policy.json
   18 /tmp/policy/etc/ceilometer/policy.json
  135 /tmp/policy/etc/manila/policy.json
    3 /tmp/policy/etc/heat/policy.json
   88 /tmp/policy/auth_token_scoped.json
  140 /tmp/policy/auth_v3_token_scoped.json
 1461 total

Granted, that might not be distinct rule lines, as some are multi-line, but most rules seem to be on a single line. There is some whitespace, too.

Many of the rules, while written differently, can map to the same implementation. For example:

“rule: False”

can reduce to

“False”

which is the same as

“!”

All are instances of oslo_policy._checks.FalseCheck.
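Spotting such duplicates automatically means normalizing the different spellings first. A hypothetical sketch of the idea (my own helper, not part of oslo.policy):

```python
def normalize_deny(rule: str) -> str:
    """Map the three spellings of 'always deny' to the canonical '!'."""
    deny_spellings = {"rule:false", "false", "!"}
    if rule.replace(" ", "").lower() in deny_spellings:
        return "!"
    return rule

# All three spellings collapse to the same canonical form:
print(normalize_deny("rule: False"))  # !
print(normalize_deny("False"))        # !
print(normalize_deny("role:admin"))   # role:admin
```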

With that in mind, I gathered up the set of policy files deployed on a Tripleo overcloud and hacked together some analysis.

Note: Nova embeds its policy rules in code now. In order to convert them to an old-style policy file, you need to run a command line tool:

oslopolicy-policy-generator --namespace nova --output-file /tmp/policy/etc/nova/policy.json

Ironic does something similar, but uses

oslopolicy-sample-generator --namespace=ironic.api --output-file=/tmp/policy/etc/ironic/policy.json

I’ve attached my source code at the bottom of this article. Running the code provides the following summary:

55 unique rules found

The longest rule belongs to Ironic:

OR(OR(OR((ROLE:admin)(ROLE:administrator))AND(OR((tenant == demo)(tenant == baremetal))(ROLE:baremetal_admin)))AND(OR((tenant == demo)(tenant == baremetal))OR((ROLE:observer)(ROLE:baremetal_observer))))

Some look somewhat repetitive, such as

OR((ROLE:admin)(is_admin == 1))

And some downright dangerous:

NOT( (ROLE:heat_stack_user)

And there are ways to work around having an explicit role in your token.

Many are indications of places where we want to use implied roles, such as:

  1. OR((ROLE:admin)(ROLE:administrator))
  2. OR((ROLE:admin)(ROLE:advsvc)
  3. (ROLE:admin)
  4. (ROLE:advsvc)
  5. (ROLE:service)

This is the set of keys that appear more than once:

9 context_is_admin
4 admin_api
2 owner
6 admin_or_owner
2 service:index
2 segregation
7 default

Doing a grep for context_is_admin shows all of them with the following rule:

"context_is_admin": "role:admin",

admin_api is roughly the same:

cinder/policy.json: "admin_api": "is_admin:True",
ironic/policy.json: "admin_api": "role:admin or role:administrator"
nova/policy.json:   "admin_api": "is_admin:True"
manila/policy.json: "admin_api": "is_admin:True",

I think these are supposed to include the new check for is_admin_project as well.

Owner is defined two different ways in two files:

neutron/policy.json:  "owner": "tenant_id:%(tenant_id)s",
keystone/policy.json: "owner": "user_id:%(user_id)s",

Keystone's meaning is that the user matches, whereas neutron's is a project scope check. Both rules should change.

Admin or owner shows the same variety:

cinder/policy.json:    "admin_or_owner": "is_admin:True or project_id:%(project_id)s",
aodh/policy.json:      "admin_or_owner": "rule:context_is_admin or project_id:%(project_id)s",
neutron/policy.json:   "admin_or_owner": "rule:context_is_admin or rule:owner",
nova/policy.json:      "admin_or_owner": "is_admin:True or project_id:%(project_id)s"
keystone/policy.json:  "admin_or_owner": "rule:admin_required or rule:owner",
manila/policy.json:    "admin_or_owner": "is_admin:True or project_id:%(project_id)s",

Keystone is the odd one out here, with owner again meaning “user matches.”

Segregation is another rule that means admin:

aodh/policy.json:       "segregation": "rule:context_is_admin",
ceilometer/policy.json: "segregation": "rule:context_is_admin",

Probably the trickiest one to deal with is default, as that is a magic term that is used when a rule is not defined:

sahara/policy.json:   "default": "",
glance/policy.json:   "default": "role:admin",
cinder/policy.json:   "default": "rule:admin_or_owner",
aodh/policy.json:     "default": "rule:admin_or_owner",
neutron/policy.json:  "default": "rule:admin_or_owner",
keystone/policy.json: "default": "rule:admin_required",
manila/policy.json:   "default": "rule:admin_or_owner",

There seem to be three catch-all approaches:

  1. require admin,
  2. look for a project match but let admin override
  3. let anyone execute the API.

This is the only rule that cannot be made globally unique across all the files.

Here is the complete list of suffixes. The format is not strict policy format; I munged it to look for duplicates.

(ROLE:admin)
(ROLE:advsvc)
(ROLE:service)
(field == address_scopes:shared=True)
(field == networks:router:external=True)
(field == networks:shared=True)
(field == port:device_owner=~^network:)
(field == subnetpools:shared=True)
(group == nobody)
(is_admin == False)
(is_admin == True)
(is_public_api == True)
(project_id == %(project_id)s)
(project_id == %(resource.project_id)s)
(tenant_id == %(tenant_id)s)
(user_id == %(target.token.user_id)s)
(user_id == %(trust.trustor_user_id)s)
(user_id == %(user_id)s)
AND(OR((tenant == demo)(tenant == baremetal))OR((ROLE:observer)(ROLE:baremetal_observer)))
AND(OR(NOT( (field == rbac_policy:target_tenant=*) (ROLE:admin))OR((ROLE:admin)(tenant_id == %(tenant_id)s)))
FALSE
NOT( (ROLE:heat_stack_user) 
OR((ROLE:admin)(ROLE:administrator))
OR((ROLE:admin)(ROLE:advsvc))
OR((ROLE:admin)(is_admin == 1))
OR((ROLE:admin)(project_id == %(created_by_project_id)s))
OR((ROLE:admin)(project_id == %(project_id)s))
OR((ROLE:admin)(tenant_id == %(network:tenant_id)s))
OR((ROLE:admin)(tenant_id == %(tenant_id)s))
OR((ROLE:advsvc)OR((ROLE:admin)(tenant_id == %(network:tenant_id)s)))
OR((ROLE:advsvc)OR((tenant_id == %(tenant_id)s)OR((ROLE:admin)(tenant_id == %(network:tenant_id)s))))
OR((is_admin == True)(project_id == %(project_id)s))
OR((is_admin == True)(quota_class == %(quota_class)s))
OR((is_admin == True)(user_id == %(user_id)s))
OR((tenant == demo)(tenant == baremetal))
OR((tenant_id == %(tenant_id)s)OR((ROLE:admin)(tenant_id == %(network:tenant_id)s)))
OR(NOT( (field == port:device_owner=~^network:) (ROLE:advsvc)OR((ROLE:admin)(tenant_id == %(network:tenant_id)s)))
OR(NOT( (field == rbac_policy:target_tenant=*) (ROLE:admin))
OR(OR((ROLE:admin)(ROLE:administrator))AND(OR((tenant == demo)(tenant == baremetal))(ROLE:baremetal_admin)))
OR(OR((ROLE:admin)(is_admin == 1))(ROLE:service))
OR(OR((ROLE:admin)(is_admin == 1))(project_id == %(target.project.id)s))
OR(OR((ROLE:admin)(is_admin == 1))(token.project.domain.id == %(target.domain.id)s))
OR(OR((ROLE:admin)(is_admin == 1))(user_id == %(target.token.user_id)s))
OR(OR((ROLE:admin)(is_admin == 1))(user_id == %(user_id)s))
OR(OR((ROLE:admin)(is_admin == 1))AND((user_id == %(user_id)s)(user_id == %(target.credential.user_id)s)))
OR(OR((ROLE:admin)(project_id == %(created_by_project_id)s))(project_id == %(project_id)s))
OR(OR((ROLE:admin)(project_id == %(created_by_project_id)s))(project_id == %(resource.project_id)s))
OR(OR((ROLE:admin)(tenant_id == %(tenant_id)s))(ROLE:advsvc))
OR(OR((ROLE:admin)(tenant_id == %(tenant_id)s))(field == address_scopes:shared=True))
OR(OR((ROLE:admin)(tenant_id == %(tenant_id)s))(field == networks:shared=True)(field == networks:router:external=True)(ROLE:advsvc))
OR(OR((ROLE:admin)(tenant_id == %(tenant_id)s))(field == networks:shared=True))
OR(OR((ROLE:admin)(tenant_id == %(tenant_id)s))(field == subnetpools:shared=True))
OR(OR(OR((ROLE:admin)(ROLE:administrator))AND(OR((tenant == demo)(tenant == baremetal))(ROLE:baremetal_admin)))AND(OR((tenant == demo)(tenant == baremetal))OR((ROLE:observer)(ROLE:baremetal_observer))))
OR(OR(OR((ROLE:admin)(is_admin == 1))(ROLE:service))(user_id == %(target.token.user_id)s))

Here is the source code I used to analyze the policy files:

#!/usr/bin/env python

# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os
import sys

from oslo_policy import policy
import oslo_policy._checks as _checks


def display_suffix(rules, rule):

    if isinstance (rule, _checks.RuleCheck):
        return display_suffix(rules, rules[rule.match.__str__()])

    if isinstance (rule, _checks.OrCheck):
        answer =  'OR('
        for subrule in rule.rules:
            answer += display_suffix(rules, subrule)
        answer +=  ')'
    elif isinstance (rule, _checks.AndCheck):
        answer =  'AND('
        for subrule in rule.rules:
            answer += display_suffix(rules, subrule)
        answer +=  ')'
    elif isinstance (rule, _checks.TrueCheck):
        answer =  "TRUE"
    elif isinstance (rule, _checks.FalseCheck):
        answer =  "FALSE"
    elif isinstance (rule, _checks.RoleCheck):       
        answer =  ("(ROLE:%s)" % rule.match)
    elif isinstance (rule, _checks.GenericCheck):       
        answer =  ("(%s == %s)" % (rule.kind, rule.match))
    elif isinstance (rule, _checks.NotCheck):       
        answer =  'NOT( %s ' % display_suffix(rules, rule.rule)
    else:        
        answer =  (rule)
    return answer

class Tool():
    def __init__(self):
        self.prefixes = dict()
        self.suffixes = dict()

    def add(self, policy_file):
        policy_data = policy_file.read()
        rules = policy.Rules.load(policy_data, "default")
        for key, rule in rules.items():
            suffix = display_suffix(rules, rule)
            self.prefixes[key] = self.prefixes.get(key, 0) + 1
            self.suffixes[suffix] = self.suffixes.get(suffix, 0) + 1

    def report(self):
        suffixes = sorted(self.suffixes.keys())
        for suffix in suffixes:
            print (suffix)
        print ("%d unique rules found" % len(suffixes))
        for prefix, count in self.prefixes.items():
            if count > 1:
                print ("%d %s" % (count, prefix))
        
def main(argv=sys.argv[1:]):
    tool = Tool()
    policy_dir = "/tmp/policy"
    name = 'policy.json'
    for root, dirs, files in os.walk(policy_dir):
        if name in files:
            policy_file_path = os.path.join(root, name)
            print (policy_file_path)
            policy_file = open(policy_file_path, 'r')
            tool.add(policy_file)
    tool.report()

if __name__ == "__main__":
    sys.exit(main(sys.argv[1:]))

September 20, 2016

All systems go
New status good: Everything seems to be working. for services: The Koji Buildsystem, Darkserver, Koschei Continuous Integration
There are scheduled downtimes in progress
New status scheduled: planned outage for services: Koschei Continuous Integration, The Koji Buildsystem, Darkserver
Fedora Media Writer Test Day – 2016-09-20

Today, Tuesday, 2016-09-20, is the Fedora Media Writer Test Day! As part of this planned Change for Fedora 25, the Fedora graphical USB writing tool is being extensively revised and rewritten. This tool was formerly called the "Live USB Creator" and is now rebranded as "Fedora Media Writer".

Why test the Media Writer

The idea is that the new tool will be sufficiently capable, reliable, and cross-platform to be the primary download for Fedora Workstation 25. The main 'flow' of the Workstation download page will run through the tool instead of giving you a download link to the ISO file and various instructions for using it in different ways. This would be a pretty big change, and of course, it would be a bad idea to do it if the tool isn't ready.

So this is an important Test Day! We’ll be testing the new version (Fedora, Windows, and macOS) of the tool to see whether it’s working well enough and catch any remaining issues. It’s also pretty easy to join in. All you’ll need is a USB stick you don’t mind overwriting and a system (or ideally more than one!) you can test booting the stick on (but you don’t need to make any permanent changes to it).

Help test the Media Writer!

All the instructions are on the wiki page, so please read through and come help us test! As always, the event will be in #fedora-test-day on Freenode IRC.

The post Fedora Media Writer Test Day – 2016-09-20 appeared first on Fedora Community Blog.

Is dialup still an option?
TL;DR - No.

Here's why.

I was talking with my Open Source Security Podcast co-host Kurt Seifried about what it would be like to access the modern Internet using dialup. So I decided to give this a try. My first thought was to find a modem, but after looking into this, it isn't really an option anymore.

The setup


  • No Modem
  • Fedora 24 VM
  • Firefox as packaged with Fedora 24
  • Traffic shaping via wondershaper to control the network speed
  • "App Telemetry" Firefox plugin to time the site load time
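For reference, the throttling itself looks something like this. This is a sketch using the classic wondershaper syntax; newer versions use different flags, and your interface may not be eth0:

```shell
# Sketch only: classic wondershaper invocation; newer versions use
# flags like -a/-d/-u instead, and the interface name varies.
sudo wondershaper eth0 56 48    # limit eth0 to 56 Kb/s down, 48 Kb/s up

# (run the page-load tests here)

sudo wondershaper clear eth0    # remove the limits when done
```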

I know it's not perfect, but it's probably close enough to get a feel for what's going on. I understand this doesn't exactly recreate a modem experience with details like compression, latency, and someone picking up the phone during a download. There was nothing worse than having that 1 megabyte download at 95% when someone decided they needed to make a phone call. Call waiting was also a terrible plague.

If you're too young to understand any of this, be thankful. Anyone who looks at this time with nostalgia is pretty clearly delusional.

I started testing at a 1024 Kb connection and halved my way down to 56 (instead of 64). This seemed like a nice way to get a feel for how these sites react as your speed shifts down.

Baseline

I picked the most popular English-language sites listed on the Alexa top 100. I added lwn.net because I like them, and my kids had me add Twitch. My home Internet connection is 50 Mb down, 5 Mb up. As you can see, in general all these sites load in less than 5 seconds. The numbers represent the site being fully loaded. Most web browsers seem to show something pretty quickly, even if the page is still loading. For the purpose of this test, our numbers are how long it takes a site to fully load. I also show 4 samples because, as you'll see later on, some of these sites took a really, really long time to load, so four was as much suffering as I could endure. Perhaps someday I'll do this again with extra automation so I don't have to be so involved.
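The plugin measured full page loads inside the browser. As a rough illustration of the idea (my own sketch, not the plugin the author used), you can time a bare HTTP fetch from Python, though that only measures the HTML document, not all the assets a real page pulls in:

```python
import time
import urllib.request

def time_fetch(url, timeout=60):
    """Return the seconds taken to download the document at `url`."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        resp.read()  # force the whole body to be transferred
    return time.monotonic() - start

# Example: time_fetch("https://lwn.net/") -> elapsed seconds as a float
```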

1024 Kb/s

Things really started to go downhill at this point. Anyone who claims a 1 megabit connection is broadband has probably never tried to use such a connection. In general, though, most of the sites were usable, for a very narrow definition of the word.

512 Kb/s


You're going to want to start paying attention to Amazon, something really clever is going to happen, it's sort of noticeable in this graph. Also of note is how consistent bing.com is. While not the fastest site, it will remain extremely consistent through the entire test.

256 Kb/s

Here is where you can really see what Amazon is doing. They clearly have some sort of client-side magic happening to ensure an acceptable response. For the rest of my testing I saw this behavior: a slow first load, then things were much, much faster. Waiting for sites to load at this speed was really painful, and it's only going to get worse from here. 15 seconds doesn't sound horrible, but it really is a long time to wait.

128 Kb/s

Things are not good at 128 Kb/s. Wikipedia looks empty, but it was still loading at the same speed as our first test. I imagine my lack of an ad-enhanced experience with them helps keep it so speedy.

56 Kb/s

Here is the real data you're waiting for. This is where I set the speed to 56K down, 48K up, which is the ideal speed of a 56K modem. I doubt most of us got that speed very often.

As you can probably see, Twitch takes an extremely long time to load. This should surprise nobody, as it's a site that streams video; by definition it's expected you have a fast connection. Here is the graph again with Twitch removed.

The Yahoo column is empty because I couldn't get Yahoo to load. It timed out every single time I tried. Wikipedia looks empty, but it still loaded in 0.3 seconds. After thinking about this, it does make sense: there are Wikipedia users on dialup in some countries, so they have to keep it lean. Amazon still has a slow first load, then is nice and speedy (for some definition of speedy) after that. I tried to load a YouTube video to see if it would work. After about 10 minutes of nothing happening I gave up.

Typical tasks

I also tried to perform a few tasks I would consider "expected" by someone using the Internet.

For example, from the time I typed in gmail.com until I could read a mail message took about 600 seconds. I did let every page load completely before clicking or typing on it. Once I had it loaded, and the AJAX interface timed out and told me to switch to HTML mode, it was mostly usable. It was only about 30 seconds to load a message (including images) and 0.2 seconds to return to the inbox.

Logging into Facebook took about 200 seconds. It was basically unusable once it loaded, though. Nothing new would load; it pulls in quite a few images, so this makes sense. These things aren't exactly "web optimized" anymore. If you know someone on dialup, don't expect them to be using Facebook.

cnn.com took 800 seconds. Reddit's front page was 750 seconds. Google News was only 33 seconds. The newspaper is probably a better choice if you have dialup.

I finally tried to run a "yum update" in Fedora to see if updating the system was something you could leave running overnight. It's not. After about 4 hours of just downloading repo metadata, I gave up. There is no way you can plausibly update a system over dialup. If you're on dialup, the timeouts will probably keep you from getting pwnt better than updates will.

Another problem you hit with a modern system like this is it tries to download things automatically in the background. More than once I had to kill some background tasks that basically ruined my connection. Most system designers today assume everyone has a nice Internet connection so they can do whatever they want in the background. That's clearly a problem when you're running at a speed this slow.

Conclusion

Is the Internet usable on dialup in 2016? No. You can't even pretend it's maybe usable. It would pretty much suck rocks to use the Internet on dialup today. I'm sure there are some people doing it; I feel bad for them. It's clear we've hit a place where broadband is expected, and honestly, you need fast broadband; even 1 megabit isn't enough anymore if you want a decent experience. The definition of broadband in the US is now 25 Mb down, 3 Mb up. Anyone who disagrees with that should spend a day at 56K.

I know this wasn't the most scientific study ever done, I would welcome something more rigorous. If you have any questions or ideas hit me up on Twitter: @joshbressers
Take part in the Fedora 25 Test Day on the media creation tool

Today, Tuesday, September 20th, is a day dedicated to one specific test: creating installable media for Fedora. During the development cycle, the quality assurance team dedicates a few days to particular components or new features in order to surface as many problems on the subject as possible.

It also provides a list of specific tests to run. All you need to do is follow them, compare your result against the expected result, and report it.

What is installable media creation?

It's a new feature for Fedora 25. It is a rewrite of the liveusb-creator tool, which is available not only on Fedora but also on Windows and Mac OS. The utility gains an interface much closer to GNOME 3 application standards in terms of ergonomics and becomes far simpler to use.

It works by letting you select the desired image, such as Workstation, the KDE spin, Server, or another edition, then automatically downloads it and writes it to an available, compatible removable medium such as a USB stick.

The goal is to simplify the installation procedure for newcomers, since many users get lost after downloading the traditional ISO file when it comes time to actually install. Here, everything is automated and works without any special intervention. Because of this goal, it will become the featured way to download the official Fedora image in the future.

Today's tests cover:

  • Downloading the desired image;
  • Writing it to the USB stick;
  • Checking the conformity of the installation image (i.e. that it is functional);
  • UEFI and BIOS compatibility;
  • Operation on Fedora, Windows, and Mac OS.

This test day is a bit unusual because it covers how the application behaves on systems other than Fedora, namely Windows and Mac OS. If you have such systems available, don't hesitate to report the problems you run into on them, as these will obviously be the preferred systems for such a tool.

How can you participate?

You can go to the test page to see the list of available tests and report your results. The wiki page summarizes how the day is organized.

If you find a bug, you should report it on Bugzilla. If you don't know how, feel free to consult the corresponding documentation.

Also, even though a single day is dedicated to these tests, it's perfectly possible to run them a few days later! The results will still be broadly relevant.

All systems go
New status good: Everything seems to be working. for services: Fedora Infrastructure Cloud, COPR Build System
What is the Fedora Code of Conduct?

We all live in a society. Every society has customs, values, and mores. This is how homo sapiens are different from other species. Since our childhood, in school, then college, and then at work, we follow a shared set of social values. This shared set of values creates a peaceful world. In the open source world, we strive for values that lead to us all being welcoming, generous, and thoughtful. We may differ in opinions or sometimes disagree with each other, but we try to keep the conversation focused on the ideas under discussion, not the person in the discussion.

Fedora is an excellent example of an open source society where contributors respect each other and have healthy discussions, whether they agree or disagree on all topics. This is a sign of a healthy community. Fedora is a big project with contributors and users from different parts of the world. This creates a diverse community of different skills, languages, ages, colors, cultural values, and more. Although it is rare in Fedora, sometimes miscommunication happens and this can result in situations where the discussion moves from the idea to the person.

Introducing our Code of Conduct

We have a few guidelines that we ask people to keep in mind when they’re using Fedora Project resources. These guidelines, known as the Code of Conduct (CoC), help everyone feel welcome in our community. One of the main goals of the Fedora Diversity team is to spread knowledge and improve the visibility of the Code of Conduct. Violations of the CoC can lead to different outcomes. In the past, there were cases of removal from Fedora mailing lists and IRC channels for violations of the CoC. The outcome can differ depending on the scenario and severity of the issue.

Objectives of the Code of Conduct

Our aim is to have a healthy community of diverse people where ideas and opinions are freely shared and discussion happens openly. To help everyone successfully communicate we ask that you keep these guidelines in mind:

  • Be considerate. Your work will be used by other people, and you in turn will depend on the work of others. Any decision you take will affect users and colleagues, and you should take those consequences into account when making decisions.
  • Be respectful. Not all of us will agree all the time, but disagreement is no excuse for poor behavior and poor manners. We might all experience some frustration now and then, but we cannot allow that frustration to turn into a personal attack. It’s important to remember that a community where people feel uncomfortable or threatened is not a productive one. Members of the Fedora community should be respectful when dealing with other contributors as well as with people outside the Fedora community and with users of Fedora.

The Code of Conduct goes on to say:

“When we disagree, we try to understand why. Disagreements, both social and technical, happen all the time and Fedora is no exception. It is important that we resolve disagreements and differing views constructively.” Remember that we’re different. The strength of Fedora comes from its varied community and people from a wide range of backgrounds. Different people have different perspectives on issues. Being unable to understand why someone holds a viewpoint doesn’t mean they’re wrong. Don’t forget that it is human to err and blaming others doesn’t result in productive outcomes. Rather, offer to help resolve issues and to help learn from mistakes.

Together, we can have a healthy and happy community!


Community management by Milky from the Noun Project

The post What is the Fedora Code of Conduct? appeared first on Fedora Community Blog.

Microsoft SQL Server from PHP

Here is a small comparison of the various solutions to use a Microsoft SQL Server database from PHP, on Linux.

All the tests have been run on Fedora 23 but should work on RHEL or CentOS version 7.

Tested extensions:


1. Using PDO, ODBC and FreeTDS

Needed components:

  • freetds library and the pdo_odbc extension
  • PHP version 5 or 7
  • RPM packages: freetds (EPEL), unixODBC, php-pdo, php-odbc

ODBC driver configuration

The driver must be defined in the /etc/odbcinst.ini file:

[FreeTDS]
Description=FreeTDS version 0.95
Driver=/usr/lib64/libtdsodbc.so.0.0.0

Data source configuration

The used server must be defined in the /etc/odbc.ini file (system wide) or in the ~/.odbc.ini file (user):

[sqlsrv_freetds]
Driver=FreeTDS
Description=SQL via FreeTds
Server=sqlserver.domain.tld
Port=1433

Connection check from the command line

$ isql sqlsrv_freetds user secret
SQL> SELECT @@version
Microsoft SQL Server 2008 R2 (SP2) - 10.50.4000.0 (X64)
    Jun 28 2012 08:36:30
    Copyright (c) Microsoft Corporation
    Express Edition with Advanced Services (64-bit) on Windows NT 6.1 <X64> (Build 7600: ) (Hypervisor)
SQLRowCount returns 1
1 rows fetched
SQL> quit

Connection from PHP

$ php -r '
echo "+ Connection\n";
$pdo = new PDO("odbc:sqlsrv_freetds", "user", "secret");
echo "+ Query\n";
$query = $pdo->query("SELECT @@version");
if ($query) {
    echo "+ Result\n";
    $row = $query->fetch(PDO::FETCH_NUM);
    if ($row) {
        print_r($row);
    }
}
'
+ Connection
+ Query
+ Result
Array
(
    [0] => Microsoft SQL Server 2008 R2 (SP2) - 10.50.4000.0 (X64)
    Jun 28 2012 08:36:30
    Copyright (c) Microsoft Corporation
    Express Edition with Advanced Services (64-bit) on Windows NT 6.1 <X64> (Build 7600: ) (Hypervisor)
)

This solution is often the simplest, as all the dependencies are free and available in the Linux distributions.

2. Using mssql and FreeTDS

Needed components:

  • freetds library and mssql extension
  • PHP version 5 (the extension is deprecated and removed from PHP 7)
  • RPM packages: freetds (EPEL), php-mssql

Connection from PHP

$ php -r '
echo"+ Connection:\n";
$conn = mssql_connect("sqlserver.domain.tld", "user", "secret");
if ($conn) {
    echo"+ Query:\n";
    $query = mssql_query("SELECT @@version", $conn);
    if ($query) {
        echo"+ Result:\n";
        print_r($row = mssql_fetch_array($query, MSSQL_NUM));
    }
}
'
+ Connection
+ Query
+ Result
Array
(
    [0] => Microsoft SQL Server 2008 R2 (SP2) - 10.50.4000.0 (X64)
    Jun 28 2012 08:36:30
    Copyright (c) Microsoft Corporation
    Express Edition with Advanced Services (64-bit) on Windows NT 6.1 <X64> (Build 7600: ) (Hypervisor)
)

This solution is also simple, as all the dependencies are free and available in Linux distributions. However, it uses a deprecated extension and bypasses the PDO abstraction layer.

3. Using PDO, ODBC and Microsoft® ODBC Driver

Needed components:

ODBC driver configuration

The driver must be defined in the /etc/odbcinst.ini file (added automatically during installation):

[ODBC Driver 13 for SQL Server]
Description=Microsoft ODBC Driver for SQL Server
Driver=/opt/microsoft/msodbcsql/lib64/libmsodbcsql-13.0.so.0.0
Threading=1

Data source configuration

The used server must be defined in the /etc/odbc.ini file (system wide) or the ~/.odbc.ini file (per user):

[sqlsrv_msodbc]
Driver=ODBC Driver 13 for SQL Server
Description=SQL via Microsoft Drivers
Server=sqlserver.domain.tld

Connection check from the command line

$ isql sqlsrv_msodbc user secret
SQL> SELECT @@version
Microsoft SQL Server 2008 R2 (SP2) - 10.50.4000.0 (X64)
    Jun 28 2012 08:36:30
    Copyright (c) Microsoft Corporation
    Express Edition with Advanced Services (64-bit) on Windows NT 6.1 <X64> (Build 7600: ) (Hypervisor)
SQLRowCount returns 1
1 rows fetched
SQL> quit

Connection from PHP

$ php -r '
echo "+ Connection\n";
$pdo = new PDO("odbc:sqlsrv_msodbc", "user", "secret");
echo "+ Query\n";
$query = $pdo->query("SELECT @@version");
if ($query) {
    echo "+ Result\n";
    $row = $query->fetch(PDO::FETCH_NUM);
    if ($row) {
        print_r($row);
    }
}
'
+ Connection
+ Query
+ Result
Array
(
    [0] => Microsoft SQL Server 2008 R2 (SP2) - 10.50.4000.0 (X64)
    Jun 28 2012 08:36:30
    Copyright (c) Microsoft Corporation
    Express Edition with Advanced Services (64-bit) on Windows NT 6.1 <X64> (Build 7600: ) (Hypervisor)
)

This solution, close to #1, requires the proprietary drivers.

4. Using the Microsoft® Driver

Needed components:

Connection check from the command line

$ sqlcmd -S sqlserver.domain.tld -U user -P secret -Q "SELECT @@version"
Microsoft SQL Server 2008 R2 (SP2) - 10.50.4000.0 (X64)
    Jun 28 2012 08:36:30
    Copyright (c) Microsoft Corporation
    Express Edition with Advanced Services (64-bit) on Windows NT 6.1 <X64> (Build 7600: ) (Hypervisor)
(1 rows affected)

Connection from PHP

$ php -r '
echo"+ Connection:\n";
$conn = sqlsrv_connect("sqlserver.domain.tld", array("UID" => "user", "PWD" => "secret"));
if ($conn) {
    echo"+ Query: \n";
    $query = sqlsrv_query($conn, "SELECT @@version");
    if ($query) {
        echo"+ Result:\n";
        print_r($row = sqlsrv_fetch_array($query, SQLSRV_FETCH_NUMERIC));
    }
}
'
+ Connection:
+ Query:
+ Result:
Array
(
    [0] => Microsoft SQL Server 2008 R2 (SP2) - 10.50.4000.0 (X64)
    Jun 28 2012 08:36:30
    Copyright (c) Microsoft Corporation
    Express Edition with Advanced Services (64-bit) on Windows NT 6.1 <X64> (Build 7600: ) (Hypervisor)
)

This solution, close to #2, requires the proprietary drivers and doesn't use the PDO abstraction layer.

5. Using PDO and the Microsoft® Driver

Needed components:

Connection from PHP

$ php -r '
echo "+ Connection\n";
$pdo = new PDO("sqlsrv:Server=sqlserver.domain.tld", "user", "secret");
echo "+ Query\n";
$query = $pdo->query("SELECT @@version");
if ($query) {
    echo "+ Result\n";
    $row = $query->fetch(PDO::FETCH_NUM);
    if ($row) {
        print_r($row);
    }
}
'

+ Connection
+ Query
+ Result
Array
(
    [0] => Microsoft SQL Server 2008 R2 (SP2) - 10.50.4000.0 (X64)
    Jun 28 2012 08:36:30
    Copyright (c) Microsoft Corporation
    Express Edition with Advanced Services (64-bit) on Windows NT 6.1 <X64> (Build 7600: ) (Hypervisor)
)

This solution, close to #1 and #3, requires the proprietary drivers.

6. Conclusion

I think that using PDO should be preferred, to avoid lock-in to a specific database server.

FreeTDS has filled a lot of needs in the past, as it was the only solution available for PHP 5. Using the sqlsrv or pdo_sqlsrv extension now seems more pertinent for PHP 7, but it sadly requires the proprietary drivers (well, if you use Microsoft SQL Server, you have already left the free world).

The choice is up to you.

AsciiBind all the things!

I have finally finished a, probably way too long, proposal for implementing a new Fedora Docs publishing toolchain using AsciiBinder.

The proposal, also published using AsciiBinder, suggests that we definitively adopt AsciiDoc and convert our DocBook sources to it without delay. Further we should begin publishing with AsciiBinder, ideally by Fedora 26.

The proposal tries to summarize the current state of affairs, define the problems being solved and provides instructions for using a proof of concept technology build to play with the tools.

Please take a read on the full proposal here: http://www.winglemeyer.org/fedora_docs_proposal/latest/proposal/overview.html

libinput and the Lenovo T460 series trackstick

First a definition: a trackstick is also called trackpoint, pointing stick, or "that red knob between G, H, and B". I'll be using trackstick here, because why not.

This post is the continuation of libinput and the Lenovo T450 and T460 series touchpads where we focused on a stalling pointer when moving the finger really slowly. Turns out the T460s at least, and possibly others in the *60 series, have another bug that caused much worse behaviour, but we didn't notice it for ages as we were focusing on the high-precision cursor movement. Specifically, the pointer would just randomly stop moving for a short while (spoiler alert: 300ms), regardless of the movement speed.

libinput has built-in palm detection and one of the things it does is to disable the touchpad when the trackstick is in use. It's not uncommon to rest the hand near or on the touchpad while using the trackstick and any detected touch would cause interference with the pointer motion. So events from the touchpad are ignored whenever the trackpoint sends events. [1]

On (some of) the T460s the trackpoint sends spurious events. In the recording I have, there are random events at 9s, then again 3.5s later, then 14s later, then 2s later, etc. Each time, our palm detection code would assume the trackpoint was in use and disable the touchpad for 300ms. If you were using the touchpad while this was happening, the touchpad would suddenly stop moving for 300ms and then continue as normal. Depending on how often these spurious events come in and the user's current caffeination state, this was somewhere between odd, annoying and infuriating.
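
The interaction can be modeled with a toy filter. This is a sketch for illustration only, not libinput's actual code; the 300ms timeout is the one mentioned above, and all names are made up:

```python
# Toy model of trackpoint-triggered touchpad disabling: any touchpad
# event arriving within TIMEOUT seconds of the last trackpoint event
# is discarded -- so a spurious trackpoint event freezes the touchpad
# for the full timeout.
TIMEOUT = 0.300  # seconds

class PalmFilter:
    def __init__(self):
        self.last_trackpoint = float("-inf")

    def trackpoint_event(self, now):
        # record when the trackpoint was last seen
        self.last_trackpoint = now

    def touchpad_allowed(self, now):
        # touchpad events are ignored while the timeout is running
        return (now - self.last_trackpoint) >= TIMEOUT
```

With this model it is easy to see the bug: every spurious trackpoint event restarts the timeout, stalling the touchpad even though the user never touched the trackstick.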

The good news is: this is fixed in libinput now. libinput 1.5 and the upcoming 1.4.3 releases will have a fix that ignores these spurious events and makes the touchpad stalls a footnote of history. Hooray.

[1] we still allow touchpad physical button presses, and trackpoint button clicks won't disable the touchpad

There are scheduled downtimes in progress
New status scheduled: Scheduled reboots in progress for services: Fedora Infrastructure Cloud, COPR Build System
Mirroring Keystone Delegations in FreeIPA/389DS

This is more musing than a practical design.

Most application servers have a means to query LDAP for the authorization information for a user.  This is separate from, and follows after, authentication, which may use one of multiple mechanisms, possibly not even querying LDAP (although that would be strange).

And there are other mechanisms (SAML2, SSSD+mod_lookup_identity) that can also provide the authorization attributes.

Separating mechanism from meaning, however, we are left with the fact that applications need a way to query attributes to make authorization decisions.  In Keystone, the general pattern is this:

A project is a group of resources.

A user is assigned a role on a project.

A user requests a token for a project. That token references the user's roles.

The user passes the token to the server when accessing an API. Access control is based on the roles that the user has in the associated token.

The key point here is that it is the roles associated with the token in question that matter.  From that point on, we have the ability to inject layers of indirection.

Here is where things fall down today. If we take an app like WordPress and try to make it query Red Hat’s LDAP server for the groups to use, there is no mapping between the groups assigned and the permissions that the user should have.  As the WordPress instance might be run by any one of several organizations within Red Hat, there is no direct mapping possible.

If we map this problem domain to IPA, we see where things fall down.

WordPress, here, is a service.  If the host it is running on is owned by a particular organization (say, EMEA-Sales) it should be the EMEA Sales group that determines who gets what permissions on WordPress.

Aside: WordPress, by the way, makes a great example to use, as it has very clear, well defined roles,  which have a clear scope of authorization for operations.

Subscriber < Contributor < Author < Editor < Administrator

Back to our regular article:

If we define an actor as either a user or a group of users, a role assignment is a tuple: (actor, organization, application, role)
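
To make the shape of that tuple concrete, here is a minimal sketch (hypothetical names, purely illustrative):

```python
from collections import namedtuple

# A role assignment as described above: who, within which organization,
# on which application, holding which role.
Assignment = namedtuple("Assignment",
                        ["actor", "organization", "application", "role"])

def roles_for(assignments, actor, organization, application):
    """Roles an actor holds for one (organization, application) pair."""
    return {a.role for a in assignments
            if (a.actor, a.organization, a.application)
               == (actor, organization, application)}
```

An access-control check then reduces to asking whether the required role is in the set returned by roles_for.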


Now, a user should not have to go to IPA, get a token, and hand that to WordPress.  When a user connects to WordPress, and attempts to do any non-public action, they are prompted for credentials, and are authenticated.  At this point, WordPress can do the LDAP query. And here is the question:

“what should an application query for in LDAP”

If we use groups, then we have a nasty naming scheme: EMEA-sales_wordpress_admin versus LATAM-sales_wordpress_admin.  This is appending the query (organization, application) and the result (role).

Ideally, we would tag the role on the service.  The service already reflects organization and application.

In the RFC based schemas, there is an organizationalRole objectclass which almost mirrors what we want.  But I think the most important thing is to return an object that looks like a group, most specifically groupOfNames.  Fortunately, I think this is just the ‘cn’.

Can we put a group of names under a service?  It's not a container.

( 'ipaService' DESC 'IPA service objectclass' AUXILIARY MAY ( memberOf $ managedBy $ ipaKrbAuthzData ) X-ORIGIN 'IPA v2' )

objectClass: ipaobject
objectClass: top
objectClass: ipaservice
objectClass: pkiuser
objectClass: ipakrbprincipal
objectClass: krbprincipal
objectClass: krbprincipalaux
objectClass: krbTicketPolicyAux

It probably would make more sense to have a separate subtree service-roles, with each service-name a container, and each role a group-of-names under that container. The application would filter on (service-name) to get the set of roles.  For a specific user, the service would add an additional filter for memberof.
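
Concretely, the search an application would issue under that layout could look something like this. The cn=service-roles subtree, the suffix, and the function names are hypothetical, mirroring the proposal above rather than any existing IPA schema:

```python
def roles_base_dn(service, suffix="dc=example,dc=com"):
    # hypothetical layout: each service gets a container, and each role
    # is a groupOfNames entry under it
    return "cn={},cn=service-roles,{}".format(service, suffix)

def user_roles_filter(user_dn):
    # restrict the search to the role groups a specific user belongs to
    return "(&(objectClass=groupOfNames)(member={}))".format(user_dn)
```

The application would then run an LDAP search with roles_base_dn("wordpress") as the base and user_roles_filter(...) as the filter, and read the cn of each matching group as a role name.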

Now, that is a lot of embedded knowledge in the application, and does not provide any way to do additional business logic in the IPA server or to hide that complexity from the end user.  Ideally, we would have something like automember to populate these role assignments, or, even better, a light-weight way for a user with a role assignment to re-delegate that to another user or principal.

That is where this gets really valuable: user self-service for delegation.  We want to make it such that you do not need to be an admin to create a role assignment, but rather (with exceptions) you can delegate to others any role that you have assigned to yourself.  This is a question of scale.

However, more than just scale, we want to be able to track responsibility;  who assigned a user the role that they have, and how did they have the authority to assign it?  When a user no longer has authority, should the people they have delegated to also lose it, or does that delegation get transferred?  Both patterns are required for some uses.

I think this fast gets beyond what can be represented easily in an LDAP schema.  Probably the right step is to use something like automember to place users into role assignments.  Expanding nested groups, while nice, might be too complicated.

September 19, 2016

HackMIT

One of the core missions of a Fedora Ambassador is to represent the Fedora Community at events. On the weekend of September 17 and 18, 2016 I attended HackMIT as a representative of Fedora with Justin Flory. I was also honored to serve as a mentor to several teams.

HackMIT is MIT's headline hackathon. This year there were over 1000 undergraduate students in attendance from around the globe. I met students from Cambridge University, Trinidad, and India.

Meeting the Teams

It is always interesting to see what participants of non-Linux technical events use as their platform of choice. HackMIT 2016 was dominated by Mac laptops running OS X, but several teams were running Linux as a web server for their project. A small number of students were running Linux on their laptop. While most of the participants knew about Linux, and Ubuntu, a solid majority were not familiar with Fedora. Several people asked about Fedora and wanted to know about the Four Foundations and the relationship with Red Hat. Many students familiar with Fedora were surprised and excited to see Fedora represented at HackMIT.

Mentoring

HackMIT 2016 featured a lot of interesting and innovative ideas:
  • HomeBites, which helps students, or weary travelers, find local hosts willing to make and share a home-cooked meal.
  • Sunrise utilizes a bed and lamp working together to create a smoother wake-up experience. The bed tracks the user's sleep patterns and shares them with the lamp. The lamp then uses that data to gently and gradually increase the light in the room as the proper wake-up time approaches.

There were three projects I found particularly interesting.

MeTime

The MeTime Team

The MeTime team was working on an application that would allow roommates to communicate about needing privacy: a virtual "do not enter" sign for the room. MeTime would use the Facebook API to let a user schedule a MeTime event requesting privacy. The application would support various levels of privacy requests as well as reminders to the requestee. One additional feature the team hopes to incorporate in the future is a proximity alarm to alert the requester that the requestee may have forgotten about the request. If you are interested in taking a look at their code you can find the MeTimeProject on GitHub.

Team Ubuntu

This entire team ran Ubuntu as their primary OS

I met this team as I was handing out Fedora HackMIT 2016 badge information. Two members of this team brought external monitors, and it was easy to spot Ubuntu because of the recognizable Aubergine background and the Unity Launcher on the left side of the screen. This team was building a music app and leveraging GStreamer to play the music. The initial plan was to use a camera to recognize hand gestures and change tracks based on those gestures.

Conversationalist

Charles Profitt and May from team Conversationalist reviewing code

I spoke to team Conversationalist during the first day and was intrigued by their idea. The problem they were trying to solve is where one participant in a group meeting dominates the discussion and prevents other voices from being heard in the group. Their solution was to leverage a web application that would have access to the microphone of either a computer or mobile device to determine which person was speaking the loudest. The program would then represent this data visually on an infographic that uses brightness to highlight people who should be given more of an opportunity to speak. On the second day the group was having an issue where the brightness was not shifting based on who was talking. I examined the code with May and asked her a couple of questions that helped her make adjustments to the code.

# Receive a (user, volume) sample, find the loudest speaker(s), and
# keep a rolling per-participant "talk count" clamped to -10..10.
post '/data/userAndVolume' do
        user = params['user']
        vol = params['volume']

        current_volumes[user] = vol
        max_val = current_volumes.values.max
        maximum_users = current_volumes.select {|k,v| v == max_val}
        minimum_users = current_volumes.reject {|k,v| v == max_val}


        maximum_users.each do |user,volume|
                if(talk_counts[user] < 10)
                        talk_counts[user] += 1
                end
        end

        minimum_users.each do |user,volume|
                if(talk_counts[user] > -10)
                        talk_counts[user] -= 1
                end
        end

        JSON.generate(talk_counts)
end

Conversationalist is hosted on GitHub. May, one of the Team Conversationalist members, would love to see more people contribute to the program or give their feedback.

Wheee, another addition.

The post Wheee, another addition. appeared first on The Grand Fallacy.

I’m thrilled to announce that Jeremy Cline has joined the Fedora Engineering team, effective today. Like our other recent immigrant, Randy Barlow, Jeremy was previously a member of Red Hat’s Pulp team. (This is mostly coincidental — the Pulp team’s a great place to work, and people there don’t just move to Fedora automatically.) Jeremy is passionate about open source and has a long history of contribution to it. We had many excellent applicants for our job opening, and weren’t even able to interview every qualified candidate before we had to make a decision. I’m very pleased with the choice, and I hope the Fedora community joins me in welcoming Jeremy!

Epiphany 3.22 (and a couple new stable releases too!)

It’s that time of year again! A new major release of Epiphany is out now, representing another six months of incremental progress. That’s a fancy way of saying that not too much has changed (so how did this blog post get so long?). It’s not for lack of development effort, though. There’s actually a lot of action in git master and on sidebranches right now, most of it thanks to my awesome Google Summer of Code students, Gabriel Ivascu and Iulian Radu. However, I decided that most of the exciting changes we’re working on would be deferred to Epiphany 3.24, to give them more time to mature and to ensure quality. And since this is a blog post about Epiphany 3.22, that means you’ll have to wait until next time if you want details about the return of the traditional address bar, the brand-new user interface for bookmarks, the new support for syncing data between Epiphany browsers on different computers with Firefox Sync, or Prism source code view, all features that are brewing for 3.24. This blog also does not cover the cool new stuff in WebKitGTK+ 2.14, like new support for copy/paste and accelerated compositing in Wayland.

New stuff

So, what’s new in 3.22?

  • A new Paste and Go context menu option in the address bar, implemented by Iulian. It’s so simple, but it’s also the greatest thing ever. Why did nobody implement this earlier?
  • A new Duplicate Tab context menu option on tabs, implemented by Gabriel. It’s not something I use myself, but it seems some folks who use it in other browsers were disappointed it was missing in Epiphany.
  • A new keyboard shortcuts dialog is available in the app menu, implemented by Gabriel.

Gabriel also redesigned all the error pages. My favorite one is the new TLS error page, based on a mockup from Jakub Steiner:

Web app improvements

Pivoting to web apps, Daniel Aleksandersen turned his attention to the algorithm we use to pick a desktop icon for newly-created web apps. It was, to say the least, subpar; in Epiphany 3.20, it normally always fell back to using the website’s 16×16 favicon, which doesn’t look so great in a desktop environment where all app icons are expected to be at least 256×256. Epiphany 3.22 will try to pick better icons when websites make it possible. Read more on Daniel’s blog, which goes into detail on how to pick good web app icons.

Also new is support for system-installed web apps. Previously, Epiphany could only handle web apps installed in home directories, which meant it was impossible to package a web app in an RPM or Debian package. That limitation has now been removed. (Update: I had forgotten that limitation was actually removed for GNOME 3.20, but the web apps only worked when running in GNOME and not in other desktops, so it wasn’t really usable. That’s fixed now in 3.22.) This was needed to support packaging Fedora Developer Portal, but of course it can be used to package up any website. It’s probably only interesting to distributions that ship Epiphany by default, though. (Epiphany is installed by default in Fedora Workstation as it’s needed by GNOME Software to run web apps, it’s just hidden from the shell overview unless you “install” it.) At least one media outlet has amusingly reported this as Epiphany attempting to compete generally with Electron, something I did write in a commit message, but which is only true in the specific case where you need to just show a website with absolutely no changes in the GNOME desktop. So if you were expecting to see Visual Studio running in Epiphany: haha, no.

Shortcut woes

On another note, I’m pleased to announce that we managed to accidentally stomp on both shortcuts for opening the GTK+ inspector this cycle, by mapping Duplicate Tab to Ctrl+Shift+D, and by adding a new Ctrl+Shift+I shortcut to open the WebKit web inspector (in addition to F12). Go team! We caught the problem with Ctrl+Shift+D and removed the shortcut in time for the release, so at least you can still use that to open the GTK+ inspector, but I didn’t notice the issue with the web inspector until it was too late, and Ctrl+Shift+I will no longer work as expected in GTK+ apps. Suggestions welcome for whether we should leave the clashing Ctrl+Shift+I shortcut or get rid of it. I am leaning towards removing it, because we normally match Epiphany behavior with GTK+, and only match other browsers when it doesn’t conflict with GTK+. That’s called desktop integration, and it’s worked well for us so far. But a case can be made for matching other browsers, too.

Stable releases

On top of Epiphany 3.22, I’ve also rolled new stable releases 3.20.4 and 3.18.8. I don’t normally blog about stable releases since they only include bugfixes and are usually boring, so why are these worth mentioning here? Two reasons. First, one of the fixes in these releases is quite significant: I discovered that a few important features were broken when multiple tabs share the same web process behind the scenes (a somewhat unusual condition): the load anyway button on the unacceptable TLS certificate error page, password storage with GNOME keyring, removing pages from the new tab overview, and deleting web applications. It was one subtle bug that was to blame for breaking all of those features in this odd corner case, which finally explains some difficult-to-reproduce complaints we’d been getting, so it’s good to put that bug out of the way. Of course, that’s also fixed in Epiphany 3.22, but new stable releases ensure users don’t need a full distribution upgrade to pick up a simple bugfix.

Additionally, the new stable releases are compatible with WebKitGTK+ 2.14 (to be released later this week). The Epiphany 3.20.4 and 3.18.8 releases will intentionally no longer build with older versions of WebKitGTK+, as new WebKitGTK+ releases are important and all distributions must upgrade. But wait, if WebKitGTK+ is kept API and ABI stable in order to encourage distributions to release updates, then why is the new release incompatible with older versions of Epiphany? Well, in addition to stable API, there’s also an unstable DOM API that changes willy-nilly without any soname bumps; we don’t normally notice when it changes, since it’s autogenerated from web IDL files. Sounds terrible, right? In practice, no application has (to my knowledge) ever been affected by an unstable DOM API break before now, but that has changed with WebKitGTK+ 2.14, and an Epiphany update is required. Most applications don’t have to worry about this, though; the unstable API is totally undocumented and not available unless you #define a macro to make it visible, so applications that use it know to expect breakage. But unannounced ABI changes without soname bumps are obviously a big problem for distributions, which is why we’re fixing this problem once and for all in WebKitGTK+ 2.16. Look out for a future blog post about that, probably from Carlos Garcia.

elementary OS

Lastly, I’m pleased to note that elementary OS Loki is out now. elementary is kinda (totally) competing with us GNOME folks, but it’s cool too, and the default browser has changed from Midori to Epiphany in this release due to unfixed security problems with Midori. They’ve shipped Epiphany 3.18.5, so if there are any elementary fans in the audience, it’s worth asking them to upgrade to 3.18.8. elementary does have some downstream patches to improve desktop integration with their OS — notably, they’ve jumped ahead of us in bringing back the traditional address bar — but desktop integration is kinda the whole point of Epiphany, so I can’t complain. Check it out! (But be sure to complain if they are not releasing WebKit security updates when advised to do so.)

Security, feature, and bug fix release

Bodhi 2.2.0 is a security and feature release, with a few bug fixes as well.

This update addresses CVE-2016-1000008 by disallowing the re-use of solved captchas. Additionally, the captcha is warped to make it more difficult to solve through automation. Thanks to Patrick Uiterwijk for discovering and reporting this issue.

If you would like to read about the features and bugs fixed by this release, please see the release notes.

Understanding evdev

This post explains how the evdev protocol works. After reading this post you should understand what evdev is and how to interpret evdev event dumps to understand what your device is doing. The post is aimed mainly at users having to debug a device, so I will leave out or simplify some of the technical details. I'll be using the output from evemu-record as an example because that is the primary debugging tool for evdev.

What is evdev?

evdev is a Linux-only generic protocol that the kernel uses to forward information and events about input devices to userspace. It's not just for mice and keyboards but any device that has any sort of axis, key or button, including things like webcams and remote controls. Each device is represented as a device node in the form of /dev/input/event0, with the trailing number increasing as you add more devices. The node numbers are re-used after you unplug a device, so don't hardcode the device node into a script. The device nodes are also only readable by root, thus you need to run any debugging tools as root too.

evdev is the primary way to talk to input devices on Linux. All X.Org drivers on Linux use evdev as their protocol, and so does libinput. Note that "evdev" is also the shortcut used for xf86-input-evdev, the X.Org driver to handle generic evdev devices, so watch out for context when you read "evdev" on a mailing list.

Communicating with evdev devices

Communicating with a device is simple: open the device node and read from it. Any data coming out is a struct input_event, defined in /usr/include/linux/input.h:


struct input_event {
    struct timeval time;
    __u16 type;
    __u16 code;
    __s32 value;
};
I'll describe the contents later, but you can see that it's a very simple struct.

Static information about the device such as its name and capabilities can be queried with a set of ioctls. Note that you should always use libevdev to interact with a device; it blunts the few sharp edges evdev has. See the libevdev documentation for usage examples.
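For illustration only (real code should go through libevdev, as noted above), here is a rough sketch of how the raw byte stream can be decoded, assuming the 64-bit Linux layout of struct input_event where struct timeval is two native longs:

```python
import struct

# struct input_event on 64-bit Linux: struct timeval is two native
# longs (tv_sec, tv_usec), followed by __u16 type, __u16 code and
# __s32 value -- 24 bytes per event on x86_64.
EVENT_FORMAT = "llHHi"
EVENT_SIZE = struct.calcsize(EVENT_FORMAT)

def parse_event(buf):
    """Decode one raw struct input_event into (time, type, code, value)."""
    sec, usec, etype, code, value = struct.unpack(EVENT_FORMAT, buf)
    return (sec + usec / 1e6, etype, code, value)
```

To use it you would open e.g. /dev/input/event0 as root and read EVENT_SIZE bytes at a time, but again: this is a sketch of the wire format, not a recommended way to consume events.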

evemu-record, our primary debugging tool for anything evdev, is very simple. It reads the static information about the device, prints it and then simply reads and prints all events as they come in. The output is in a machine-readable format but it's annotated with human-readable comments (starting with #). You can always ignore the non-comment bits. There's a second command, evemu-describe, that only prints the description and exits without waiting for events.

Relative devices and keyboards

The top part of an evemu-record output is the device description. This is a list of static properties that tells us what the device is capable of. For example, the USB mouse I have plugged in here prints:


# Input device name: "PIXART USB OPTICAL MOUSE"
# Input device ID: bus 0x03 vendor 0x93a product 0x2510 version 0x110
# Supported events:
#   Event type 0 (EV_SYN)
#     Event code 0 (SYN_REPORT)
#     Event code 1 (SYN_CONFIG)
#     Event code 2 (SYN_MT_REPORT)
#     Event code 3 (SYN_DROPPED)
#     Event code 4 ((null))
#     Event code 5 ((null))
#     Event code 6 ((null))
#     Event code 7 ((null))
#     Event code 8 ((null))
#     Event code 9 ((null))
#     Event code 10 ((null))
#     Event code 11 ((null))
#     Event code 12 ((null))
#     Event code 13 ((null))
#     Event code 14 ((null))
#   Event type 1 (EV_KEY)
#     Event code 272 (BTN_LEFT)
#     Event code 273 (BTN_RIGHT)
#     Event code 274 (BTN_MIDDLE)
#   Event type 2 (EV_REL)
#     Event code 0 (REL_X)
#     Event code 1 (REL_Y)
#     Event code 8 (REL_WHEEL)
#   Event type 4 (EV_MSC)
#     Event code 4 (MSC_SCAN)
# Properties:
The device name is the one (usually) set by the manufacturer and so are the vendor and product IDs. The bus is one of the "BUS_USB" and similar constants defined in /usr/include/linux/input.h. The version is often quite arbitrary, only a few devices have something meaningful here.

We also have a set of supported events, categorised by "event type" and "event code" (note how type and code are also part of the struct input_event). The type is a general category, and /usr/include/linux/input-event-codes.h defines quite a few of those. The most important types are EV_KEY (keys and buttons), EV_REL (relative axes) and EV_ABS (absolute axes). In the output above we can see that we have EV_KEY and EV_REL set.

As a subitem of each type we have the event code. The event codes for this device are self-explanatory: BTN_LEFT, BTN_RIGHT and BTN_MIDDLE are the left, right and middle button. The axes are a relative x axis, a relative y axis and a wheel axis (i.e. a mouse wheel). EV_MSC/MSC_SCAN is used for raw scancodes and you can usually ignore it. And finally we have the EV_SYN bits but let's ignore those, they are always set for all devices.

Note that an event code cannot be on its own, it must be a tuple of (type, code). For example, REL_X and ABS_X have the same numerical value and without the type you won't know which one is which.

That's pretty much it. A keyboard will have a lot of EV_KEY bits set and the EV_REL axes are obviously missing (but not always...). Instead of BTN_LEFT, a keyboard would have e.g. KEY_ESC, KEY_A, KEY_B, etc. 90% of device debugging is looking at the event codes and figuring out which ones are missing or shouldn't be there.

Exercise: You should now be able to read an evemu-record description from any mouse or keyboard device connected to your computer and understand what it means. This also applies to most special devices such as remotes - the only thing that changes are the names for the keys/buttons. Just run sudo evemu-describe and pick any device in the list.

The events from relative devices and keyboards

evdev is a serialised protocol. It sends a series of events and then a synchronisation event to notify us that the preceding events all belong together. This synchronisation event is EV_SYN SYN_REPORT. It is generated by the kernel, not the device, hence all EV_SYN codes are always available on all devices.

Let's have a look at a mouse movement. As explained above, half the line is machine-readable but we can ignore that bit and look at the human-readable output on the right.


E: 0.335996 0002 0000 0001 # EV_REL / REL_X 1
E: 0.335996 0002 0001 -002 # EV_REL / REL_Y -2
E: 0.335996 0000 0000 0000 # ------------ SYN_REPORT (0) ----------
This means that within one hardware event, we've moved 1 device unit to the right (x axis) and two device units up (y axis). Note how all events have the same timestamp (0.335996).
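As a sketch of how a consumer groups these events, here is a hypothetical frame splitter that collects events until the EV_SYN/SYN_REPORT delimiter arrives (the numeric constants are the real values from linux/input-event-codes.h):

```python
EV_SYN, EV_REL = 0x00, 0x02
SYN_REPORT = 0

def frames(events):
    """Split a flat stream of (type, code, value) tuples into hardware
    events, using EV_SYN/SYN_REPORT as the frame delimiter."""
    frame = []
    for etype, code, value in events:
        if etype == EV_SYN and code == SYN_REPORT:
            yield frame       # one complete hardware event
            frame = []
        else:
            frame.append((etype, code, value))
```

Feeding it the mouse movement above yields a single frame containing the REL_X and REL_Y events.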

Let's have a look at a button press:


E: 0.656004 0004 0004 589825 # EV_MSC / MSC_SCAN 589825
E: 0.656004 0001 0110 0001 # EV_KEY / BTN_LEFT 1
E: 0.656004 0000 0000 0000 # ------------ SYN_REPORT (0) ----------
E: 0.727002 0004 0004 589825 # EV_MSC / MSC_SCAN 589825
E: 0.727002 0001 0110 0000 # EV_KEY / BTN_LEFT 0
E: 0.727002 0000 0000 0000 # ------------ SYN_REPORT (0) ----------
For button events, a value of 1 signals button pressed, a value of 0 signals button released.

And key events look like this:


E: 0.000000 0004 0004 458792 # EV_MSC / MSC_SCAN 458792
E: 0.000000 0001 001c 0000 # EV_KEY / KEY_ENTER 0
E: 0.000000 0000 0000 0000 # ------------ SYN_REPORT (0) ----------
E: 0.560004 0004 0004 458976 # EV_MSC / MSC_SCAN 458976
E: 0.560004 0001 001d 0001 # EV_KEY / KEY_LEFTCTRL 1
E: 0.560004 0000 0000 0000 # ------------ SYN_REPORT (0) ----------
[....]
E: 1.172732 0001 001d 0002 # EV_KEY / KEY_LEFTCTRL 2
E: 1.172732 0000 0000 0001 # ------------ SYN_REPORT (1) ----------
E: 1.200004 0004 0004 458758 # EV_MSC / MSC_SCAN 458758
E: 1.200004 0001 002e 0001 # EV_KEY / KEY_C 1
E: 1.200004 0000 0000 0000 # ------------ SYN_REPORT (0) ----------
Mostly the same as button events. But wait, there is one difference: we have a value of 2 as well. For key events, a value 2 means "key repeat". If you're on the tty, then this is what generates repeat keys for you. In X and Wayland we ignore these repeat events and instead use XKB-based key repeat.
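The value semantics can be summed up in a few lines; the describe_key helper below is hypothetical, just to illustrate the 0/1/2 meaning (EV_KEY is the real constant from linux/input-event-codes.h):

```python
EV_KEY = 0x01

# Value semantics for EV_KEY events: 0 = release, 1 = press, 2 = repeat.
KEY_ACTION = {0: "released", 1: "pressed", 2: "repeat"}

def describe_key(etype, code, value):
    """Human-readable summary of an EV_KEY event; None for other types."""
    if etype != EV_KEY:
        return None
    return "key %d %s" % (code, KEY_ACTION[value])
```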

Now look at the keyboard events again and see if you can make sense of the sequence. We have an Enter release (but no press), then ctrl down (and repeat), followed by a 'c' press - but no release. The explanation is simple - as soon as I hit enter in the terminal, evemu-record started recording so it captured the enter release too. And it stopped recording as soon as ctrl+c was down because that's when it was cancelled by the terminal. One important takeaway here: the evdev protocol is not guaranteed to be balanced. You may see a release for a key you've never seen the press for, and you may be missing a release for a key/button you've seen the press for (this happens when you stop recording). Oh, and there's one danger: if you record your keyboard and you type your password, the keys will show up in the output. Security experts generally recommend not publishing event logs with your password in them.

Exercise: You should now be able to read an evemu-record event list from any mouse or keyboard device connected to your computer and understand the event sequence. This also applies to most special devices such as remotes - the only thing that changes are the names for the keys/buttons. Just run sudo evemu-record and pick any device listed.

Absolute devices

Things get a bit more complicated when we look at absolute input devices like a touchscreen or a touchpad. Yes, touchpads are absolute devices in hardware and the conversion to relative events is done in userspace by e.g. libinput. The output of my touchpad is below. Note that I've manually removed a few bits to make it easier to grasp, they will appear later in the multitouch discussion.


# Input device name: "SynPS/2 Synaptics TouchPad"
# Input device ID: bus 0x11 vendor 0x02 product 0x07 version 0x1b1
# Supported events:
#   Event type 0 (EV_SYN)
#     Event code 0 (SYN_REPORT)
#     Event code 1 (SYN_CONFIG)
#     Event code 2 (SYN_MT_REPORT)
#     Event code 3 (SYN_DROPPED)
#     Event code 4 ((null))
#     Event code 5 ((null))
#     Event code 6 ((null))
#     Event code 7 ((null))
#     Event code 8 ((null))
#     Event code 9 ((null))
#     Event code 10 ((null))
#     Event code 11 ((null))
#     Event code 12 ((null))
#     Event code 13 ((null))
#     Event code 14 ((null))
#   Event type 1 (EV_KEY)
#     Event code 272 (BTN_LEFT)
#     Event code 325 (BTN_TOOL_FINGER)
#     Event code 328 (BTN_TOOL_QUINTTAP)
#     Event code 330 (BTN_TOUCH)
#     Event code 333 (BTN_TOOL_DOUBLETAP)
#     Event code 334 (BTN_TOOL_TRIPLETAP)
#     Event code 335 (BTN_TOOL_QUADTAP)
#   Event type 3 (EV_ABS)
#     Event code 0 (ABS_X)
#       Value 2919
#       Min 1024
#       Max 5112
#       Fuzz 0
#       Flat 0
#       Resolution 42
#     Event code 1 (ABS_Y)
#       Value 3711
#       Min 2024
#       Max 4832
#       Fuzz 0
#       Flat 0
#       Resolution 42
#     Event code 24 (ABS_PRESSURE)
#       Value 0
#       Min 0
#       Max 255
#       Fuzz 0
#       Flat 0
#       Resolution 0
#     Event code 28 (ABS_TOOL_WIDTH)
#       Value 0
#       Min 0
#       Max 15
#       Fuzz 0
#       Flat 0
#       Resolution 0
# Properties:
#   Property type 0 (INPUT_PROP_POINTER)
#   Property type 2 (INPUT_PROP_BUTTONPAD)
#   Property type 4 (INPUT_PROP_TOPBUTTONPAD)
We have a BTN_LEFT again and a set of other buttons that I'll explain in a second. But first we look at the EV_ABS output. We have the same naming system as above. ABS_X and ABS_Y are the x and y axis on the device, ABS_PRESSURE is an (arbitrary) ranged pressure value.

Absolute axes have a bit more state than just a simple bit. Specifically, they have a minimum and maximum (not all hardware has the top-left sensor position on 0/0, it can be an arbitrary position, specified by the minimum). Notable here is that the axis ranges are simply the ones announced by the device - there is no guarantee that the values fall within this range and indeed a lot of touchpad devices tend to send values slightly outside that range. Fuzz and flat can be safely ignored, but resolution is interesting. It is given in units per millimeter and thus tells us the size of the device. In the above case, (5112 - 1024)/42 means the device is about 97mm wide. The resolution is quite commonly wrong, a lot of axis overrides need the resolution changed to the correct value.
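That arithmetic generalises to any absolute axis; here's a tiny hypothetical helper, assuming the resolution is in device units per mm as it is for ABS_X/ABS_Y:

```python
def axis_size_mm(minimum, maximum, resolution):
    """Physical length of an absolute axis in millimeters, computed
    from the announced range and the resolution (units per mm)."""
    return (maximum - minimum) / resolution
```

For the touchpad above, axis_size_mm(1024, 5112, 42) comes out at roughly 97mm for the x axis.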

The axis description also has a current value listed. The kernel only sends events when the value changes, so even if the actual hardware keeps sending events, you may never see them in the output if the value remains the same. In other words, holding a finger perfectly still on a touchpad creates plenty of hardware events, but you won't see anything coming out of the event node.

Finally, we have properties on this device. These are used to indicate general information about the device that's not otherwise obvious. In this case INPUT_PROP_POINTER tells us that we need a pointer for this device (it is a touchpad after all, a touchscreen would instead have INPUT_PROP_DIRECT set). INPUT_PROP_BUTTONPAD means that this is a so-called clickpad, it does not have separate physical buttons but instead the whole touchpad clicks. Ignore INPUT_PROP_TOPBUTTONPAD because it only applies to the Lenovo *40 series of devices.

Ok, back to the buttons: aside from BTN_LEFT, we have BTN_TOUCH. This one signals that the user is touching the surface of the touchpad (with some in-kernel defined minimum pressure value). It's not just for finger-touches, it's also used for graphics tablet stylus touches (so really, it's more "contact" than "touch" but meh).

The BTN_TOOL_FINGER event tells us that a finger is in detectable range. This gives us two bits of information: first, we have a finger (a tablet would have e.g. BTN_TOOL_PEN) and second, we may have a finger in proximity without touching. On many touchpads, BTN_TOOL_FINGER and BTN_TOUCH come in the same event, but others can detect a finger hovering over the touchpad too (in which case you'd also hope for ABS_DISTANCE being available on the touchpad).

Finally, the BTN_TOOL_DOUBLETAP up to BTN_TOOL_QUINTTAP tell us whether the device can detect 2 through to 5 fingers on the touchpad. This doesn't actually track the fingers, it merely tells you "3 fingers down" in the case of BTN_TOOL_TRIPLETAP.

Exercise: Look at your touchpad's description and figure out if the size of the touchpad is correct based on the axis information [1]. Check how many fingers your touchpad can detect and whether it can do pressure or distance detection.

The events from absolute devices

Events from absolute axes are not really any different than events from relative devices which we already covered. The same type/code combination with a value and a timestamp, all framed by EV_SYN SYN_REPORT events. Here's an example of me touching the touchpad:


E: 0.000001 0001 014a 0001 # EV_KEY / BTN_TOUCH 1
E: 0.000001 0003 0000 3335 # EV_ABS / ABS_X 3335
E: 0.000001 0003 0001 3308 # EV_ABS / ABS_Y 3308
E: 0.000001 0003 0018 0069 # EV_ABS / ABS_PRESSURE 69
E: 0.000001 0001 0145 0001 # EV_KEY / BTN_TOOL_FINGER 1
E: 0.000001 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +0ms
E: 0.021751 0003 0018 0070 # EV_ABS / ABS_PRESSURE 70
E: 0.021751 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +21ms
E: 0.043908 0003 0000 3334 # EV_ABS / ABS_X 3334
E: 0.043908 0003 0001 3309 # EV_ABS / ABS_Y 3309
E: 0.043908 0003 0018 0065 # EV_ABS / ABS_PRESSURE 65
E: 0.043908 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +22ms
E: 0.052469 0001 014a 0000 # EV_KEY / BTN_TOUCH 0
E: 0.052469 0003 0018 0000 # EV_ABS / ABS_PRESSURE 0
E: 0.052469 0001 0145 0000 # EV_KEY / BTN_TOOL_FINGER 0
E: 0.052469 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +9ms
In the first event you see BTN_TOOL_FINGER and BTN_TOUCH set (this touchpad doesn't detect hovering fingers). An x/y coordinate pair and a pressure value. The pressure changes in the second event, the third event changes pressure and location. Finally, we have BTN_TOOL_FINGER and BTN_TOUCH released on finger up, and the pressure value goes back to 0. Notice how the second event didn't contain any x/y coordinates? As I said above, the kernel only sends updates on absolute axes when the value changed.
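Because only changed values are sent, a consumer has to keep the last-known value of every axis and key itself. A minimal sketch of that bookkeeping, using (type, code) tuples as hypothetical dictionary keys:

```python
def apply_frame(state, frame):
    """Merge one SYN_REPORT-delimited frame into the last-known state.
    evdev only reports values that changed, so any axis or key absent
    from the frame keeps its previous value."""
    for etype, code, value in frame:
        state[(etype, code)] = value
    return state
```

Replaying the touch above: the second frame only carries ABS_PRESSURE, yet the x/y coordinates remain available in the state dictionary from the first frame.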

Ok, let's look at a three-finger tap (again, minus the ABS_MT_ bits):


E: 0.000001 0001 014a 0001 # EV_KEY / BTN_TOUCH 1
E: 0.000001 0003 0000 2149 # EV_ABS / ABS_X 2149
E: 0.000001 0003 0001 3747 # EV_ABS / ABS_Y 3747
E: 0.000001 0003 0018 0066 # EV_ABS / ABS_PRESSURE 66
E: 0.000001 0001 014e 0001 # EV_KEY / BTN_TOOL_TRIPLETAP 1
E: 0.000001 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +0ms
E: 0.034209 0003 0000 2148 # EV_ABS / ABS_X 2148
E: 0.034209 0003 0018 0064 # EV_ABS / ABS_PRESSURE 64
E: 0.034209 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +34ms
[...]
E: 0.138510 0003 0000 4286 # EV_ABS / ABS_X 4286
E: 0.138510 0003 0001 3350 # EV_ABS / ABS_Y 3350
E: 0.138510 0003 0018 0055 # EV_ABS / ABS_PRESSURE 55
E: 0.138510 0001 0145 0001 # EV_KEY / BTN_TOOL_FINGER 1
E: 0.138510 0001 014e 0000 # EV_KEY / BTN_TOOL_TRIPLETAP 0
E: 0.138510 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +23ms
E: 0.147834 0003 0000 4287 # EV_ABS / ABS_X 4287
E: 0.147834 0003 0001 3351 # EV_ABS / ABS_Y 3351
E: 0.147834 0003 0018 0037 # EV_ABS / ABS_PRESSURE 37
E: 0.147834 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +9ms
E: 0.157151 0001 014a 0000 # EV_KEY / BTN_TOUCH 0
E: 0.157151 0003 0018 0000 # EV_ABS / ABS_PRESSURE 0
E: 0.157151 0001 0145 0000 # EV_KEY / BTN_TOOL_FINGER 0
E: 0.157151 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +10ms
In the first event, the touchpad detected all three fingers at the same time, so we get BTN_TOUCH, x/y/pressure and BTN_TOOL_TRIPLETAP set. Note that the various BTN_TOOL_* bits are mutually exclusive. BTN_TOOL_FINGER means "exactly 1 finger down" and you can't have exactly 1 finger down when you have three fingers down. In the second event x and pressure update (y has no event, it stayed the same).

In the event after the break, we switch from three fingers to one finger. BTN_TOOL_TRIPLETAP is released, BTN_TOOL_FINGER is set. That's very common. Humans aren't robots, you can't release all fingers at exactly the same time, so depending on the hardware scanout rate you have intermediate states where one finger has left already, others are still down. In this case I released two fingers between scanouts, one was still down. It's not uncommon to see a full cycle from BTN_TOOL_FINGER to BTN_TOOL_DOUBLETAP to BTN_TOOL_TRIPLETAP on finger down or the reverse on finger up.

Exercise: test out the pressure values on your touchpad and see how close you can get to the actual announced range. Check how accurate the multifinger detection is by tapping with two, three, four and five fingers. (In both cases, you'll likely find that it's very much hit and miss).

Multitouch and slots

Now we're at the most complicated topic regarding evdev devices. In the case of multitouch devices, we need to send multiple touches on the same axes. So we need an additional dimension and that is called multitouch slots (there is another, older multitouch protocol that doesn't use slots but it is so rare now that you don't need to bother).

First: all axes that are multitouch-capable are repeated as ABS_MT_foo axis. So if you have ABS_X, you also get ABS_MT_POSITION_X and both axes have the same axis ranges and resolutions. The reason here is backwards-compatibility: if a device only sends multitouch events, older programs only listening to the ABS_X etc. events won't work. Some axes may only be available for single-touch (ABS_TOOL_WIDTH in this case).

Let's have a look at my touchpad, this time without the axes removed:


# Input device name: "SynPS/2 Synaptics TouchPad"
# Input device ID: bus 0x11 vendor 0x02 product 0x07 version 0x1b1
# Supported events:
#   Event type 0 (EV_SYN)
#     Event code 0 (SYN_REPORT)
#     Event code 1 (SYN_CONFIG)
#     Event code 2 (SYN_MT_REPORT)
#     Event code 3 (SYN_DROPPED)
#     Event code 4 ((null))
#     Event code 5 ((null))
#     Event code 6 ((null))
#     Event code 7 ((null))
#     Event code 8 ((null))
#     Event code 9 ((null))
#     Event code 10 ((null))
#     Event code 11 ((null))
#     Event code 12 ((null))
#     Event code 13 ((null))
#     Event code 14 ((null))
#   Event type 1 (EV_KEY)
#     Event code 272 (BTN_LEFT)
#     Event code 325 (BTN_TOOL_FINGER)
#     Event code 328 (BTN_TOOL_QUINTTAP)
#     Event code 330 (BTN_TOUCH)
#     Event code 333 (BTN_TOOL_DOUBLETAP)
#     Event code 334 (BTN_TOOL_TRIPLETAP)
#     Event code 335 (BTN_TOOL_QUADTAP)
#   Event type 3 (EV_ABS)
#     Event code 0 (ABS_X)
#       Value 5112
#       Min 1024
#       Max 5112
#       Fuzz 0
#       Flat 0
#       Resolution 41
#     Event code 1 (ABS_Y)
#       Value 2930
#       Min 2024
#       Max 4832
#       Fuzz 0
#       Flat 0
#       Resolution 37
#     Event code 24 (ABS_PRESSURE)
#       Value 0
#       Min 0
#       Max 255
#       Fuzz 0
#       Flat 0
#       Resolution 0
#     Event code 28 (ABS_TOOL_WIDTH)
#       Value 0
#       Min 0
#       Max 15
#       Fuzz 0
#       Flat 0
#       Resolution 0
#     Event code 47 (ABS_MT_SLOT)
#       Value 0
#       Min 0
#       Max 1
#       Fuzz 0
#       Flat 0
#       Resolution 0
#     Event code 53 (ABS_MT_POSITION_X)
#       Value 0
#       Min 1024
#       Max 5112
#       Fuzz 8
#       Flat 0
#       Resolution 41
#     Event code 54 (ABS_MT_POSITION_Y)
#       Value 0
#       Min 2024
#       Max 4832
#       Fuzz 8
#       Flat 0
#       Resolution 37
#     Event code 57 (ABS_MT_TRACKING_ID)
#       Value 0
#       Min 0
#       Max 65535
#       Fuzz 0
#       Flat 0
#       Resolution 0
#     Event code 58 (ABS_MT_PRESSURE)
#       Value 0
#       Min 0
#       Max 255
#       Fuzz 0
#       Flat 0
#       Resolution 0
# Properties:
#   Property type 0 (INPUT_PROP_POINTER)
#   Property type 2 (INPUT_PROP_BUTTONPAD)
#   Property type 4 (INPUT_PROP_TOPBUTTONPAD)
We have an x and y position for multitouch as well as a pressure axis. There are also two special multitouch axes that aren't really axes: ABS_MT_SLOT and ABS_MT_TRACKING_ID. The former specifies which slot is currently active, the latter is used to track touch points.

Slots are a static property of a device. My touchpad, as you can see above, only supports 2 slots (min 0, max 1) and thus can track 2 fingers at a time. Whenever the first finger is set down, its coordinates will be tracked in slot 0, the second finger will be tracked in slot 1. When the finger in slot 0 is lifted, the second finger continues to be tracked in slot 1, and if a new finger is set down, it will be tracked in slot 0. Sounds more complicated than it is: think of it as an array of possible touchpoints.

The tracking ID is an incrementing number that lets us tell touch points apart and also tells us when a touch starts and when it ends. The value is either -1 or a positive number. Any positive number means "new touch" and -1 means "touch ended". So when you put two fingers down and lift them again, you'll get a tracking ID of 1 in slot 0, a tracking ID of 2 in slot 1, then a tracking ID of -1 in both slots to signal they ended. The tracking ID value itself is meaningless, it simply increases as touches are created.
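A hypothetical SlotTracker sketch shows how little state is needed to follow this part of the protocol (the constants are the real values from linux/input-event-codes.h):

```python
EV_ABS = 0x03
ABS_MT_SLOT = 0x2f
ABS_MT_TRACKING_ID = 0x39

class SlotTracker:
    """Minimal slot-state bookkeeping for the ABS_MT_* protocol."""
    def __init__(self):
        self.slot = 0      # currently addressed slot
        self.active = {}   # slot -> tracking id of the touch in it

    def process(self, etype, code, value):
        if etype != EV_ABS:
            return
        if code == ABS_MT_SLOT:
            self.slot = value               # subsequent events address this slot
        elif code == ABS_MT_TRACKING_ID:
            if value == -1:
                self.active.pop(self.slot, None)   # touch ended
            else:
                self.active[self.slot] = value     # new touch in this slot
```

After feeding it the two-finger tap shown below, active holds both tracking IDs while the fingers are down and is empty again afterwards.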

Let's look at a single tap:


E: 0.000001 0003 0039 0387 # EV_ABS / ABS_MT_TRACKING_ID 387
E: 0.000001 0003 0035 2560 # EV_ABS / ABS_MT_POSITION_X 2560
E: 0.000001 0003 0036 2905 # EV_ABS / ABS_MT_POSITION_Y 2905
E: 0.000001 0003 003a 0059 # EV_ABS / ABS_MT_PRESSURE 59
E: 0.000001 0001 014a 0001 # EV_KEY / BTN_TOUCH 1
E: 0.000001 0003 0000 2560 # EV_ABS / ABS_X 2560
E: 0.000001 0003 0001 2905 # EV_ABS / ABS_Y 2905
E: 0.000001 0003 0018 0059 # EV_ABS / ABS_PRESSURE 59
E: 0.000001 0001 0145 0001 # EV_KEY / BTN_TOOL_FINGER 1
E: 0.000001 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +0ms
E: 0.021690 0003 003a 0067 # EV_ABS / ABS_MT_PRESSURE 67
E: 0.021690 0003 0018 0067 # EV_ABS / ABS_PRESSURE 67
E: 0.021690 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +21ms
E: 0.033482 0003 003a 0068 # EV_ABS / ABS_MT_PRESSURE 68
E: 0.033482 0003 0018 0068 # EV_ABS / ABS_PRESSURE 68
E: 0.033482 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +12ms
E: 0.044268 0003 0035 2561 # EV_ABS / ABS_MT_POSITION_X 2561
E: 0.044268 0003 0000 2561 # EV_ABS / ABS_X 2561
E: 0.044268 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +11ms
E: 0.054093 0003 0035 2562 # EV_ABS / ABS_MT_POSITION_X 2562
E: 0.054093 0003 003a 0067 # EV_ABS / ABS_MT_PRESSURE 67
E: 0.054093 0003 0000 2562 # EV_ABS / ABS_X 2562
E: 0.054093 0003 0018 0067 # EV_ABS / ABS_PRESSURE 67
E: 0.054093 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +10ms
E: 0.064891 0003 0035 2569 # EV_ABS / ABS_MT_POSITION_X 2569
E: 0.064891 0003 0036 2903 # EV_ABS / ABS_MT_POSITION_Y 2903
E: 0.064891 0003 003a 0059 # EV_ABS / ABS_MT_PRESSURE 59
E: 0.064891 0003 0000 2569 # EV_ABS / ABS_X 2569
E: 0.064891 0003 0001 2903 # EV_ABS / ABS_Y 2903
E: 0.064891 0003 0018 0059 # EV_ABS / ABS_PRESSURE 59
E: 0.064891 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +10ms
E: 0.073634 0003 0039 -001 # EV_ABS / ABS_MT_TRACKING_ID -1
E: 0.073634 0001 014a 0000 # EV_KEY / BTN_TOUCH 0
E: 0.073634 0003 0018 0000 # EV_ABS / ABS_PRESSURE 0
E: 0.073634 0001 0145 0000 # EV_KEY / BTN_TOOL_FINGER 0
E: 0.073634 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +9ms
We have a tracking ID (387) signalling finger down, as well as a position plus pressure. Then some updates and eventually a tracking ID of -1 (signalling finger up). Notice how there is no ABS_MT_SLOT here - the kernel buffers those too, so while you stay in the same slot (0 in this case) you don't see any events for it. Also notice how you get both single-finger as well as multitouch in the same event stream. This is for backwards compatibility [2].

Ok, time for a two-finger tap:


E: 0.000001 0003 0039 0496 # EV_ABS / ABS_MT_TRACKING_ID 496
E: 0.000001 0003 0035 2609 # EV_ABS / ABS_MT_POSITION_X 2609
E: 0.000001 0003 0036 3791 # EV_ABS / ABS_MT_POSITION_Y 3791
E: 0.000001 0003 003a 0054 # EV_ABS / ABS_MT_PRESSURE 54
E: 0.000001 0003 002f 0001 # EV_ABS / ABS_MT_SLOT 1
E: 0.000001 0003 0039 0497 # EV_ABS / ABS_MT_TRACKING_ID 497
E: 0.000001 0003 0035 3012 # EV_ABS / ABS_MT_POSITION_X 3012
E: 0.000001 0003 0036 3088 # EV_ABS / ABS_MT_POSITION_Y 3088
E: 0.000001 0003 003a 0056 # EV_ABS / ABS_MT_PRESSURE 56
E: 0.000001 0001 014a 0001 # EV_KEY / BTN_TOUCH 1
E: 0.000001 0003 0000 2609 # EV_ABS / ABS_X 2609
E: 0.000001 0003 0001 3791 # EV_ABS / ABS_Y 3791
E: 0.000001 0003 0018 0054 # EV_ABS / ABS_PRESSURE 54
E: 0.000001 0001 014d 0001 # EV_KEY / BTN_TOOL_DOUBLETAP 1
E: 0.000001 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +0ms
E: 0.012909 0003 002f 0000 # EV_ABS / ABS_MT_SLOT 0
E: 0.012909 0003 0039 -001 # EV_ABS / ABS_MT_TRACKING_ID -1
E: 0.012909 0003 002f 0001 # EV_ABS / ABS_MT_SLOT 1
E: 0.012909 0003 0039 -001 # EV_ABS / ABS_MT_TRACKING_ID -1
E: 0.012909 0001 014a 0000 # EV_KEY / BTN_TOUCH 0
E: 0.012909 0003 0018 0000 # EV_ABS / ABS_PRESSURE 0
E: 0.012909 0001 014d 0000 # EV_KEY / BTN_TOOL_DOUBLETAP 0
E: 0.012909 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +12ms
This was a really quick two-finger tap that illustrates the tracking IDs nicely. In the first event we get a touch down, then an ABS_MT_SLOT event. This tells us that subsequent events belong to the other slot, so it's the other finger. There too we get a tracking ID + position. In the next event we get an ABS_MT_SLOT to switch back to slot 0. Tracking ID of -1 means that touch ended, and then we see the touch in slot 1 ended too.

Time for a two-finger scroll:


E: 0.000001 0003 0039 0557 # EV_ABS / ABS_MT_TRACKING_ID 557
E: 0.000001 0003 0035 2589 # EV_ABS / ABS_MT_POSITION_X 2589
E: 0.000001 0003 0036 3363 # EV_ABS / ABS_MT_POSITION_Y 3363
E: 0.000001 0003 003a 0048 # EV_ABS / ABS_MT_PRESSURE 48
E: 0.000001 0003 002f 0001 # EV_ABS / ABS_MT_SLOT 1
E: 0.000001 0003 0039 0558 # EV_ABS / ABS_MT_TRACKING_ID 558
E: 0.000001 0003 0035 3512 # EV_ABS / ABS_MT_POSITION_X 3512
E: 0.000001 0003 0036 3028 # EV_ABS / ABS_MT_POSITION_Y 3028
E: 0.000001 0003 003a 0044 # EV_ABS / ABS_MT_PRESSURE 44
E: 0.000001 0001 014a 0001 # EV_KEY / BTN_TOUCH 1
E: 0.000001 0003 0000 2589 # EV_ABS / ABS_X 2589
E: 0.000001 0003 0001 3363 # EV_ABS / ABS_Y 3363
E: 0.000001 0003 0018 0048 # EV_ABS / ABS_PRESSURE 48
E: 0.000001 0001 014d 0001 # EV_KEY / BTN_TOOL_DOUBLETAP 1
E: 0.000001 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +0ms
E: 0.027960 0003 002f 0000 # EV_ABS / ABS_MT_SLOT 0
E: 0.027960 0003 0035 2590 # EV_ABS / ABS_MT_POSITION_X 2590
E: 0.027960 0003 0036 3395 # EV_ABS / ABS_MT_POSITION_Y 3395
E: 0.027960 0003 003a 0046 # EV_ABS / ABS_MT_PRESSURE 46
E: 0.027960 0003 002f 0001 # EV_ABS / ABS_MT_SLOT 1
E: 0.027960 0003 0035 3511 # EV_ABS / ABS_MT_POSITION_X 3511
E: 0.027960 0003 0036 3052 # EV_ABS / ABS_MT_POSITION_Y 3052
E: 0.027960 0003 0000 2590 # EV_ABS / ABS_X 2590
E: 0.027960 0003 0001 3395 # EV_ABS / ABS_Y 3395
E: 0.027960 0003 0018 0046 # EV_ABS / ABS_PRESSURE 46
E: 0.027960 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +27ms
E: 0.051720 0003 002f 0000 # EV_ABS / ABS_MT_SLOT 0
E: 0.051720 0003 0035 2609 # EV_ABS / ABS_MT_POSITION_X 2609
E: 0.051720 0003 0036 3447 # EV_ABS / ABS_MT_POSITION_Y 3447
E: 0.051720 0003 002f 0001 # EV_ABS / ABS_MT_SLOT 1
E: 0.051720 0003 0036 3080 # EV_ABS / ABS_MT_POSITION_Y 3080
E: 0.051720 0003 0000 2609 # EV_ABS / ABS_X 2609
E: 0.051720 0003 0001 3447 # EV_ABS / ABS_Y 3447
E: 0.051720 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +24ms
[...]
E: 0.272034 0003 002f 0000 # EV_ABS / ABS_MT_SLOT 0
E: 0.272034 0003 0039 -001 # EV_ABS / ABS_MT_TRACKING_ID -1
E: 0.272034 0003 002f 0001 # EV_ABS / ABS_MT_SLOT 1
E: 0.272034 0003 0039 -001 # EV_ABS / ABS_MT_TRACKING_ID -1
E: 0.272034 0001 014a 0000 # EV_KEY / BTN_TOUCH 0
E: 0.272034 0003 0018 0000 # EV_ABS / ABS_PRESSURE 0
E: 0.272034 0001 014d 0000 # EV_KEY / BTN_TOOL_DOUBLETAP 0
E: 0.272034 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +30ms
Note that "scroll" is something handled in userspace, so what you see here is just a two-finger move. Everything in there is something we've already seen, but pay attention to the two middle events: as updates come in for each finger, the ABS_MT_SLOT changes before the updates are sent. The kernel filter for identical events is still in effect, so in the third event we don't get an update for the X position on slot 1. The filtering is per-touchpoint, so in this case this means that slot 1 position x is still on 3511, just as it was in the previous event.

That's all you have to remember, really. If you think of evdev as a serialised way of sending an array of touchpoints, with the slots as the indices, then it should be fairly clear. The rest is then just about actually looking at the touch positions and making sense of them.

Exercise: do a pinch gesture on your touchpad. See if you can track the two fingers moving closer together. Then do the same but only move one finger. See how the non-moving finger gets fewer updates.

That's it. There are a few more details to evdev but much of that is just more event types and codes. The few details you really have to worry about when processing events are either documented in libevdev or abstracted away completely. The above should be enough to understand what your device does, and what goes wrong when your device isn't working. Good luck.

[1] If not, file a bug against systemd's hwdb and CC me so we can put corrections in
[2] We treat some MT-capable touchpads as single-touch devices in libinput because the MT data is garbage

New badge: FUDCon Phnom Penh 2016 Attendee!
You attended FUDCon Phnom Penh 2016!