September 27, 2016

radv: status update or is Talos Principle rendering yet?
The answer is YES!!

I fixed the last bug with instance rendering and Talos renders great on radv now.

Also, with the semi-interesting branch, vkQuake renders too. There are some upstream bugs in spirv/nir that need fixing, and I'm awaiting an upstream resolution on them, but I've included some preliminary fixes in semi-interesting for now; those will go away once the upstream fixes are decided on.

Here's a screenshot:

Changing the boot order in GRUB2 - Fedora 24

At the company where I work there is a “shared” notebook that several managers use.

As you might expect, that notebook runs Windows 10, but since I am now in charge of the IT department, I installed Fedora 24 on it so I could use it too. Naturally, I respected the other users and left it in dual boot.

To make sure the less experienced users don't boot into Fedora by mistake (while I “convert” them all), I changed the boot order in GRUB, and that is the three-step tip I'll leave here.

  • Step 1 = Identify the Windows entry in GRUB:

$ sudo cat /boot/grub2/grub.cfg | grep Windows

In my case, the result was: menuentry ‘Windows 10 (loader) (on /dev/sda1)’ …

We are only interested in what is between the first single quotes, after “menuentry”.

  • Step 2 = Change GRUB's default entry:

$ sudo grub2-set-default 'Windows 10 (loader) (on /dev/sda1)'

  • Step 3 = Update GRUB:

$ sudo grub2-mkconfig -o /boot/grub2/grub.cfg

Done. Now every time the notebook starts, Windows will be pre-selected.
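A note on how this works: grub2-set-default stores the chosen entry in the GRUB environment block, and it only takes effect when /etc/default/grub contains GRUB_DEFAULT=saved (the Fedora default). You can verify what was saved with:

$ sudo grub2-editenv list
saved_entry=Windows 10 (loader) (on /dev/sda1)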

September 26, 2016

Fedora 25 Alpha and Processing.
You can find out more about Processing on the Processing website.
Processing is a flexible software sketchbook and a language for learning how to code within the context of the visual arts. Since 2001, Processing has promoted software literacy within the visual arts and visual literacy within technology. There are tens of thousands of students, artists, designers, researchers, and hobbyists who use Processing for learning and prototyping. 
It is simple to use. You can use it with Java, and also in Python and Android modes.
It comes with many examples and tutorials.
Today I tested it with Fedora 25 Alpha.
I downloaded the 64-bit tgz file and extracted the archive into my home directory.
I used the binary file to run it, and I installed some modes from the Tools menu, under the Modes tab.
I ran one simple sketch to see if it runs without errors.
It works well; see the result:
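For reference, a minimal sketch like this one (my own example, not the exact code from the test) is enough to check that rendering works:

void setup() {
  size(400, 400);  // open a 400x400 window
}

void draw() {
  background(220);                   // clear with light gray
  ellipse(mouseX, mouseY, 40, 40);   // a circle that follows the mouse
}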
GLPI version 9.1

GLPI (Free IT and asset management software) version 9.1 is available. RPMs are available in the remi repository for Fedora ≥ 22 and Enterprise Linux ≥ 5.

As not all plugin projects have released a stable version yet, version 0.90 stays available in the remi repository.

Available in the repository:

  • glpi-9.1-1
  • glpi-data-injection-2.4.2-1

Warning: for security reasons, the installation wizard is only allowed from the server where GLPI is installed. See the configuration file (/etc/httpd/conf.d/glpi.conf) to temporarily allow more clients.

You are welcome to try this version, in a dedicated test environment, give your feedback and post your questions and bugs on:

 

Epiphany Icon Refresh

We have a nice new app icon for Epiphany 3.24, thanks to Jakub Steiner:

[Image: Our new icon. Ignore the version numbers, it's for 3.24.]

Wow pretty!

The old icon was not actually specific to Epiphany, but was taken from the system, so it could be totally different depending on your icon theme. Here’s the icon currently used in GNOME, for comparison:

[Image: The old icon, for comparison]

You can view the new icon in its full 512×512 glory by navigating to about:web:

[Image: It's big (click for full size)]

(The old GNOME icon was a mere 256×256.)

Thanks Jakub!

Who left all this fire everywhere?
If you're paying attention, you saw the news about Yahoo's breach. Five hundred million accounts. That's a whole lot of data if you think about it.  But here's the thing. If you're a security person, are you surprised by this? If you are, you've not been paying attention.

It's pretty well accepted that there are two types of large infrastructures. Those who know they've been hacked, and those who don't yet know they've been hacked. Any group as large as Yahoo probably has more attackers inside their infrastructure than anyone really wants to think about. This is certainly true of every single large infrastructure and cloud provider and consumer out there. Think about that for a little bit. If you're part of a large infrastructure, you have threat actors inside your network right now, probably more than you think.

There are two really important things to think about.

Firstly, if you have any sort of important data, and it's not well protected, odds are very high that it's left your network. Remember that not every hack gets leaked in public; sometimes you'll never find out. On that note, if anyone has data on what percentage of compromises ever become public, I'd love to know.

The most important thing is around how we need to build infrastructure with a security mindset. This is a place where public cloud actually has an advantage. If you have a deployment in a public cloud, you're naturally going to be less trusting of the machines than you would be if they were in racks you can see. Neither is really any safer; you just trust one less, which results in a more secure infrastructure. Gone are the days where having a nice firewall is all the security you need.

Now every architect should assume whatever they're doing has bad actors on the network and in the machines. If you keep this in mind, it really changes how you do things. Storing lots of sensitive data in the same place isn't wise. Break things apart when you can. Make sure data is encrypted as much as possible. Plan for failure: have you done an exercise where you assume the worst, then decide what you do next? This is the new reality we have to exist in. It'll take time to catch up of course, but there's not really a choice. This is one of those change-or-die situations. Nobody can afford to ignore the problems around leaking sensitive data for much longer. The times, they are a-changin'.

Leave your comments on Twitter: @joshbressers
New Firefox 49 features in Fedora

The latest release 49 of Firefox comes with some interesting new features. Here’s what they mean for Fedora users and how to enable them beyond default setup.

Make a safe playground

When you’re testing Firefox, you should create a new fresh profile. If something goes wrong, you won’t lose data. The extra profile also allows you to run additional instances at the same time, each with a different configuration.

Open a terminal and create a new Firefox profile:

$ firefox --ProfileManager

Then run your profile:

$ firefox -P profile_name --no-remote

The --no-remote parameter launches an independent instance, instead of connecting to a running one.

Now for the fun part! Type about:config in the location bar to bring up hidden configuration options. The remaining tips in this article require you to edit these configuration keys. All changes usually require you to restart the browser.

Graphics acceleration

Firefox integrates the Skia graphics library as seen in Google Chrome. Unlike Cairo, the former default, Skia promises faster and parallel graphics rendering on Linux.

Skia is not yet enabled for everything, only for HTML5 canvas elements. For a full Skia experience, which may provide anything from ultra-speed to a crash on startup, set gfx.content.azure.backends to skia.

Electrolysis

Electrolysis not only dissolves water but is also meant to speed up Firefox. When Electrolysis is enabled, all web content runs in a separate process under the plugin-container, emancipated from the main browser.

Firefox 49 is a bit picky, and not every piece of content will work this way. To check content status, open the about:support page and look at the Multiprocess Windows row. If some content is not working with Electrolysis, you can try other options to tune the function. A good start is to disable incompatible extensions and set browser.tabs.remote.autostart to true.

For more instructions, including how to force-enable Electrolysis, refer to the Mozilla Wiki.

Dark times are back

At least for your browser, they are. If you like dark themes on the desktop and want the same for the web, toggle widget.allow-gtk-dark-theme to true. Firefox will use a default dark theme for both the user interface and web content.
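If you plan to flip these keys often, you can also put them in a user.js file inside the testing profile's directory; Firefox applies it at every startup. A sketch collecting the preferences from this article:

// user.js in the profile directory
user_pref("gfx.content.azure.backends", "skia");
user_pref("browser.tabs.remote.autostart", true);
user_pref("widget.allow-gtk-dark-theme", true);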

September 25, 2016

Advanced Multimedia on the Linux Command Line

There was a time when Apple macOS was the best platform to handle multimedia (audio, image, video). This might still be true in the GUI space, but Linux presents a much wider range of possibilities when you go to the command line, especially if you want to:

  • Process hundreds or thousands of files at once
  • Same as above, organized in many folders while keeping the folder structure
  • Same as above but with much finer-grained options, including lossless processing that most GUI tools won't give you

The Open Source community has produced state-of-the-art command line tools such as ffmpeg, exiftool and others, which I use every day to do non-trivial things, along with advanced shell scripting. Sure, you can get these tools installed on Mac or Windows, and you can even use almost all of these recipes on those platforms, but Linux is the native platform for these tools, and it's easier to get the environment ready.

These are my personal notes and I encourage you to understand each step of the recipes and adapt them to your workflows. The article is organized in Audio, Video and Image+Photo sections.

I use Fedora Linux and I mention the Fedora package names to be installed. You can easily find the same packages on your Ubuntu, Debian, Gentoo etc., and use these same recipes.


Audio


Show information (tags, bitrate etc) about a multimedia file

ffprobe file.mp3
ffprobe file.m4v
ffprobe file.mkv

Lossless conversion of all FLAC files into more compatible, but still Open Source, ALAC

ls *flac | while read f; do
	ffmpeg -i "$f" -acodec alac -vn "${f[@]/%flac/m4a}" < /dev/null;
done

Convert all FLAC files into 192kbps MP3

ls *flac | while read f; do
   ffmpeg -i "$f" -qscale:a 2 -vn "${f[@]/%flac/mp3}" < /dev/null;
done

Same as above but under a complex directory structure

# Create identical directory structure under new "alac" folder
find . -type d | while read d; do
   mkdir -p "alac/$d"
done

find . -name "*flac" | sort | while read f; do
   ffmpeg -i "$f" -acodec alac -vn "alac/${f[@]/%flac/m4a}" < /dev/null;
done

Convert APE+CUE, FLAC+CUE, WAV+CUE album-on-a-file into a one file per track ALAC or MP3

If some of your friends have the horrible tendency to commit this crime and rip CDs as one file for the entire CD, there is a way to automate fixing it. APE is the most difficult case and is the one I'll show; FLAC and WAV are shortcuts of this method.

  1. Make a lossless conversion of the APE file into something more manageable, as WAV:
    ffmpeg -i audio-cd.ape audio-cd.wav
  2. Now the magic: use the metadata in the CUE file to split the single file into separate tracks, renaming them accordingly. You'll need the shnsplit command, available in the shntool package on Fedora (to install: dnf install shntool):
    shnsplit -t "%n • %p ♫ %t" audio-cd.wav < audio-cd.cue
  3. Now you have a series of nicely named WAV files, one per CD track. Let's convert them into lossless ALAC using one of the recipes above:
    ls *wav | while read f; do
       ffmpeg -i "$f" -acodec alac -vn "${f[@]/%wav/m4a}" < /dev/null;
    done

    This will get you lossless ALAC files converted from the intermediary WAV files. You can also convert them into FLAC or MP3 using one of the other recipes above.

Now the files are ready for your tagger.


Video


Add chapters and soft subtitles from SRT file to M4V/MP4 movie

This is a lossless and fast process: chapters and subtitles are added as tags and streams to the file; the audio and video streams are not re-encoded.

  1. Make sure your SRT file is UTF-8 encoded:
    bash$ file subtitles_file.srt
    subtitles_file.srt: ISO-8859 text, with CRLF line terminators
    

It is not UTF-8 encoded; it is some ISO-8859 variant, which I need to identify in order to convert it correctly. My example uses a Brazilian Portuguese subtitle file, which I know is ISO-8859-1 (latin1) encoded, because most Latin-script languages use this encoding.

  2. Let's convert it to UTF-8:
    bash$ iconv -f latin1 -t utf8 subtitles_file.srt > subtitles_file_utf8.srt
    bash$ file subtitles_file_utf8.srt
    subtitles_file_utf8.srt: UTF-8 Unicode text, with CRLF line terminators
    
  3. Check chapters file:
    bash$ cat chapters.txt
    CHAPTER01=00:00:00.000
    CHAPTER01NAME=Chapter 1
    CHAPTER02=00:04:31.605
    CHAPTER02NAME=Chapter 2
    CHAPTER03=00:12:52.063
    CHAPTER03NAME=Chapter 3
    …
    
  4. Now we are ready to add them all to the movie, along with setting the movie name and embedding a cover image so the movie looks nice in your media player's content list. Note that this process writes the movie file in place and does not create another file, so make a backup of your movie while you are learning:
    MP4Box -ipod \
           -itags 'track=The Movie Name:cover=cover.jpg' \
           -add 'subtitles_file_utf8.srt:lang=por' \
           -chap 'chapters.txt:lang=eng' \
           movie.mp4
    

The MP4Box command is part of GPac.
OpenSubtitles.org has a large collection of subtitles in many languages, and you can search its database with the IMDB ID of the movie. ChapterDB has the same for chapter files.


Decrypt and rip a DVD the lossless way

  1. Make sure you have the RPMFusion and the Negativo17 repos configured
  2. Install libdvdcss and vobcopy
    dnf -y install libdvdcss vobcopy
  3. Mount the DVD and rip it; this has to be done as root:
    mount /dev/sr0 /mnt/dvd;
    cd /target/folder;
    vobcopy -m /mnt/dvd .

You’ll get a directory tree with decrypted VOB and BUP files. You can generate an ISO file from them or, much more practically, use HandBrake to convert the DVD titles into MP4/M4V (more compatible with a wide range of devices) or MKV/WEBM files.
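If you take the ISO route, a minimal sketch with genisoimage (the target folder name is an example; -dvd-video expects the VIDEO_TS layout that vobcopy -m preserves):

genisoimage -dvd-video -udf -o movie.iso /target/folder/movie_name/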


Convert 240fps video into 30fps slow motion, the lossless way

Modern iPhones can record videos at 240 or 120fps, so when you watch them at 30fps they look like slow motion. But regular players will play them at 240 or 120fps, hiding the slo-mo effect.
We'll need to handle audio and video in different ways: the video FPS change from 240 to 30 is lossless, while the audio stretching is lossy.

# make sure you have the right packages installed
dnf install mkvtoolnix sox gpac faac
#!/bin/bash

# Script by Avi Alkalay
# Freely distributable

f="$1"
ofps=30
noext=${f%.*}
ext=${f##*.}

# Get original video frame rate
ifps=`ffprobe -v error -select_streams v:0 -show_entries stream=r_frame_rate -of default=noprint_wrappers=1:nokey=1 "$f" < /dev/null  | sed -e 's|/1||'`
echo

# exit if not high frame rate
[[ "$ifps" -ne 120 ]] && [[ "$ifps" -ne 240 ]] && exit

fpsRate=$((ifps/ofps))
fpsRateInv=`awk "BEGIN {print $ofps/$ifps}"`

# lossless video conversion into 30fps through repackaging into MKV
mkvmerge -d 0 -A -S -T \
	--default-duration 0:${ofps}fps \
	"$f" -o "v$noext.mkv"

# lossless repack from MKV to MP4
ffmpeg -loglevel quiet -i "v$noext.mkv" -vcodec copy "v$noext.mp4"
echo

# extract subtitles, if original movie has it
ffmpeg -loglevel quiet -i "$f" "s$noext.srt"
echo

# resync subtitles using similar method with mkvmerge
mkvmerge --sync "0:0,${fpsRate}" "s$noext.srt" -o "s$noext.mkv"

# get simple synced SRT file
rm "s$noext.srt"
ffmpeg -i "s$noext.mkv" "s$noext.srt"

# remove undesired formatting from subtitles
sed -i -e 's|<font size="8"><font face="Helvetica">\(.*\)</font></font>|\1|' "s$noext.srt"

# extract audio to WAV format
ffmpeg -loglevel quiet -i "$f" "$noext.wav"

# make audio longer based on ratio of input and output framerates
sox "$noext.wav" "a$noext.wav" speed $fpsRateInv

# lossy stretched audio conversion back into AAC (M4A) 64kbps (because we know the original audio was mono 64kbps)
faac -q 200 -w -s --artist a "a$noext.wav"

# repack stretched audio and video into original file while removing the original audio and video tracks
cp "$f" "${noext}-slow.${ext}"
MP4Box -ipod -rem 1 -rem 2 -rem 3 -add "v$noext.mp4" -add "a$noext.m4a" -add "s$noext.srt" "${noext}-slow.${ext}"

# remove temporary files 
rm -f "$noext.wav" "a$noext.wav" "v$noext.mkv" "v$noext.mp4" "a$noext.m4a" "s$noext.srt" "s$noext.mkv"

1 Photo + 1 Song = 1 Movie

If the audio is already AAC-encoded, create an MP4/M4V file:

ffmpeg -loop 1 -framerate 0.2 -i photo.jpg -i song.m4a -shortest -c:v libx264 -tune stillimage -vf scale=960:-1 -c:a copy movie.m4v

The above method will create a very efficient 0.2 frames per second (-framerate 0.2) H.264 video from the photo while simply adding the audio losslessly. Such a very-low-frames-per-second video may present sync problems with subtitles on some players. In that case, simply remove the -framerate 0.2 parameter to get a regular 25fps video, at the cost of a bigger file size.
The -vf scale=960:-1 parameter tells FFmpeg to resize the image to 960px width and calculate the height proportionally. Remove it in case you want a video with the same resolution as the photo. A 12-megapixel photo (around 4032×3024) will get you a near-4K video.
If the audio is MP3, create an MKV file:

ffmpeg -loop 1 -framerate 0.2 -i photo.jpg -i song.mp3 -shortest -c:v libx264 -tune stillimage -vf scale=960:-1 -c:a copy movie.mkv

If audio is not AAC/M4A but you still want an M4V file, convert audio to AAC 192kbps:

ffmpeg -loop 1 -framerate 0.2 -i photo.jpg -i song.mp3 -shortest -c:v libx264 -tune stillimage -vf scale=960:-1 -c:a aac -strict experimental -b:a 192k movie.mkv

See more about FFMPEG photo resizing.


Image and Photo


Move images with no EXIF header to another folder

mkdir noexif;
exiftool -filename -T -if '(not $datetimeoriginal or ($datetimeoriginal eq "0000:00:00 00:00:00"))' *jpg | xargs -i mv "{}" noexif/

Set EXIF photo create time based on file create time

Warning: use this only if image files have correct creation time on filesystem and if they don’t have an EXIF header.

exiftool -overwrite_original '-DateTimeOriginal< ${FileModifyDate}' *CR2 *JPG *jpg

Rotate photos based on EXIF’s Orientation flag, plus make them progressive. Lossless

jhead -autorot -cmd "jpegtran -progressive '&i' > '&o'" -ft *jpg

Rename photos to a more meaningful filename

This process will rename the silly, sequential, confusing and meaningless photo file names that come from your camera into a readable, sortable and useful format. Example:

IMG_1234.JPG → 2015.07.24-17.21.33 • Max playing with water【iPhone 6s✚】.jpg

Note that the new file name has the date and time the photo was taken, what's in the photo, and the camera model that was used.

  1. First keep the original filename, as it came from the camera, in the OriginalFileName tag:
    exiftool -overwrite_original '-OriginalFileName<${filename}' *CR2 *JPG *jpg
  2. Now rename:
    exiftool '-filename<${DateTimeOriginal} 【${Model}】%.c.%e' -d %Y.%m.%d-%H.%M.%S *CR2 *JPG *jpg
  3. Remove the ‘0’ index if not necessary:
    \ls *JPG *jpg | while read f; do
        nf=`echo "$f" | sed -e 's/0.JPG/.jpg/i'`;
        t=`echo "$f" | sed -e 's/0.JPG/1.jpg/i'`;
        [[ ! -f "$t" ]] && mv "$f" "$nf";
    done
  4. Optional: make lower case extensions:
    \ls *JPG | while read f; do
        nf=`echo "$f" | sed -e 's/JPG/jpg/'`;
        mv "$f" "$nf";
    done
  5. Optional: simplify camera name, for example turn “Canon PowerShot G1 X” into “Canon G1X” and make lower case extension at the same time:
    ls *JPG *jpg | while read f; do
        nf=`echo "$f" | sed -e 's/Canon PowerShot G1 X/Canon G1X/;
          s/iPhone 6s Plus/iPhone 6s✚/;
          s/Canon PowerShot SD990 IS/Canon SD990 IS/;
          s/JPG/jpg/;'`;
        mv "$f" "$nf";
    done

You'll get file names like 2015.07.24-17.21.33 【Canon 5D Mark II】.jpg. If you took more than one photo in the same second, exiftool will automatically add an index before the extension.


Even more semantic photo file names based on Subject tag

\ls *【*】* | while read f; do
	s=`exiftool -T -Subject "$f"`;
	nf=`echo "$f" | sed -e "s/ 【/ • $s 【/; s/\:/∶/g;"`;
	mv "$f" "$nf";
done

Full rename: a consolidation of some of the previous commands

exiftool '-filename<${DateTimeOriginal} • ${Subject} 【${Model}】%.c.%e' -d %Y.%m.%d-%H.%M.%S *CR2 *JPG *jpg

Set photo “Creator” tag based on camera model

  1. First list all cameras that contributed photos to current directory:
    exiftool -T -Model *jpg | sort -u

Output is the list of camera models in these photos:

    Canon EOS REBEL T5i
    DSC-H100
    iPhone 4
    iPhone 4S
    iPhone 5
    iPhone 6
    iPhone 6s Plus
  2. Now set creator on photo files based on what you know about camera owners:
    CRE="John Doe";    exiftool -overwrite_original -creator="$CRE" -by-line="$CRE" -Artist="$CRE" -if '$Model=~/DSC-H100/'            *.jpg
    CRE="Jane Black";  exiftool -overwrite_original -creator="$CRE" -by-line="$CRE" -Artist="$CRE" -if '$Model=~/Canon EOS REBEL T5i/' *.jpg
    CRE="Mary Doe";    exiftool -overwrite_original -creator="$CRE" -by-line="$CRE" -Artist="$CRE" -if '$Model=~/iPhone 5/'            *.jpg
    CRE="Peter Black"; exiftool -overwrite_original -creator="$CRE" -by-line="$CRE" -Artist="$CRE" -if '$Model=~/iPhone 4S/'           *.jpg
    CRE="Avi Alkalay"; exiftool -overwrite_original -creator="$CRE" -by-line="$CRE" -Artist="$CRE" -if '$Model=~/iPhone 6s Plus/'      *.jpg

Recursively search people in photos

If you geometrically mark people's faces and their names in your photos using tools such as Picasa, you can easily search for the photos that contain “Suzan” or “Marcelo” this way:

exiftool -fast -r -T -Directory -FileName -RegionName -if '$RegionName=~/Suzan|Marcelo/' .

-Directory, -FileName and -RegionName specify the things you want to see in the output. You can remove -RegionName for a cleaner output.
The -r is to search recursively. This is pretty powerful.


Make photos timezone-aware

Your camera will tag your photos only with the local time, in the CreateDate or DateTimeOriginal tags. There is another set of tags, GPSDateStamp and GPSTimeStamp, that must contain the UTC time the photos were taken, but your camera won't help you here. Fortunately, you can derive these values if you know the timezone in which the photos were taken. Here are two examples, one for photos taken in timezone -02:00 (Brazil daylight saving time) and one for timezone +09:00 (Japan):

exiftool -overwrite_original '-gpsdatestamp<${CreateDate}-02:00' '-gpstimestamp<${CreateDate}-02:00' *.jpg
exiftool -overwrite_original '-gpsdatestamp<${CreateDate}+09:00' '-gpstimestamp<${CreateDate}+09:00' Japan_Photos_folder

Use exiftool to check results on a modified photo:

exiftool -s -G -time:all -gps:all 2013.10.12-23.45.36-139.jpg
[EXIF]          CreateDate                      : 2013:10:12 23:45:36
[Composite]     GPSDateTime                     : 2013:10:13 01:45:36Z
[EXIF]          GPSDateStamp                    : 2013:10:13
[EXIF]          GPSTimeStamp                    : 01:45:36

This shows that the local time when the photo was taken was 2013:10:12 23:45:36. Using exiftool to set the timezone to -02:00 actually means finding the correct UTC time, which can be seen in GPSDateTime as 2013:10:13 01:45:36Z. The difference between these two tags gives us the timezone. So we can read the photo time as 2013:10:12 23:45:36-02:00.


Geotag photos based on time and Moves mobile app records

Moves is an amazing app for your smartphone that simply records, for yourself only (not social, not shared), everywhere you go and every place you visit, 24 hours a day.

  1. Make sure all photos' CreateDate or DateTimeOriginal tags are correct and precise; achieve this simply by setting the camera clock correctly before taking the pictures.
  2. Login and export your Moves history.
  3. Geotag the photos, telling ExifTool the timezone they were taken in, -08:00 (Las Vegas) in this example:
    exiftool -overwrite_original -api GeoMaxExtSecs=86400 -geotag ../moves_export/gpx/yearly/storyline/storyline_2015.gpx '-geotime<${CreateDate}-08:00' Folder_with_photos_from_trip_to_Las_Vegas

Some important notes:

  • It is important to put the entire ‘-geotime’ parameter inside single quotes ('), as I did in the example.
  • The ‘-geotime’ parameter is needed even if image files are timezone-aware (as per previous tutorial).
  • The ‘-api GeoMaxExtSecs=86400’ parameter should not be used unless the photo was taken more than 90 minutes away from any movement detected by the GPS.

Concatenate all images together in one big image

  • In 1 column and 8 lines:
    montage -mode concatenate -tile 1x8 *jpg COMPOSED.JPG
  • In 8 columns and 1 line:
    montage -mode concatenate -tile 8x1 *jpg COMPOSED.JPG
  • In a 4×2 matrix:
    montage -mode concatenate -tile 4x2 *jpg COMPOSED.JPG

The montage command is part of the ImageMagick package.

HackLab Almería retrospective, 2012 to 2015 and a bit more

This weekend I had the privilege of being invited by GDG Spain, and in particular by ALMO, to present the experience of the HackLab Almería activity at the Spanish GDG Summit 2016:

Although I arrived feeling very unsure, because I am very critical of what I consider my own failures, once I learned about the ups and downs of the local GDG groups I found that we are not doing so badly and that we have experiences that are quite interesting to others.

Along the way it helped me reconsider part of the work done and to document our things more clearly for our own people: I think it is a good idea for all of us to give it a review.

There may be the odd error or omission. All opinions are strictly personal and nobody is obliged to share them. I am not as interested in debating the claims as in correcting errors or inconsistencies. Keep in mind that this is not a complete record of activities, because that would be enooormous, just a schematic retrospective.

It is written as a mind map using Freemind 1.0.1. The format may seem cumbersome, but the constraints of time and of presenting the information did not allow me anything better. Sorry for the inconvenience. You can download the file containing the map here: 201609-GDG-Experiencia_HackLabAl.zip

PS: this same post has been published on the HackLab Almería forum.

Meeting Adrian Mârza

Pursuing my goal of getting to know the translator community, I had the opportunity to meet the coordinator of the Romanian translation of Fedora during a trip to Bucharest.

Adrian Marza is originally from Iași (pronounced: yash) and is particularly involved in the Mozilla community as a volunteer, for which he travels on various occasions, notably to Paris, Berlin, Ljubljana, etc. You can browse his blog, which talks about it a little, among other subjects often related to the free software world.

His blog is written in English, but my conviction is this: each of us should make the effort to write in our own language, of which we are all ambassadors, and a small article written in Romanian from time to time would benefit his language. Francophone friends, you should do the same!

I presented him with my translation project, which approaches language transversally, via tools that are structured not by project but by platform, running tests and shedding light on the evolution across a whole Linux distribution: my proposal for a global approach.

In the course of many exchanges about languages and their evolution, we discussed a few tools, notably the Transvision platform, which I would like to deploy across the whole of Fedora 25. Remember to use the top menu; it is a collection of tools, not a single service! He himself did not know about this menu, in a tool nevertheless designed by the (French) Mozilla community. Their translation platform is also worth browsing, as it is used by both LibreOffice and Mozilla: Pootle.

Apparently, I am not the only one to have this kind of global-view idea: in addition to the tools cited in my proposal, he introduced me to a tool that consolidates translations across free software: Amagama.

The work I did to survey the state of AppData translations also interested him; perhaps we will launch an instance for the Romanian language one day? But is the way free software is translated really suited to this kind of challenge?

In any case, he very much liked discovering Pology, the tool for syntactic, orthographic and grammatical quality, especially when he saw that these tests are already run frequently by Robert Antoni Buj Gelonch.

I greatly enjoyed this meeting, which allowed me to share our common challenges and what deeply motivates us: contributing, at our modest scale, to a better society.

Fedora Ambassadors: Measuring Success

Open Source Advocate

I have been a Linux dabbler since 1994, when I first tried SuSE Linux. I became a full-time Linux user when I converted my laptop to Linux in October of 2006. Like many Linux users I sampled many different distributions while choosing the one that best fit my personality. Eventually I settled on Ubuntu with the release of Ubuntu 7.10 (Gutsy Gibbon). Despite choosing Ubuntu, I always saw myself as a Linux and open source advocate first and an Ubuntu advocate second. I respected and valued that Linux and open source allowed people the freedom to make personal choices.

I helped organize the Ubuntu New York Local Team in their drive to become an approved team starting in November of 2008. In January of 2009 my application to become an Ubuntu Member was approved. Between November of 2008 and October of 2012 I helped organize and attended 93 Ubuntu, Linux or FOSS events. This included the first FOSSCON that was held at RIT in June of 2010.

In addition to local events I was involved in the global Ubuntu Community as a member of the Ubuntu Beginners Team, Ubuntu LoCo Council and Ubuntu Community Council. I was also fortunate to be sponsored to attend three Ubuntu Developer Summits (UDS). It was during my time serving on the Ubuntu Community Council that I yearned to have more time to get back to what I felt my core mission was; advocacy. I knew that when my term on the Ubuntu Community Council ended in November of 2015 that I could refocus my efforts.

Fedora Ambassador

I became a Fedora Ambassador on March 30th of 2009, but prior to December of 2015 I was more focused on Ubuntu-related activities than Fedora. In late October of 2015 I reached out to a long-time friend and FOSS Rock Star, Remy DeCausmaker. Remy helped me find a few places I could contribute in the Fedora Project. Through these efforts I met Justin Flory, who has an amazing passion for Open Source and Fedora. Almost a year later, I am very active as a contributor to the Fedora Project as an author and Ambassador. I have published 23 articles on Fedora Magazine, including 19 How Do You Fedora interviews. Thanks to Justin inviting me along, I also attended BrickHack and HackMIT as a Fedora Ambassador. HackMIT involved two six-hour drives, which allowed for a great amount of time to discuss and reflect on being a Fedora Ambassador. One of the topics in the discussion was how to measure the success of an event.

Measuring Success

Over the many years of being an open source advocate I have learned that the method to measure success can take many different forms. When organizing events for the New York LoCo we measured success by how many people attended the event. When I went to technical conferences, success was measured by the number of CDs distributed. As a speaker, I measured success by the number of people who attended the presentation. With Fedora Magazine I look at the number of views and comments for each article.

On the long ride home from HackMIT 2016, Justin and I discussed how to measure the success of our efforts. The Fedora Project has a badge for attending HackMIT 2016, and ten people have earned the badge. When you remove Justin and me, that means 8 out of 1000 participants earned the Fedora HackMIT 2016 badge. What does this mean? I took a closer look at the badge and learned that six of the eight registered their FAS account during the event. Two already had FAS accounts. The numbers lead to several questions:

  • Will the six people who created an account to earn the badge become Fedora Contributors?
  • Will any of the people who did not earn the badge contribute to Fedora?
  • Is the badge a good measure of a successful outreach event?

The first two are good questions. It is difficult to track the first question and impossible to track the second one. The third question is the one that concerns me the most. I think badges are a good way to measure an inreach event, but a poor measure of an outreach effort. I would like to see a better way to measure the success of an event.

Fedora Ambassadors: Mission Statement

The mission of a Fedora Ambassador is clearly stated on the wiki page.

"Ambassadors are the representatives of Fedora. Ambassadors ensure the public understand Fedora's principles and the work that Fedora is doing. Additionally Ambassadors are responsible for helping to grow the contributor base, and to act as a liaison between other FLOSS projects and the Fedora community."

The Fedora Badge granted to attendees does not measure any of these items. I know that I personally handed out 200 fliers about the badge. In doing so I spoke to roughly 80% of the participants and had several good conversations about the Four Foundations. I showed excitement when people were using FOSS in their projects. I answered questions about the best lightweight web server. I answered questions about why I chose Fedora. I expressed excitement when I found an entire team using Ubuntu Linux. All of those interactions embody the spirit of the mission. On the long drive home I posed a few questions as we discussed HackMIT:

  • Was the overall awareness of Fedora increased?
  • Was the overall awareness of Linux increased?
  • Was the overall awareness of FOSS increased?
  • Are the participants more likely to check Fedora out in the future?
  • Are the participants more likely to open source their work?

To answer these questions would require a survey. The survey would have to be relatively short, and not require a FAS account or require the person to identify themselves. This will make it more likely that participants would complete the survey. Beyond evaluating a single event the results for event categories could be combined and compared. Take all the answers for hackathon events and compare them to all the answers for maker faire events. With this data it might be possible to know what type of events provide the best opportunity for Ambassadors to make an impact. This would help the Fedora Community determine how to best spend limited funds and volunteer hours.

6 months a task warrior

A while back I added a task to my taskwarrior database: evaluate how things were going at 6 months of use. Today is that day. 🙂

A quick recap: about 6 months ago I switched from my ad-hoc list in a vim session and emails in a mailbox to using taskwarrior for tracking my various tasks ( http://taskwarrior.org/ )

Some stats:

Category Data
Pending 17
Waiting 20
Recurring 12
Completed 1094
Deleted 18
Total 1161
Annotations 1059
Unique tags 45
Projects 10
Blocked tasks 0
Blocking tasks 0
Data size 2.0 MiB
Undo transactions 3713
Sync backlog transactions 0
Tasks tagged 39.8%
Oldest task 2016-03-24-13:45
Newest task 2016-09-25-10:03
Task used for 6mo
Task added every 3h
Task completed every 4h
Task deleted every 10d
Average time pending 4d
Average desc length 32 characters

Overall I have gotten a pretty good amount of use from task. I do find it a bit sad that I have only been completing a task every 4 hours and adding one every 3 hours. At that rate things aren’t going to be great after a while. I have been using annotations a lot, which I think is a good thing. Looking back at old tasks I can get a better idea of what I did to solve a task or more context around it (I always try and add links to bugzilla or trac or pagure if there’s a ticket or bug involved).
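If you have never tried annotations, they are a single command away; the task number and URL below are made-up examples:

$ task add fix the broken compose project:releng
$ task 42 annotate https://pagure.io/releng/issue/1234
$ task 42 done

The statistics table above is simply the output of the built-in report:

$ task stats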

I’d say I am happier for using task and will continue using it. It’s very nice to be able to see what all is pending and easily add things when people ask you for things and you are otherwise busy. I’d recommend it to anyone looking for a nice way to track tasks.

Clickable Pungi logs

When debugging problems with composes, the logs left behind by all stages of the compose run are tremendously helpful. However, they are rather difficult to read due to the sheer volume. Being exposed to them quite intensively for close to a year helps, but it still is a nasty chore.

The most accessible way to look at the logs is via a web browser on kojipkgs. It's just httpd displaying the raw log files on the disk.

It took me too long to figure out that this could be made much more pleasant than copy-pasting stuff from the wall of text.

How about a user script that would run in Greasemonkey and allow clicking through to different log files or even Koji tasks?

[Screenshot: Is this not better?]

Turns out it's not that difficult.

Did you know that when Firefox displays a text/plain file, it internally creates an HTML document with all the content in one <pre> tag?

The whole script essentially just runs a search and replace operation on the whole page. We can have a bunch of functions that take the whole content as text and return it slightly modified.

The first step will make URLs clickable.

function link_urls(str) {
  let pat = /https?:\/\/(www\.)?[-a-zA-Z0-9@:%._\+~#=]{2,256}\.[a-z]{2,6}\b([-a-zA-Z0-9@:%_\+.~#?&//=]*)/g;
  return str.replace(pat, '<a href="$&">$&</a>');
}

I didn't write the crazy regular expression myself. I got it from Stack Overflow.

The next step makes paths to other files in the same compose clickable.

function link_local_files(url, pathname, mount, str) {
  let pat = new RegExp(mount + pathname + '(/[^ ,"\n]+)', 'g');
  return str.replace(pat, function (path, file) {
    return '<a href="' + url + file + '">' + path + '</a>';
  });
}

The last piece is not particularly general: linking Koji task identifiers.

function link_tasks(taskinfo, str) {
  // link lines consisting only of a task ID, then IDs after known prefixes
  return str.replace(/^\d{8,}$/gm, '<a href="' + taskinfo + '$&">$&</a>')
            .replace(/(Runroot task failed|'task_id'): (\d{8,})/g,
                     '$1: <a href="' + taskinfo + '$2">$2</a>');
}

Tying all these steps together and passing in the extra arguments is rather trivial but not very generic.

window.onload = function () {
  let origin = window.location.origin;
  let pathname = window.location.pathname.split('/', 4).join('/');
  let url = origin + pathname;
  let taskinfo = 'https://koji.fedoraproject.org/koji/taskinfo?taskID=';
  let mount = '/mnt/koji';

  var content = document.getElementsByTagName('pre')[0];
  var text = content.innerHTML;
  content.innerHTML = link_local_files(
    url, pathname, mount,
    link_tasks(taskinfo, link_urls(text))
  );
}

If you find this useful, feel free to grab the whole script with a header.
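For reference, the header is just the standard Greasemonkey metadata block; a sketch (the @match pattern is my assumption, point it at the compose URLs you browse):

// ==UserScript==
// @name   Clickable Pungi logs
// @match  https://kojipkgs.fedoraproject.org/compose/*
// @grant  none
// ==/UserScript==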

A GNU Start

I am thrilled to say that last week I became the newest member of the Fedora Engineering team. I will be working on the applications that help the Fedora community create a fantastic Linux distribution. I’m excited to be joining the team and I look forward to working with everyone!

Previously, I worked on the Pulp project, which is a content management system used in Red Hat Satellite 6. I learned a great deal while working with some excellent engineers on this project.

September 24, 2016

Bodhi 2.2.2 released

This is another in a series of bug fix releases for Bodhi this week. In this release, we've fixed
the following issues:

  • Disallow comment text to be set to the NULL value in the database #949.
  • Fix autopush on updates that predate the 2.2.0 release #950.
  • Don't wait on mashes when there aren't any 68de510c.
Fedora 25: webkitgtk4 update knocks out Evolution, Epiphany and others

As part of the update to GNOME 3.22 for Fedora 25, the webkitgtk4 packages are updated to version 2.14.0-1 as well, which however causes Evolution to no longer display mails and Epiphany to no longer display web pages. Potentially, all applications that use webkitgtk4 are affected.

If you have already installed the update and are affected by the problem, you can work around it by downgrading the webkitgtk4 packages with

su -c'dnf downgrade webkitgtk4\*'

However, with future updates you have to make sure that webkitgtk4 is not updated to the broken version again. With dnf this can be achieved with the additional parameter "-x", which tells dnf to ignore updates for the given package. In the current case, the update command for dnf would look like this:

su -c'dnf update -x webkitgtk4\*'

September 23, 2016

We’re looking for a GNOME developer

We in the Red Hat desktop team are looking for a junior software developer who will work on GNOME. Particularly in printing and document viewing areas of the project.

The location of the position is Brno, Czech Republic, where you'd join a truly international team of desktop developers. It's a junior position, so candidates fresh out of university, or even still studying, are welcome. We require solid English communication skills and experience with C (and ideally C++, too). A huge plus is experience with GNOME development and participation in the community.

Interested? You can directly apply for the position at jobs.redhat.com or if you have any question, you can write me: eischmann [] redhat [] com.



Blender nightly in Flatpak

Over the past week I started on an experiment: building the git master branch of Blender in Flatpak.

And I decided to go crazy on it, and also build all its dependencies from their respective git master branch.

I've just pushed the result to my flatpak repo, and it seems to work in my limited testing.

As a result, you can now try out the bleeding edge of Blender development safely with Flatpak, and here's how.

First, install the Freedesktop Flatpak runtime:

$ wget https://sdk.gnome.org/keys/gnome-sdk.gpg
$ flatpak remote-add --user --gpg-import=./gnome-sdk.gpg gnome https://sdk.gnome.org/repo/
$ flatpak install --user gnome org.freedesktop.Platform 1.4

Next, install the Blender app from the master branch of my repo:

$ flatpak remote-add --user --no-gpg-verify bochecha https://www.daitauha.fr/static/flatpak/repo-apps/
$ flatpak install --user bochecha org.blender.app master

That's it!
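Once installed, launch it like any other Flatpak app:

$ flatpak run org.blender.app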

I want to be clear that I will not build this every day (or every night) as a real "nightly". I just don't have the computing resources to do that, and every build is a big hit on my laptop. (Did I mention this includes building Boost from git master? 😅)

However I'll try to rebuild it from time to time, to pick up updates.

Also, I want to note that this is an experiment in pushing the bleeding edge for Blender to the maximum with Flatpak. If upstream Blender eventually provided nightly builds as Flatpak (for which I'd be happy to help them), they probably would compromise on which dependencies to build from stable releases, and which ones to build from their git master branches.

For example, they probably wouldn't use Python from master like I do. Right now that means this build uses the future 3.7 release of Python, even though 3.6 hasn't been released yet. ☻

Another bad idea in this build is Boost from master, which takes ages just to fetch its myriad of git submodules, let alone build it.

But for an experiment in craziness, it works surprisingly well.

Try it out, and let me know how it goes!

What’s new in 389 Directory Server 1.3.5

As a member of the 389 Directory Server (389DS) core team, I am always excited about our new releases. We have some really great features in 1.3.5. However, our changelogs are always large so I want to just touch on a few of my favourites.

389 Directory Server is an LDAPv3 compliant server, used around the world for Identity Management, Authentication, Authorisation and much more. It is the foundation of the FreeIPA project’s server. As a result, it’s not something we often think about or even get excited for: but every day many of us rely on 389DS to be correct, secure and fast behind the scenes.

389 Directory Server version 1.3.5 is available now in the official Fedora 24, Fedora 25, and rawhide repositories.

Tuning database cache size

Database cache tuning is something that is frequently discussed around 389DS to gain the best performance from your server. We have overhauled the automatic database tuning code to detect the memory available on the system more accurately, split it better between backends, and make better decisions if the RAM requested is too much.

For those who manually tune their backend memory usage, we now have better detection of whether your tuning is going to cause stability issues. We issue better warnings and tell you exactly which parameters you need to alter to correct problems before they happen. Putting the config values you need to alter in the error message saves time and confusion by directing you, the administrator, to exactly what you need to do to improve your server's health and stability.

We have also eliminated an entire class of issues with database import and re-indexing by automatically tuning the buffer sizes during the process: No more tweaking database cache sizes to import those large databases!

Auditing for attempted changes

We have added a new feature called the auditfail log. Previously, if a change was made, we would log who made the change and what they changed to the audit log. But if someone attempted a change and it failed, we would not log it.

In 1.3.5 this has changed. You can enable the auditfail log by setting, in cn=config:

nsslapd-auditfaillog-enabled: on

When a change is attempted and fails, the reason why (i.e. incorrect object class, lack of permission) and the data they attempted to change are logged. This is great for debugging applications, but also a great win for security, as we can see if someone is attempting to change data they do not have access to.
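If you want to flip that switch on a live server, one way is an online change over LDAP; a sketch, assuming you bind as the Directory Manager:

ldapmodify -D "cn=Directory Manager" -W <<EOF
dn: cn=config
changetype: modify
replace: nsslapd-auditfaillog-enabled
nsslapd-auditfaillog-enabled: on
EOF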

Hardening and stability

We have been applying static and dynamic analysis tools to 389DS during this development cycle. Combined with our extensive test suites, we have proactively closed many stability bugs (overflows, use after free, double free, segfaults and more) during development. This has made 1.3.5, in my view, the most reliable, secure version of 389DS we have ever released.

Conclusion

389DS 1.3.5 is out now in Fedora 24: if you are running 389DS or FreeIPA, you are already hopefully seeing the benefits of this release!

There are many more changes than this in the 1.3.5 release: to learn more, see our release notes. Our team’s goal has been to eliminate administrative issues (not document, eliminate – never to be seen again!), improve performance and stability, and to provide better, correct defaults in the server. So many of these changes are “out of sight” to users and even administrators; but they are invaluable for improving services like FreeIPA that build upon 389 Directory Server.

New badge: F25 i18n Test Day Participant !
F25 i18n Test Day Participant: You helped test i18n features in Fedora 25! Thanks!
New badge: F24 i18n Test Day Participant !
F24 i18n Test Day Participant: You helped test i18n features in Fedora 24! Thanks!

September 22, 2016

A bug fix release, primarily focusing on mashing issues

Bodhi 2.2.1 is a bug fix release, primarily focusing on mashing issues:

  • Register date locked during mashing #952.
  • UTF-8 encode the updateinfo before writing it to disk #955.
  • Improved logging during updateinfo generation #956.
  • Removed some unused code 07ff664f.
  • Fix some incorrect imports 9dd5bdbc and b1cc12ad.
  • Rely on self.skip_mash to detect when it is ok to skip a mash ad65362e.
Importing a Public SSH Key

Rex was setting up a server and wanted some help.  His hosting provider had set him up with a username and password for authentication. He wanted me to log in to the machine under his account to help out.  I didn't want him to have to give me his password.  Rex is a smart guy, but he is not a Linux user.  He is certainly not a system administrator.  The system was CentOS.  The process was far more difficult to walk him through than it should have been.


CORRECTION: I had the keys swapped. It is important to keep the private key private, and that is the one in $HOME/.ssh/id_rsa

I use public key cryptography all the time to log in to remote systems.  The OpenSSH client uses a keypair that is stored on my laptop under $HOME/.ssh.  The public key is in $HOME/.ssh/id_rsa.pub and the private one is in $HOME/.ssh/id_rsa.  In order for the ssh command to use this keypair to authenticate me when I try to log in, the key stored in $HOME/.ssh/id_rsa.pub first needs to be copied to the remote machine's $HOME/.ssh/authorized_keys file.  If the permissions on this file are wrong, or the permissions on the directory $HOME/.ssh are wrong, ssh will refuse my authentication attempt.

Trying to work this out over chat with someone unfamiliar with the process was frustrating.

This is what the final product looks like.

rex@drmcs [~]# ls -la $HOME/.ssh/
total 12
drwx------ 2 rex rex 4096 Sep 21 13:01 ./
drwx------ 9 rex rex 4096 Sep 21 13:28 ../
-rw------- 1 rex rex  421 Sep 21 13:01 authorized_keys

This should be scriptable.

#!/bin/bash
SSH_DIR=$HOME/.ssh/
AUTHN_FILE=$SSH_DIR/authorized_keys

SSH_KEY="PASTE PUBLIC KEY HERE, ALL ON ONE LINE, THEN REMOVE THE NEXT LINE"
exit 0

mkdir -p $SSH_DIR
chmod 700 $SSH_DIR
touch $AUTHN_FILE
chmod 600 $AUTHN_FILE
echo "$SSH_KEY" >> "$AUTHN_FILE"

However, it occurred to me that he really should not even be adding me to his account; instead, he should create a separate account for me and give me access only to that, which would let me look around but not touch. Second attempt:

#!/bin/bash

NEW_USER="NEW USERNAME"
SSH_KEY="PASTE PUBLIC KEY HERE, ALL ON ONE LINE, THEN REMOVE THE NEXT LINE"
exit 0

/usr/sbin/useradd $NEW_USER
SSH_DIR=/home/$NEW_USER/.ssh/
AUTHN_FILE=$SSH_DIR/authorized_keys

mkdir -p $SSH_DIR
chmod 700 $SSH_DIR
touch $AUTHN_FILE 
chmod 600 $AUTHN_FILE
echo "$SSH_KEY" >> "$AUTHN_FILE"

chown -R $NEW_USER:$NEW_USER $SSH_DIR

To clean up the account when I am done, Rex can run:

sudo /usr/sbin/userdel -r admiyo

Which will not only remove my account, but also the directory /home/admiyo
If I have left a login open, he will see:

userdel: user admiyo is currently used by process 3561
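As an aside: when you already have password access to the account yourself, the stock ssh-copy-id tool does the whole permission dance for you:

$ ssh-copy-id rex@drmcs

The scripts above are for the opposite case, where the account owner has to paste in someone else's public key.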
#photos happenings

Out of a mojito bar in South Beach with a lobotomised plastic picnic spoon and a crew of control freaks.

3.22 is here

We recently released GNOME 3.22. It will be in Fedora Workstation 25. Go look at the video — it’s awesome!

GNOME Photos has again taken significant strides forward, just like it did six months ago in 3.20. One of the big things that we added this time was sharing. This nicely rounds out our existing online accounts integration, and complements the work we did on editing six months ago.

[Screenshot: sharing in GNOME Photos]

Sharing is an important step towards a more tightly integrated online account experience in GNOME. We have been interested in a desktop-wide sharing service for some time. With Flatpak portals becoming a reality, I hope that the sharing feature in Photos can be spun off into a portal for GNOME.

Thanks to Umang Jain, our GSoC intern this summer for working on sharing.

We overhauled a lot of hairy architectural issues, which will let us have nicer overview grids in the near future. Alessandro created a Flatpak. This means that going forward, you can easily try out the nightly builds of Photos thanks to the Flatpak support in GNOME Software 3.22.

[Screenshot: the GNOME Photos Flatpak in GNOME Software]

Thanks to Kalev Lember for the wonderful screenshot.

The future

I think that we are reaching a point where we can recommend Photos to a wider group of users. With editing and sharing in place, we have filled some of the bigger gaps in the user experience that we want to offer. Yes, there are some missing features and rough edges that we are aware of, so we are going to spend the next six months addressing the ones that are most important. You can look at our roadmap for the full picture, but I am going to highlight a few.

Better overview grids (GNOME #690623)

We have been using GtkIconView to display the grid of thumbnails that we call the overview. GtkIconView has been around for a long while, but it has some issues, both visual and performance-related. Therefore, we want to replace it with GtkFlowBox so that (a) the application remains responsive while we are populating the grid, and (b) we can have really pretty visuals.

Eventually, we want this:

[Mockup: the overview grid we are aiming for]

Import from device (GNOME #751212)

This is one of the biggest missing features, in my opinion. We really need a way to import content from removable devices and cameras that doesn’t involve mucking around with files and directories.

Petr Stetka has already started working on this, but I am sure he will appreciate any help with this.

More sharing (GNOME #766031)

Last but not least, I definitely like showing off on Facebook, and so do you! So I want to add a Facebook share-point and possibly a few more.

Come, join us

If any of this interests you, then feel free to jump right in. We have a curated list of newcomer bugs and a guide for those who are relatively new. If you are an experienced campaigner, you can look at the roadmap for more significant tasks.

For any help, discussions or general chitchat, #photos on GIMPNet is the place to be.


Accessing your Fedora from Windows with RDP
To be able to connect, we first need to install Xrdp on Fedora

# dnf install xrdp  -y

Let's start the service

# systemctl start xrdp

Let's enable it to start at boot

# systemctl enable xrdp

Let's adjust the firewall rules

# firewall-cmd --add-port=3389/tcp --permanent

# firewall-cmd --reload

Once that's done, we connect from Windows with RDP.
In the display settings, change the color depth to "True Color (24 bit)",
as shown in the image below



With that done, now let's connect
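From the Windows side you can also launch the client directly from a prompt (the address is an example; use your Fedora machine's IP or hostname):

mstsc /v:192.168.1.10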

Reference guides for this tip
https://www.server-world.info/en/note?os=Fedora_24&p=desktop&f=7
https://www.vivaolinux.com.br/dica/Acesso-remoto-ao-Raspbian-com-xrdp/
Logging to Elasticsearch made simple with syslog-ng

Elasticsearch is gaining momentum as the ultimate destination for log messages. There are two major reasons for this:

  • You can store arbitrary name-value pairs coming from structured logging or message parsing.
  • You can use Kibana as a search and visualization interface.

Logging to Elasticsearch the traditional way

Originally, you could only send logs to Elasticsearch via Logstash. But the problem with Logstash is that it is quite heavyweight: it requires Java to run, and most of it was written in Ruby. While the use of Ruby makes it easy to extend Logstash with new features, it uses too many resources to be deployed universally. It is not something to be installed on thousands of servers, virtual machines or containers.

The workaround for this problem is to use the different Beats data shippers, which are lighter on resources. If you also need reliability and scalability, you need buffering as well. For this purpose, you need an intermediate database or message broker: Beats and Logstash support Redis and Apache Kafka.

Screenshot_2016-09-21_18-17-55

If you look at the above architecture, you’ll see that you need to learn many different pieces of software to build an efficient, reliable and scalable logging system around Elasticsearch. Each of these tools has a different purpose, different requirements and a different configuration.

Logging to Elasticsearch made simple

The good news is that syslog-ng can fulfill all of these roles. Most of syslog-ng is written in efficient C code, so it can be installed even in containers without extra resource overhead. It uses PatternDB for message parsing, which uses an efficient Radix-tree based algorithm instead of resource-hungry regular expressions. Of course regexp and a number of other parsers are also available, implemented in efficient C or Rust code. The only part of the pipeline where Java is needed is when the central syslog-ng server sends the log messages to the Elasticsearch server. In other words, only the Elasticsearch destination driver of syslog-ng uses Java, and it uses the official JAR client libraries from Elasticsearch for maximum compatibility.

Screenshot_2016-09-21_18-18-11

As syslog-ng has disk-based buffering, you do not need external buffering solutions to enhance scalability and reliability, making your logging infrastructure easier to create and maintain. Disk-based buffering has been available in syslog-ng Premium Edition (the commercial version of syslog-ng) for a long time, and recently also became part of syslog-ng Open Source Edition (OSE) 3.8.1.
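
To give a rough idea of what this looks like (a hedged sketch rather than a copy-paste recipe: the option names follow the elasticsearch2() driver in syslog-ng OSE 3.8, while s_src, the cluster name and the URL are placeholders you would adapt), a destination might be declared like this:

destination d_elastic {
    elasticsearch2(
        client-mode("http")
        cluster("my-cluster")                    # placeholder cluster name
        cluster-url("http://localhost:9200")     # placeholder Elasticsearch URL
        index("syslog-${YEAR}.${MONTH}.${DAY}")  # daily indices
        type("syslog")
    );
};

log {
    source(s_src);          # assumes an existing source called s_src
    destination(d_elastic);
};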

How to get started with syslog-ng and Elasticsearch

The syslog-ng application comes with detailed documentation to get you started and help you fine-tune your installation.

If you want to get started with parsing messages – replacing grok – see the following links:

Are you stuck?

If you have any questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by e-mail or even in real time via chat. For a long list of possibilities, check our contact page at https://syslog-ng.org/contact-us/. On Twitter I’m available as @PCzanik.

A new Amazon seller scam

Amazon, so convenient, yet so annoying when things go wrong. This useless seller looks like a new type of scam to me. It seems to go like this:

  1. New seller appears, offering just about everything in Amazon’s catalog.
  2. You don’t notice this when buying, but the shipping window is open-ended (from a few days up to months). However you are optimistic, after all most Amazon orders arrive pretty quickly.
  3. Seller very quickly notifies you that the item has shipped. Great!
  4. Nothing arrives after a few weeks.
  5. You check the feedback, and now it looks terrible.
  6. You notice that the “tracking number” is completely bogus. Just a made-up number and a random shipping company (the seller is apparently based in Shenzhen, but somehow the bogus tracking number comes from Singapore Post?)
  7. You try to cancel the order. However Amazon won’t let you do that, because the item has been dispatched and it’s still in the shipping window (which, remember, doesn’t end for another couple of months).
  8. You contact the seller. Amazon forces sellers to respond within 3 days. This seller does respond! … to every message with the same nonsense autoresponse.
  9. As a result you can’t cancel the order either.
  10. There is no other way to escalate the problem or cancel the order (even though this clearly violates UK law).
  11. Seller now has your money, you have no product, and no way to cancel for another few months.
  12. Profit!

Comments about OARS and CSM age ratings

I’ve had quite a few comments from people stating that using age rating classification values based on American culture is wrong. So far I’ve been using the Common Sense Media research (and various other psychology textbooks) to essentially clean-room implement a content-rating-to-appropriate-age algorithm.

Whilst I do agree that other cultures have different sensitivities (e.g. smoking in Uganda, references to Nazis in Germany), there doesn’t appear to be much research on the suggested age ratings for those categories in those specific countries. Lots of things are outright banned from sale for various reasons (which the populace may completely ignore), but there don’t seem to be many statistics that back up the various anecdotal statements. For instance, are there any US-specific guidelines that say that the age rating for playing a game that involves taking illegal drugs should be 18, rather than the 14 which is inferred from CSM? Or that the age rating should be 25+ for any game that features drinking alcohol in Saudi Arabia?
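
To make the shape of the algorithm concrete, the core of it boils down to a lookup table like the sketch below (my own illustration, not the gnome-software code; every entry is a placeholder except drugs-illegal, which uses the CSM-inferred 14 mentioned above):

# Map (OARS category, intensity) pairs to a minimum age inferred from CSM.
CSM_AGES = {
    ("violence-cartoon", "mild"): 3,      # placeholder value
    ("violence-cartoon", "intense"): 10,  # placeholder value
    ("drugs-illegal", "moderate"): 14,    # CSM-inferred value discussed above
}

def minimum_age(attributes):
    # The overall rating is the strictest (highest) age implied by any attribute.
    return max((CSM_AGES.get(attr, 0) for attr in attributes), default=0)

print(minimum_age({("drugs-illegal", "moderate")}))  # prints 14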

Suggestions (especially references) welcome. Thanks!

September 21, 2016

All systems go
New status good: Everything seems to be working. for services: Fedora Wiki, Fedora People, Zodbot IRC bot, The Koji Buildsystem, Darkserver, Tagger, Package Database, Fedora pastebin service, Blockerbugs, Badges, FedoraHosted.org Services, Mirror Manager, Koschei Continuous Integration, Ipsilon, Mirror List, Fedora Infrastructure Cloud, Account System, Package maintainers git repositories, Fedora websites, Documentation website, COPR Build System, Package Updates Manager, Ask Fedora, Fedora Packages App, FreeMedia, Fedora Messaging Bus, Fedora elections, Mailing Lists, Fedora Calendar
GNOME 3.22 Released

The GNOME Community has just announced the official release of GNOME 3.22. GNOME 3.22 — which is slated to be used as the desktop environment for Fedora Workstation 25 — provides a multitude of new features, including the updated Files application, and comprehensive Flatpak integration with the Software application.

Fedora users that want to try out the new features in GNOME 3.22 can install a pre-release version of Fedora 25, which currently contains a pre-release of GNOME 3.22, but will be updated to include the stable 3.22 release. Alternatively, if you are running Fedora 24, and want to try out individual applications from the GNOME 3.22 release, these can be installed via Flatpak.

Files Application (nautilus)

One of the major applications in the GNOME family that got updates for the 3.22 release was the Files application (nautilus). As previously reported here on the Fedora Magazine, Files has a nifty new batch file renaming ability now baked in.

Batch File renaming in GNOME 3.22

Another neat new feature in Files is updated sorting and view options controls, allowing you to switch between the grid and list view with a single click, and simplification of the zoom and sorting options. These changes were implemented after a round of usability testing by Outreachy intern Gina Dobrescu.

Updated Sorting controls in the Files application

Software Application

GNOME Software 3.22

The Software application in 3.22 is also updated, with the landing page showing more application tiles. Star ratings, which were introduced in a previous release, are now more prominently displayed, and new colour-coded badges indicate if an application is Free Software. Installation of Flatpak applications from Flatpak repositories is now also supported in the Software application.

Keyboard Settings

The keyboard settings in 3.22 are also updated, providing easier ways to search, browse and configure your keyboard settings and shortcuts.

Keyboard Settings in GNOME 3.22

More Information

For more information on what makes up the 3.22 release, check out the official release announcement, and the release notes.


There are scheduled downtimes in progress
New status scheduled: planned outage for services: Fedora Wiki, Fedora People, Zodbot IRC bot, The Koji Buildsystem, Darkserver, Tagger, Package Database, Fedora pastebin service, Blockerbugs, Badges, FedoraHosted.org Services, Mirror Manager, Koschei Continuous Integration, Ipsilon, Mirror List, Fedora Infrastructure Cloud, Account System, Package maintainers git repositories, Fedora websites, Documentation website, COPR Build System, Package Updates Manager, Ask Fedora, Fedora Packages App, FreeMedia, Fedora Messaging Bus, Fedora elections, Mailing Lists, Fedora Calendar
Linux application Flowblade version 1.8.
This video editor is a multitrack non-linear video editor for Linux released under the GPL 3 license.
I tried to use it on the Fedora Linux distro (Fedora 25 Alpha), but it did not work for me.
I don't see anything like python-gi-cairo in Fedora.
I also filed this issue on the project's GitHub, so maybe it will be fixed.

According to the official webpage, the software comes with:

Features

Editing:

    3 move tools
    3 trim tools
    4 methods to insert / overwrite / append clips on the timeline
    Drag'n'Drop clips on the timeline
    Clip and compositor parenting with other clips
    Max. 9 combined video and audio tracks available

Image compositing:

    6 compositors. Mix, zoom, move and rotate source video with keyframed animation tools
    19 blends. Standard image blend modes like Add, Hardlight and Overlay are available
    40+ pattern wipes.

Image and audio filtering:

    50+ image filters: color correction, image effects, distorts, alpha manipulation, blur, edge detection, motion effects, freeze frame, etc.
    30+ audio filters: keyframed volume mixing, echo, reverb, distort, etc.

Supported editable media types:

    Most common video and audio formats, depends on installed MLT/FFMPEG codecs
    JPEG, PNG, TGA, TIFF graphics file types
    SVG vector graphics
    Numbered frame sequences

Output encoding:

    Most common video and audio formats, depends on installed MLT/FFMPEG codecs
    Users can define rendering by setting FFmpeg args individually
Microsoft aren't forcing Lenovo to block free operating systems
There's a story going round that Lenovo have signed an agreement with Microsoft that prevents installing free operating systems. This is sensationalist, untrue and distracts from a genuine problem.

The background is straightforward. Intel platforms allow the storage to be configured in two different ways - "standard" (normal AHCI on SATA systems, normal NVMe on NVMe systems) or "RAID". "RAID" mode is typically just changing the PCI IDs so that the normal drivers won't bind, ensuring that drivers that support the software RAID mode are used. Intel have not submitted any patches to Linux to support the "RAID" mode.

In this specific case, Lenovo's firmware defaults to "RAID" mode and doesn't allow you to change that. Since Linux has no support for the hardware when configured this way, you can't install Linux (distribution installers will boot, but won't find any storage device to install the OS to).

Why would Lenovo do this? I don't know for sure, but it's potentially related to something I've written about before - recent Intel hardware needs special setup for good power management. The storage driver that Microsoft ship doesn't do that setup. The Intel-provided driver does. "RAID" mode prevents the Microsoft driver from binding and forces the user to use the Intel driver, which means they get the correct power management configuration, battery life is better and the machine doesn't melt.

(Why not offer the option to disable it? A user who did would end up with a machine that doesn't boot, and if they managed to figure that out they'd have worse power management. That increases support costs. For a consumer device, why would you want to? The number of people buying these laptops to run anything other than Windows is minuscule.)

Things are somewhat obfuscated due to a statement from a Lenovo rep: "This system has a Signature Edition of Windows 10 Home installed. It is locked per our agreement with Microsoft." It's unclear what this is meant to mean. Microsoft could be insisting that Signature Edition systems ship in "RAID" mode in order to ensure that users get a good power management experience. Or it could be a misunderstanding regarding UEFI Secure Boot - Microsoft do require that Secure Boot be enabled on all Windows 10 systems, but (a) the user must be able to manage the key database and (b) there are several free operating systems that support UEFI Secure Boot and have appropriate signatures. Neither interpretation indicates that there's a deliberate attempt to prevent users from installing their choice of operating system.

The real problem here is that Intel do very little to ensure that free operating systems work well on their consumer hardware - we still have no information from Intel on how to configure systems to ensure good power management, we have no support for storage devices in "RAID" mode and we have no indication that this is going to get better in future. If Intel had provided that support, this issue would never have occurred. Rather than be angry at Lenovo, let's put pressure on Intel to provide support for their hardware.

Fedora / RISC-V stage4 autobuilder is up and running

Bootstrapping Fedora on the new RISC-V architecture continues apace.

I have now written a small autobuilder which picks up new builds from the Fedora Koji build system and attempts to build them in the clean “stage4” environment.

Getting latest packages from Koji ...
Running: 0 (max: 16) Waiting to start: 7
uboot-tools-2016.09.01-1.fc25.src.rpm                       |  11 MB  00:10     
uboot-tools-2016.09.01-1.fc25 build starting
tuned-2.7.1-2.fc25.src.rpm                                  | 136 kB  00:00     
tuned-2.7.1-2.fc25 build starting
rubygem-jgrep-1.4.1-1.fc25.src.rpm                          |  24 kB  00:00     
rubygem-jgrep-1.4.1-1.fc25 build starting
qpid-dispatch-0.6.1-3.fc25.src.rpm                          | 1.3 MB  00:01     
qpid-dispatch-0.6.1-3.fc25 build starting
python-qpid-1.35.0-1.fc25.src.rpm                           | 235 kB  00:01     
python-qpid-1.35.0-1.fc25 build starting
java-1.8.0-openjdk-aarch32-1.8.0.102-4.160812.fc25.src.rpm  |  53 MB  00:54     
java-1.8.0-openjdk-aarch32-1.8.0.102-4.160812.fc25 build starting
NetworkManager-strongswan-1.4.0-1.fc25.src.rpm              | 290 kB  00:00     
NetworkManager-strongswan-1.4.0-1.fc25 build starting
MISSING DEPS: NetworkManager-strongswan-1.4.0-1.fc25 (see
logs/NetworkManager-strongswan/1.4.0-1.fc25/root.log)
   ... etc ...

Given that we don’t have GCC in the stage4 environment yet, almost all of them currently fail due to missing dependencies, but we’re hoping to correct that soon. In the meantime a few packages that have no C dependencies can actually compile. This way we’ll gradually build up the number of packages for Fedora/RISC-V, and that process will accelerate rapidly once we’ve got GCC.

You can browse the built packages and build logs here: https://fedorapeople.org/groups/risc-v/


A new cmocka release version 1.1.0

It took more than a year, but Jakub and I finally released a new version of cmocka today. If you don’t know it yet, cmocka is a unit testing framework for C with support for mock objects!
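
For anyone who has not used it before, a complete cmocka test is only a few lines (a minimal sketch against the cmocka 1.x API; the add() function is just a stand-in for real code under test):

#include <stdarg.h>
#include <stddef.h>
#include <setjmp.h>
#include <cmocka.h>

/* Trivial function under test. */
static int add(int a, int b) { return a + b; }

static void test_add(void **state)
{
    (void) state; /* unused */
    assert_int_equal(add(2, 2), 4);
}

int main(void)
{
    const struct CMUnitTest tests[] = {
        cmocka_unit_test(test_add),
    };
    return cmocka_run_group_tests(tests, NULL, NULL);
}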

We set the version number to 1.1.0 because we have some new features:

  • Support to catch multiple exceptions
  • Support to verify call ordering (for mocking)
  • Support to pass initial data to test cases
  • A will_return_maybe() function for ignoring mock returns
  • Subtests for groups using TAP output
  • Support to write multiple XML output files if you have several groups in a test
  • and improved documentation

We have some more features we are working on. I hope it will not take such a long time to release them.

Dgplug contributor grant recipient Trishna Guha

I am happy to announce that Trishna Guha is the recipient of a dgplug contributor grant for 2016. She is an upstream contributor in Fedora Cloud SIG, and hacks on Bodhi in her free time. Trishna started her open source journey just a year back during the dgplug summer training 2015, you can read more about her work in a previous blog post. She has also become an active member of the local Pune PyLadies chapter.

The active members of dgplug.org contribute funding every year, which we then use to help out community members as required. For example, we previously used this fund to pay accommodation costs for our women contributors during PyCon. This year we are happy to be able to assist Trishna Guha to attend PyCon India 2016. Her presence and expertise with upstream development will help diversity efforts at various levels. As she is still a college student, we found many students are interested in talking to and learning from her. So, if you are coming down to PyCon India this weekend, remember to visit the Red Hat booth, and have a chat with Trishna.

GNOME 3.22 core apps

GNOME 3.22 is scheduled to be released today. Along with this release come brand new recommendations for distributions on which applications should be installed by default, and which applications should not. I’ve been steadily working on these since joining the release team earlier this year, and I’m quite pleased with the result.

When a user installs a distribution and boots it for the first time, his or her first impression of the system will be influenced by the quality of the applications that are installed by default. Selecting the right set of default applications is critical to achieving a quality user experience. Installing redundant or overly technical applications by default can leave users confused and frustrated with the distribution. Historically, distributions have selected wildly different sets of default applications. There’s nothing inherently wrong with this, but it’s clear that some distributions have done a much better job of this than others. For instance, a default install of Debian 8 with the GNOME desktop includes two different chess applications, GNOME Chess and XBoard. Debian fails here: these applications are redundant, for starters, and the latter app looks like an ancient Windows 95 application that’s clearly out of place with the rest of the system. It’s pretty clear that nobody is auditing the set of default applications here, as I doubt anyone would have intentionally included XBoard; it turns out that XBoard gets pulled in by Recommends via an obscure chess engine that’s pulled in by another Recommends from GNOME Chess, so I presume this is just an accident that nobody has ever cared to fix. Debian is far from the only offender here; you can find similar issues in most distributions. This is the motivation for providing the new default app recommendations.

Most distributions will probably ignore these, continue to select default apps on their own, and continue to do so badly. However, many distributions also strive to provide a pure, vanilla GNOME experience out-of-the-box. Such distributions are the target audience for the new default app guidelines. Fedora Workstation has already adopted them as the basis for selecting which apps will be present by default, and the result is a cleaner out-of-the-box experience.

Update: I want to be clear that these guidelines are not appropriate for all distros. Most distros are not interested in providing a “pure GNOME experience.” Distros should judge for themselves if these guidelines are relevant to them.

Classifications

The canonical source of these classifications is maintained in JHBuild, but that’s not very readable, so I’ll list them out here. The guidelines are as follows:

  • Applications classified as core are intended to be installed by default. Distributions should only claim to provide a vanilla GNOME experience if all such applications are included out-of-the-box.
  • Applications classified as extra are NOT intended to be installed by default. Distributions should not claim to provide a vanilla GNOME experience if any such applications are included out-of-the-box.
  • Applications classified as Incubator are somewhere in between. Incubator is a classification for applications that are designed to be core apps, but which have not yet reached a high enough level of quality that we can move them to core and recommend they be installed by default. If you’re looking for somewhere to help out in GNOME, the apps marked Incubator would be good places to start.

Core apps

Distributions that want to provide a pure GNOME experience MUST include all of the following apps by default:

  • Archive Manager (File Roller)
  • Boxes
  • Calculator
  • Calendar
  • Characters (gnome-characters, not gucharmap)
  • Cheese
  • Clocks
  • Contacts
  • Disk Usage Analyzer (Baobab)
  • Disks
  • Document Viewer (Evince)
  • Documents
  • Files (Nautilus)
  • Font Viewer
  • Help (Yelp)
  • Image Viewer (Eye of GNOME)
  • Logs (gnome-logs, not gnome-system-log)
  • Maps
  • Photos
  • Screenshot
  • Software
  • System Monitor
  • Terminal
  • Text Editor (gedit)
  • Videos (Totem)
  • Weather
  • Web (Epiphany)

Notice that all core apps present generic names (though it’s somewhat debatable if Cheese qualifies as a generic name, I think it sounds better than alternatives like Photo Booth). They all also (more or less) follow the GNOME Human Interface Guidelines.

The list of core apps is not set in stone. For example, if Photos or Documents eventually learn to provide good file previews, we wouldn’t need Image Viewer or Document Viewer anymore. And now that Files has native support for compressed archives (new in 3.22!), we may not need Archive Manager much longer.

Currently, about half of these applications are arbitrarily marked as “system” applications in Software, and are impossible to remove. We’ve received complaints about this and are mostly agreed that it should be possible to remove all but the most critical core applications (e.g. allowing users to remove Software itself would clearly be problematic). Unfortunately this didn’t get fixed in time for GNOME 3.22, so we will need to work on improving this situation for GNOME 3.24.

Incubator

Distributions that want to provide a pure GNOME experience REALLY SHOULD NOT include any of the following apps by default:

  • Dictionary
  • Music
  • Notes (Bijiben)
  • Passwords and Keys (Seahorse)

We think these apps are generally useful and should be in core; they’re just not good enough yet. Please help us improve them.

These are not the only apps that we would like to include in core, but they are the only ones that both (a) actually exist and (b) have actual releases. Take a look at our designs for core apps if you’re interested in working on something new.

Extra apps

Distributions that want to provide a pure GNOME experience REALLY SHOULD NOT include any of the following apps by default:

  • Accerciser
  • Builder
  • dconf Editor
  • Devhelp
  • Empathy
  • Evolution
  • Hex Editor (ghex)
  • gitg
  • Glade
  • Multi Writer
  • Nemiver
  • Network Tools (gnome-nettool)
  • Polari
  • Sound Recorder
  • To Do
  • Tweak Tool
  • Vinagre

Not listed are Shotwell, Rhythmbox, or other applications hosted on git.gnome.org that are not (or are no longer) part of official GNOME releases. These applications REALLY SHOULD NOT be included either.

Note that the inclusion of applications in core versus extra is not a quality judgment: that’s what Incubator is for. Rather, we classify apps as extra when we do not believe they would be beneficial to the out-of-the-box user experience. For instance, even though Evolution is (in my opinion) the highest-quality desktop mail client that exists today, it can be very difficult to configure, the user interface is large and unintuitive, and most users would probably be better served by webmail. Some applications listed here are special purpose tools that are probably not generally useful to the typical user (like Sound Recorder). Other applications, like Builder, are here because they are developer tools, and developer tools are inherently extremely confusing to nontechnical users. (Update: I originally used Polari instead of Builder as the developer tool example in the previous sentence. It was a bad example.)

Games

What about games? It’s OK to install a couple of the higher-quality GNOME games by default, but none are necessary, and it doesn’t make sense to include too many, since they vary in quality. For instance, Fedora Workstation does not include any games, but Ubuntu installs GNOME Mahjongg, GNOME Mines, and GNOME Sudoku. This is harmless, and it seems like a good list. I might add GNOME Chess, or perhaps GNOME Taquin. I’ve omitted games from the list of extra apps up above, as they’re not my focus here.

Third party applications

It’s OK to include a few third-party, non-GNOME applications by default, but they should be kept to a reasonable minimum. For example Fedora Workstation includes Firefox (instead of Epiphany), Problem Reporting (ABRT), SELinux Troubleshooter, Shotwell (instead of GNOME Photos), Rhythmbox, and LibreOffice Calc, Draw, Impress, and Writer. Note that LibreOffice Base is not included here, because it’s not reasonable to include a database management tool on systems designed for nontechnical users. The LibreOffice start center is also not included, because it’s not an application.

Summing up

Distributions, consider following our recommendations when deciding what should be installed by default. Other distributions should feel encouraged to use these classifications as the basis for downstream package groups. At the very least, distributions should audit their set of default applications and decide for themselves if they are appropriate. A few distributions have some horrendous technical stuff visible in the overview by default; Fedora Workstation shows it does not have to be this way.

GNOME Software and Age Ratings

With all the tarballs for GNOME 3.22 out the door, the master branch of gnome-software is now open to new features. Along with the usual cleanups and speedups, one new feature I’ve been working on is finally merging the age ratings work.

screenshot-from-2016-09-21-10-22-36

The age ratings are provided by the upstream-supplied OARS metadata in the AppData file (which can be generated easily online) and then an age classification is generated automatically using the advice from the appropriately-named Common Sense Media group. At the moment I’m not doing any country-specific mapping, although something like this will be required to show appropriate ratings when handling topics like alcohol and drugs.

At the moment the only applications with ratings in Fedora 26 will be Steam games, but I’ve also emailed any maintainer that includes an <update_contact> email address in the appdata file that also identifies as a game in the desktop categories. If you ship an application with an AppData and you think you should have an age rating please use the generator and add the extra few lines to your AppData file. At the moment there’s no requirement for the extra data, although that might be something we introduce just for games in the future.
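
For reference, the generated data is just a short block of AppStream XML inside your application's <component> element; a hand-written example (the attribute IDs follow OARS 1.0, and the two values here are purely illustrative) looks roughly like this:

<content_rating type="oars-1.0">
  <content_attribute id="violence-cartoon">mild</content_attribute>
  <content_attribute id="drugs-alcohol">moderate</content_attribute>
</content_rating>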

I don’t think many other applications will need the extra application metadata, but if you know of any adult-only applications (e.g. in Fedora there’s an application for the sole purpose of downloading p0rn) please let me know and I’ll contact the maintainer and ask what they think about the idea. Comments, as always, welcome. Thanks!

Rust meets Fedora

What is Rust?

Rust is a system programming language which runs blazingly fast, and prevents almost all crashes, segfaults, and data races. You might wonder exactly why yet another programming language is useful, since there are already so many of them. This article aims to explain why.

Safety vs. control

Why Rust?

You may have seen a diagram of the above spectrum. On one side there’s C/C++, which has more control of the hardware it’s running on. Therefore it lets the developer optimize performance by exercising finer control over the generated machine code. However, this isn’t very safe; it’s easier to cause a segfault, or security bugs like Heartbleed.

On the other hand, there are languages like Python, Ruby, and JavaScript where the developer has little control but creates safer code. The code can’t generate a segfault, although it can generate exceptions which are fairly safe and contained.

Somewhere in the middle, there’s Java and a few other languages which are a mixture of these characteristics. They offer some control of the hardware they run on but try to minimize vulnerabilities.

Rust is a bit different, and doesn’t fall in this spectrum. Instead it gives the developer both safety and control.

Specialties of Rust

Rust is a system programming language like C/C++, except that it gives the developer fine-grained control over memory allocations. A garbage collector is not required. It has a minimal runtime, and runs very close to the bare metal. The developer has greater guarantees about the performance of the code. Furthermore, anyone who knows C/C++ can understand and write code for this language.

Rust runs blazingly fast, since it’s a compiled language. It uses LLVM as the compiler backend and can tap into a large suite of optimizations. In many areas it can perform better than C/C++. Like JavaScript, Ruby, and Python, it’s safe by default, meaning it doesn’t cause segfaults, dangling pointers, or null pointers.

Another important feature is the elimination of data races. Nowadays, most computers have multiple cores and many threads running in parallel. However, it’s tough for developers to write good parallel code, and Rust removes much of that difficulty by ruling out data races at compile time. There are two key concepts Rust uses to eliminate data races (a short example follows the list):

  • Ownership. Each piece of data has exactly one owner; when a value is moved to a new variable, the previous variable can no longer be used to access it.
  • Borrowing. Owned values can be borrowed, via references, to allow usage for a certain period of time.
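
Here is a tiny sketch of both ideas in code (my own illustration, using only the standard library):

fn main() {
    let s = String::from("Fedora");

    // Borrowing: length() takes a reference, so s is only lent out.
    let n = length(&s);
    println!("{} is {} bytes", s, n); // s is still usable here

    // Ownership: the string is moved into t; using s after this line
    // would be a compile-time error.
    let t = s;
    println!("moved into t: {}", t);
}

fn length(text: &str) -> usize {
    text.len()
}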

Rust in Fedora 24 and 25

To get started, just install the package:

sudo dnf install rust

Here’s a demo program you can create. Edit a file with this content called helloworld.rs on your system:

fn main() {
    println!("Hello, Rust is running on Fedora 25 Alpha!");
}

Then use rustc to compile the program and run the resulting executable:

rustc helloworld.rs
./helloworld

Contributing to Rust testing

Run the following command to install the latest testing version on Fedora:

sudo dnf --enablerepo=updates-testing --refresh --best install rust

Drop us a mail at test@lists.fedoraproject.org or join #fedora-qa on Freenode IRC to get started!


Featured image based off this image from Unsplash

Distinct RBAC Policy Rules

The ever elusive bug 968696 is still out there, due, in no small part, to the distributed nature of the policy mechanism. One question I asked myself as I chased this beastie is “how many distinct policy rules do we actually have to implement?” This is an interesting question because, if we can find an automated way to answer it, that can lead to an automated way of transforming the policy rules themselves, and thus to a more unified approach to policy.

The set of policy files used in a Tripleo overcloud have around 1400 rules:

$ find /tmp/policy -name \*.json | xargs wc -l
   73 /tmp/policy/etc/sahara/policy.json
   61 /tmp/policy/etc/glance/policy.json
  138 /tmp/policy/etc/cinder/policy.json
   42 /tmp/policy/etc/gnocchi/policy.json
   20 /tmp/policy/etc/aodh/policy.json
   74 /tmp/policy/etc/ironic/policy.json
  214 /tmp/policy/etc/neutron/policy.json
  257 /tmp/policy/etc/nova/policy.json
  198 /tmp/policy/etc/keystone/policy.json
   18 /tmp/policy/etc/ceilometer/policy.json
  135 /tmp/policy/etc/manila/policy.json
    3 /tmp/policy/etc/heat/policy.json
   88 /tmp/policy/auth_token_scoped.json
  140 /tmp/policy/auth_v3_token_scoped.json
 1461 total

Granted, that might not be distinct rule lines, as some are multi-line, but most rules seem to be on a single line. There is some whitespace, too.

Many of the rules, while written differently, can map to the same implementation. For example:

“rule: False”

can reduce to

“False”

which is the same as

“!”

All are instances of oslo_policy._checks.FalseCheck.
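
A quick way to convince yourself of this, using the same oslo_policy APIs as the analysis tool at the end of this post (the rule names here are made up; note that an unparseable rule such as a bare "False" fails closed into a FalseCheck):

from oslo_policy import policy
import oslo_policy._checks as _checks

# Two spellings of an always-false rule; the "deny_*" names are arbitrary.
rules = policy.Rules.load('{"deny_a": "!", "deny_b": "False"}', "default")

for name, rule in rules.items():
    print(name, isinstance(rule, _checks.FalseCheck))  # True for both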

With that in mind, I gathered up the set of policy files deployed on a Tripleo overcloud and hacked together some analysis.

Note: Nova embeds its policy rules in code now. In order to convert them to an old-style policy file, you need to run a command line tool:

oslopolicy-policy-generator --namespace nova --output-file /tmp/policy/etc/nova/policy.json

Ironic does something similar, but uses

oslopolicy-sample-generator --namespace=ironic.api --output-file=/tmp/policy/etc/ironic/policy.json

I’ve attached my source code at the bottom of this article. Running the code provides the following summary:

55 unique rules found

The longest rule belongs to Ironic:

OR(OR(OR((ROLE:admin)(ROLE:administrator))AND(OR((tenant == demo)(tenant == baremetal))(ROLE:baremetal_admin)))AND(OR((tenant == demo)(tenant == baremetal))OR((ROLE:observer)(ROLE:baremetal_observer))))

Some look somewhat repetitive, such as

OR((ROLE:admin)(is_admin == 1))

And some downright dangerous:

NOT( (ROLE:heat_stack_user)

As there are ways to work around having an explicit role in your token.

Many are indications of places where we want to use implied roles, such as:

  1. OR((ROLE:admin)(ROLE:administrator))
  2. OR((ROLE:admin)(ROLE:advsvc)
  3. (ROLE:admin)
  4. (ROLE:advsvc)
  5. (ROLE:service)


This is the set of keys that appear more than once:

9 context_is_admin
4 admin_api
2 owner
6 admin_or_owner
2 service:index
2 segregation
7 default

Doing a grep for context_is_admin shows all of them with the following rule:

"context_is_admin": "role:admin",

admin_api is roughly the same:

cinder/policy.json: "admin_api": "is_admin:True",
ironic/policy.json: "admin_api": "role:admin or role:administrator"
nova/policy.json:   "admin_api": "is_admin:True"
manila/policy.json: "admin_api": "is_admin:True",

I think these are supposed to include the new check for is_admin_project as well.

Owner is defined two different ways in two files:

neutron/policy.json:  "owner": "tenant_id:%(tenant_id)s",
keystone/policy.json: "owner": "user_id:%(user_id)s",

Keystone’s meaning is that the user matches, whereas Neutron’s is a project scope check. Both rules should change.

Admin or owner shows the same variety:

cinder/policy.json:    "admin_or_owner": "is_admin:True or project_id:%(project_id)s",
aodh/policy.json:      "admin_or_owner": "rule:context_is_admin or project_id:%(project_id)s",
neutron/policy.json:   "admin_or_owner": "rule:context_is_admin or rule:owner",
nova/policy.json:      "admin_or_owner": "is_admin:True or project_id:%(project_id)s"
keystone/policy.json:  "admin_or_owner": "rule:admin_required or rule:owner",
manila/policy.json:    "admin_or_owner": "is_admin:True or project_id:%(project_id)s",

Keystone is the odd one out here, with owner again meaning “user matches.”

Segregation is another rule that means admin:

aodh/policy.json:       "segregation": "rule:context_is_admin",
ceilometer/policy.json: "segregation": "rule:context_is_admin",

Probably the trickiest one to deal with is default, as that is a magic term that is used when a rule is not defined:

sahara/policy.json:   "default": "",
glance/policy.json:   "default": "role:admin",
cinder/policy.json:   "default": "rule:admin_or_owner",
aodh/policy.json:     "default": "rule:admin_or_owner",
neutron/policy.json:  "default": "rule:admin_or_owner",
keystone/policy.json: "default": "rule:admin_required",
manila/policy.json:   "default": "rule:admin_or_owner",

There seem to be three catch-all approaches:

  1. require admin,
  2. look for a project match but let admin override
  3. let anyone execute the API.

This is the only rule that cannot be made globally unique across all the files.

Here is the complete list of suffixes. The format is not strict policy format; I munged it to look for duplicates.

(ROLE:admin)
(ROLE:advsvc)
(ROLE:service)
(field == address_scopes:shared=True)
(field == networks:router:external=True)
(field == networks:shared=True)
(field == port:device_owner=~^network:)
(field == subnetpools:shared=True)
(group == nobody)
(is_admin == False)
(is_admin == True)
(is_public_api == True)
(project_id == %(project_id)s)
(project_id == %(resource.project_id)s)
(tenant_id == %(tenant_id)s)
(user_id == %(target.token.user_id)s)
(user_id == %(trust.trustor_user_id)s)
(user_id == %(user_id)s)
AND(OR((tenant == demo)(tenant == baremetal))OR((ROLE:observer)(ROLE:baremetal_observer)))
AND(OR(NOT( (field == rbac_policy:target_tenant=*) (ROLE:admin))OR((ROLE:admin)(tenant_id == %(tenant_id)s)))
FALSE
NOT( (ROLE:heat_stack_user) 
OR((ROLE:admin)(ROLE:administrator))
OR((ROLE:admin)(ROLE:advsvc))
OR((ROLE:admin)(is_admin == 1))
OR((ROLE:admin)(project_id == %(created_by_project_id)s))
OR((ROLE:admin)(project_id == %(project_id)s))
OR((ROLE:admin)(tenant_id == %(network:tenant_id)s))
OR((ROLE:admin)(tenant_id == %(tenant_id)s))
OR((ROLE:advsvc)OR((ROLE:admin)(tenant_id == %(network:tenant_id)s)))
OR((ROLE:advsvc)OR((tenant_id == %(tenant_id)s)OR((ROLE:admin)(tenant_id == %(network:tenant_id)s))))
OR((is_admin == True)(project_id == %(project_id)s))
OR((is_admin == True)(quota_class == %(quota_class)s))
OR((is_admin == True)(user_id == %(user_id)s))
OR((tenant == demo)(tenant == baremetal))
OR((tenant_id == %(tenant_id)s)OR((ROLE:admin)(tenant_id == %(network:tenant_id)s)))
OR(NOT( (field == port:device_owner=~^network:) (ROLE:advsvc)OR((ROLE:admin)(tenant_id == %(network:tenant_id)s)))
OR(NOT( (field == rbac_policy:target_tenant=*) (ROLE:admin))
OR(OR((ROLE:admin)(ROLE:administrator))AND(OR((tenant == demo)(tenant == baremetal))(ROLE:baremetal_admin)))
OR(OR((ROLE:admin)(is_admin == 1))(ROLE:service))
OR(OR((ROLE:admin)(is_admin == 1))(project_id == %(target.project.id)s))
OR(OR((ROLE:admin)(is_admin == 1))(token.project.domain.id == %(target.domain.id)s))
OR(OR((ROLE:admin)(is_admin == 1))(user_id == %(target.token.user_id)s))
OR(OR((ROLE:admin)(is_admin == 1))(user_id == %(user_id)s))
OR(OR((ROLE:admin)(is_admin == 1))AND((user_id == %(user_id)s)(user_id == %(target.credential.user_id)s)))
OR(OR((ROLE:admin)(project_id == %(created_by_project_id)s))(project_id == %(project_id)s))
OR(OR((ROLE:admin)(project_id == %(created_by_project_id)s))(project_id == %(resource.project_id)s))
OR(OR((ROLE:admin)(tenant_id == %(tenant_id)s))(ROLE:advsvc))
OR(OR((ROLE:admin)(tenant_id == %(tenant_id)s))(field == address_scopes:shared=True))
OR(OR((ROLE:admin)(tenant_id == %(tenant_id)s))(field == networks:shared=True)(field == networks:router:external=True)(ROLE:advsvc))
OR(OR((ROLE:admin)(tenant_id == %(tenant_id)s))(field == networks:shared=True))
OR(OR((ROLE:admin)(tenant_id == %(tenant_id)s))(field == subnetpools:shared=True))
OR(OR(OR((ROLE:admin)(ROLE:administrator))AND(OR((tenant == demo)(tenant == baremetal))(ROLE:baremetal_admin)))AND(OR((tenant == demo)(tenant == baremetal))OR((ROLE:observer)(ROLE:baremetal_observer))))
OR(OR(OR((ROLE:admin)(is_admin == 1))(ROLE:service))(user_id == %(target.token.user_id)s))

Here is the source code I used to analyze the policy files:

#!/usr/bin/env python

# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os
import sys

from oslo_serialization import jsonutils

from oslo_policy import policy
import oslo_policy._checks as _checks


def display_suffix(rules, rule):
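    """Render a parsed oslo.policy check tree as a canonical string, so that equivalent rules compare equal."""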

    if isinstance (rule, _checks.RuleCheck):
        return display_suffix(rules, rules[rule.match.__str__()])

    if isinstance (rule, _checks.OrCheck):
        answer =  'OR('
        for subrule in rule.rules:
            answer += display_suffix(rules, subrule)
        answer +=  ')'
    elif isinstance (rule, _checks.AndCheck):
        answer =  'AND('
        for subrule in rule.rules:
            answer += display_suffix(rules, subrule)
        answer +=  ')'
    elif isinstance (rule, _checks.TrueCheck):
        answer =  "TRUE"
    elif isinstance (rule, _checks.FalseCheck):
        answer =  "FALSE"
    elif isinstance (rule, _checks.RoleCheck):       
        answer =  ("(ROLE:%s)" % rule.match)
    elif isinstance (rule, _checks.GenericCheck):       
        answer =  ("(%s == %s)" % (rule.kind, rule.match))
    elif isinstance (rule, _checks.NotCheck):       
        answer =  'NOT( %s ' % display_suffix(rules, rule.rule)
    else:        
        answer =  (rule)
    return answer

class Tool():
    def __init__(self):
        self.prefixes = dict()
        self.suffixes = dict()

    def add(self, policy_file):
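        """Parse one policy file, counting rule names (prefixes) and canonical rule bodies (suffixes)."""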
        policy_data = policy_file.read()
        rules = policy.Rules.load(policy_data, "default")
        suffixes = []
        for key, rule in rules.items():
            suffix = display_suffix(rules, rule)
            self.prefixes[key] = self.prefixes.get(key, 0) + 1
            self.suffixes[suffix] = self.suffixes.get(suffix, 0) + 1

    def report(self):
        suffixes = sorted(self.suffixes.keys())
        for suffix in suffixes:
            print (suffix)
        print ("%d unique rules found" % len(suffixes))
        for prefix, count in self.prefixes.items():
            if count > 1:
                print ("%d %s" % (count, prefix))
        
def main(argv=sys.argv[1:]):
    tool = Tool()
    policy_dir = "/tmp/policy"
    name = 'policy.json'
    suffixes = []
    for root, dirs, files in os.walk(policy_dir):
        if name in files:
            policy_file_path = os.path.join(root, name)
            print (policy_file_path)
            policy_file = open(policy_file_path, 'r')
            tool.add(policy_file)
    tool.report()

if __name__ == "__main__":
    sys.exit(main(sys.argv[1:]))

September 20, 2016

All systems go
New status good: Everything seems to be working. for services: The Koji Buildsystem, Darkserver, Koschei Continuous Integration
There are scheduled downtimes in progress
New status scheduled: planned outage for services: Koschei Continuous Integration, The Koji Buildsystem, Darkserver
Fedora Media Writer Test Day – 2016-09-20

Fedora Media Writer Test Day - 2016-09-20

Today, Tuesday, 2016-09-20, is the Fedora Media Writer Test Day! As part of this planned Change for Fedora 25, the Fedora graphical USB writing tool is being extensively revised and rewritten. This tool was formerly called the “Live USB Creator” and is now re-branded as “Fedora Media Writer”.

Why test the Media Writer

The idea is the new tool will be sufficiently capable, reliable, and cross-platform to be the primary download for Fedora Workstation 25. The main ‘flow’ of the Workstation download page will run through the tool instead of giving you a download link to the ISO file and various instructions for using it in different ways. This would be a pretty big change, and of course, it would be a bad idea to do it if the tool isn’t ready.

So this is an important Test Day! We’ll be testing the new version (Fedora, Windows, and macOS) of the tool to see whether it’s working well enough and catch any remaining issues. It’s also pretty easy to join in. All you’ll need is a USB stick you don’t mind overwriting and a system (or ideally more than one!) you can test booting the stick on (but you don’t need to make any permanent changes to it).

Help test the Media Writer!

All the instructions are on the wiki page, so please read through and come help us test! As always, the event will be in #fedora-test-day on Freenode IRC.

The post Fedora Media Writer Test Day – 2016-09-20 appeared first on Fedora Community Blog.

Is dialup still an option?
TL;DR - No.

Here's why.

I was talking with my Open Source Security Podcast co-host Kurt Seifried about what it would be like to access the modern Internet using dialup. So I decided to give this a try. My first thought was to find a modem, but after looking into this, it isn't really an option anymore.

The setup


  • No Modem
  • Fedora 24 VM
  • Firefox as packaged with Fedora 24
  • wondershaper to throttle the network speed
  • "App Telemetry" firefox plugin to time the site load time

I know it's not perfect, but it's probably close enough to get a feel for what's going on. I understand this doesn't exactly recreate a modem experience with details like compression, latency, and someone picking up the phone during a download. There was nothing worse than having that 1 megabyte download at 95% when someone decided they needed to make a phone call. Call waiting was also a terrible plague.

If you're too young to understand any of this, be thankful. Anyone who looks at this time with nostalgia is pretty clearly delusional.

I started testing at a 1024 Kb connection and halved my way down to 56 (instead of 64). This seemed like a nice way to get a feel for how these sites react as your speed shifts down.
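
For anyone wanting to reproduce this, each speed step is a one-liner (a sketch using the classic wondershaper invocation of interface, downlink and uplink in kilobits per second; eth0 is a placeholder, and the exact syntax varies between wondershaper versions):

$ sudo wondershaper eth0 1024 1024
$ sudo wondershaper eth0 512 512
$ sudo wondershaper eth0 56 48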

Baseline

I picked the most popular English-language sites listed on the Alexa top 100. I added lwn.net because I like them, and my kids had me add Twitch. My home Internet connection is 50 Mb down, 5 Mb up. As you can see, in general all these sites load in less than 5 seconds. The numbers represent the site being fully loaded. Most web browsers seem to show something pretty quickly, even if the page is still loading. For the purpose of this test, our numbers are how long it takes a site to fully load. I also show 4 samples because, as you'll see later on, some of these sites took a really really long time to load, so four was as much suffering as I could endure. Perhaps someday I'll do this again with extra automation so I don't have to be so involved.

1024 Kb/s

Things really started to go downhill at this point. Anyone who claims a 1 megabit connection is broadband has probably never tried to use such a connection. In general, though, most of the sites were usable by a very narrow definition of the word.

512 Kb/s


You're going to want to start paying attention to Amazon, something really clever is going to happen, it's sort of noticeable in this graph. Also of note is how consistent bing.com is. While not the fastest site, it will remain extremely consistent through the entire test.

256 Kb/s

Here is where you can really see what Amazon is doing. They clearly have some sort of client side magic happening to ensure an acceptable response. For the rest of my testing I saw this behavior. A slow first load, then things were much much faster. Waiting for sites to load at this speed was really painful, it's only going to get worse from here. 15 seconds doesn't sound horrible, but it really is a long time to wait.

128 Kb/s

Things are not good at 128 Kb/s. Wikipedia looks empty, but it was still loading at the same speed as in our first test. I imagine my lack of an ad-enhanced experience with them helps keep it so speedy.

56 Kb/s

Here is the real data you're waiting for. This is where I set the speed to 56K down, 48K up, which is the ideal speed of a 56K modem. I doubt most of us got that speed very often.

As you can probably see, Twitch takes an extremely long time to load. This should surprise nobody, as it's a site that streams video; by definition it's expected you have a fast connection. Here is the graph again with Twitch removed.
The Yahoo column is empty because I couldn't get Yahoo to load; it timed out every single time I tried. Wikipedia looks empty too, but it still loaded in 0.3 seconds. After thinking about this it does make sense: there are Wikipedia users on dialup in some countries, so they have to keep it lean. Amazon still has a slow first load, then is nice and speedy (for some definition of speedy) after that. I tried to load a YouTube video to see if it would work. After about 10 minutes of nothing happening I gave up.

Typical tasks

I also tried to perform a few tasks I would consider "expected" by someone using the Internet.

For example, from the time I typed in gmail.com until I could read a mail message took about 600 seconds. I did let every page load completely before clicking or typing on it. Once I had it loaded, the AJAX interface timed out and told me to switch to HTML mode; after that it was mostly usable. It was only about 30 seconds to load a message (including images) and 0.2 seconds to return to the inbox.

Logging into Facebook took about 200 seconds. It was basically unusable once it loaded, though. Nothing new would load; it pulls in quite a few images, so this makes sense. These things aren't exactly "web optimized" anymore. If you know someone on dialup, don't expect them to be using Facebook.

cnn.com took 800 seconds. Reddit's front page was 750 seconds. Google News was only 33 seconds. The newspaper is probably a better choice if you have dialup.

I finally tried to run a "yum update" in Fedora to see if updating the system was something you could leave running overnight. It's not. After about 4 hours of just downloading repo metadata I gave up. There is no way you can plausibly update a system over dialup. If you're on dialup, the timeouts will probably keep you from getting pwnt better than updates will.

Another problem you hit with a modern system like this is it tries to download things automatically in the background. More than once I had to kill some background tasks that basically ruined my connection. Most system designers today assume everyone has a nice Internet connection so they can do whatever they want in the background. That's clearly a problem when you're running at a speed this slow.

Conclusion

Is the Internet usable on dialup in 2016? No. You can't even pretend it's maybe usable. It pretty much would suck rocks to use the Internet on dialup today. I'm sure there are some people doing it. I feel bad for them. It's clear we've hit a place where broadband is expected, and honestly, you need fast broadband; even 1 megabit isn't enough anymore if you want a decent experience. The definition of broadband in the US is now 25 Mb down, 3 Mb up. Anyone who disagrees with that should spend a day at 56K.

I know this wasn't the most scientific study ever done; I would welcome something more rigorous. If you have any questions or ideas hit me up on Twitter: @joshbressers
Take part in the Fedora 25 Test Day on the media writer

Today, Tuesday, September 20, is a day dedicated to one specific piece of testing: the creation of installable media for Fedora. During the development cycle, the quality assurance team dedicates a few days to certain components or new features in order to surface as many problems as possible on the subject.

It also provides a list of specific tests to run. You just have to follow them, compare your result against the expected result, and report it.

Capture_du_2016-04-18_23-41-52.png

What is installable media creation?

It is a new feature of Fedora 25. It is a rewrite of the liveusb-creator tool, which is available not only on Fedora but also on Windows and Mac OS. The utility thus gains an interface closer to the standards of GNOME 3 applications in terms of ergonomics, and becomes much simpler to use.

It works by letting you select the desired image (Workstation, the KDE spin, Server, or another), then automatically downloading it and installing it onto an available, compatible removable medium such as a USB key.

The goal is to simplify the installation procedure for newcomers, because many users get lost after downloading the traditional ISO file when it comes time to install. Here, everything is automated and works without any special intervention. Because of this goal, this will be the download method put forward for the official Fedora image in the future.

Today's tests cover:

  • Downloading the desired image;
  • Installing it onto the USB key;
  • The conformity of the installation image (that is, that it actually works);
  • UEFI and BIOS compatibility;
  • Working on Fedora, Windows and Mac OS.

This test is a bit unusual because it covers how the application behaves on systems other than Fedora, namely Windows and Mac OS. If you have such systems available, do not hesitate to report the problems you run into on them, since those will obviously be the preferred platforms for such a tool.

How can you take part?

You can go to the tests page to see the available tests and report your results. The wiki page sums up how the day is organized.

If you find a bug, you need to report it on Bugzilla. If you do not know how, feel free to consult the corresponding documentation.

Also, even though one day is dedicated to these tests, you can still run them a few days later without any problem! The results will remain broadly relevant.

All systems go
New status good: Everything seems to be working. for services: Fedora Infrastructure Cloud, COPR Build System
What is the Fedora Code of Conduct?

We all live in a society. Every society has customs, values, and mores. This is how homo sapiens are different from other species. Since our childhood, in school, then college, and then at work, we follow a shared set of social values. This shared set of values creates a peaceful world. In the open source world, we strive for values that lead to us all being welcoming, generous, and thoughtful. We may differ in opinions or sometimes disagree with each other, but we try to keep the conversation focused on the ideas under discussion, not the person in the discussion.

Fedora is an excellent example of an open source society where contributors respect each other and have healthy discussions, whether they agree or disagree on all topics. This is a sign of a healthy community. Fedora is a big project with contributors and users from different parts of the world. This creates a diverse community of different skills, languages, ages, colors, cultural values, and more. Although it is rare in Fedora, sometimes miscommunication happens and this can result in situations where the discussion moves from the idea to the person.

Introducing our Code of Conduct

We have a few guidelines that we ask people to keep in mind when they’re using Fedora Project resources. These guidelines help everyone feel welcome in our community. These guidelines are known as the Code of Conduct (CoC). One of the main goals of the Fedora Diversity team is to spread knowledge and improve the visibility of the Code of Conduct. Violations of the CoC can lead to different outcomes; in the past, violations have led to removal from Fedora mailing lists and IRC channels. The outcome can differ depending on the scenario and severity of the issue.

Objectives of the Code of Conduct

Our aim is to have a healthy community of diverse people where ideas and opinions are freely shared and discussion happens openly. To help everyone successfully communicate we ask that you keep these guidelines in mind:

  • Be considerate. Your work will be used by other people, and you in turn will depend on the work of others. Any decision you take will affect users and colleagues, and you should take those consequences into account when making decisions.
  • Be respectful. Not all of us will agree all the time, but disagreement is no excuse for poor behavior and poor manners. We might all experience some frustration now and then, but we cannot allow that frustration to turn into a personal attack. It’s important to remember that a community where people feel uncomfortable or threatened is not a productive one. Members of the Fedora community should be respectful when dealing with other contributors as well as with people outside the Fedora community and with users of Fedora.

The Code of Conduct goes on to say:

“When we disagree, we try to understand why. Disagreements, both social and technical, happen all the time and Fedora is no exception. It is important that we resolve disagreements and differing views constructively.” Remember that we’re different. The strength of Fedora comes from its varied community and people from a wide range of backgrounds. Different people have different perspectives on issues. Being unable to understand why someone holds a viewpoint doesn’t mean they’re wrong. Don’t forget that it is human to err and blaming others doesn’t result in productive outcomes. Rather, offer to help resolve issues and to help learn from mistakes.

Together, we can have a healthy and happy community!


Community management by Milky from the Noun Project


Microsoft SQL Server from PHP

Here is a small comparison of the various solutions to use a Microsoft SQL Server database from PHP, on Linux.

All the tests have been run on Fedora 23 but should also work on RHEL or CentOS 7.

Tested extensions:

  • pdo_odbc
  • mssql
  • sqlsrv
  • pdo_sqlsrv

1. Using PDO, ODBC and FreeTDS

Needed components:

  • freetds library and pdo_odbc extension
  • PHP version 5 or 7
  • RPM packages: freetds (EPEL), unixODBC, php-pdo, php-odbc

ODBC driver configuration

The driver must be defined in the /etc/odbcinst.ini file:

[FreeTDS]
Description=FreeTDS version 0.95
Driver=/usr/lib64/libtdsodbc.so.0.0.0

Data source configuration

The server must be defined in the /etc/odbc.ini file (system wide) or in the ~/.odbc.ini file (per user):

[sqlsrv_freetds]
Driver=FreeTDS
Description=SQL via FreeTds
Server=sqlserver.domain.tld
Port=1433

Connection check from the command line

$ isql sqlsrv_freetds user secret
SQL> SELECT @@version
Microsoft SQL Server 2008 R2 (SP2) - 10.50.4000.0 (X64)
    Jun 28 2012 08:36:30
    Copyright (c) Microsoft Corporation
    Express Edition with Advanced Services (64-bit) on Windows NT 6.1 <X64> (Build 7600: ) (Hypervisor)
SQLRowCount returns 1
1 rows fetched
SQL> quit

Connection from PHP

$ php -r '
echo "+ Connection\n";
$pdo = new PDO("odbc:sqlsrv_freetds", "user", "secret");
echo "+ Query\n";
$query = $pdo->query("SELECT @@version");
if ($query) {
    echo "+ Result\n";
    $row = $query->fetch(PDO::FETCH_NUM);
    if ($row) {
        print_r($row);
    }
}
'
+ Connection
+ Query
+ Result
Array
(
    [0] => Microsoft SQL Server 2008 R2 (SP2) - 10.50.4000.0 (X64)
    Jun 28 2012 08:36:30
    Copyright (c) Microsoft Corporation
    Express Edition with Advanced Services (64-bit) on Windows NT 6.1 <X64> (Build 7600: ) (Hypervisor)
)

This solution is often the simplest, as all the dependencies are free and available in Linux distributions.

2. Using PDO, mssql and FreeTDS

Needed components:

  • freetds library and mssql extension
  • PHP version 5 (the extension is deprecated and removed from PHP 7)
  • RPM packages: freetds (EPEL), php-mssql

Connection from PHP

$ php -r '
echo"+ Connection:\n";
$conn = mssql_connect("sqlserver.domain.tld", "user", "secret");
if ($conn) {
    echo"+ Query:\n";
    $query = mssql_query("SELECT @@version", $conn);
    if ($query) {
        echo"+ Result:\n";
        print_r($row = mssql_fetch_array($query, MSSQL_NUM));
    }
}
'
+ Connection:
+ Query:
+ Result:
Array
(
    [0] => Microsoft SQL Server 2008 R2 (SP2) - 10.50.4000.0 (X64)
    Jun 28 2012 08:36:30
    Copyright (c) Microsoft Corporation
    Express Edition with Advanced Services (64-bit) on Windows NT 6.1 <X64> (Build 7600: ) (Hypervisor)
)

This solution is also simple, as all the dependencies are free and available in Linux distributions. However, it relies on a deprecated extension and does not use the PDO abstraction layer.
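
For completeness, FreeTDS can also be used through the PDO abstraction layer, via the pdo_dblib driver. The following is a minimal, untested sketch in the spirit of the examples above; the dblib DSN syntax is my assumption, and the availability of the driver depends on your distribution's packaging:

$ php -r '
echo "+ Connection\n";
// pdo_dblib relies on the FreeTDS library, like the mssql extension above
$pdo = new PDO("dblib:host=sqlserver.domain.tld", "user", "secret");
echo "+ Query\n";
$query = $pdo->query("SELECT @@version");
if ($query) {
    echo "+ Result\n";
    print_r($query->fetch(PDO::FETCH_NUM));
}
'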

3. Using PDO, ODBC and Microsoft® ODBC Driver

Needed components:

  • Microsoft® ODBC Driver and pdo_odbc extension
  • PHP version 5 or 7
  • RPM packages: msodbcsql (Microsoft repository), unixODBC, php-pdo, php-odbc
ODBC driver configuration

The driver must be defined in the /etc/odbcinst.ini file (the entry is added automatically at installation):

[ODBC Driver 13 for SQL Server]
Description=Microsoft ODBC Driver for SQL Server
Driver=/opt/microsoft/msodbcsql/lib64/libmsodbcsql-13.0.so.0.0
Threading=1

Data source configuration

The server must be defined in the /etc/odbc.ini file (system wide) or in the ~/.odbc.ini file (per user):

[sqlsrv_msodbc]
Driver=ODBC Driver 13 for SQL Server
Description=SQL via Microsoft Drivers
Server=sqlserver.domain.tld

Connection check from the command line

$ isql sqlsrv_msodbc user secret
SQL> SELECT @@version
Microsoft SQL Server 2008 R2 (SP2) - 10.50.4000.0 (X64)
    Jun 28 2012 08:36:30
    Copyright (c) Microsoft Corporation
    Express Edition with Advanced Services (64-bit) on Windows NT 6.1 <X64> (Build 7600: ) (Hypervisor)
SQLRowCount returns 1
1 rows fetched
SQL> quit

Connection from PHP

$ php -r '
echo "+ Connection\n";
$pdo = new PDO("odbc:sqlsrv_msodbc", "user", "secret");
echo "+ Query\n";
$query = $pdo->query("SELECT @@version");
if ($query) {
    echo "+ Result\n";
    $row = $query->fetch(PDO::FETCH_NUM);
    if ($row) {
        print_r($row);
    }
}
'
+ Connection
+ Query
+ Result
Array
(
    [0] => Microsoft SQL Server 2008 R2 (SP2) - 10.50.4000.0 (X64)
    Jun 28 2012 08:36:30
    Copyright (c) Microsoft Corporation
    Express Edition with Advanced Services (64-bit) on Windows NT 6.1 <X64> (Build 7600: ) (Hypervisor)
)

This solution, close to #1, requires the proprietary Microsoft driver.

4. Using the Microsoft® Driver

Needed components:

  • Microsoft® ODBC Driver and sqlsrv extension
  • PHP version 7 (the extension is available from PECL)
  • RPM packages: msodbcsql and mssql-tools (Microsoft repository), php-pecl-sqlsrv
Connection check from the command line

$ sqlcmd -S sqlserver.domain.tld -U user -P secret -Q "SELECT @@version"
Microsoft SQL Server 2008 R2 (SP2) - 10.50.4000.0 (X64)
    Jun 28 2012 08:36:30
    Copyright (c) Microsoft Corporation
    Express Edition with Advanced Services (64-bit) on Windows NT 6.1 <X64> (Build 7600: ) (Hypervisor)
(1 rows affected)

Connection from PHP

$ php -r '
echo"+ Connection:\n";
$conn = sqlsrv_connect("sqlserver.domain.tld", array("UID" => "user", "PWD" => "secret"));
if ($conn) {
    echo"+ Query: \n";
    $query = sqlsrv_query($conn, "SELECT @@version");
    if ($query) {
        echo"+ Result:\n";
        print_r($row = sqlsrv_fetch_array($query, SQLSRV_FETCH_NUMERIC));
    }
}
'
+ Connection:
+ Query:
+ Result:
Array
(
    [0] => Microsoft SQL Server 2008 R2 (SP2) - 10.50.4000.0 (X64)
    Jun 28 2012 08:36:30
    Copyright (c) Microsoft Corporation
    Express Edition with Advanced Services (64-bit) on Windows NT 6.1 <X64> (Build 7600: ) (Hypervisor)
)

This solution, close to #2, also requires the proprietary Microsoft driver, and doesn't use the PDO abstraction layer.

5. Using PDO and the Microsoft® Driver

Needed components:

  • Microsoft® ODBC Driver and pdo_sqlsrv extension
  • PHP version 7 (the extension is available from PECL)
  • RPM packages: msodbcsql (Microsoft repository), php-pdo, php-pecl-pdo-sqlsrv
Connection from PHP

$ php -r '
echo "+ Connection\n";
$pdo = new PDO("sqlsrv:Server=sqlserver.domain.tld", "user", "secret");
echo "+ Query\n";
$query = $pdo->query("SELECT @@version");
if ($query) {
    echo "+ Result\n";
    $row = $query->fetch(PDO::FETCH_NUM);
    if ($row) {
        print_r($row);
    }
}
'

+ Connection
+ Query
+ Result
Array
(
    [0] => Microsoft SQL Server 2008 R2 (SP2) - 10.50.4000.0 (X64)
    Jun 28 2012 08:36:30
    Copyright (c) Microsoft Corporation
    Express Edition with Advanced Services (64-bit) on Windows NT 6.1 <X64> (Build 7600: ) (Hypervisor)
)

This solution, close to #1 and #3, also requires the proprietary Microsoft driver.

6. Conclusion

I think using PDO should be preferred, to avoid lock-in to a specific database server.
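
To illustrate this portability, here is a minimal sketch (the fetch_version helper is mine, not from the tests above) showing that, with PDO, only the DSN string changes between the drivers compared in this article:

$ php -r '
// The same code path works with every PDO driver tested above;
// only the DSN string changes.
function fetch_version($dsn) {
    $pdo = new PDO($dsn, "user", "secret");
    $query = $pdo->query("SELECT @@version");
    if ($query) {
        print_r($query->fetch(PDO::FETCH_NUM));
    }
}
fetch_version("odbc:sqlsrv_freetds");                 // solution #1
fetch_version("odbc:sqlsrv_msodbc");                  // solution #3
fetch_version("sqlsrv:Server=sqlserver.domain.tld");  // solution #5
'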

FreeTDS has filled a lot of needs in the past, as it was the only solution available for PHP 5. Using the sqlsrv or pdo_sqlsrv extension now seems more pertinent for PHP 7, but sadly requires the proprietary drivers (well, if you use Microsoft SQL Server, you have already left the free world).

The choice is up to you.

AsciiBind all the things!

I have finally finished a (probably far too long) proposal for implementing a new Fedora Docs publishing toolchain using AsciiBinder.

The proposal, itself published using AsciiBinder, suggests that we definitively adopt AsciiDoc and convert our DocBook sources to it without delay. Further, we should begin publishing with AsciiBinder, ideally by Fedora 26.

The proposal summarizes the current state of affairs, defines the problems being solved, and provides instructions for using a proof-of-concept build to play with the tools.

Please read the full proposal here: http://www.winglemeyer.org/fedora_docs_proposal/latest/proposal/overview.html