Fedora People

GUADEC 2018

Posted by Petr Kovar on July 16, 2018 03:01 PM

Back from GUADEC, held in the beautiful Andalusian city of Almería, Spain, from 6th July through 11th July, 2018, I wanted to share a few notes about documentation and localization activities at the conference and during the traditional post-conference hacking days.

Mike Hill and Rudolfs Mazurs have already blogged about their reflections on GUADEC. I’d add that we had some i18n- and L10n-related talks in the schedule (machine translations, input methods), which was a nice improvement in representation over previous years.

The space available for the Birds of a Feather sessions this year was rather limited, so we could only secure an afternoon slot (thanks Kat!) for our docs+translators meetup. We were joined for a while by a group of local documentation writers creating Spanish manuals for the local market. After that, we focused on two main areas.

One was related to the recent migration of GNOME projects to GitLab and involved looking into usability of our wiki docs for contributors and, specifically, newcomers. We found quite a few outdated references to git.gnome.org, Bugzilla and the like, with the biggest issue, however, being the suboptimal overall structure of the contributor guides on the wiki. We also looked into how to improve submitting user feedback and switching languages for the users of help.gnome.org (and yelp).

The other area discussed was making our CI checks for GNOME documentation modules much more robust, with the idea of using the GitLab CI integration to its full potential with tests verifying translations and more.
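
Nothing concrete was settled at the meetup, but as a rough sketch of where this could go, a GitLab CI job for a Mallard-based documentation module might look something like the following (the image name and file layout are assumptions on my part; yelp-check ships with yelp-tools):

# .gitlab-ci.yml (hypothetical docs module)
validate-docs:
  image: fedora:latest
  script:
    - dnf install -y yelp-tools itstool
    # schema-validate all pages
    - yelp-check validate C/*.page
    # verify that external links still resolve
    - yelp-check hrefs C/*.page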

You can find all notes from the meetup in our Etherpad.

There was also some continued discussion on reworking the GNOME developer center, but I couldn’t take part in its final installment on Wednesday, as I was already flying out.

I’d like to thank the GNOME Foundation for their continued support in funding my travel.

Fedora 28 Release Party at Mexico City

Posted by Fedora Community Blog on July 16, 2018 12:30 PM

On May 29, 2018, we celebrated our second release party at UAM Azcapotzalco. This time the talks were given by Alberto Rodriguez Sanchez (bt0dotninja), one of the Fedora Ambassadors in Mexico City. This release party had two main activities:

  1. An “Introducing Fedora 28” talk.
  2. An improvised “How to contribute to the Fedora project” talk.

Four F’s section

Introducing Fedora 28 talk

Poster

This talk focused on the major improvements and new features of Fedora 28 from the perspective of casual users, developers, and system administrators, with emphasis on the following points:

  • Server: Modularity and Security.
  • Atomic: Atomic CLI and OS-tree.
  • Desktop: Atomic Desktop (Team Silverblue) and third party repos.

We also talked about the current community objectives: Modularity, CI/CD, and IoT.

How to contribute to the Fedora Project

Originally, this talk was not planned, but the interest from some attendees became evident, so I talked about my experience as a Fedora contributor, from my first WCIDFF visit to becoming part of the CommOps team.

We did a little demonstration covering the details of creating a FAS account, plus a tour of WCIDFF and the Fedora Developer Portal.

From WCIDFF to CommOps

This release party in numbers

Indicators          F27 RP   F28 RP
Attendees           26       13
New FAS accounts    0        8
New Installations   2        3
Pizza               0        8

Conclusion

One of the most important lessons learned is that even though Fedora is popular in some places, we should always try to reach new groups and improve the promotion of our events. And even though the “How to contribute to the Fedora project” talk was not planned (this time at least), it is an important part of every release party. I really enjoyed organizing this release party and I really hope this event becomes a tradition. See you at the F29 release party.

Pizza time

The post Fedora 28 Release Party at Mexico City appeared first on Fedora Community Blog.

How to enable full auditing in audit daemon?

Posted by Lukas Vrabec on July 16, 2018 09:14 AM
Full auditing in the audit daemon can be useful, for example, to identify which object on the system has overly tight permissions and is causing a dac_override SELinux denial. More info in my previous post.

 1. Open the /etc/audit/rules.d/audit.rules file in an editor.

 2. Remove the following line if it exists:

-a task,never

 3. Add the following line at the end of the file:

-w /etc/shadow -p w

 4. Restart the audit daemon:

 # service auditd restart

 5. Re-run your scenario.

Full auditing is useful when full paths to accessed objects are needed or certain audit event fields, which are normally hidden, should be visible.
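
For instance, once full auditing is enabled, the richer records can be inspected with ausearch; an illustrative invocation (not part of the original procedure) that prints recent AVC and syscall events, interpreted, including the PATH records with full file paths, is:

# ausearch -m avc,syscall -ts recent -i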

The procedure works on Red Hat Enterprise Linux >= 5 and on Fedora.

If the /etc/audit/rules.d/audit.rules file does not exist, edit /etc/audit/audit.rules directly. Older versions of audit did not generate /etc/audit/audit.rules from /etc/audit/rules.d/audit.rules.

Thanks to Milos Malik for this article.

The post How to enable full auditing in audit daemon? appeared first on Lukas Vrabec.

3 cool productivity apps for Fedora 28

Posted by Fedora Magazine on July 16, 2018 08:00 AM

Productivity apps are especially popular on mobile devices. But when you sit down to do work, you’re often at a laptop or desktop computer. Let’s say you use a Fedora system for your platform. Can you find apps that help you get your work done? Of course! Read on for tips on apps to help you focus on your goals.

All these apps are available for free on your Fedora system. And they also respect your freedom. (Many also let you use existing services where you may have an account.)

FocusWriter

FocusWriter is simply a full screen word processor. The app makes you more productive because it covers everything else on your screen. When you use FocusWriter, you have nothing between you and your text. With this app at work, you can focus on your thoughts with fewer distractions.

Screenshot of FocusWriter

FocusWriter lets you adjust fonts, colors, and theme to best suit your preferences. It also remembers your last document and location. This feature lets you jump right back into focusing on writing without delay.

To install FocusWriter, use the Software app in your Fedora Workstation. Or run this command in a terminal using sudo:

sudo dnf install focuswriter

GNOME ToDo

This unique app is designed, as you can guess, for the GNOME desktop environment. It’s a great fit for your Fedora Workstation for that reason. ToDo has a simple purpose: it lets you make lists of things you need to get done.

Screenshot from GNOME ToDo on Fedora 28

Using ToDo, you can prioritize and schedule deadlines for all your tasks. You can also build as many task lists as you want. ToDo has numerous extensions for useful functions to boost your productivity. These include GNOME Shell notifications, and list management with a todo.txt file. ToDo can even interface with a Todoist or Google account if you use one. It synchronizes tasks so you can share across your devices.

To install, search for ToDo in Software, or at the command line run:

sudo dnf install gnome-todo

Zanshin

If you are a KDE-using productivity fan, you may enjoy Zanshin. This organizer helps you plan your actions across multiple projects. It has a full-featured interface, and lets you browse across your various tasks to see what’s most important to do next.

Screenshot of Zanshin on Fedora 28

Zanshin is extremely keyboard friendly, so you can be efficient during hacking sessions. It also integrates across numerous KDE applications as well as the Plasma Desktop. You can use it inline with KMail, KOrganizer, and KRunner.

To install, run this command:

sudo dnf install zanshin

Photo by Cathryn Lavery on Unsplash.

Episode 105 - More backdoors in open source

Posted by Open Source Security Podcast on July 16, 2018 01:08 AM
Josh and Kurt talk about some recent backdoor problems in open source packages. We touch on whether open source is secure, how that security works, and what it should look like in the future. This problem is never going to go away or get better, and that’s probably OK.

Xfce: Using Compton as a compositor

Posted by Fedora-Blog.de on July 14, 2018 05:13 PM
Please also note the remarks about the HowTos!

The Xfce4 window manager xfwm4 has compositing functionality of its own, but Compton is an interesting alternative that also eliminates some display problems such as tearing.

To use Compton, first disable Xfwm compositing in the Xfce settings under “Window Manager Tweaks” on the “Compositor” tab.

Compton can then be installed from the Fedora repositories with

su -c 'dnf install compton'

Before we put Compton into service, however, we first create a configuration file for it under ~/.config:

nano ~/.config/compton.conf

The following content is recommended:

#################################
#
# Backend
#
#################################

# Backend to use: "xrender" or "glx".
# GLX backend is typically much faster but depends on a sane driver.
backend = "glx";

#################################
#
# GLX backend
#
#################################

glx-no-stencil = true;

# GLX backend: Copy unmodified regions from front buffer instead of redrawing them all.
# My tests with nvidia-drivers show a 10% decrease in performance when the whole screen is modified,
# but a 20% increase when only 1/4 is.
# My tests on nouveau show terrible slowdown.
# Useful with --glx-swap-method, as well.
glx-copy-from-front = false;

# GLX backend: Use MESA_copy_sub_buffer to do partial screen update.
# My tests on nouveau shows a 200% performance boost when only 1/4 of the screen is updated.
# May break VSync and is not available on some drivers.
# Overrides --glx-copy-from-front.
# glx-use-copysubbuffermesa = true;

# GLX backend: Avoid rebinding pixmap on window damage.
# Probably could improve performance on rapid window content changes, but is known to break things on some drivers (LLVMpipe).
# Recommended if it works.
# glx-no-rebind-pixmap = true;


# GLX backend: GLX buffer swap method we assume.
# Could be undefined (0), copy (1), exchange (2), 3-6, or buffer-age (-1).
# undefined is the slowest and the safest, and the default value.
# copy is fastest, but may fail on some drivers,
# 2-6 are gradually slower but safer (6 is still faster than 0).
# Usually, double buffer means 2, triple buffer means 3.
# buffer-age means auto-detect using GLX_EXT_buffer_age, supported by some drivers.
# Useless with --glx-use-copysubbuffermesa.
# Partially breaks --resize-damage.
# Defaults to undefined.
glx-swap-method = "undefined";

#################################
#
# Shadows
#
#################################

# Enabled client-side shadows on windows.
shadow = true;
# Don't draw shadows on DND windows.
no-dnd-shadow = true;
# Avoid drawing shadows on dock/panel windows.
no-dock-shadow = true;
# Zero the part of the shadow's mask behind the window. Fix some weirdness with ARGB windows.
clear-shadow = true;
# The blur radius for shadows. (default 12)
shadow-radius = 5;
# The left offset for shadows. (default -15)
shadow-offset-x = -5;
# The top offset for shadows. (default -15)
shadow-offset-y = -5;
# The translucency for shadows. (default .75)
shadow-opacity = 0.5;

# Set if you want different colour shadows
# shadow-red = 0.0;
# shadow-green = 0.0;
# shadow-blue = 0.0;

# The shadow exclude options are helpful if you have shadows enabled. Due to the way compton draws its shadows, certain applications will have visual glitches
# (most applications are fine, only apps that do weird things with xshapes or argb are affected).
# This list includes all the affected apps I found in my testing. The "! name~=''" part excludes shadows on any "Unknown" windows, this prevents a visual glitch with the XFWM alt tab switcher.
shadow-exclude = [
    "! name~=''",
    "name = 'Notification'",
    "name = 'Plank'",
    "name = 'Docky'",
    "name = 'Kupfer'",
    "name = 'xfce4-notifyd'",
    "name *= 'VLC'",
    "name *= 'compton'",
    "name *= 'Chromium'",
    "name *= 'Chrome'",
    "name *= 'Firefox'",
    "class_g = 'Conky'",
    "class_g = 'Kupfer'",
    "class_g = 'Synapse'",
    "class_g ?= 'Notify-osd'",
    "class_g ?= 'Cairo-dock'",
    "class_g ?= 'Xfce4-notifyd'",
    "class_g ?= 'Xfce4-power-manager'"
];
# Avoid drawing shadow on all shaped windows (see also: --detect-rounded-corners)
shadow-ignore-shaped = false;

#################################
#
# Opacity
#
#################################

menu-opacity = 1;
inactive-opacity = 1;
active-opacity = 1;
frame-opacity = 1;
inactive-opacity-override = false;
alpha-step = 0.06;

# Dim inactive windows. (0.0 - 1.0)
# inactive-dim = 0.2;
# Do not let dimness adjust based on window opacity.
# inactive-dim-fixed = true;
# Blur background of transparent windows. Bad performance with X Render backend. GLX backend is preferred.
# blur-background = true;
# Blur background of opaque windows with transparent frames as well.
# blur-background-frame = true;
# Do not let blur radius adjust based on window opacity.
blur-background-fixed = false;
blur-background-exclude = [
    "window_type = 'dock'",
    "window_type = 'desktop'"
];

#################################
#
# Fading
#
#################################

# Fade windows during opacity changes.
fading = true;
# The time between steps in a fade in milliseconds. (default 10).
fade-delta = 4;
# Opacity change between steps while fading in. (default 0.028).
fade-in-step = 0.03;
# Opacity change between steps while fading out. (default 0.03).
fade-out-step = 0.03;
# Fade windows in/out when opening/closing
# no-fading-openclose = true;

# Specify a list of conditions of windows that should not be faded.
fade-exclude = [ ];

#################################
#
# Other
#
#################################

# Try to detect WM windows and mark them as active.
mark-wmwin-focused = true;
# Mark all non-WM but override-redirect windows active (e.g. menus).
mark-ovredir-focused = true;
# Use EWMH _NET_WM_ACTIVE_WINDOW to determine which window is focused instead of using FocusIn/Out events.
# Usually more reliable but depends on a EWMH-compliant WM.
use-ewmh-active-win = true;
# Detect rounded corners and treat them as rectangular when --shadow-ignore-shaped is on.
detect-rounded-corners = true;

# Detect _NET_WM_OPACITY on client windows, useful for window managers not passing _NET_WM_OPACITY of client windows to frame windows.
# This prevents opacity being ignored for some apps.
# For example without this enabled my xfce4-notifyd is 100% opacity no matter what.
detect-client-opacity = true;

# Specify refresh rate of the screen.
# If not specified or 0, compton will try detecting this with X RandR extension.
refresh-rate = 0;

# Set VSync method. VSync methods currently available:
# none: No VSync
# drm: VSync with DRM_IOCTL_WAIT_VBLANK. May only work on some drivers.
# opengl: Try to VSync with SGI_video_sync OpenGL extension. Only work on some drivers.
# opengl-oml: Try to VSync with OML_sync_control OpenGL extension. Only work on some drivers.
# opengl-swc: Try to VSync with SGI_swap_control OpenGL extension. Only work on some drivers. Works only with GLX backend. Known to be most effective on many drivers. Does not actually control paint timing, only buffer swap is affected, so it doesn’t have the effect of --sw-opti unlike other methods. Experimental.
# opengl-mswc: Try to VSync with MESA_swap_control OpenGL extension. Basically the same as opengl-swc above, except the extension we use.
# (Note some VSync methods may not be enabled at compile time.)
vsync = "opengl-swc";

# Enable DBE painting mode, intended to use with VSync to (hopefully) eliminate tearing.
# Reported to have no effect, though.
dbe = false;
# Painting on X Composite overlay window. Recommended.
paint-on-overlay = true;

# Limit compton to repaint at most once every 1 / refresh_rate second to boost performance.
# This should not be used with --vsync drm/opengl/opengl-oml as they essentially does --sw-opti's job already,
# unless you wish to specify a lower refresh rate than the actual value.
sw-opti = false;

# Unredirect all windows if a full-screen opaque window is detected, to maximize performance for full-screen windows, like games.
# Known to cause flickering when redirecting/unredirecting windows.
# paint-on-overlay may make the flickering less obvious.
unredir-if-possible = true;

# Specify a list of conditions of windows that should always be considered focused.
focus-exclude = [ ];

# Use WM_TRANSIENT_FOR to group windows, and consider windows in the same group focused at the same time.
detect-transient = true;
# Use WM_CLIENT_LEADER to group windows, and consider windows in the same group focused at the same time.
# WM_TRANSIENT_FOR has higher priority if --detect-transient is enabled, too.
detect-client-leader = true;

#################################
#
# Window type settings
#
#################################

wintypes:
{
    tooltip =
    {
        # fade: Fade the particular type of windows.
        fade = true;
        # shadow: Give those windows shadow
        shadow = false;
        # opacity: Default opacity for the type of windows.
        opacity = 0.85;
        # focus: Whether to always consider windows of this type focused.
        focus = true;
    };
}

Afterwards, Compton can be started with

compton -b

If you want Compton to start automatically in the future, simply create a corresponding entry for Compton in the Xfce settings under “Session and Startup” on the “Application Autostart” tab.
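
That GUI entry simply corresponds to an XDG autostart file, so if you prefer, you can also create one by hand; a minimal sketch (the file name is arbitrary), saved as ~/.config/autostart/compton.desktop:

[Desktop Entry]
Type=Application
Name=Compton
Exec=compton -b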

The Flatpak BoF at Guadec

Posted by Matthias Clasen on July 14, 2018 03:54 PM

Here is a quick summary of the Flatpak BoF that happened last week at Guadec.

1.0 approaching fast

We started by going over the list of outstanding 1.0 items. It is a very short list, and they should all be included in an upcoming 0.99.3 release.

  • Alex wants to add information about renaming to desktop files
  • Owen will hold his OCI appstream work for 1.1 at this point
  • James is interested in getting more information from snapd to portal backends, but this also does not need to block 1.0, and can be pulled into a 1.1 portals release
  • Matthias will review the open portal issues and make sure everything is in good shape for a 1.0 portal release

1.0 preparation

Alex will do a 0.99.3 release with all outstanding changes for 1.0 (Update: this release has happened by now). Matthias will work with Allan and Bastien on the press release and other materials. Nick is very interested in having information about runtime availability, lifetime, and stability easily available on the website for 1.0.

We agreed to remove the ‘beta’ label from the flathub website.

Post 1.0 plans

There was a suggestion that we should have an autostart portal. This request spawned a bigger discussion of application life-cycle control, background apps and services. We need to come up with a design for these intertwined topics before adding portals for it.

After 1.0, Alex wants to focus on tests and ci for a while. One idea in this area is to have a scriptable test app that can make portal requests.

Automatic migration on renames or EOL is on Endless’ wishlist.

Exporting repositories in local networks is a feature that Endless has, but it may end up upstream in ostree instead of flatpak.

Everybody agreed that GNOME Software should merge apps from different sources in a better way.

For runtimes, the GNOME release team aims to have the GNOME runtime built using buildstream, on top of the freedesktop 1.8 runtime. This may or may not happen in time for GNOME 3.30.

How to solve the Netflix and Vivaldi problem on Linux

Posted by Alvaro Castillo on July 14, 2018 03:20 PM

In the previous post, we talked about Vivaldi, a browser that was released in response to the direction Opera took with its community, with its first version appearing on April 12, 2016.

However, we have had problems playing videos with Netflix or Atres Player because there apparently is a problem with the codecs. Fedora, for example, does not include proprietary codecs unless you add an additional repository and install them from it. Nevertheless, we took a look around their foru...

F28-20180712 Updated isos released

Posted by Ben Williams on July 13, 2018 05:42 PM

The Fedora Respins SIG is pleased to announce the latest release of Updated F28-20180712 Live ISOs, carrying the 4.17.4-200 kernel.

This set of updated ISOs will save about 900 MB of updates after install (for new installs).

We would also like to thank Fedora QA for running the following tests on our ISOs: https://openqa.fedoraproject.org/tests/overview?distri=fedora&version=28&build=FedoraRespin-28-updates-20180712.0&groupid=1

These can be found at http://tinyurl.com/live-respins. We would also like to thank the following IRC nicks for helping test these ISOs: dowdle and Southern_Gentlem.

As always, we need testers to help with our respins. We have a new badge for people who help test. See us in #fedora-respins on Freenode IRC.

Fedora on the UDOO Neo

Posted by Peter Robinson on July 13, 2018 12:00 PM

Some time ago I backed the UDOO Neo Kickstarter as it looked like a nifty, well-featured IoT device. I got the full option, which came with 1 GB RAM, both wired and wireless Ethernet, and some add-on sensors. It was a well-run Kickstarter campaign and the device was well packaged, with a fab box. It has both a Cortex-A9 processor to run Fedora and a Cortex-M4 embedded processor that enables Arduino-style functionality, which should be interesting to experiment with.

For various reasons it has sat around gathering dust; it’s been a bit of a long drawn-out process, with me randomly poking at it as time allowed. Primarily this was because there was no decent upstream U-Boot and kernel support, and I’d not had the time to hack that up myself from various downstream git repositories. But even setting Fedora support aside, their forked Ubuntu distro, UDOObuntu, offers an experience that is truly terrible!

In late 2016 the lack of upstream support for U-Boot and the kernel changed, with initial basic support landing upstream for all three models (Basic, Extended and Full), so with a few spare cycles over a weekend it was time to dust it off to see if I could get Fedora 26 (did I mention this has been long running?) running on it and to see what worked.

The first thing for me to do was to set up a serial console for easy debugging. The UDOO Neo documentation is generally outstanding and the pins for the UART1 TTL are documented. Two things to note here: the headers are female rather than the usual SBC male pins, so I had to bodge my usual USB-to-serial TTL with some male-male jumper wires, and you’ll need a ground for the TTL, which is undocumented on their page; I used one of the GNDs documented on connector J7 and all was good.

So after an initial set of fixes to the U-Boot support, it saw my Fedora install and started to boot! Success! Well, sort of. As mentioned above, the initial support is rudimentary: it started to boot the kernel and very quickly managed to corrupt and destroy the filesystem, not making it much beyond switch root. That wasn’t good. In the last week or two I’ve had a little time to look again: similar issues. It was better than it was a year or so ago, but it still ended up with corruption. I reached out to one of the maintainers from NXP who deals with a bunch of the i.MX platforms and was directed to a handful of patches. A test kernel and image later, and a test boot… all the way to initial-setup! SUCCESS!

The core support for the i.MX6SX SoC and the UDOO Neo is pretty reasonable. With the MMC fixes it’s been very stable; all the core bits are working as expected, including wired and wireless network, thermal, cpufreq, and crypto, and it looks like the display should work fine. There are a few quirks I need to investigate further, which should provide for a fun evening or weekend of hacking. Support for the i.MX6SX Cortex-M4 has also recently been merged upstream in Zephyr for the 1.13 release, so getting that running, with communication between Fedora and Zephyr using Open-AMP, should also be an interesting addition. I think this will be a welcome addition to Fedora 29, and not a moment too soon!!

Share awesome Fedora content here on the Magazine

Posted by Fedora Magazine on July 13, 2018 08:00 AM

Do you know how to do something on Fedora that needs to be shared with the world? Want to share an awesome piece of Fedora news? Do you or someone you know use Fedora in an interesting way? The Fedora Magazine is always open for new contributors to write awesome, relevant content. Fedora Magazine is run by the Fedora community: users, developers, and everyone in between.

While much of our content features material for Workstation users, we also feature articles for other Fedora users: sysadmins, power users, and developers that use Fedora.

Be sure to get in contact with us even if you just have an awesome article idea. Making more of the content that you all want to see is a primary goal of the Fedora Magazine.

How do I get started?

It’s easy to start writing for Fedora Magazine! You just need to have decent skill in written English, since that’s the language in which we publish. Our editors can help polish your work for maximum impact.

Follow this easy process to get involved.

The writers and editors use the Fedora Magazine mailing list to plan future articles. Create a new thread on that list and introduce yourself. If you have some ideas for posts, add them to your message as well. The Magazine team will then guide you through getting started. The team also hangs out in #fedora-magazine on Freenode. Drop by, and we can help you get started.

Report from the Rencontres Mondiales du Logiciel Libre 2018

Posted by Charles-Antoine Couret on July 13, 2018 06:00 AM

This was my first visit to the city of Strasbourg; I came to promote the Fedora project and, of course, the Borsalinux-fr association.

gaia joined me and accompanied me all week. Thanks to him for the help and for the pleasant moments.

RMLL 2018-Stand.jpg

How it went

Saturday was apparently a bit chaotic on the organizational side, so we were not at the planned location and were also far from the conference rooms. As a result we had rather few visitors, about 5 people at our booth. I ran into Véronique Fritière, who traditionally organized the JM2L in Sophia-Antipolis; they invited me to join this year’s edition (which, incidentally, is an annual event again) on December 15. I spent the evening having dinner with members and contributors of Zeste de Savoir, a free French-language tutorial site that I visit a lot and that I recommend.

On Sunday, we set up at the university between Zeste de Savoir and Haiku, which was quite pleasant for the whole week. Traffic was better, with 15-20 people, Fedora users and non-users alike, including some English speakers. We also ran into the president of Remi Collet’s LUG, who made a point of saying hello. Nothing more specific.

Monday included a meal with Jean-Baptiste, the lead French translator for Fedora, who is a local. A regional meal of tartes flambées, followed by a pleasant little tour of the charming host city. Interesting discussions about how to try to attract new contributors, spiced up of course with the usual criticism of Zanata and of the translation procedure in Fedora. That gives me something to work on next. Thanks for the welcome and the nice time. We should see each other more often. ;-)

Monday ended with a discussion with Benoît Sibaud about the venerable site linuxfr.org, which regularly publishes my content about Fedora. And a visit from Adrien D and TheSuperGeek from the IRC channel. Interesting exchanges, and it is always a pleasure to put a face to a nickname or a voice. About ten people stopped by our booth.

On Tuesday about ten people came by, including one who discussed the absence of an ancestris package (for genealogy) in the Fedora repositories. I also took the opportunity to tour the other booths, in particular Mozilla’s, whose work I appreciate a lot. I got a demonstration of their speech synthesis / voice recognition and of WebVR. Interesting for what comes next.

In the evening we dropped by the RMLL LAN party. It was a retro-gaming themed evening; we got to play a few SNES mini games, Supertuxkart and Flightgear, accompanied by a licensed pilot and his simulation hardware, which made the experience great fun.

RMLL 2018-Flightgear

Wednesday morning was the occasion to give my talk on what Fedora Workstation brings to the Free Software ecosystem. It went well; around fifteen people attended and they seemed satisfied. It was recorded, so the video should be available at some point. Then I shared a meal with gaia, for our last day, at a local brasserie. During the afternoon the booth welcomed another ten or so people. We generated our only Live image that day.

To finish, a little trolling about the EuheuheuhPC 701 with Thierry Stoehr. The machine really was incredible for its time (and for its price!).

Summary of the discussions

gaia seems satisfied with the changes in the docs and the release notes; he will send us a list of articles or pieces of information that are missing and that he thinks would be useful. He would like us to centralize the good French-language videos about Fedora somewhere, probably on Peertube. And why not produce some ourselves as well.

We had feedback from a visually impaired user who praised Fedora for this use case, thanks to the availability of the latest free tools for these users, such as Orca. This even though she uses a distribution provided by Hypra (based on Debian) to benefit from better speech synthesis and recognition via specialized proprietary tools.

Adrien D. seems satisfied with Fedora even though he does not use it daily. His only criticism concerned the quality of the dnfdragora tool for managing packages graphically. We discussed possibly working together around the Fedora 29 release to produce a testing video with questions and answers at the end. In any case, I think moving toward more video would be enriching. And he agreed to provide us with his videos about Fedora; his videos are on the whole of good quality, which is appreciated.

A Mageia ambassador mentioned a discussion with Emmanuel Seyman about placing joint orders for certain goodies in order to reduce costs, which is indeed an interesting possibility.

A professional Fedora user seems to have trouble with VirtualBox, valgrind, ansible and VMware when there is a big version change. Otherwise he seems satisfied with it, and with Fedora-fr’s communication as well.

A former user complained about the instability of Anaconda, our installer, which kept crashing and prevented installation. Another complained about the long boot time with his configuration. Finally, a local user pointed out her difficulties recharging her public transport card with Fedora.

As for the rest: curious visitors, or users on the whole satisfied with the distribution, despite the mixed fortunes mentioned above.

In any case, thanks to the organizers for this event. Running events is not easy; I know a thing or two about that. Good luck going forward; all in all it was nice to be there, to see everyone, and to talk with users and contributors from all walks of life.

Why you should bundle the root CAs in your image

Posted by Fabio Alessandro Locati on July 13, 2018 12:00 AM
If you have ever used Docker or any other Linux OCI container system, you have inevitably run into the following error:

x509: failed to load system roots and no roots provided

This message is reminding you that you forgot to provide root Certificate Authorities to your application. There are two different ways to solve this:

  • mount the /etc/ssl/certs folder from the machine where the container is running
  • bundle the root CAs in your image

As you may imagine from the title, I believe that the second option is by far better than the first one.
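
The post itself stops short of an example, but a minimal multi-stage Dockerfile sketch of the bundling approach could look like this (the base image, binary name, and destination path are illustrative; on Fedora the CA bundle lives at /etc/pki/tls/certs/ca-bundle.crt):

FROM fedora:28 AS certs
RUN dnf install -y ca-certificates

FROM scratch
# copy the CA bundle into the otherwise empty image so TLS clients can verify peers
COPY --from=certs /etc/pki/tls/certs/ca-bundle.crt /etc/ssl/certs/ca-certificates.crt
# hypothetical statically linked application binary
COPY myapp /myapp
ENTRYPOINT ["/myapp"]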

Slice of Cake #26

Posted by Brian "bex" Exelbierd on July 12, 2018 04:00 PM

A slice of cake

Last week as the FCAIC I … ok, let’s get real. I haven’t written since February. Thanks to the amazing team of Community Leaders, and pushed on by Stormy Peters, I bring you this out-of-sync return to the cake updates.

  • Processed the last of the 45 funding requests received for Flock. This resulted in 28 hotel room bookings (most shared) and 33 plane tickets being booked centrally. Train tickets will come in later via reimbursements. The plane tickets were booked by an agent in all but 3 cases and the hotels were managed by me directly with the property. I’ve got a note to do a follow up post on this process so you can consider it for your next meeting.
  • The Flock team met to validate that we are still on track (we are!), including our evening activity (and associated transportation) and evening meal (and associated transportation). I need to kick off the CfP-to-schedule work ASAP!
  • Handed off design requests to the Fedora Design team for t-shirts, badges (real and virtual), etc. They are an amazing group of people who have already met most of the needs and are grinding through the last of these late requests.
  • Received a ton of audio equipment that we will be using at Flock. Paul Frields (stickster) was kind enough to spec out the equipment and Matej Hursovsky is going to verify functionality and run it for us.
  • The Fedora Council is now reviewing the first draft of the proposed Code of Conduct and Response Guide for Fedora. This task was given to Marina Zhurakhinskaya (marinaz) at Flock in 2017. This is a long process and there is a lot of material to process on advances in this area. Our new Program Manager, Ben Cotton (bcotton) has drawn up a draft timeline that is being finalized. The general theory is that we need to finish it and collect Council input, then send it for legal review. After that we will open it up to the whole community for review and input. Then we will decide how to proceed with adoption.
  • Docs is proceeding at a wonderful pace. Adam Samalik (asamalik) is driving the adoption of Antora and Petr Bokoc is driving content. I am happy to be transitioning my docs responsibilities to this great team as the community rebounds in this area. My last big goal is to re-enable localization of documentation.
  • Related to docs, I organized a Friday hackfest for Antora that occurs every Friday here in Brno. We’ve not had many remotees wanting to join us, so it has morphed into the Friday Breakfast Club followed by a hack. If you’re ever in Brno, whether you want to hack or not, ping me about joining the Friday Breakfast Club.
  • Mindshare still needs to get the final processes written for the new lighter event workflow. APAC has graciously offered to let their tickets be the testing ground for getting things through the system. I’ve missed a bunch of meetings because they have fallen on “Brian in a plane” days, so I am looking forward to re-engaging with folks soon.
  • Attended several conferences, which I hope to write about in the near future.

and … drumroll please

  • Celebrated the launch of the amazing, incredible, and fantastic community-suggested It’s a Cake Thing badge. The badge, most recently awarded two nights ago, is earned by anyone who has a conversation with the FCAIC and eats or drinks anything! The badge can be earned virtually, but not retroactively. The goal behind the badge is to foster better communication and to encourage folks to look for connections across our project. I am sure badge originators Jona, Sachin, and Justin might tell a slightly different story about an actual piece of cake though …

À la mode

  • Actually had a bit of vacation and got myself sunburned (not horribly, but badly) in Mallorca. I strongly recommend it for everything but baking and roasting :D.

Cake Around the World

I’ll be traveling some and hope you’ll ping me for coffee if you’re nearby. If you’re considering attending and want to collaborate on a talk, let’s … talk :).

  • Flock from 8-11 August in Dresden, Germany
  • DevConf.us from 17-19 August in Boston, MA, USA
  • Open Source Summit Europe (OSS Europe) from 22-24 October in Edinburgh, United Kingdom

Note: My attendance at a few of the events is still tentative, but I expect most will happen.

Building QGo on RHEL 7.5

Posted by Adam Young on July 12, 2018 03:06 PM

I’ve played Go for years. I’ve found that having a graphical Go client has helped me improve my game immensely. And, unlike many distractions, I can make a move, then switch back into work mode without really losing my train of thought.

I’ve always liked the QGo client. I have found it worthwhile to build and run it from the git repo. After moving to RHEL 7.5 for my desktop, I had to go through the process again. Here is the short version.

Playing Go using the QGo client

All of the pre-reqs can come from Yum.

For the compiler and build tools, it is easiest to use a yum group:

sudo yum groupinstall "Development and Creative Workstation"

Once those packages are installed, you need some of the Qt5 development packages. At the bottom is the complete list I have. I did not install all of these directly, but instead recently installed:

qt5-devel
qt5-qtbase-devel
qt5-qttools-devel
qt5-qttranslations
qt5-qtmultimedia
qt5-qtmultimedia-devel
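
If you want to pull that set in with a single command, something like this should do it (package names exactly as listed above; availability may vary by repo):

sudo yum install qt5-devel qt5-qtbase-devel qt5-qttools-devel qt5-qttranslations qt5-qtmultimedia qt5-qtmultimedia-devel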

To run the actual qmake command, things are a bit different from the README:

/usr/bin/qmake-qt5 src
make

That puts things in ../build, which took me a moment to find.

Now I can run qgo with

/home/ayoung/devel/build/qgo

Et voilà!

QGo Running on RHEL 7.5

The complete list of qt packages I have installed is:

qt5-qttools-libs-designer-5.9.2-1.el7.x86_64
adwaita-qt5-1.0-1.el7.x86_64
qt5-qtmultimedia-devel-5.9.2-1.el7.x86_64
qt-settings-19-23.7.el7.noarch
qt5-qtbase-devel-5.9.2-3.el7.x86_64
qt5-qttools-5.9.2-1.el7.x86_64
qt5-qtbase-5.9.2-3.el7.x86_64
qt5-rpm-macros-5.9.2-3.el7.noarch
qt5-doctools-5.9.2-1.el7.x86_64
qt5-designer-5.9.2-1.el7.x86_64
qt5-qtbase-common-5.9.2-3.el7.noarch
highcontrast-qt5-0.1-2.el7.x86_64
qt5-qtmultimedia-5.9.2-1.el7.x86_64
qt5-qttools-libs-designercomponents-5.9.2-1.el7.x86_64
qt-4.8.7-2.el7.x86_64
qt5-qtdeclarative-devel-5.9.2-1.el7.x86_64
qt5-qttools-libs-help-5.9.2-1.el7.x86_64
qt5-qtbase-gui-5.9.2-3.el7.x86_64
qt3-3.3.8b-51.el7.x86_64
qt5-qtxmlpatterns-5.9.2-1.el7.x86_64
qt5-qttools-common-5.9.2-1.el7.noarch
qt5-qttools-devel-5.9.2-1.el7.x86_64
qt5-qtdeclarative-5.9.2-1.el7.x86_64
qt5-linguist-5.9.2-1.el7.x86_64
qt5-qttranslations-5.9.2-1.el7.noarch
qt-x11-4.8.7-2.el7.x86_64

unlabeled_t type

Posted by Dan Walsh on July 12, 2018 03:02 PM

I often see bug reports or people showing AVC messages about confined domains not able to deal with unlabeled_t files.

type=AVC msg=audit(1530786314.091:639): avc:  denied  { read } for  pid=4698 comm="modprobe" name="modules.alias.bin" dev="dm-0" ino=9115100 scontext=system_u:system_r:openvswitch_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file

I just saw this AVC, which shows the openvswitch domain attempting to read a file, modules.alias.bin, via modprobe. The usual response to this is to run restorecon on the files, and everything should be fine.

But the next question I get is how this content got the label unlabeled_t, and my response is usually: I don't know, you did something.

Well, let's look at how unlabeled_t files get created.

unlabeled_t really just means that the file on disk does not have an SELinux xattr indicating a file label. Here are a few ways these files can get created.

1. Files were created on a file system while the kernel was not running in SELinux mode. If you take a system that was installed without SELinux (God forbid), or someone booted the machine with SELinux disabled, then all files created will have no labels. This is why we force a relabel any time someone changes from SELinux disabled to SELinux enabled at boot time.

2. An extension of content created while the kernel is not in SELinux mode is files created in the initramfs before SELinux policy is loaded in the kernel. We have an issue in CoreOS right now where, when the system boots up, the initramfs runs `ignition`, which runs before systemd loads SELinux policy. The ignition scripts create files on the file system while SELinux is not enabled in the kernel, so those files get created as unlabeled_t. Ignition is adding a one-time systemd unit file to run restorecon on the content created.

3. People create USB sticks with ext4 or xfs on them on a non-SELinux system, then stick them into systems with SELinux enabled and `mv` the content onto the system. The `mv` command actually maintains the SELinux label, or lack thereof, when it moves files across file systems. If you use `mv -Z`, the mv command will relabel the target content, or you can just use restorecon.

4. The fourth way I can think of to create unlabeled_t files is to create a brand new file system on an SELinux system. When you create a new file system, the kernel creates the "/" (root) of the file system without a label. So if you mount the file system onto a mount point, the directory where you mounted it will have no label. If an unconfined domain creates files on this new file system, it will also create unlabeled_t files, since the default behaviour of the SELinux kernel is to create content based on the parent directory, which in this case is labeled unlabeled_t. I recommend running restorecon on the mount point as soon as you mount a new file system, to fix this behaviour. Or you can run `restorecon -R -v MOUNTPOINT` to clean up all the files.
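
To illustrate that fourth case (the device and mount point here are made up for the example):

# mkfs.xfs /dev/sdb1
# mount /dev/sdb1 /srv/data
# ls -dZ /srv/data      (the directory shows up as unlabeled_t)
# restorecon -R -v /srv/data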

Note: The unlabeled_t type can also show up on other objects besides file system objects, for example on labeled networks, but this blog is only concerned with file system objects.

Bottom Line:

Unlabeled files should always be cleaned up ASAP since they will cause confined domains lots of problems, and restorecon is your friend.

Insights Security Hardening Rules

Posted by Red Hat Security on July 12, 2018 01:30 PM

Many users of Red Hat Insights are familiar with the security rules we create to alert them about security vulnerabilities on their system, especially concerning high-profile issues such as Spectre/Meltdown or Heartbleed. In this post, I'd like to talk about the other category of security related rules, those related to security hardening.

In all of the products we ship, we make a concerted effort to ship thoughtful, secure default settings to minimize the amount of configuration needed to do the work you want to do. With complex packages such as Apache httpd, however, every installation will require some degree of customization before it's ready for deployment to production, and with more complex configurations, there's a chance that a setting or the interaction between several settings can have security implications which aren't immediately evident. Additionally, sometimes systems are configured in a manner that aids rapid development, but those configurations aren't suitable for production environments.

With our hardening rules, we detect some of the most common security-related configuration issues and provide context to help you understand the represented risks, as well as recommendations on how to remediate the issues.

Candidate Rule Sources

We use several sources to find candidates for new hardening rules, but our primary sources are our own Red Hat Enterprise Linux Security Guides. These guides are founded on Red Hat's own knowledge of its specific environment, past customer issues, and the domain expertise of Red Hat's engineers. These guides cover a broad spectrum of security concerns ranging from physical and operational security to specific recommendations for individual packages or services.

Additionally, the Product Security Insights team reviews other industry-standard benchmarks, best-practices guides, and news sources for their perspectives on secure configurations. One example is the Center for Internet Security's CIS Benchmark for RHEL specifically and Linux in general. Another valuable asset is SANS' Information Security Resources, which provides news about new research in information security.

From these sources, we select candidates based on a number of informal criteria, such as:

  • What risk does this configuration represent? Some misconfigurations can expose confidential information, while a less serious misconfiguration might cause loss of audit log data.
  • How common are vulnerable configurations? If an issue seems rare, then it may have a lower priority. Conversely, some issues are almost ubiquitous, which suggests further research into where our user communication or education could be improved.
  • How likely are false reports, positive or negative? Some system configurations, especially around networking, are intrinsically complex. Being able to assess whether a system has a vulnerable firewall in isolation is challenging, as users may have shifted the responsibility for a particular security check (e.g. packet filtering) to other devices. In some cases, heuristics can be used, but this is always weighed against the inconvenience of false reports.

With these factors in mind, we can prioritize our list of candidates. We can also identify areas where more information would make possible other rules, or would improve the specificity of rule recommendations.

An Example Rule

For a concrete example, one hardening rule we offer detects potentially insecure network-related settings in sysctl. Several parameters are tested by this rule, such as:

icmp_echo_ignore_broadcasts: This setting, which is on by default, will prevent the system from responding to ICMP requests sent to broadcast addresses. A user may have changed this setting while troubleshooting network issues, but it presents an opportunity for a bad actor to stage a denial-of-service attack against the system's network segment.

tcp_syncookies: Also on by default, syncookies provide protection against TCP SYN flood attacks. In this case, there aren't many reasons why it would be disabled, but some specialized hardware, legacy software, or software in development may have a minimal network stack which doesn't support syncookies. In this case, it's important to be aware of the issue and have other methods to protect the system from SYN flood attacks.

ip_forward: This setting, which allows packet forwarding, is disabled by default. However, since it must be enabled for the system to act as a router, it's also the most commonly detected setting. In this case, to prevent false positives, the rule uses supporting data such as the firewall configuration to determine if this system may be acting as a router. If it's not, it's possible the user has a particular purpose for having the system forward packets, or it's possible the system was used as a router at one point, but its configuration wasn't completely reexamined after it was put into use elsewhere. In any case, as above, it's important that the system's user is aware that the feature is enabled, and understands the security implications.
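
For reference, the current values of the parameters discussed above can be checked, and a hardened value persisted across reboots, with sysctl (the drop-in file name below is illustrative):

# sysctl net.ipv4.icmp_echo_ignore_broadcasts net.ipv4.tcp_syncookies net.ipv4.ip_forward
# echo "net.ipv4.icmp_echo_ignore_broadcasts = 1" > /etc/sysctl.d/99-hardening.conf
# sysctl -p /etc/sysctl.d/99-hardening.conf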

These are only a few of the parameters this rule examines. In some cases, such as this, several different but related issues are handled by a single rule, as the location of the configuration and the logic used to detect problems are similar. In other cases, such as with httpd configuration, the problem domain is much larger, and warrants separate rules for separate areas of concern, such as data file permissions, cryptography configuration, or services exposed to public networks.

Conclusion

This is just a brief overview of the process that goes into choosing candidates for and creating security hardening rules. It is, in practice, a topic as large as the configuration space of systems in general. That there is so much information about how to securely configure your systems is testament to that. What might be insecure in one context is the intended state in another, and in the end, only the user will have sufficient knowledge of their context to know which is the case. Red Hat Insights, however, provides users with Red Hat's breadth and depth of understanding, applied to the actual, live configuration of their systems. In this way, users benefit not only from the automated nature of Insights, but also from the Product Security Insights team's participation in the wider Information Security community.

While we have an active backlog of security hardening rules, and likely will for some time due to the necessary prioritization of vulnerabilities in our rule creation, we're always interested in hearing about your security concerns. If there are issues you've faced that you'd like Insights to be able to tell you about, please let us know. Additionally, if you've had a problem with one of our rules, we'd like to work with you to address it. We may have substantial knowledge about how Red Hat products work, but you are the most knowledgeable about how you use them, and our objective is to give you all the information we can to help you do so securely.


[Week 8] GSoC Status Report for Fedora App: Abhishek Sharma

Posted by Fedora Community Blog on July 12, 2018 12:30 PM

This is Status Report for Fedora App filled by participants on a weekly basis.

Status Report for Abhishek Sharma (thelittlewonder)

  • Fedora Account: thelittlewonder
  • IRC: thelittlewonder (found in #fedora-summer-coding, #fedora-india, #fedora-design)
  • Fedora User Wiki Page

Tasks Completed

Revamped UI for Package Search

Amitosh implemented a new feature that allows you to search for DNF packages right from the Fedora App. Last week, we worked on the UI of the Package Search view to make it more consistent with the design of the application.

Pull Request: #95

Empty States Components

During the last month, we worked on the design of the error and empty states of the app [Check it out, if you haven’t]. Last week, we worked on breathing life into those designs and created Angular components for the screens to increase reusability. Now we can simply drop the components in wherever we want to display an empty state.

Pull Request: #96

Women and Diversity Column

The Fedora Project welcomes and encourages participation by everyone. So we added a Women and Diversity column that showcases the latest updates on inclusion and diversity events in the Fedora community.

Pull Request: #97

What’s going on

Fedora Podcast

We finally managed to figure out how to integrate the Fedora Podcast into the app. Big shoutout to x3mboy for helping me with the Simplecast API key. We will work on the design and integration of the Podcast this week.

That’s all for this week. 👋

Send your feedback to guywhodesigns[at]gmail[dot]com

The post [Week 8] GSoC Status Report for Fedora App: Abhishek Sharma appeared first on Fedora Community Blog.

Vivaldi, an impressive web browser

Posted by Alvaro Castillo on July 11, 2018 04:00 PM

Vivaldi is a free browser developed by Vivaldi Technologies, a company founded by Opera co-founder and former CEO Jon Stephenson von Tetzchner and by Tatsuki Tomita, who were quite unhappy with several decisions Opera Software made some time ago, such as shutting down the My Opera community portal and leaving behind the opinions of the people who were helping with its development and improvement.

What does Vivaldi offer us?

Vivaldi is very little known b...

Fedora tackles Southeast Linux Fest 2018

Posted by Fedora Community Blog on July 11, 2018 08:30 AM

Ambassadors’ report for Southeast Linux Fest: Ben and Cathy Williams, Andrew and Julie Ward, Rosnel Echervarria, and Nick Bebout

Southeast Linux Fest, June 8-10, 2018, Charlotte, North Carolina

Southeast Linux fest

Julie and Ben just after setup

The annual event reached its historic 10-year mark this year. Southeast Linux Fest has been one of the most successful and longest-running Linux events, topped only by SCALE and LinuxFest Northwest. Southeast Linux Fest (SELF) is held at the Sheraton Airport Hotel in Charlotte, North Carolina. The event is centrally located to accommodate attendance (with easy access from the airport) from many of the surrounding states. We saw many individuals from Tennessee, Georgia (Atlanta/Macon area), South Carolina, and Florida, as well as local attendees from the Charlotte area and the state of North Carolina.

Event Goals from the desk of Southeast Linux Fest Coordinator

In discussing the primary goal of this event with the Southeast Linux Fest coordinator, the target was to draw from the surrounding southern states. The secondary goal was opening the event to everyone interested in learning about free and open source software. Later in this report we will discuss some of the feedback given to the coordinator about Southeast Linux Fest.

Friday, 8 June 2018 Day One

Fedora attendance was strong this year with 6 ambassadors, three of whom played a significant role in contributing to the event. Day one began on Friday, 8 June at 9 a.m. The morning setup started at 7:30 a.m. with four ambassadors: Ben and Cathy Williams, Andrew and Julie Ward. Between the four of us we got the booth set up rather quickly. Registration for the event began at 8 a.m., with attendees already at our booth prior to the first talk that morning. Nick Bebout arrived at the booth shortly after 8:30 a.m. to start lending a hand. The event did not really get busy until after the first keynote. Nick gave an excellent talk on SSH authentication using GPG smart cards that ran opposite the keynote.

Meanwhile, in between the talks, we stayed busy at the booth talking about the features included in Workstation release 28. I had built a small video presentation that covered the new features in the latest release. It also included a section showcasing Atomic and Server with some of the features included there as well. One individual noticed the Modularity feature and asked how it worked. It piqued his interest in the distribution, and he was very happy to download the software. As the morning progressed, the table became busier as more people checked in or registered. Fedora was the first stop after check-in registration.

This was a very strategic advantage for us, since everyone entering the event saw our booth first. On the first day some sponsors, such as Linode and MySQL, had not yet arrived, which proved to be a further vantage point for Fedora. Also that morning, Cathy Williams ran the Fiber track, a program that gave significant others attending the event an opportunity for alternative activities. I believe this is an excellent way to give those who attend only to support their partner something to do while their partner is involved with talks and events related to FOSS.

Southeast Linux Fest

Julie and Ben with the Swag

Southeast Linux Fest 2018

In between Talks

As the day went on we noticed that visitors at the booth became more interested in Workstation and its features. I found it very easy to discuss the GNOME desktop environment and its capabilities. I received many questions on how the desktop updates the installed software. As we discovered, interested individuals were not aware of how easily the software could be updated. I was surprised that even individuals who were expert Linux users were not aware of the included features and how software updates are accomplished in Fedora.

The expert-level enthusiasts all knew how to use the terminal, but a surprising number of them didn't know that GNOME would also update the installed software. The experts who visited us had used Fedora at one point but for some reason had moved away from the distribution, and they were impressed that the feature was available. I pointed out that novice folks new to Linux would find this quite useful in a desktop environment until they became proficient at typing commands in the terminal. I found this equally useful with the novice individuals who showed up at the booth looking for an operating system to replace their current Windows environment.

The two major selling points I made were that software updates are easy to perform and that the preloaded applications are an excellent start for a Windows replacement. Since LibreOffice comes installed as a primary package, its editing power easily outperforms the MS Office capabilities. I also pointed out that not only is LibreOffice a powerful tool, but the GNOME desktop environment is very user friendly and can easily be modified to fit the needs of the user with tools that are currently available. I also pointed out that the software installer offers a wide variety of applications that can be installed and used almost immediately after launch. The afternoon was steady and busy at the table. Ben and Nick were busy taking registration forms for the Amateur Radio license program.

Meanwhile, Julie and I continued to talk about the advantages of Workstation. There were many good talks happening in the afternoon, and the crowd kept growing until the last talk of the day. We wrapped up at 5:30 p.m. with a good feeling that we had accomplished what we set out to do.

[Photo: Southeast Linux Fest 2018, Saturday June 9th]

Saturday, 9 June Day Two

The second day (June 9th) started at about the same time for us, approximately 7:45 a.m., with Ben, Julie, and Ross (our Ambassador who had just arrived from Miami, Florida) helping out. Ross made our Ambassadors six strong. That morning we saw a few returning attendees and a long line of newly registering ones. Ben and Nick were again setting up the Amateur Radio license exams and completing the final registration process. Just the night before, Ben and Nick had held a cram session to help candidates prepare for the exam, which was scheduled later in the evening. It started to become quite busy at our table.

The ever-popular OLPC raised a significant number of questions and plenty of amazement; the unit seems to catch the eye of almost everyone who stops by. The interesting discussion point around the OLPC is what operating system it runs, and on that basis we demonstrate the versatility and functionality of Workstation. The unit is currently set up with the latest version of Workstation, which enabled a thorough demonstration of the features associated with it. A few individuals asked for a copy of SOAS along with Workstation, which we gladly provided. We fielded some technical questions about Modularity and what its benefits are, and I think we answered the questions to their satisfaction. I asked if they would like a copy of Server from us, but they already had the software and were preparing to load it once they left the conference.

One of the highlights of the afternoon was a man who was fed up with his computer (Windows) and wasn't sure where to go or what to do. He was also a small businessman who needed just the basics in applications: checking email, ordering items for his business, and completing various small office functions. After we demonstrated Workstation on the ThinkPad, he was surprised at how easy it was to use and that it was practically virus free. He gladly picked up a disk of Workstation and thanked us for the assistance. This is what made the whole trip worthwhile. Helping one person overcome the fear of switching operating systems to Fedora is what I am proud to accomplish, as is everyone at the Fedora table.

The day began to wind down, and with much of our swag moving quickly, we were all anxious to wrap up. Our last event, the Amateur Radio license exam, was scheduled immediately following the keynote talk (another late night).

Sunday, 10 June Day Three

Day three has historically been full of talks, and most of the vendors pack up and leave the area. We were set up by 9 a.m. that morning and quickly noticed an altogether different crowd of attendees; usually we see the same individuals on Sunday as we do on Saturday. Talks were scheduled until 3:45 p.m., after which things began to wind down for everyone in attendance. Fedora remained strong for all three days, with full support of attendees and promotion of our product. I am confident that this year we made a specific impact on the community with regards to Workstation.

By covering all of the product's features, including some that are less prevalent than others, I believe the impact was substantial. USB keys were available only for individuals who showed a vested interest in the product and spent valuable time asking questions about Workstation. I had some keys left over from a couple of years ago that I loaded with Workstation 28 for distribution. As you will see in the data collected, there was a significant drop in DVD media. Being prepared to hand USB keys loaded with Workstation to those who would otherwise have requested DVDs decreased the number of DVDs given out by about one third.

Also, with respect to surveys, we found it extremely hard to get individuals to fill out the forms. Whether electronic or on paper, collecting data about the event and attendees' experiences seems to make some people uncomfortable. There were great comments from those individuals who did fill out the forms. The data collected from the event surveys is detailed below.

Workstation F28 Friday Saturday Sunday
DVD Media 25 40 15
USB Keys 10 24 1
Totals 35 64 16
[Photos: Southeast Linux Fest, Saturday 9 June. Swag and demo.]

Other Media

(Upon Request only)

Friday Saturday Sunday
SOAS 5 6 0
KDE 1 0 0

 

Amateur Radio License Exams

Number of Examinees (Candidates): 15
Technician # Passing: 4
General # Passing: 5
Extra # Passing: 1
Exams Administered: 22

The surveys will be provided via separate correspondence attached to this report. The questions drew a wide range of answers in all areas. The first question was how the respondent learned about Fedora; the responses varied from relatives, to Red Hat, to some who went back to the early days of Fedora Core. The second question was to get a feel for how many were actually using the product; three of the twelve surveys taken had not used Fedora. Question three was about the experience at our booth, all of which were very good with no negative comments. The fourth question covered the experience level of the user, which ranged from novice and hobbyist to expert. The last question revealed that 7 out of the 12 surveys collected expressed a desire for involvement with Fedora.

In summary, the event proved very successful at promoting Workstation; any other media we produced was by request only and not advertised. We believe the OLPC is what generated the SOAS requests, since we had a lot of children stop and play the games on the OLPC. When the parents asked if we had that software, we gladly produced and handed them a DVD. We also had a special request from a Jacksonville, Florida LUG member for 40 DVDs of Workstation. Julie produced the media and delivered the DVDs, along with a little swag, for the next LUG meeting in Florida. That request brought our total to 155 Workstation DVDs produced for SELF.

We had many questions regarding the direction of the project and how it interacts with Red Hat. Having spent a lot of time demonstrating features within GNOME and Workstation, we felt that we met our goals. This event was a prime opportunity for users and experts to explore Fedora Workstation. Looking ahead to next year, there was some discussion of demonstrating an Amateur Radio Spin alongside the latest Workstation release, which would give us the opportunity to demonstrate the versatility of Fedora. These are just conversations between Ambassadors for now, but they could prove to be an interesting point. We hope to bring some new initiatives to Southeast Linux Fest in the years to come.

The post Fedora tackles Southeast Linux Fest 2018 appeared first on Fedora Community Blog.

4 add-ons to improve your privacy on Thunderbird

Posted by Fedora Magazine on July 11, 2018 08:00 AM

Thunderbird is a popular free email client developed by Mozilla. Similar to Firefox, Thunderbird offers a large choice of add-ons for extra features and customization. This article focuses on four add-ons to improve your privacy.

Enigmail

Encrypting emails using GPG (GNU Privacy Guard) is the best way to keep their contents private. If you aren’t familiar with GPG, check out our primer right here on the Magazine.

Enigmail is the go-to add-on for using OpenPGP with Thunderbird. Indeed, Enigmail integrates well with Thunderbird, and lets you encrypt, decrypt, and digitally sign and verify emails.

Paranoia

Paranoia gives you access to critical information about your incoming emails. An emoticon shows the encryption state of the connections between the servers an email traveled through before it reached your inbox.

A yellow, happy emoticon tells you all connections were encrypted. A blue, sad emoticon means one connection was not encrypted. Finally, a red, scared emoticon means the message traveled over more than one unencrypted connection.

More details about these connections are available, so you can check which servers were used to deliver the email.
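To give a rough idea of what kind of data this involves (this is my own illustration, not the add-on's code), the Received headers added by each server that handled a message can be scanned for markers of an encrypted hop; the file name below is a placeholder:

import email

# Load a raw message saved from your mail client (path is a placeholder).
with open('message.eml') as f:
    msg = email.message_from_file(f)

# Each server that handled the message prepended a Received header.
# "ESMTPS" or "TLS" in a hop usually indicates an encrypted connection.
for hop in msg.get_all('Received', []):
    status = 'encrypted' if ('ESMTPS' in hop or 'TLS' in hop) else 'unencrypted?'
    print(status, '-', ' '.join(hop.split())[:70])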

Sensitivity Header

Sensitivity Header is a simple add-on that lets you select the privacy level of an outgoing email. Using the option menu, you can select a sensitivity: Normal, Personal, Private and Confidential.

Adding this header doesn’t add extra security to email. However, some email clients or mail transport/user agents (MTA/MUA) can use this header to process the message differently based on the sensitivity.

Note that this add-on is marked as experimental by its developers.

TorBirdy

If you’re really concerned about your privacy, TorBirdy is the add-on for you. It configures Thunderbird to use the Tor network.

TorBirdy offers less privacy on email accounts that have been used without Tor before, as noted in the documentation.

Please bear in mind that email accounts that have been used without Tor before offer less privacy/anonymity/weaker pseudonyms than email accounts that have always been accessed with Tor. But nevertheless, TorBirdy is still useful for existing accounts or real-name email addresses. For example, if you are looking for location anonymity — you travel a lot and don’t want to disclose all your locations by sending emails — TorBirdy works wonderfully!

Note that to use this add-on, you must have Tor installed on your system.


Photo by Braydon Anderson on Unsplash.

Using podman for containers

Posted by Kushal Das on July 11, 2018 07:05 AM

Podman is one of the newer tools in the container world; it can help you run OCI containers in pods. It uses Buildah to build containers, and runc or any other OCI-compliant runtime. Podman is being actively developed.

I have moved the two major bots we use for dgplug summer training (named batul and tenida) under podman, and they have been running well for the last few days.

Installation

I am using a Fedora 28 system, where installing podman is as simple as installing any other standard Fedora package.

$ sudo dnf install podman

While I was trying out podman, I found it was working perfectly on my DigitalOcean instance, but not so much on the production VM: I was not able to attach to stdout.

When I tried to get help in the #podman IRC channel, many responded, but none of the suggestions helped. Later, I gave access to the box to Matthew Heon, one of the developers of the tool. He identified that the Indian timezone offset (+5:30) was too large for the timestamp buffer, which was causing this trouble.

The fix was pushed fast, and a Fedora build was also pushed to the testing repo.

Usage

To learn about different available commands, visit this page.

The first step was to build the container images, which was as simple as:

$ sudo podman build -t kdas/imagename .

I reused my old Dockerfiles for this. After that, it was just simple run commands to start the containers.

Cockpit 172

Posted by Cockpit Project on July 11, 2018 12:00 AM

Cockpit is the modern Linux admin interface. We release regularly. Here are the release notes from version 172.

System: Offer installation of PCP

The System page now shows an “Enable persistent metrics…” link if the PCP package is not installed. This is similar to the on-demand installation of the NFS client packages in Cockpit 169.

Install PCP on demand

Software Updates: Improve layout in mobile mode

The Software Updates page improves spacing and layout in small browser windows and mobile browsers:

PackageKit mobile mode

Remove ability to drop privileges from navigation bar

Before this release, Cockpit showed a “Locked” or “Unlocked” status in the navigation bar. It reflected the “Reuse my password for privileged tasks” checkbox on the login page. Clicking on “Unlocked” would lock the interface and drop administrator capabilities.

In general, the capability downgrade feature is not well supported across Cockpit. Most pages do not respond well to privilege changes.

A non-clickable administrative privilege badge (“Privileged”) replaces the old interactive locked/unlocked status.

The ability to start cockpit without escalating privileges remains on the login screen. Dropping privileges at runtime is still available in the user menu, under “Authentication”.

Privilege status

Introduce flow control for all channels

The Cockpit API now has flow control to reduce buffering, improve memory usage, and make the user interface more responsive.

Third-party Cockpit extensions may use the API to transfer large amounts of data.

A notable example: Welder downloads customized operating system images from remote machines. Without flow control, Welder would become unresponsive and use large amounts of memory.

Python 3 support

Cockpit, along with all unit tests and most integration tests, now supports Python 3. Building with Python 2 still works, but it is now deprecated.

Try it out

Cockpit 172 is available now:

The confusing Bash configuration files

Posted by Maxim Burgerhout on July 11, 2018 12:00 AM

This blog is mostly a reminder for my future self, because I always end up forgetting this.

Bash has a bunch of configuration files it parses through when you fire it up.

Bash reads them in this order (on Fedora, and I suppose RHEL and derivatives too) if invoked as an interactive login shell (i.e. when you log into the system on the console, or through SSH):

  • /etc/profile (if it exists)

  • ~/.bash_profile (if it exists)

  • ~/.bash_login (if it exists, and ~/.bash_profile does not exist)

  • ~/.profile (if it exists, and if the above two files do not)

When exiting, the interactive login shell executes:

  • ~/.bash_logout (if it exists)

  • /etc/bash.bash_logout (if it exists)

For an interactive non-login shell (that’s when you start gnome-terminal or tilix in X or Wayland), Bash just executes ~/.bashrc, if it exists. (So, no, /etc/bashrc is not invoked by Bash itself, but usually through ~/.bashrc, which by default sources /etc/bashrc.)

Because this is odd, the default ~/.bash_profile actually sources ~/.bashrc.

So for an interactive login shell, this happens (assuming the default config files from /etc/skel on Fedora 28):

  1. /etc/profile is read,

  2. whatever is in /etc/profile.d is included

  3. /etc/bashrc is included, and the ${BASHRCSOURCED} variable is set to Y

  4. ~/.bash_profile is read

  5. ~/.bashrc is sourced through ~/.bash_profile

  6. /etc/bashrc is sourced, again, this time through ~/.bashrc, but it’s not actually parsed again, because ${BASHRCSOURCED} was already set to Y

  7. neither ~/.bash_login, nor ~/.profile are sourced, because ~/.bash_profile exists

  8. You get your shell

Finally, when Bash is invoked as the interpreter for a shell script, it looks at ${BASH_ENV} and reads and executes the file named there. On Fedora 28, that's /usr/share/Modules/init/bash, owned by the environment-modules package.

Mind that this only happens if the shell script starts with the proper Bash shebang: #!/bin/bash or #!/usr/bin/bash, not with #!/bin/sh. Starting your shell script with #!/bin/sh will yield completely different results, as that will make Bash run in compatibility mode for old(er) shells.

Benchmarking MongoDB in a container

Posted by farhaan on July 10, 2018 08:48 PM

The database layer of an application is one of its most crucial parts because, believe it or not, it affects the performance of your application. Now, with micro-services getting attention, I was wondering whether having a database container makes a difference.

As we have popularly seen, most containers used are stateless containers, meaning they don't retain the data they generate, but there is a way to have stateful containers: mounting a host volume in the container. Having said this, there could be an issue with latency in database requests. I wanted to measure how large this latency is and what difference it makes whether the installation is done natively or in a container.

I am going to run a simple benchmarking scheme: I will make 200 insert (that is, write) requests, keeping all other factors constant, then plot the time taken by these requests and see what comes out of it.

I borrowed a quick script for this from this blog. The script is simple: it just uses pymongo, the Python MongoDB driver, to connect to the database and make 200 entries in a random database.

import time
import pymongo

m = pymongo.MongoClient()

doc = {'a': 1, 'b': 'hat'}

i = 0
while i < 200:
    start = time.time()
    m.tests.insertTest.insert(doc, manipulate=False, w=1)
    end = time.time()

    executionTime = (end - start) * 1000  # Convert to ms
    print(executionTime)

    i = i + 1
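The post shows the resulting graphs as images; here is a minimal plotting sketch under my own assumptions, namely that each run's output was redirected into native.txt and container.txt (both file names are mine):

import matplotlib.pyplot as plt

def load(path):
    # One latency value (in ms) per line, as printed by the script above.
    with open(path) as f:
        return [float(line) for line in f if line.strip()]

native = load('native.txt')        # assumed file names
container = load('container.txt')

plt.plot(native, label='native')
plt.plot(container, label='container')
plt.xlabel('Request number')
plt.ylabel('Time (ms)')
plt.legend()
plt.show()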

So I first installed MongoDB natively. I ran the above script twice and took the second result into consideration, then plotted the graph of time taken against the request number. The first request takes longer because it has to establish the connection, with all the overhead that entails; the plot I got looked like this.

 

[Figure: MongoDB native, time taken in ms vs. number of requests]

The graph shows that the first request took about 6 ms, but the subsequent requests took much less time.

Now it was time to try the same thing in a container, so I did a docker pull mongo, mounted a local volume in the container, and started the container with:

docker run --name some-mongo -v /Users/farhaanbukhsh/mongo-bench/db:/data/db -d mongo

This mounts the volume I specified to /data/db in the container. Then I did a docker cp of the script, installed the dependencies, and again ran the script twice so that file creation wouldn't skew the timing.

To my surprise, the first request took about 4 ms, but the subsequent requests took a lot longer.

[Figure: MongoDB running in a container, time in ms vs. number of requests]

 

When I compared them, the time difference for each write, that is, the latency of each write operation, was considerable.

[Figure: Comparison between native and containerized MongoDB]

I had expected there would be a difference in time and performance, but I never thought it would be this huge. Now I am wondering what the solution to this performance issue is, and whether we can reach a point where containerized performance is as good as native.

Let me know what you think about it.

Happy Hacking!

All systems go

Posted by Fedora Infrastructure Status on July 10, 2018 08:15 PM
Service 'Fedora People' now has status: good: Everything seems to be working.

The cabbage patch for linker scripts

Posted by Laura Abbott on July 10, 2018 06:00 PM

Quick quiz: what package provides ld? If you said binutils and not gcc, you are a winner! That's not actually the story, I just tend to forget which package to look at when digging into problems. This is actually a story about binutils, linker scripts, and toolchains.

Usually by -rc4, the kernel is fairly stable so I was a bit surprised when the kernel was failing on arm64:

ld: cannot open linker script file ldscripts/aarch64elf.xr: No such file or directory

There weren't many changes to arm64 so it was pretty easy to narrow down the problem to a seemingly harmless change. If you are running a toolchain on a standard system such as Fedora, you will probably expect it to "just work". And it should if everything goes to plan! binutils is a very powerful library though and can be configured to allow for emulating a bunch of less standard linkers, if you run ld -V you can see what's available:

$ ld -V
GNU ld version 2.29.1-23.fc28
  Supported emulations:
   aarch64linux
   aarch64elf
   aarch64elf32
   aarch64elf32b
   aarch64elfb
   armelf
   armelfb
   aarch64linuxb
   aarch64linux32
   aarch64linux32b
   armelfb_linux_eabi
   armelf_linux_eabi
   i386pep
   i386pe

This is what's on my Fedora system. Depending on how your toolchain is compiled, the output may be different. A common variant toolchain setup is the 'bare metal' toolchain. This is (generally) a toolchain that's designed to compile binaries to run right on the hardware without an OS. The kernel technically meets this definition and provides all its own linker scripts so in theory you should be able to compile the kernel with a properly configured bare metal toolchain. What the harmless looking change did was switch the emulation mode from linux to one that works with bare metal toolchains.

So why wasn't it working? Looking across the system, I found no trace of the file aarch64elf.xr, yet clearly it was expecting it. Because this seemed to be something internal to the toolchain, I decided to try another one. Linaro helpfully provides toolchains for compiling arm targets. It turns out the Linaro toolchain worked. strace helpfully showed where it was picking up the file [1]:

lstat("/opt/gcc-linaro-7.1.1-2017.08-x86_64_aarch64-linux-gnu/aarch64-linux-gnu/lib/ldscripts/aarch64elf.xr", {st_mode=S_IFREG|0644, st_size=5299, ...}) = 0

So clearly the file was supposed to be included. Looking at the build log for Fedora's binutils, I could definitely see the scripts being installed. Further down the build log, there was also a nice rm -rf removing the directory where these scripts were installed to. This very deliberately exists in the spec file for building binutils with a comment about gcc. The history doesn't make it completely clear, but I suspect this was either intended to avoid conflicts with something gcc generated or it was 'borrowed' from gcc to remove files Fedora didn't care about. Linaro, on the other hand, chose to package the files with their toolchain. Given Linaro has a strong embedded background, it would make sense for them to care about emulation modes that might be used on more traditional embedded hardware.

For one last piece of the puzzle, if all the linker scripts are rm -rf'd why does the linker work at all, shouldn't it complain? The binutils source has the answer. If you trace through the source tree, you can find a folder with all the emulation options, along with the template they use for generating the structure representation. There's a nice check for $COMPILE_IN to actually build a linker script into the binary. The file genscripts.sh is actually responsible for generating all the linker scripts and will compile in the default script. This makes sense, since you want the default case to be fast and not hit the file system.

I ended up submitting a revert of the patch since this was a regression, but it turns out Debian suffers from a similar problem. The real takeaway here is that toolchains are tricky. Choose yours carefully.


[1] You also know a file is a bit archaic when it has a comment about the Solaris linker.

There are scheduled downtimes in progress

Posted by Fedora Infrastructure Status on July 10, 2018 04:32 PM
Service 'Fedora People' now has status: scheduled: Scheduled maintenance in progress

Fun with DAC_OVERRIDE and SELinux

Posted by Dan Walsh on July 10, 2018 02:31 PM

Lately the SELinux team has been trying to remove as many SELinux Domain Types that have DAC_OVERRIDE.

man capabilities

...

       CAP_DAC_OVERRIDE

              Bypass file read, write, and execute permission checks.  (DAC is an abbreviation of "discretionary access control".)

This means a process with CAP_DAC_OVERRIDE can read any file on the system and can write any file on the system, from a standard permissions point of view. With SELinux, it means the process can read all file types that SELinux allows it to read, even if it is running with a process UID that is not allowed to read the file. Similarly, it is allowed to write all SELinux-writable types even if its UID would not permit the write.

Obviously most confined domains never need this access, but somehow over the years lots of domains were granted it.

I recently received an email asking about syslog generating lots of AVCs. The writer said that he understood SELinux, had set up the types for syslog to write to, and the content was even getting written properly. But the kernel was generating an AVC every time the service started.

Here is the AVC.

Jul 09 15:24:57

 audit[9346]: HOSTNAME AVC avc:  denied  { dac_override }  for  pid=9346 comm=72733A6D61696E20513A526567 capability=1   scontext=system_u:system_r:syslogd_t:s0  tcontext=system_u:system_r:syslogd_t:s0 tclass=capability permissive=0

Sadly the kernel was not in full debug mode, so we don't know the path that the syslog process was trying to read or write.

Note: You can turn on full auditing using a command like `auditctl -w /etc/shadow`, but this could affect your system performance.

But I had a guess at what could be causing the AVCs.

What causes DAC_OVERRIDE AVCs

One of the easy places where a root process needs DAC_OVERRIDE is looking at the /etc/shadow file.

 ls -l /etc/shadow
----------. 1 root root 1474 Jul  9 14:02 /etc/shadow

As you see in the permissions, no UID is allowed to read or write /etc/shadow, so the only way to examine this file is using DAC_OVERRIDE. But I am pretty sure syslogd is not attempting to read this file. (Other SELinux AVCs would be screaming if it was.)
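You can see the plain DAC side of this from any unprivileged process; a tiny Python sketch, run as a normal user:

# Mode 000 on /etc/shadow denies every UID that lacks CAP_DAC_OVERRIDE.
try:
    open('/etc/shadow').read()
except PermissionError as err:
    print(err)  # [Errno 13] Permission denied: '/etc/shadow'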

The other location that can easily cause DAC_OVERRIDE AVCs is attempting to create content in the /root directory.

 ls -ld /root
dr-xr-x---. 19 root root 4096 Jul  9 15:59 /root

On Fedora, RHEL, and CentOS boxes, the root directory is set with permissions that do not allow any process to write to it, even a root process, unless it uses DAC_OVERRIDE. This is a security measure which prevents processes running as root that have dropped privileges from writing content in /root. If a process could write content in /root, it could modify the /root/.bashrc file. Then an admin later logging into the system as root and executing a shell would run that .bashrc script with full privileges. By setting the permissions on the /root directory to 550, systems are a little more secure, and admins know that only processes with DAC_OVERRIDE can write to this directory.

Well, this causes an issue. It turns out that a shell like bash wants to write its history to the .bash_history file in its home directory; if the shell is running as root, it wants to write the /root/.bash_history file. If the file does not exist, the shell needs DAC_OVERRIDE to create it. Luckily, bash continues working fine if it cannot write this file.

But if you are running a confined application on an SELinux system and it launches bash, the kernel will generate an AVC message stating that the confined domain wants DAC_OVERRIDE.

I recommend that if this situation happens, you just add a dontaudit rule to the policy. SELinux will then be silent about the denial, but the process will still not gain that access.

audit2allow -D -i /tmp/avc.log
#============= syslogd_t ==============
dontaudit syslogd_t self:capability dac_override;

To generate a policy package:

audit2allow -M mysyslog -D -i /tmp/t1
******************** IMPORTANT ***********************
To make this policy package active, execute:
semodule -i mysyslog.pp

CONCLUSION

Bottom line: DAC_OVERRIDE is a fairly dangerous access to grant, and it is often granted when it is not really necessary. So I recommend fixing the permissions on files and directories, or just adding dontaudit rules.


Red Hat’s disclosure process

Posted by Red Hat Security on July 10, 2018 01:00 PM

Last week, a vulnerability (CVE-2018-10892) that affected CRI-O, Buildah, Podman, and Docker was made public before some affected upstream projects were notified. We regret that this was not handled in a way that lives up to our own standards around responsible disclosure. It has caused us to look back to see what went wrong so as to prevent this from happening in the future.

Because of how important our relationships with the community and industry partners are and how seriously we treat non-public information irrespective of where it originates, we are taking this event as an opportunity to look internally at improvements and challenge assumptions we have held.

We conducted a review and are using this to develop training around the handling of non-public information relating to security vulnerabilities, and ensuring that our relevant associates have a full understanding of the importance of engaging with upstreams as per their, and our, responsible disclosure guidelines. We are also clarifying communication mechanisms so that our associates are aware of the importance of and methods for notifying upstream of a vulnerability prior to public disclosure.

Red Hat values and recognizes the importance of relationships, be they with upstreams, downstreams, industry partners and peers, customers, or vulnerability reporters. We embrace open source development principles including trust and transparency. As we navigate through a landscape full of software that will inevitably contain security vulnerabilities we strive to manage each flaw with the same degree of care and attention, regardless of its potential impact. Our commitment is to work with other vendors of Linux and open source software to reduce the risk of security issues through responsible information sharing and peer reviews.

This event has reminded us that it is important to remain vigilant, provide consistent, clear guidance, and handle potentially sensitive information appropriately. And while our track record of responsible disclosure speaks for itself, when an opportunity presents itself to revisit, reflect, and improve our processes, we make the most of it to ensure we have the proper procedures and controls in place.

Red Hat takes its participation in open source projects and security disclosure very seriously. We have discovered hundreds of vulnerabilities and our dedicated Product Security team has participated in responsible disclosures for more than 15 years. We strive to get it right every time, but this time we didn't quite live up to the standards to which we seek to hold ourselves. Because we believe in open source principles such as accountability, we wanted to share what had happened and how we have responded to it. We are sincerely apologetic for not meeting our own standards in this instance.


Set up your LAMP+ stack in less than two minutes

Posted by Alvaro Castillo on July 10, 2018 08:25 AM

Surely many of us, when we have had to develop a web page in a Windows environment, have at some point set up the XAMPP software suite from Apache Friends to get by and have a minimal environment to start programming, instead of installing each service one by one, since Windows is one of the systems where this work is somewhat more laborious.

However, Linux and UNIX-like systems are much friendlier when it comes to development environments, both...

Boost your typing with emoji in Fedora 28 Workstation

Posted by Fedora Magazine on July 09, 2018 08:00 AM

Fedora 28 Workstation ships with a feature that allows you to quickly search, select and input emoji using your keyboard. Emoji, cute ideograms that are part of Unicode, are used fairly widely in messaging and especially on mobile devices. You may have heard the idiom “A picture is worth a thousand words.” This is exactly what emoji provide: simple images for you to use in communication. Each release of Unicode adds more, with over 200 new ones added in past releases of Unicode. This article shows you how to make them easy to use in your Fedora system.

It's great to see the number of emoji growing. But at the same time, it brings the challenge of how to input them on a computing device. Many people already use these symbols for input on mobile devices or social networking sites.

[Editors’ note: This article is an update to a previously published piece on this topic.]

Enabling Emoji input on Fedora 28 Workstation

The new emoji input method ships by default in Fedora 28 Workstation. To use it, you must enable it using the Region and Language settings dialog. Open the Region and Language dialog from the main Fedora Workstation settings, or search for it in the Overview.

Region & Language settings tool

Choose the + control to add an input source. The following dialog appears:

Adding an input source

Choose the final option (three dots) to expand the selections fully. Then, find Other at the bottom of the list and select it:

Selecting other input sources

In the next dialog, find the Typing booster choice and select it:

This advanced input method is powered behind the scenes by iBus. The advanced input methods are identifiable in the list by the cogs icon on the right of the list.

The Input Method drop-down automatically appears in the GNOME Shell top bar. Ensure your default method — in this example, English (US) — is selected as the current method, and you’ll be ready to input.

Input method dropdown in Shell top bar

Using the new Emoji input method

Now the Emoji input method is enabled, search for emoji by pressing the keyboard shortcut Ctrl+Shift+E. A pop-over dialog appears where you can type a search term, such as smile, to find matching symbols.

Searching for smile emoji

Use the arrow keys to navigate the list. Then, hit Enter to make your selection, and the glyph will be placed as input.

PHP version 7.1.20RC1 and 7.2.8RC1

Posted by Remi Collet on July 09, 2018 05:14 AM

Release Candidate versions are available in the remi-test repository for Fedora and Enterprise Linux (RHEL / CentOS) to allow more people to test them. They are available as Software Collections, for parallel installation, a perfect solution for such tests (x86_64 only), and also as base packages.

RPM of PHP version 7.2.8RC1 are available as SCL in remi-test repository and as base packages in the remi-php72-test repository for Fedora 25-27 and Enterprise Linux.

RPM of PHP version 7.1.20RC1 are available as SCL in remi-test repository and as base packages in the remi-test repository for Fedora 26-27 or remi-php71-test repository for Fedora 25 and Enterprise Linux.

PHP version 7.0 is now in security mode only, so no more RC will be released.

emblem-notice-24.pngInstallation : read the Repository configuration and choose your version.

Parallel installation of version 7.2 as Software Collection:

yum --enablerepo=remi-test install php72

Parallel installation of version 7.1 as Software Collection:

yum --enablerepo=remi-test install php71

Update of system version 7.2:

yum --enablerepo=remi-php72,remi-php72-test update php\*

Update of system version 7.1:

yum --enablerepo=remi-php71,remi-php71-test update php\*

Notice: version 7.2.7RC1 is also available in Fedora rawhide for QA.

emblem-notice-24.pngEL-7 packages are built using RHEL-7.5.

emblem-notice-24.pngRC version is usually the same as the final version (no change accepted after RC, exception for security fix).

Software Collections (php71, php72)

Base packages (php)

Episode 104 - The Gentoo security incident

Posted by Open Source Security Podcast on July 09, 2018 12:14 AM
Josh and Kurt talk about the Gentoo security incident. Gentoo did a really good job being open and dealing with the incident quickly. The basic takeaway from all this is to make sure your organization forces users to use two-factor authentication. The long term solution is going to be all identity providers forcing everyone to use 2FA.




meson fails with "ERROR: Native dependency 'foo' not found" - and how to fix it

Posted by Peter Hutterer on July 08, 2018 11:46 PM

A common error when building from source is something like the error below:


meson.build:50:0: ERROR: Native dependency 'foo' not found
or a similar warning

meson.build:63:0: ERROR: Invalid version of dependency, need 'foo' ['>= 1.1.0'] found '1.0.0'.
Seeing that can be quite discouraging, but luckily, in many cases it's not too difficult to fix. As usual, there are many ways to get to a successful result; I'll describe what I consider the simplest.

What does it mean? Dependencies are simply libraries or tools that meson needs to build the project. Usually these are declared like this in meson.build:


dep_foo = dependency('foo', version: '>= 1.1.0')
In human words: "we need the development headers for library foo (or 'libfoo') of version 1.1.0 or later". meson uses the pkg-config tool in the background to resolve that request. If we require package foo, pkg-config searches for a file foo.pc in the following directories:
  • /usr/lib/pkgconfig,
  • /usr/lib64/pkgconfig,
  • /usr/share/pkgconfig,
  • /usr/local/lib/pkgconfig,
  • /usr/local/share/pkgconfig
The error message simply means pkg-config couldn't find the file and you need to install the matching package from your distribution or from source.
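For illustration only, here is a rough Python sketch of the core of that lookup; real pkg-config does much more, and PKG_CONFIG_PATH (covered below) prepends extra directories:

import os

# The default search directories listed above.
PC_DIRS = [
    "/usr/lib/pkgconfig",
    "/usr/lib64/pkgconfig",
    "/usr/share/pkgconfig",
    "/usr/local/lib/pkgconfig",
    "/usr/local/share/pkgconfig",
]

def find_pc(name):
    """Return the path to name.pc, or None if it is not installed."""
    for d in PC_DIRS:
        candidate = os.path.join(d, name + ".pc")
        if os.path.isfile(candidate):
            return candidate
    return None

print(find_pc("foo"))  # None means: install the foo development package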

An important note here: in most cases, we need the development headers of said library; installing just the library itself is not sufficient. After all, we're trying to build against it, not merely run against it.

What package provides the foo.pc file?

In many cases the package is the development version of the package name. Try foo-devel (Fedora, RHEL, SuSE, ...) or foo-dev (Debian, Ubuntu, ...). yum and dnf provide a great shortcut to install any pkg-config dependency:


$> dnf install "pkgconfig(foo)"

$> yum install "pkgconfig(foo)"
will automatically search and install the right package, including its dependencies.
apt-get requires a bit more effort:

$> apt-get install apt-file
$> apt-file update
$> apt-file search --package-only foo.pc
foo-dev
$> apt-get install foo-dev
For those running Arch and pacman, the sequence is:

$> pacman -S pkgfile
$> pkgfile -u
$> pkgfile foo.pc
extra/foo
$> pacman -S extra/foo
Once that's done you can re-run meson and see if all dependencies have been met. If more packages are missing, follow the same process for the next file.

Any users of other distributions - let me know how to do this on yours and I'll update the post

My version is wrong!

It's not uncommon to see the following error after installing the right package:


meson.build:63:0: ERROR: Invalid version of dependency, need 'foo' ['>= 1.1.0'] found '1.0.0'.
Now you're stuck and you have a problem. What this means is that the package version your distribution provides is not new enough to build your software. This is where the simple solutions end and it all gets a bit more complicated, with more potential errors. Unless you are willing to go into the deep end, I recommend moving on and accepting that you can't have the newest bits on an older distribution. Because now you have to build the dependencies from source, which may require building their dependencies from source, and before you know it you've built 30 packages. If you're willing, read on; otherwise, sorry, you won't be able to run your software today.

Manually installing dependencies

Now you're in the deep end, so be aware that you may see more complicated errors in the process. First of all you need to figure out where to get the source from. I'll now use cairo as example instead of foo so you see actual data. On rpm-based distributions like Fedora run dnf or yum:


$> dnf info cairo-devel # or yum info cairo-devel
Loaded plugins: auto-update-debuginfo, langpacks
Installed Packages
Name : cairo-devel
Arch : x86_64
Version : 1.13.1
Release : 0.1.git337ab1f.fc20
Size : 2.4 M
Repo : installed
From repo : fedora
Summary : Development files for cairo
URL : http://cairographics.org
License : LGPLv2 or MPLv1.1
Description : Cairo is a 2D graphics library designed to provide high-quality
: display and print output.
:
: This package contains libraries, header files and developer
: documentation needed for developing software which uses the cairo
: graphics library.
The important field here is the URL line - go to that and you'll find the source tarballs. That should be true for most projects, but you may need to google for the package name and hope. Search for the tarball with the right version number and download it. On Debian and related distributions, cairo is provided by the libcairo2-dev package. Run apt-cache show on that package:

$> apt-cache show libcairo2-dev
Package: libcairo2-dev
Source: cairo
Version: 1.12.2-3
Installed-Size: 2766
Maintainer: Dave Beckett <dajobe>
Architecture: amd64
Provides: libcairo-dev
Depends: libcairo2 (= 1.12.2-3), libcairo-gobject2 (= 1.12.2-3),[...]
Suggests: libcairo2-doc
Description-en: Development files for the Cairo 2D graphics library
Cairo is a multi-platform library providing anti-aliased
vector-based rendering for multiple target backends.
.
This package contains the development libraries, header files needed by
programs that want to compile with Cairo.
Homepage: http://cairographics.org/
Description-md5: 07fe86d11452aa2efc887db335b46f58
Tag: devel::library, role::devel-lib, uitoolkit::gtk
Section: libdevel
Priority: optional
Filename: pool/main/c/cairo/libcairo2-dev_1.12.2-3_amd64.deb
Size: 1160286
MD5sum: e29852ae8e8e5510b00b13dbc201ce66
SHA1: 2ed3534d02c01b8d10b13748c3a02820d10962cf
SHA256: a6099cfbcc6bd891e347dd9abc57b7f137e0fd619deaff39606fd58f0cc60d27
In this case it's the Homepage line that matters, but the process of downloading tarballs is the same as above. For Arch users, the interesting line is URL as well:

$> pacman -Si cairo | grep URL
Repository : extra
Name : cairo
Version : 1.12.16-1
Description : Cairo vector graphics library
Architecture : x86_64
URL : http://cairographics.org/
Licenses : LGPL MPL
....

Now to the complicated bit: In most cases, you shouldn't install the new version over the system version because you may break other things. You're better off installing the dependency into a custom folder ("prefix") and pointing pkg-config to it. So let's say you downloaded the cairo tarball, now you need to run:


$> mkdir $HOME/dependencies/
$> tar xf cairo-someversion.tar.xz
$> cd cairo-someversion
$> autoreconf -ivf
$> ./configure --prefix=$HOME/dependencies
$> make && make install
$> export PKG_CONFIG_PATH=$HOME/dependencies/lib/pkgconfig:$HOME/dependencies/share/pkgconfig
# now go back to original project and run meson again
So you create a directory called dependencies and install cairo there. This will install cairo.pc as $HOME/dependencies/lib/pkgconfig/cairo.pc. Now all you need to do is tell pkg-config to look there as well, so you set PKG_CONFIG_PATH. If you re-run meson in the original project, pkg-config will find the new version and meson should succeed. If you have multiple packages that all require a newer version, install them into the same path and you only need to set PKG_CONFIG_PATH once. Remember you need to set PKG_CONFIG_PATH in the same shell as you are running configure from.

In the case of dependencies that use meson, you replace autotools and make with meson and ninja:


$> mkdir $HOME/dependencies/
$> tar xf foo-someversion.tar.xz
$> cd foo-someversion
$> meson builddir -Dprefix=$HOME/dependencies
$> ninja -C builddir install
$> export PKG_CONFIG_PATH=$HOME/dependencies/lib/pkgconfig:$HOME/dependencies/share/pkgconfig
# now go back to original project and run meson again

If you keep seeing the version error, the most common problem is that PKG_CONFIG_PATH isn't set in your shell, or doesn't point to the new cairo.pc file. A simple way to check is:


$> pkg-config --modversion cairo
1.13.1
Is the version number the one you installed or the system one? If it is the system one, you have a typo in PKG_CONFIG_PATH, just re-set it. If it still doesn't work do this:

$> cat $HOME/dependencies/lib/pkgconfig/cairo.pc
prefix=/usr
exec_prefix=/usr
libdir=/usr/lib64
includedir=/usr/include

Name: cairo
Description: Multi-platform 2D graphics library
Version: 1.13.1

Requires.private: gobject-2.0 glib-2.0 >= 2.14 [...]
Libs: -L${libdir} -lcairo
Libs.private: -lz -lz -lGL
Cflags: -I${includedir}/cairo
If the Version field matches what pkg-config returns, then you're set. If not, keep adjusting PKG_CONFIG_PATH until it works. There is a rare case where the Version field in the installed library doesn't match what the tarball said. That's a defective tarball and you should report this to the project, but don't worry, this hardly ever happens. In almost all cases, the cause is simply PKG_CONFIG_PATH not being set correctly. Keep trying :)

Let's assume you've managed to build the dependencies and want to run the newly built project. The only problem is: because you built against a newer library than the one on your system, you need to point it to use the new libraries.


$> export LD_LIBRARY_PATH=$HOME/dependencies/lib
and now you can, in the same shell, run your project.

Good luck!

Converting policy.yaml to a list of dictionaries

Posted by Adam Young on July 08, 2018 03:38 AM

The policy .yaml file generated from oslo has the following format:

# Intended scope(s): system
#"identity:update_endpoint_group": "rule:admin_required"

# Delete endpoint group.
# DELETE /v3/OS-EP-FILTER/endpoint_groups/{endpoint_group_id}
# Intended scope(s): system
#"identity:delete_endpoint_group": "rule:admin_required"

This is not very useful for anything other than feeding to oslo-policy for enforcement. If you want to use these values for anything else, it is much more useful to have each rule as a dictionary, and all of the rules in a list. Here is a little bit of awk to help out:

#!/usr/bin/awk -f
BEGIN {apilines=0; print("---")}
/#"/ {
    if (api == 1){
	printf("  ")
    }else{
	printf("- ")
    }
  split ($0,array,"\"")
  print ("rule:", array[2]);
  print ("  check:", array[4]);
  rule=0
}    
/# / {api=1;}
/^$/ {api=0; apilines=0;}
api == 1 && apilines == 0 {print ("- description:" substr($0,2))}
/# GET/  || /# DELETE/ || /# PUT/ || /# POST/ || /# HEAD/ || /# PATCH/ {
     print ("  " $2 ": " $3)
}
api == 1 { apilines = apilines +1 }

I have it saved in mungepolicy.awk. I ran it like this:

cat etc/keystone.policy.yaml.sample | ./mungepolicy.awk > /tmp/keystone.access.yaml

And the output looks like this:

---
- rule: admin_required
  check: role:admin or is_admin:1
- rule: service_role
  check: role:service
- rule: service_or_admin
  check: rule:admin_required or rule:service_role
- rule: owner
  check: user_id:%(user_id)s
- rule: admin_or_owner
  check: rule:admin_required or rule:owner
- rule: token_subject
  check: user_id:%(target.token.user_id)s
- rule: admin_or_token_subject
  check: rule:admin_required or rule:token_subject
- rule: service_admin_or_token_subject
  check: rule:service_or_admin or rule:token_subject
- description: Show application credential details.
  GET: /v3/users/{user_id}/application_credentials/{application_credential_id}
  HEAD: /v3/users/{user_id}/application_credentials/{application_credential_id}
  rule: identity:get_application_credential
  check: rule:admin_or_owner
- description: List application credentials for a user.
  GET: /v3/users/{user_id}/application_credentials
  HEAD: /v3/users/{user_id}/application_credentials
  rule: identity:list_application_credentials
  check: rule:admin_or_owner

Which is valid YAML. It might be a pain to deal with the verbs as separate keys; ideally, those would be a list too, but this will work for starters.
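Since the output is plain YAML, a quick Python check (using PyYAML, my choice for illustration) confirms it loads as the list of dictionaries we wanted:

import yaml  # PyYAML

with open('/tmp/keystone.access.yaml') as f:
    rules = yaml.safe_load(f)

# Each entry is now an ordinary dict.
for rule in rules:
    print(rule['rule'], '->', rule.get('check'))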

Flatpak, making contribution easy

Posted by Matthias Clasen on July 07, 2018 04:09 PM

One vision that I've talked about in the past is that moving to flatpak could make it much easier to contribute to applications.

Fast-forward 3 years, and the vision is (almost) here!

Every application on flathub has a Sources extension that you can install just like anything else from a flatpak repo:

flatpak install flathub org.seul.pingus.Sources

This extension contains a flatpak manifest which lists the exact revisions of all the sources that went into the build. This lets you reproduce the build — if you can find the manifest!

Assuming you install the sources per-user, the manifest is here (using org.seul.pingus as an example):

$HOME/.local/share/flatpak/runtime/org.seul.pingus.Sources/x86_64/stable/active/files/manifest/org.seul.pingus.json

And you can build it like this:

flatpak-builder build org.seul.pingus.json

I said the vision is almost, but not quite there. Here is why: gnome-builder also has a way to kickstart a project from a manifest:

gnome-builder --manifest org.seul.pingus.json

But sadly, this currently crashes. I filed an issue for it, and it will hopefully work very soon. Next step, flip-to-hack!

Running OpenStack components on RHEL with Software Collections

Posted by Adam Young on July 07, 2018 01:50 PM

The Python world has long since embraced Python 3. However, the stability guarantees of RHEL have limited it to Python 2.7 in the base OS. Now that I am running RHEL on my laptop, I have to find a way to work with Python 3.5 in order to contribute to OpenStack. To further constrain myself, I do not want to "pollute" the installed Python modules by using pip to mix and match between upstream and downstream. The solution is the Software Collections version of Python 3.5. Here's how I got it to work.

Start by enabling the Software Collections Yum repos and refreshing:

sudo subscription-manager repos --enable rhel-workstation-rhscl-7-rpms
sudo subscription-manager refresh

Now what I need is Python 3.5.  Since I did this via trial and error, I don’t have the exact yum command I used, but I ended up with the following rpms installed, and they were sufficient.

rh-python35-python-setuptools-18.0.1-2.el7.noarch
rh-python35-python-libs-3.5.1-11.el7.x86_64
rh-python35-python-pip-7.1.0-2.el7.noarch
rh-python35-scldevel-2.0-2.el7.x86_64
rh-python35-runtime-2.0-2.el7.x86_64
rh-python35-python-six-1.10.0-1.el7.noarch
rh-python35-python-devel-3.5.1-11.el7.x86_64
rh-python35-python-3.5.1-11.el7.x86_64

To enable the software collections:

scl enable rh-python35 bash
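Once inside that shell, a quick Python sanity check confirms the collection's interpreter is the one on PATH (my own habit, not a required step):

# Run inside the scl-enabled shell, e.g. paste into a python3 REPL.
import sys
print(sys.version_info[:3])  # expect (3, 5, 1) with rh-python35
print(sys.executable)        # should live under /opt/rh/, not /usr/bin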

However, thus far there is no tox installed. I can get it using pip, and I'm OK with that as long as I do a user install. Make sure you have run the above scl enable command so this happens for the right version of Python.

 pip install --user --upgrade tox

This puts all the code in ~/.local/, as well as appending the ~/.local/bin directory to the PATH environment variable. You need to restart your terminal session to pick that up on first use.

Now I can run code in the Keystone repo.  For example, to build the sample policy.json files:

tox -e genpolicy

A Git Style change management for a Database driven app.

Posted by Adam Young on July 06, 2018 07:38 PM

The policy management tool I'm working on really needs revision and change management. Since I've spent so much time with Git, it shapes how I think about change management. So, here is my attempt to lay out my current thinking on implementing a git-like scheme for managing policy rules.

A policy line is composed of two chunks of data.  A Key and a Value.  The keys are in the form

  identity:create_user.

Additionally, the keys are scoped to a specific service (Keystone, Nova, etc).

The value is the check string.  These are of the form

role:admin and project_id=target.project_id

It is the check string that is most important to revision control. This lends itself to an entity diagram like this:

Whether each of these gets its own table remains to be seen.  The interesting part is the rule_name to policy_rule mapping.

Let's state that the policy_rule table entries are immutable. If we want to change policy, we add a new entry and leave the old ones in there. The new entry will have a new revision value. For now, let's assume revisions are integers and are monotonically increasing. So, when I first upload the Keystone policy.json file, each entry gets a revision ID of 1. In this example, all check strings start off as "is_admin:True".

Now let's assume I modify the identity:create_user rule. I'm going to arbitrarily say that the id for this record is 68. I want to change it to:

role:admin and domain_id:target.domain_id

So we can do some scope checking.  This entry goes into the policy_rule table like so:

 

rule_name_id check_string revision
68 is_admin:True 1
68 role:admin and domain_id:target.domain_id 2

From a storage perspective this is quite nice, but from a “what does my final policy look like” perspective it is a mess.

In order to build the new view, we need sql along the lines of

select * from policy_rule where revision = ?

Let's call this line_query and assume that when we call it, the parameter is substituted for the question mark. We would then need code like this pseudo-code:

doc = dict()
for revision in range(1, max_revision + 1):
    for result in line_query.execute(revision):
        index = result['rule_name_id']
        doc[index] = result['check_string']

 

This would build a dictionary layer by layer through all the revisions.

So far so good, but what happens if we decided to revert, and then to go a different direction? Right now, we have a revision chain like this:

And if we keep going, we have,

But what happens if 4 was a mistake? We need to revert revisions 4 through 6 and create a new branch from 3.

We have two choices. First, we could be destructive and delete all of the lines in revision 4, 5, and 6. This means we can never recreate the state of 6 again.

What if we don't know that 4 is a mistake? What if we just want to try another route, but come back to 4, 5, and 6 in the future?

We want this:

 

But how will we know to take the branch when we create the new doc?

It's a database! We put it in another table.

revision_id revision_parent_id
2 1
3 2
4 3
5 4
6 5
7 3
8 7
9 8

In order to recreate revision 9, we use a stack. Push 9 on the stack, then find the row with revision_id 9 in the table and push its revision_parent_id on the stack; continue until there are no more rows. Then pop each revision_id off the stack and execute the same kind of pseudo-code I posted above.

It is a lot. It is kind of complicated, but it is the type of complicated that Python does well. However, databases do not do this kind of iterative querying well. It would take a stored procedure to perform this via a single database query.
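Here is a minimal Python sketch of that reconstruction, assuming a parents mapping loaded from the table above and a rules_for(revision) helper standing in for line_query (both names are mine):

# Parent links from the table above: revision_id -> revision_parent_id
parents = {2: 1, 3: 2, 4: 3, 5: 4, 6: 5, 7: 3, 8: 7, 9: 8}

def build_doc(revision, rules_for):
    """Recreate the policy document as of the given revision."""
    # Walk the parent chain onto a stack: for 9 this gives 9, 8, 7, 3, 2, 1.
    stack = []
    while revision is not None:
        stack.append(revision)
        revision = parents.get(revision)
    # Replay from the oldest revision forward; newer rules overwrite older ones.
    doc = {}
    while stack:
        for rule_name_id, check_string in rules_for(stack.pop()):
            doc[rule_name_id] = check_string
    return doc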

Talking through this has encouraged me decide to take another look at using git as the backing store instead of a relational database.

Upstream rebuilds with Jenkins Job Builder

Posted by Alexander Todorov on July 06, 2018 10:20 AM

I have been working on Weldr for some time now. It is multi-component software with several layers built on top of each other, as seen in the image below.

Weldr components

One of the risks that we face is introducing changes in downstream components which are going to break something up the stack! In this post I am going to show you how I have configured Jenkins to trigger dependent rebuilds and report all of the statuses back to the original GitHub PR. All of the code below is Jenkins Job Builder YAML.

bdcs is the first layer of our software stack; it provides command line utilities. codec-rpm is a library component (written in Haskell) that facilitates working with RPM packages. bdcs links against codec-rpm when it is compiled and uses some of its functions and data types.

When a pull request is opened against codec-rpm and testing completes successfully, I want to reuse that particular version of the codec-rpm library and rebuild/test bdcs with it.

YAML configuration

All jobs have the following structure: -trigger -> -provision -> -runtest -> -teardown. This means that Jenkins will start executing a new job when it gets triggered by an event in GitHub (a commit to the master branch or a new pull request), then it will provision a slave VM in OpenStack, execute the test suite on the slave and destroy all of the resources at the end. This is repeated twice: for the master branch and for pull requests! Here's how the -provision and -runtest jobs look:

- job-template:
    name: '{name}-provision'
    node: master
    parameters:
      - string:
          name: PROVIDER
    scm:
        - git:
            url: 'https://github.com/weldr/{repo_name}.git'
            refspec: ${{git_refspec}}
            branches:
              - ${{git_branch}}
    builders:
      - github-notifier
      - shell: |
            #!/bin/bash -ex
            # do the openstack provisioning here
        # NB: runtest_job is passed to us via the -trigger job
      - trigger-builds:
          - project: '${{runtest_job}}'
            block: true
            current-parameters: true
            condition: 'SUCCESS'
            fail-on-missing: true


- job-template:
    name: '{name}-master-runtest'
    node: cinch-slave
    project-type: freestyle
    description: 'Build master branch of {name}!'
    scm:
        - git:
            url: 'https://github.com/weldr/{repo_name}.git'
            branches:
                - master
    builders:
      - github-notifier
      - conditional-step:
          condition-kind: regex-match
          regex: "^.+$"
          label: '${{UPSTREAM_BUILD}}'
          on-evaluation-failure: dont-run
          steps:
            - copyartifact:
                project: ${{UPSTREAM_BUILD}}
                which-build: specific-build
                build-number: ${{UPSTREAM_BUILD_NUMBER}}
                filter: ${{UPSTREAM_ARTIFACT}}
                flatten: true
      - shell: |
            #!/bin/bash -ex
            make ci
    publishers:
      - trigger-parameterized-builds:
          - project: '{name}-teardown'
            current-parameters: true
      - github-notifier


- job-template:
    name: '{name}-PR-runtest'
    node: cinch-slave
    description: 'Build PRs for {name}!'
    scm:
        - git:
            url: 'https://github.com/weldr/{repo_name}.git'
            refspec: +refs/pull/*:refs/remotes/origin/pr/*
            branches:
                # builds the commit hash instead of a branch
                - ${{ghprbActualCommit}}
    builders:
      - github-notifier
      - shell: |
            #!/bin/bash -ex
            make ci
      - conditional-step:
          condition-kind: current-status
          condition-worst: SUCCESS
          condition-best: SUCCESS
          on-evaluation-failure: dont-run
          steps:
            - shell: |
                #!/bin/bash -ex
                make after_success
    publishers:
      - archive:
          artifacts: '{artifacts_path}'
          allow-empty: '{artifacts_empty}'
      - conditional-publisher:
          - condition-kind: '{execute_dependent_job}'
            on-evaluation-failure: dont-run
            action:
              - trigger-parameterized-builds:
                - project: '{dependent_job}'
                  current-parameters: true
                  predefined-parameters: |
                    UPSTREAM_ARTIFACT={artifacts_path}
                    UPSTREAM_BUILD=${{JOB_NAME}}
                    UPSTREAM_BUILD_NUMBER=${{build_number}}
                  condition: 'SUCCESS'
      - trigger-parameterized-builds:
          - project: '{name}-teardown'
            current-parameters: true
      - github-notifier


- job-group:
    name: '{name}-tests'
    jobs:
    - '{name}-provision'
    - '{name}-teardown'
    - '{name}-master-trigger'
    - '{name}-master-runtest'
    - '{name}-PR-trigger'
    - '{name}-PR-runtest'


- job:
    name: 'codec-rpm-rebuild-bdcs'
    node: master
    project-type: freestyle
    description: 'Rebuild bdcs after codec-rpm PR!'
    scm:
        - git:
            url: 'https://github.com/weldr/codec-rpm.git'
            refspec: +refs/pull/*:refs/remotes/origin/pr/*
            branches:
                # builds the commit hash instead of a branch
                - ${ghprbActualCommit}
    builders:
      - github-notifier
      - trigger-builds:
          - project: 'bdcs-master-trigger'
            block: true
            predefined-parameters: |
                UPSTREAM_ARTIFACT=${UPSTREAM_ARTIFACT}
                UPSTREAM_BUILD=${UPSTREAM_BUILD}
                UPSTREAM_BUILD_NUMBER=${UPSTREAM_BUILD_NUMBER}
    publishers:
      - github-notifier


- project:
    name: codec-rpm
    dependent_job: '{name}-rebuild-bdcs'
    execute_dependent_job: always
    artifacts_path: 'dist/{name}-latest.tar.gz'
    artifacts_empty: false
    jobs:
      - '{name}-tests'

Publishing artifacts

make after_success is responsible for creating a tarball if the codec-rpm test suite passed. This tarball gets uploaded into Jenkins as an artifact and we can make use of it later!

Inside -master-runtest I have a conditional-step inside the builders section which will copy the artifacts from the previous build if they are present. Notice that I copy artifacts for a particular job number, which is the job for the codec-rpm PR.

Making use of local artifacts is handled inside bdcs' make ci, because it is project-specific and because I'd like to reuse my YAML templates.

Reporting statuses to GitHub

For github-notifier to be able to report statuses back to the pull request, the job needs to be configured with the git repository the pull request came from. This is done by specifying the same scm section for all related jobs and using current-parameters: true to pass the revision information on to the other jobs.

This also means that if I want to report status from codec-rpm-rebuild-bdcs then it needs to be configured for the codec-rpm repository (see yaml) but somehow it should trigger jobs for another repository!

When jobs are started via trigger-parameterized-builds their statuses are reported separately to GitHub. When they are started via trigger-builds there should be only one status reported.

Trigger chain for dependency rebuilds

With all of the above info we can now look at the codec-rpm-rebuild-bdcs job.

  • It is configured for the codec-rpm repository so it will report its status to the PR
  • It is conditionally started after codec-rpm-PR-runtest finishes successfully
  • It triggers bdcs-master-trigger which in turn will rebuild & retest the bdcs component. Additional parameters specify whether we're going to use locally built artifacts or attempt to download them from Hackage
  • It uses block: true so that the status of codec-rpm-rebuild-bdcs is dependent on the status of bdcs-master-runtest (everything in the job chain uses block: true because of this)

How this looks in practice

I have opened codec-rpm #39 to validate my configuration. The chain of jobs that gets executed in Jenkins is:

--- console.log for bdcs-master-runtest ---
Started by upstream project "bdcs-jslave-1-provision" build number 267
originally caused by:
 Started by upstream project "bdcs-master-trigger" build number 133
 originally caused by:
  Started by upstream project "codec-rpm-rebuild-bdcs" build number 25
  originally caused by:
   Started by upstream project "codec-rpm-PR-runtest" build number 77
   originally caused by:
    Started by upstream project "codec-rpm-jslave-1-provision" build number 178
    originally caused by:
     Started by upstream project "codec-rpm-PR-trigger" build number 118
     originally caused by:
      GitHub pull request #39 of commit b00c923065e367afd5b7a7cc068b049bb1ed25e1, no merge conflicts.

Statuses are reported on GitHub as follows:

example of PR statuses

The “default” status comes from the provisioning step; I think this is some sort of bug or misconfiguration of the provisioning job. We don't really care about it.

In the picture you can see that codec-rpm-PR-runtest was successful but codec-rpm-rebuild-bdcs was not. The actual error when compiling bdcs is:

src/BDCS/Import/RPM.hs:110:24: error:
    * Couldn't match type `Entry' with `C8.ByteString'
      Expected type: conduit-1.2.13.1:Data.Conduit.Internal.Conduit.ConduitM
                       C8.ByteString
                       Data.Void.Void
                       Data.ContentStore.CsMonad
                       ([T.Text], [Maybe ObjectDigest])
        Actual type: conduit-1.2.13.1:Data.Conduit.Internal.Conduit.ConduitM
                       Entry
                       Data.Void.Void
                       Data.ContentStore.CsMonad
                       ([T.Text], [Maybe ObjectDigest])
    * In the second argument of `(.|)', namely
        `getZipConduit
           ((,) <$> ZipConduit filenames <*> ZipConduit digests)'
      In the second argument of `($)', namely
        `src
           .|
             getZipConduit
               ((,) <$> ZipConduit filenames <*> ZipConduit digests)'
      In the second argument of `($)', namely
        `runConduit
           $ src
               .|
                 getZipConduit
                   ((,) <$> ZipConduit filenames <*> ZipConduit digests)'
    |
110 |                     .| getZipConduit ((,) <$> ZipConduit filenames
    |                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^...

That is because PR #39 changes the return type of Codec.RPM.Conduit::payloadContentsC from Entry to C8.ByteString.

Thanks for reading and happy testing!

social image CC by https://pxhere.com/en/photo/226978

Using Ansible to set up a workstation

Posted by Fedora Magazine on July 06, 2018 08:00 AM

Ansible is an extremely popular open-source configuration management and software automation project. While IT professionals almost certainly use Ansible on a daily basis, its influence outside the IT industry is not as wide. Ansible is a powerful and flexible tool, easily applied to a task common to nearly every desktop computer user: the post-installation “checklist”.

Most users like to apply more than one “tweak” after a new installation, and Ansible’s idempotent, declarative syntax lends itself perfectly to describing how a system should be configured.

Ansible in a nutshell

The ansible program itself performs a single task against a set of hosts. This is roughly conceptualized as:

for HOST in $HOSTS; do
    ssh $HOST /usr/bin/echo "Hello World"
done

To perform more than one task, Ansible defines the concept of a “playbook”. A playbook is a YAML file describing the state of the targeted machine. When run, Ansible inspects each host and performs only the tasks necessary to enforce the state defined in the playbook.

- hosts: all
  tasks:
    - name: Echo "Hello World"
      command: echo "Hello World"

Run the playbook using the ansible-playbook command:

$ ansible-playbook ~/playbook.yml

Configuring a workstation

Start by installing ansible:

sudo dnf install ansible

Next, create a file to store the playbook:

touch ~/post_install.yml

Start by defining the host on which to run this playbook. In this case, “localhost”:

- hosts: localhost

Each task consists of a name field and a module field. Ansible has a lot of modules. Be sure to browse the module index to become familiar with all Ansible has to offer.

The package module

Most users install additional packages after a fresh install, and many like to remove some shipped software they don’t use. The package module provides a generic wrapper around the system package manager (in Fedora’s case, dnf).

- hosts: localhost
  tasks:
    - name: Install Builder
      become: yes
      package:
        name: gnome-builder
        state: present
    - name: Remove Rhythmbox
      become: yes
      package:
        name: rhythmbox
        state: absent
    - name: Install GNOME Music
      become: yes
      package:
        name: gnome-music
        state: present
    - name: Remove Shotwell
      become: yes
      package:
        name: shotwell
        state: absent

This playbook results in the following outcomes:

  • GNOME Builder and GNOME Music are installed
  • Rhythmbox is removed
  • On Fedora 28 or greater, nothing happens with Shotwell (it is not in the default list of packages)
  • On Fedora 27 or older, Shotwell is removed

This playbook also introduces the become: yes directive. This specifies the task must be run by a privileged user (in most cases, root).

The dconf module

Ansible can do a lot more than install software. For example, GNOME includes a great color-shifting feature called Night Light. It ships disabled by default; however, the Ansible dconf module can very easily enable it.

- hosts: localhost
  tasks:
    - name: Enable Night Light
      dconf:
        key: /org/gnome/settings-daemon/plugins/color/night-light-enabled
        value: true
    - name: Set Night Light Temperature
      dconf:
        key: /org/gnome/settings-daemon/plugins/color/night-light-temperature
        value: uint32 5500

Ansible can also create files at specified locations with the copy module. In this example, a local file is copied to the destination path.

- hosts: localhost
  tasks:
    - name: Enable "AUTH_ADMIN_KEEP" for pkexec
      become: yes
      copy:
        src: files/51-pkexec-auth-admin-keep.rules
        dest: /etc/polkit-1/rules.d/51-pkexec-auth-admin-keep.rules

The command module

Ansible can still run commands even if no specialized module exists (via the aptly named command module). This playbook enables the Flathub repository and installs a few Flatpaks. The commands are crafted in such a way that they are effectively idempotent. This is an important behavior to consider; a playbook should succeed each time it is run on a machine.

- hosts: localhost
  tasks:
    - name: Enable Flathub repository
      become: yes
      command: flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
    - name: Install Fractal
      become: yes
      command: flatpak install --assumeyes flathub org.gnome.Fractal
    - name: Install Spotify
      become: yes
      command: flatpak install --assumeyes flathub com.spotify.Client

Combine all these tasks together into a single playbook and, in one command, Ansible will customize a freshly installed workstation. Not only that, but 6 months later, after making changes to the playbook, run it again to bring a “seasoned” install back to a known state.

$ ansible-playbook -K ~/post_install.yml

This article only touched the surface of what’s possible with Ansible. A follow-up article will go into more advanced Ansible concepts, such as roles and configuring multiple hosts with a divided set of responsibilities.

Call for Fedora Women’s Day 2018 proposals

Posted by Fedora Community Blog on July 05, 2018 11:59 PM

Fedora Women’s Day (FWD) is a day to celebrate and bring visibility to female contributors in open source projects including Fedora. The initiative is led by Fedora’s Diversity and Inclusion team. The call for proposals for event organizers is now open until Thursday, 16 August 2018!

During September, in collaboration with members of different open source communities including Fedora, women in tech groups, and hacker spaces, we plan to organize talks, workshops and meetups around the world. These events highlight and celebrate the women in open source communities like Fedora and their invaluable contributions to their projects and community. They also provide a good opportunity for women to learn about free and open source software and jump-start their journey in open source as a user or a contributor, and a platform to connect, learn and be inspired by other women in open source communities and beyond.

This year yet again, we are looking forward to organizing FWD across different locations around the world. We are looking to collaborate with members of different open source communities including Fedora, women in tech groups or hacker spaces to organize Fedora Women’s Day!

Here are our top 5 reasons why you should attend a Fedora Women’s Day!

  • Curious about open source software? Get an introduction to open source, an idea of what projects you could take part in, and ways to get more involved with their local communities.
  • Want to meet inspiring and talented people from across the world? Open source projects run on the power of a talented community. At FWD, you will have a platform to meet supportive, inspiring and talented open source community members and connect with them.
  • Looking to build skills or your résumé? Contributing to open source is a great way to build skills and practice teamwork and planning. Learn from experienced open source contributors about getting started in an open source project.
  • Have some free time and want to make a real impact on a global project? Open source is more than technology. It involves documentation, finding and reporting bugs, writing blogs, marketing, financing, and design. Open source lets you get involved in anything that interests you, regardless of your background or experience.
  • Want to connect with other women involved in similar areas? If you are someone who gains inspiration from other talented women, then FWD is the right place for you. You will have a platform to grow at whatever you feel inclined to and meet supportive mentors.

Why should you organize a Fedora Women’s Day in your community?

If you are an individual,

  • Do you want to share your knowledge and experience in open source with others and spread the message of open source?
  • Do you want to help grow and build your local community?
  • Do you want to find interested users or contributors for your project?
  • Do you want to get more involved with the Fedora community?
  • Do you want to develop your leadership and creative skills?
  • Do you want to empower women in your local community through open source tools and help them learn and build their skills?

If you answered YES to any of these, we encourage you to organize a Fedora Women’s Day (FWD) event in your local community. We especially encourage Fedora community members to apply, irrespective of their gender, background or experience. We are also very excited to connect with members of local open source or tech communities, hacker spaces and women in tech groups and to support them in hosting a FWD event in their community.

If you are a community or a hacker space,

  • Share information about benefits of open source and different open source tools and alternatives.
  • Connect with developers, designers and people from diverse backgrounds and grow your community!
  • Empower the women in your local community through open source tools and enable them to learn and build their skills while contributing to a global project.

If you cannot organize a Fedora Women’s Day this year but know an individual or a community (e.g. PyLadies, Women Who Code, etc.) who might be interested in organizing a  FWD event, feel free to let us know through our mailing list!

Steps to organize FWD event

Cannot find a FWD in your region? Organize one! It’s simple with these steps.

Identify your goals

  • Find out the motivations and interests of your local community.
  • Do they know about Fedora and open source? If not, use your event to create awareness about Fedora and open source.
  • Are they interested in contributing to Fedora or open source? Make your content specific to their interests, if possible.
  • Are they interested in networking? We can help you find local open source contributors from your region.
  • Brainstorm and share with the Diversity & Inclusion Team any ideas that you feel will make an impact on your audience and help them learn to contribute to Fedora and open source.

Tell us about it

  • Use this wiki to read more about the proposal process
  • Submit a proposal ticket by Thursday, 16 August 2018 to the fedora-womens-day repository on Pagure using the ticket template
    • See this example proposal
    • You can also request a budget for your event; expenses are reimbursed after you write an event report

Spread the word

Start early! Spread the word before and after the event. Gather as many participants as you can to maximize the impact of the effort you put in. You can also invite fellow Fedora contributors based in your area to collaborate with you.

It is important to estimate your audience in advance to plan and ask for a suitable budget.

Get your proposal accepted

Review the Diversity & Inclusion Team goals for Fedora Women’s Day while planning your event. Connect with other hacker spaces and tech communities to enhance your proposal by involving other communities. Understand the needs of your audience and make a personalized proposal that fits them best. Identify any resources you might need to conduct a Fedora Women’s Day event and let us know if you need any help. We accept proposals on a rolling basis until the deadline; however, we encourage early submissions, which give us more time to support organizers in planning their event (allowing, for example, enough time for swag delivery).

Once your proposal is accepted, work together with the Diversity and Inclusion team to organize a successful event!

The Diversity and Inclusion team can provide organizers with support, guidelines and resources for a successful event. After the event, give others an idea of the fun you had by taking some interesting group pictures and writing an event report.

Important dates

  • Submission deadline: Thursday, 16 August 2018
  • Acceptance deadline: Friday, 24 August 2018
    • This is the latest date by which you will hear from the D&I Team about whether your proposal is accepted
  • Suggested FWD dates: 22-23 September 2018

Note: There is flexibility on the dates for organizing FWD; an event can be organized on any date throughout September if the suggested dates are inconvenient for you. Proposals are reviewed on a rolling basis, so the earlier you send a proposal, the better your chances of getting feedback and support early.

The post Call for Fedora Women’s Day 2018 proposals appeared first on Fedora Community Blog.

Gnome: execute a script at login

Posted by alciregi on July 04, 2018 07:54 PM

How to execute a bash script when you log in to GNOME.

How to enable and use screen sharing on Wayland

Posted by Jan Grulich on July 04, 2018 08:26 AM

Two days ago I wrote about our work on screen sharing in web browsers. While a lot of work has been done recently in this area, it’s still not in a state where everything just works out of the box. There are a few steps necessary to make this work for you, and here is a brief summary of what you need. This is not a distro-specific how-to, but given that I use Fedora 28 and I know that everything you need is there, you will most likely need to figure out the differences for your distribution or build things yourself.

PipeWire

PipeWire is the core technology used behind all of this. In Fedora you just need to install it; it’s available for Fedora 27 and newer. Once PipeWire is installed, you can start it using the “pipewire” command. If you want to see what’s going on, you can use “PIPEWIRE_DEBUG=4 pipewire” to start PipeWire with debug information. For Fedora 29, there is a feature planned which should make PipeWire start automatically.

Xdg-desktop-portal and xdg-desktop-portal-[kde,gtk]

We use xdg-desktop-portal (plus a backend implementation) for communication between the app requesting to share a screen and the desktop (Plasma or Gnome). You need xdg-desktop-portal, which is the middle man between the app and the backend implementation, compiled with the screencast portal. This portal is built automatically when PipeWire is present during the build. In Fedora you should already be covered when you install it. For the backend implementation, if you are using Plasma, you need xdg-desktop-portal-kde from Plasma 5.13.x, again compiled with the screencast portal, which is built when PipeWire is present. For Fedora 28+, you can use this COPR repository and you are ready to go. I highly recommend using Plasma 5.13.2, where I have some minor fixes, and if you have a chance, try to compile the upcoming 5.13.3 version from git (Plasma/5.13 branch), as I rewrote how we connect to PipeWire. Previously our portal implementation worked only when PipeWire was started first; now it shouldn’t matter. If you use Gnome, you can just install xdg-desktop-portal-gtk from the Fedora repository or build it yourself. You again need to build the screencast portal.

Enabling screen sharing in your desktop

Both Plasma and Gnome need some adjustments to enable screen sharing, as in both cases it’s an experimental feature. For Gnome, you can follow this guide and just enable the screen-cast feature using gsettings. For Plasma, you need to get KWin from Plasma 5.13.x, which is available for Fedora in the COPR repository mentioned above. Then you need to set and export the KWIN_REMOTE=1 env variable before KWin starts. There is also one more thing needed for Gnome at this moment: you need to backport this patch to Mutter, otherwise it won’t be able to match the PipeWire stream configuration with an app using a different framerate, e.g. when using Firefox.

Edit: It seems that exporting KWIN_REMOTE=1 is not necessary; it was probably only needed while this feature was not yet merged. Now it should work without it. You still need KWin from Plasma 5.13.

Start with screen sharing

Now you should be all set and ready to share a screen on a Gnome/Plasma Wayland session. You can now try Firefox for Fedora 28 or Rawhide from this COPR repository. For Firefox there is a WebRTC test page, where you can test this screen sharing functionality. Another option is to use my test application for Flatpak portals or the gnome-remote-desktop app.

Edit: I didn’t realize that not everyone knows about xdg-desktop-portal or PipeWire; below are some links where you can get an idea what this is all about. I should also mention that while xdg-desktop-portal is primarily designed for Flatpak, its usage has expanded over time, as it makes perfect sense to use it for e.g. Wayland: just as apps in a sandbox don’t have access to your system, on Wayland apps don’t know about other apps or windows, and communication can be done only through the compositor.

 

Install an NVIDIA GPU on almost any machine

Posted by Fedora Magazine on July 04, 2018 08:00 AM

Whether for research or recreation, installing a new GPU can bolster your computer’s performance and enable new functionality across the board. This installation guide uses Fedora 28’s brand-new third-party repositories to install NVIDIA drivers. It walks you through the installation of both software and hardware, and covers everything you need to get your NVIDIA card up and running. This process works for any UEFI-enabled computer, and any modern NVIDIA GPU.

Preparation

This guide relies on the following materials:

  • A machine that is UEFI capable. If you’re uncertain whether your machine has this firmware, run sudo dmidecode -t 0.  If “UEFI is supported” appears anywhere in the output, you are all set to continue. Otherwise, while it’s technically possible to update some computers to support UEFI, the process is often finicky and generally not recommended.
  • A modern, UEFI-enabled NVIDIA card
  • A power source that meets the wattage and wiring requirements for your NVIDIA card (see the Hardware & Modifications section for details)
  • Internet connection
  • Fedora 28

NOTE: This guide only covers hardware installation for desktop computers, although the NVIDIA driver installation will be relevant for laptops as well.

Example setup

This example installation uses:

Hardware and modifications

PSU

Open up your desktop case and check the maximum power output printed on your power supply. Next, check the documentation on your NVIDIA GPU and determine the minimum recommended power (in watts). Further, take a look at your GPU and see if it requires additional wiring, such as a 6-pin connector. Most entry-level GPUs only draw power directly from the motherboard, but some require extra juice. You’ll need to upgrade your PSU if:

  1. Your power supply’s max power output is below the GPU’s suggested minimum power. Note: According to some NVIDIA card manufacturers, pre-built systems may require more or less power than recommended, depending on the system’s configuration. Use your discretion to determine your requirements if you’re using a particularly power-efficient or power-hungry setup.
  2. Your power supply does not provide the necessary wiring to power your card.

PSUs are straightforward to replace, but make sure to take note of the wiring layout before detaching your current power supply. Additionally, make sure to select a PSU that fits your desktop case.

CPU

Although installing a high-quality NVIDIA GPU is possible in many old machines, a slow or damaged CPU can “bottleneck” the performance of the GPU. To calculate the impact of the bottlenecking effect for your machine, click here. It’s important to know your CPU’s performance to avoid pairing a high-powered GPU with a CPU that can’t keep up. Upgrading your CPU is a potential consideration.

Motherboard

Before proceeding, ensure your motherboard is compatible with your GPU of choice. Your graphics card should be inserted into the PCI-E x16 slot closest to the heat-sink. Ensure that your setup contains enough space for the GPU. In addition, note that most GPUs today employ PCI-E 3.0 technology. Though these GPUs will run best if mounted on a PCI-E 3.0 x16 slot,  performance should not suffer significantly with an older version slot.

Installation

1. First, open up a terminal, and update your package-manager (if you have not done so already), by running:

sudo dnf update

2. Next, reboot with the simple command:

reboot

3. After reboot, install the Fedora 28 workstation repositories:

sudo dnf install fedora-workstation-repositories

4. Next, enable the NVIDIA driver repository:

sudo dnf config-manager --set-enabled rpmfusion-nonfree-nvidia-driver

5. Then, reboot again.

6. After the reboot, verify the addition of the repository via the following command:

sudo dnf repository-packages rpmfusion-nonfree-nvidia-driver info

If several NVIDIA tools and their respective specs are loaded, then proceed to the next step. If not, you may have encountered an error when adding the new repository and you should give it another shot.

7. Log in, connect to the internet, and open the Software app. Click Add-ons > Hardware Drivers > NVIDIA Linux Graphics Driver > Install.

If you’re using an older GPU or plan to use multiple GPUs, check the RPMFusion guide for further instructions. Finally, to ensure a successful reboot, set “WaylandEnable=false” in /etc/gdm/custom.conf, and make sure to avoid using secure boot.

8. Once this process is complete, close all applications and shut down the computer. Unplug the power supply to your machine. Then, press the power button once to drain any residual power to protect yourself from electric shock. If your PSU has a power switch, switch it off.

9. Finally, install the graphics card. Remove the old GPU and insert your new NVIDIA graphics card into the proper PCI-E x16 slot. When you have successfully installed the new GPU, close your case, plug in the PSU, and turn the computer on. It should successfully boot up.

NOTE: To disable the NVIDIA driver repository used in this installation, or to disable all Fedora workstation repositories, consult The Fedora Wiki Page.

Verification

1. If your newly installed NVIDIA graphics card is connected to your monitor and displaying correctly, then your NVIDIA driver has successfully established a connection to the GPU.

If you’d like to view your settings, or verify the driver is working (in the case that you have two GPUs installed on the motherboard), open up the NVIDIA X Server Settings app again. This time, you should not be prompted with an error message, and information on the X configuration file and your NVIDIA GPU should be available (see screenshot below).

NVIDIA X Server Settings

Through this app, you may alter your X configuration file should you please, and may monitor the GPU’s performance, clock speed, and thermal information.

2. To ensure the new card is working at capacity, a GPU performance test is needed. GL Mark 2, a benchmarking tool that provides information on buffering, building, lighting, texturing, etc., offers an excellent solution. GL Mark 2 records frame rates for a variety of different graphical tests, and outputs an overall performance score (called the glmark2 score).

Note: glxgears will only test the performance of your screen or monitor, not the graphics card itself. Use GL Mark 2 instead.

To run GLMark2:

  1. Open up a terminal and close all other applications
  2. sudo dnf install glmark2
  3. glmark2
  4. Allow the test to run to completion for best results. Check to see if the frame rates match your expectations for your NVIDIA card. If you’d like additional verification, consult the web to determine if a glmark2 benchmark has been previously conducted on your NVIDIA card model and published. Compare scores to assess your GPU’s performance.
  5. If your framerates and/or glmark2 score are below expectations, consider potential causes. CPU-induced bottlenecking? Other issues?

Assuming the diagnostics look good, enjoy using your new GPU.

Editor’s note: This article was co-authored by Matthew Kenney and Justice del Castillo.

All systems go

Posted by Fedora Infrastructure Status on July 03, 2018 11:52 PM
New status good: Everything seems to be working. for services: Ipsilon, Badges, Blockerbugs, Package Updates Manager, Fedora Infrastructure Cloud, COPR Build System, Documentation website, Fedora elections, Account System, Fedora Messaging Bus, Fedora Calendar, Fedora pastebin service, The Koji Buildsystem, Koschei Continuous Integration, Kerberos, Mailing Lists, Mirror List, Mirror Manager, Fedora Packages App, Pagure, Fedora People, Package Database, Package maintainers git repositories, Fedora Container Registry, Tagger, Fedora websites, Fedora Wiki, Zodbot IRC bot

There are scheduled downtimes in progress

Posted by Fedora Infrastructure Status on July 03, 2018 09:01 PM
New status scheduled: Planned infrastructure-wide outage for updates for services: Ipsilon, Badges, Blockerbugs, Package Updates Manager, Fedora Infrastructure Cloud, COPR Build System, Documentation website, Fedora elections, Account System, Fedora Messaging Bus, Fedora Calendar, Fedora pastebin service, The Koji Buildsystem, Koschei Continuous Integration, Kerberos, Mailing Lists, Mirror List, Mirror Manager, Fedora Packages App, Pagure, Fedora People, Package Database, Package maintainers git repositories, Fedora Container Registry, Tagger, Fedora websites, Fedora Wiki, Zodbot IRC bot

[Week 7] GSoC Status Report for Fedora App: Abhishek Sharma

Posted by Fedora Community Blog on July 03, 2018 02:34 PM

This is the Status Report for the Fedora App, filled in by participants on a weekly basis.

Status Report for Abhishek Sharma (thelittlewonder)

  • Fedora Account: thelittlewonder
  • IRC: thelittlewonder (found in #fedora-summer-coding, #fedora-india, #fedora-design)
  • Fedora User Wiki Page

 

Tasks Completed

Week 7 was slightly unproductive for me, since my laptop’s SSD failed and I had to spend the week visiting the Apple Store and recovering from data loss. But now we are set up and running again. Let’s move on to the updates.

More Tab in the Application

In the last few weeks, we revamped the different views of the application, and we have finally pushed the final nail into the coffin: the last view. There’s a More tab to show the new features we are planning to add to the Fedora App – mainly the Fedora Podcast, Bookmarks, and Package Search. These features are in addition to the existing core functionality of the application and aim to improve the usefulness of the app.

About Fedora Section in Application

We added an About Fedora section for new contributors and users to learn more about Fedora and its principles.

What’s happening

New Loader Integration

We will be working on integrating a new loader in the application, shown while the app content is being fetched from the API. The caching algorithm has been reworked, so the code needs to be updated.

Fedora Women and Diversity Section

In the last released version of the app, we had a “Women” section with no content (basically, it was a “coming soon” placeholder). In the upcoming week, we plan to revamp that section to include content from Fedora Women’s Day, Outreachy, etc.

Fedora Podcast Integration

Since SoundCloud has stopped giving out API keys, we are still stuck on how to go about integrating the Fedora Podcast into the application. Hopefully this week we will manage to find a way.

 

That’s all for this week. 👋

 

Send your feedback at guywhodesigns[at]gmail[dot]com

The post [Week 7] GSoC Status Report for Fedora App: Abhishek Sharma appeared first on Fedora Community Blog.

Why do you see DAC_OVERRIDE SELinux denials?

Posted by Lukas Vrabec on July 03, 2018 08:58 AM

Hello everyone!

You may have seen SELinux denials (AVC messages) on your system in the recent release of Fedora 28 and, of course, in Fedora Rawhide. I removed a lot of rules allowing the DAC_OVERRIDE capability for process domains, to tighten security on SELinux-enabled systems. In many cases the DAC_OVERRIDE capability is not needed; the real issue is how UNIX permissions are handled on objects stored in the system.

But what does the DAC_OVERRIDE capability mean?
The capabilities(7) man page has the explanation: “If a process has the DAC_OVERRIDE capability, it can bypass file read, write, and execute permission checks.”

What does it mean in reality?
A process can read, write, and execute files even when the proper permission bits are not set on the file.

And this is a solid security hole.

Dan Walsh mentioned on his blog that there is a myth that root is all-powerful. This is not completely true, because on SELinux-enabled systems even processes running as the root user must have the DAC_OVERRIDE capability allowed by SELinux policy. And this is the problem in many cases on a Fedora system!

A lot of daemons run with root:root user and group and access several files/directories in the system, but those files have overly tight permissions.

Let’s look at an example:

The directory below is owned by the lirc user.

# ll -aZ /var/run/ | grep lirc
drwxr-xr-x. 2 lirc lirc system_u:object_r:lircd_var_run_t:s0 80 Jul 3 10:18 lirc


The following process is trying to access this directory and write log files.


# ps -efZ | grep lircd
system_u:system_r:lircd_t:s0 root 6404 1 0 10:18 ? 00:00:00 /usr/sbin/lircd --nodaemon
system_u:system_r:lircd_t:s0 root 6405 6404 0 10:18 ? 00:00:00 [uname]

As we can see, the process runs as user root and group root, which means the kernel will check the “others” bits of the UNIX permissions, and there is no write access for others.

This access should be denied by the kernel because the permissions are too tight, but under discretionary access control root can bypass all of the permission checks and access the file anyway. That is not allowed under the mandatory access control implemented by SELinux.
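To illustrate the decision the kernel makes before any capability comes into play, here is a rough Python sketch of the classic DAC write check (simplified: it ignores supplementary groups and ACLs):

import os
import stat

def dac_may_write(uid, gid, path):
    # A sketch of the classic UNIX (DAC) write check, without
    # capabilities such as DAC_OVERRIDE.
    st = os.stat(path)
    if uid == st.st_uid:                    # owner bits apply first
        return bool(st.st_mode & stat.S_IWUSR)
    if gid == st.st_gid:                    # then group bits
        return bool(st.st_mode & stat.S_IWGRP)
    return bool(st.st_mode & stat.S_IWOTH)  # otherwise the "others" bits

# lircd runs as uid=0/gid=0, but /var/run/lirc is lirc:lirc with mode 0755,
# so the check falls through to the "others" bits and write access fails.
# DAC_OVERRIDE is exactly the capability that lets a process skip this check.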

For this reason, processes running as root need the DAC_OVERRIDE capability, or the permissions on the files/directories need to be changed. In most cases this is a bug in the application package.

Dan Walsh also wrote a nice blog post about this issue, describing the same situation with a dovecot example.

The post Why do you see DAC_OVERRIDE SELinux denials? appeared first on Lukas Vrabec.