Fedora People

The blog is moving!

Posted by Guillaume Kulakowski on February 21, 2018 08:40 PM

No, this blog is not dead!

I will soon bring it back to life. I am starting by letting it stand on its own two feet, hosting it on its own VPS. For that I chose a VC1M from Scaleway. In an upcoming post I'll come back to the choice of this hosting made in Online.fr a.k.a Free.fr, and also to why 4 GB of RAM for a simple blog (teaser: there's more to it than that ;-)).

As for the architecture, as with famas, I moved to 100% Docker, with my containers migrated from CentOS to Alpine for the occasion. Finally, I also opted for full HTTPS via Let's Encrypt.

There you go; now all that remains is to blog again. I already have a few article ideas around home automation, Docker, and maybe even a bit of trail running.

Fedora/RISC-V: ssh and dnf working

Posted by Richard W.M. Jones on February 21, 2018 04:00 PM


$ ssh -p 10000 root@localhost
root@localhost's password: riscv
[root@stage4 ~]# uname -a
Linux stage4 4.15.0-rc9-00064-gf923ce3a29af #1 SMP Thu Feb 15 10:59:13 GMT 2018 riscv64 riscv64 riscv64 GNU/Linux
[root@stage4 ~]# dnf install glibc-devel
Last metadata expiration check: 0:03:38 ago on Wed 21 Feb 2018 15:24:07 UTC.
Dependencies resolved.
 Package                  Arch          Version               Repository   Size
 glibc-devel              riscv64       2.27-4.fc28           local       1.0 M
Installing dependencies:
 glibc-headers            riscv64       2.27-4.fc28           local       442 k
 kernel-headers           noarch        4.15.0-1.fc27         local       1.1 M
 libpkgconf               riscv64       1.4.1-1.fc27          local        74 k
 libxcrypt-devel          riscv64       4.0.0-4.fc28          local        15 k
 pkgconf                  riscv64       1.4.1-1.fc27          local        35 k
 pkgconf-m4               noarch        1.4.1-1.fc27          local        15 k
 pkgconf-pkg-config       riscv64       1.4.1-1.fc27          local        14 k

Transaction Summary
Install  8 Packages

Total download size: 2.6 M
Installed size: 7.4 M
Is this ok [y/N]: 

JDK approach to address deserialization vulnerability

Posted by Red Hat Security on February 21, 2018 02:30 PM

Java deserialization of untrusted data has been a security buzzword for the past couple of years, with almost every application that uses the native Java serialization framework being vulnerable to Java deserialization attacks. Since its inception, there have been many scattered attempts to come up with a solution that best addresses this flaw. This article focuses on the Java deserialization vulnerability and explains how Oracle provides a mitigation framework in its latest Java Development Kit (JDK) version.


Let's begin by reviewing the Java deserialization process. The Java Serialization Framework is the JDK's built-in utility that allows Java objects to be converted into a byte representation and back. The process of converting Java objects into their binary form is called serialization, and the process of reading binary data to construct a Java object is called deserialization. In any enterprise environment, the ability to save or retrieve the state of an object is a critical factor in building reliable distributed systems. For instance, a JMS message may be serialized to a stream of bytes and sent over the wire to a JMS destination. A RESTful client application may serialize an OAuth token to disk for future verification. Java's Remote Method Invocation (RMI) uses serialization under the hood to pass objects between JVMs. These are just some of the use cases where Java serialization is used.
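The round trip described above can be sketched in a few lines. This is a minimal illustration, not code from the article; the SerializationDemo class and its helper names are made up for the example:

```java
import java.io.*;

// Minimal sketch of Java's native serialization round trip.
public class SerializationDemo {

    // Serialize any Serializable object into its byte representation.
    public static byte[] serialize(Serializable obj) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(obj);
        }
        return bos.toByteArray();
    }

    // Deserialize the byte stream back into an object.
    public static Object deserialize(byte[] bytes) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        // e.g. a message payload that would travel over the wire to a JMS destination
        byte[] bytes = serialize("a message payload");
        Object restored = deserialize(bytes);
        System.out.println(restored); // prints: a message payload
    }
}
```

Anything reaching deserialize() is treated as a trusted description of an object graph, which is exactly where the trouble discussed below begins.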

Inspecting the Flow

When the application code triggers the deserialization process, ObjectInputStream will be initialized to construct the object from the stream of bytes. ObjectInputStream ensures the object graph that has been serialized is recovered. During this process, ObjectInputStream matches the stream of bytes against the classes that are available in the JVM's classpath.

So, what is the problem?

During the deserialization process, when readObject() takes the byte stream to reconstruct the object, it looks for the magic bytes relevant to the object type that was written to the serialization stream, to determine what object type (e.g. enum, array, String, etc.) it needs to resolve the byte stream to. If the byte stream cannot be resolved to one of these types, it is resolved to an ordinary object (TC_OBJECT), and finally the local class for that ObjectStreamClass is retrieved from the JVM's classpath. If the class is not found, an InvalidClassException is thrown.

The problem arises when readObject() is presented with a byte stream that has been manipulated to leverage classes that have a high chance of being available in the JVM's classpath, also known as gadget classes, and that are vulnerable to Remote Code Execution (RCE). So far a number of classes have been identified as vulnerable to RCE, and research is still ongoing to discover more. Now you might ask, how can these classes be used for RCE? Depending on the nature of the class, the attack is carried out by constructing the state of that particular class with a malicious payload, which is serialized and fed in at the point where serialized data is exchanged (i.e. the stream source) in the above workflow. This tricks the JDK into believing this is a trusted byte stream, and it is deserialized by initializing the class with the payload. Depending on the payload, this can have disastrous consequences.

JVM vulnerable classes

Of course, the challenge for the adversary is to gain access to the stream source in the first place; the details of that are outside the scope of this article. A good tool to review for further information on the subject is ysoserial, which is arguably the best tool for generating payloads.

How to mitigate against deserialization?

Loosely speaking, mitigating a deserialization vulnerability is accomplished by implementing a look-ahead ObjectInputStream strategy. The implementation subclasses the existing ObjectInputStream and overrides the resolveClass() method to verify whether the class is allowed to be loaded. This approach appears to be an effective way of hardening against deserialization and usually comes in two implementation flavors: whitelist or blacklist. A whitelist implementation only accepts the business classes that are allowed to be deserialized and blocks all other classes. A blacklist implementation, on the other hand, holds a set of well-known vulnerable classes and blocks them from being deserialized.

Both whitelists and blacklists have their pros and cons; however, a whitelist-based implementation proves to be the better way to mitigate a deserialization flaw. It follows the principle of checking input against known good values, which has always been part of sound security practice. A blacklist-based implementation, on the other hand, relies heavily on intelligence gathered about which classes are vulnerable and on gradually adding them to the list, which is easy to miss or bypass.

protected Class<?> resolveClass(ObjectStreamClass desc)
                throws IOException, ClassNotFoundException {
      String name = desc.getName();

      // Blacklist flavor: reject classes known to be dangerous.
      if (isBlacklisted(name)) {
              throw new SecurityException("Deserialization is blocked for security reasons");
      }

      // Whitelist flavor: reject anything that is not an expected business class.
      if (!isWhitelisted(name)) {
              throw new SecurityException("Deserialization is blocked for security reasons");
      }

      return super.resolveClass(desc);
}

JDK's new Deserialization Filtering

Although ad hoc implementations exist to harden against deserialization flaws, an official specification for dealing with this issue had been lacking. To address this, Oracle recently introduced serialization filtering to improve the security of deserializing data, incorporating both the whitelist and blacklist scenarios. The new deserialization filtering is targeted at JDK 9; however, it has been backported to some older versions of the JDK as well.

The core mechanism of deserialization filtering is based on the ObjectInputFilter interface, which provides a configuration capability so that incoming data streams can be validated during the deserialization process. The status check on the incoming stream is determined by the Status.ALLOWED, Status.REJECTED, or Status.UNDECIDED values of an enum type within the ObjectInputFilter interface. These values can be returned depending on the deserialization scenario: for instance, if the intention is to blacklist a class, the filter returns Status.REJECTED for that specific class and allows the rest to be deserialized by returning Status.UNDECIDED. On the other hand, if the intention is to whitelist, Status.ALLOWED can be returned for classes that match the expected business classes. In addition, the filter also has access to other information about the incoming deserialization stream, such as the number of array elements when deserializing an array (arrayLength), the depth of each nested object (depth), the current number of object references (references), and the current number of bytes consumed (streamBytes). This information provides more fine-grained assertion points on the incoming stream, so the filter can return the relevant status for each specific use case.

Ways to configure the Filter

JDK 9 filtering supports three ways of configuring the filter: a custom filter, a process-wide filter (also known as a global filter), and built-in filters for the RMI registry and Distributed Garbage Collection (DGC) usage.

Case-based Filters

The configuration scenario for a custom filter occurs when a deserialization requirement is different from every other deserialization process throughout the application. In this use case, a custom filter can be created by implementing the ObjectInputFilter interface and overriding the checkInput(FilterInfo filterInfo) method.

static class VehicleFilter implements ObjectInputFilter {
        final Class<?> clazz = Vehicle.class;
        final long arrayLength = -1L;
        final long totalObjectRefs = 1L;
        final long depth = 1L;
        final long streamBytes = 95L;

        public Status checkInput(FilterInfo filterInfo) {
            // Reject the stream if any measured property deviates from the
            // values expected for a serialized Vehicle.
            if (filterInfo.arrayLength() != this.arrayLength
                    || filterInfo.references() != this.totalObjectRefs
                    || filterInfo.depth() != this.depth
                    || filterInfo.streamBytes() != this.streamBytes) {
                return Status.REJECTED;
            }

            if (filterInfo.serialClass() == null) {
                return Status.UNDECIDED;
            }

            if (filterInfo.serialClass() == this.clazz) {
                return Status.ALLOWED;
            } else {
                return Status.REJECTED;
            }
        }
}
JDK 9 has added two methods to the ObjectInputStream class allowing the above filter to be set/get for the current ObjectInputStream:

public class ObjectInputStream
    extends InputStream implements ObjectInput, ObjectStreamConstants {

    private ObjectInputFilter serialFilter;

    public final ObjectInputFilter getObjectInputFilter() {
        return serialFilter;
    }

    public final void setObjectInputFilter(ObjectInputFilter filter) {
        this.serialFilter = filter;
    }
}
Contrary to JDK 9, the latest JDK 8 release (1.8.0_144) currently seems to only allow the filter to be set via ObjectInputFilter.Config.setObjectInputFilter(ois, new VehicleFilter());.
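As a hedged sketch of wiring a per-stream filter (assuming JDK 9+; the FilterDemo class and its helpers are illustrative, while ObjectInputFilter, Config.createFilter, and setObjectInputFilter are the JDK APIs discussed above):

```java
import java.io.*;

// Sketch: attach a filter to one particular ObjectInputStream (JDK 9+).
public class FilterDemo {

    public static byte[] serialize(Serializable obj) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(obj);
        }
        return bos.toByteArray();
    }

    // Set the filter before any object is read. On JDK 8 (1.8.0_144) the
    // equivalent call is ObjectInputFilter.Config.setObjectInputFilter(ois, filter).
    public static Object readFiltered(byte[] bytes, ObjectInputFilter filter) throws Exception {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            ois.setObjectInputFilter(filter);
            return ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        // Whitelist java.lang.* and reject everything else.
        ObjectInputFilter allowLang = ObjectInputFilter.Config.createFilter("java.lang.*;!*");
        System.out.println(readFiltered(serialize(Integer.valueOf(42)), allowLang)); // prints 42

        // A reject-everything filter makes readObject() fail with InvalidClassException.
        ObjectInputFilter rejectAll = ObjectInputFilter.Config.createFilter("!*");
        try {
            readFiltered(serialize(Integer.valueOf(42)), rejectAll);
        } catch (InvalidClassException expected) {
            System.out.println("rejected: " + expected.getMessage());
        }
    }
}
```

Note that setObjectInputFilter() must be called before the first readObject(); once reading has started the filter can no longer be changed.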

Process-wide (Global) Filters

A process-wide filter can be configured by setting jdk.serialFilter as either a system property or a security property. If the system property is defined, it is used to configure the filter; otherwise the filter checks the security property (i.e. jdk1.8.0_144/jre/lib/security/java.security) for its configuration.

The value of jdk.serialFilter is configured as a sequence of patterns that either check against class names or set limits on properties of the incoming byte stream. Patterns are separated by semicolons, and whitespace is considered part of a pattern. Limits are checked before classes, regardless of the order in which the pattern sequence is configured. The following limit properties can be used during configuration:

- maxdepth=value // the maximum depth of a graph
- maxrefs=value // the maximum number of the internal references
- maxbytes=value // the maximum number of bytes in the input stream
- maxarray=value // the maximum array size allowed
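Assuming JDK 9+, the same limit syntax can also be exercised programmatically through ObjectInputFilter.Config.createFilter. The LimitFilterDemo class and the chosen limit values below are illustrative only:

```java
import java.io.*;

// Sketch: enforce stream limits with the jdk.serialFilter pattern syntax.
public class LimitFilterDemo {

    public static byte[] serialize(Serializable obj) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(obj);
        }
        return bos.toByteArray();
    }

    // Build a filter from a pattern string and apply it to one stream.
    public static Object readWithLimits(byte[] bytes, String pattern) throws Exception {
        ObjectInputFilter filter = ObjectInputFilter.Config.createFilter(pattern);
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            ois.setObjectInputFilter(filter);
            return ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        // A small array passes the maxarray=10 limit...
        int[] small = (int[]) readWithLimits(serialize(new int[5]), "maxarray=10");
        System.out.println(small.length); // prints 5

        // ...while an oversized array is rejected with InvalidClassException.
        try {
            readWithLimits(serialize(new int[1000]), "maxarray=10");
        } catch (InvalidClassException expected) {
            System.out.println("rejected by maxarray limit");
        }
    }
}
```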

Other patterns match the class or package name as returned by Class.getName(). Class/package patterns also accept the asterisk (*), double asterisk (**), period (.), and forward slash (/) symbols. Below are a couple of pattern scenarios that could possibly occur:

- "jdk.serialFilter=org.example.Vehicle;!*" // this matches a specific class and rejects the rest

- "jdk.serialFilter=org.example.**;!*" // this matches all classes in the package and all subpackages and rejects the rest

- "jdk.serialFilter=org.example.*;!*" // this matches all classes in the package and rejects the rest

- "jdk.serialFilter=org.example*" // this matches any class with the pattern as a prefix

Built-in Filters

JDK 9 has also introduced additional built-in, configurable filters, mainly for the RMI Registry and Distributed Garbage Collection (DGC). The built-in filters for the RMI Registry and DGC white-list the classes that are expected to be used in either of these services. Below are the white-listed classes for both RMIRegistryImpl and DGCImpl:

- RMIRegistryImpl: java.lang.String, java.lang.Number, java.rmi.Remote, java.lang.reflect.Proxy, sun.rmi.server.UnicastRef, java.rmi.server.RMIClientSocketFactory, java.rmi.server.RMIServerSocketFactory, java.rmi.activation.ActivationID, java.rmi.server.UID
- DGCImpl: java.rmi.server.ObjID, java.rmi.server.UID, java.rmi.dgc.VMID, java.rmi.dgc.Lease
In addition to these classes, users can also add their own customized filters using the sun.rmi.registry.registryFilter and sun.rmi.transport.dgcFilter system or security properties, with the property pattern syntax described in the previous section.

Wrapping up

While Java deserialization is not a vulnerability itself, deserialization of untrusted data using the JDK's native serialization framework is. It is important to differentiate between the two, as the latter is introduced by bad application design rather than being a flaw in the JDK. Prior to JEP 290, however, the Java deserialization framework had no validation mechanism to verify the legitimacy of objects, and while there were a number of ways to mitigate the JDK's lack of assertion on deserialized objects, there was no concrete specification for dealing with this flaw within the JDK itself. With JEP 290, Oracle introduced a new filtering mechanism that allows developers to configure filters for a number of deserialization scenarios. The new filtering mechanism should make it easier to mitigate deserialization of untrusted data, should the need arise.






Cockpit 162

Posted by Cockpit Project on February 21, 2018 10:30 AM

Cockpit is the modern Linux admin interface. We release regularly. Here are the release notes from version 162.

Show pod name and disks of VMs running in Kubernetes

The KubeVirt Virtual Machines overview now shows the pod of running VMs. Clicking on the name navigates to the pod’s detail page.

The new “Disks” tab shows information about the emulated QEMU storage devices in the VM, similar to the Machines page.

KubeVirt pod name

KubeVirt Disks

Thanks to Marek Libra for this feature!

Tighten up the default Content-Security-Policy

Cockpit’s pages now further restrict their Content-Security-Policy to prevent forms and links from accidentally leaking data off-host.

An additional benefit is improved privacy, as Referer headers are no longer sent when following a link in Cockpit to an external site. (One common place where Cockpit links externally is the changelogs on the Software Updates page.)

Note that this is not an actual security device - once a malicious page runs in Cockpit, it can use the Cockpit API to run arbitrary code on the host. This change is intended as a defense against programming errors.

Drop cockpit-subscriptions and cockpit-integration-tests on Fedora

There is a new package “subscription-manager-cockpit” now which supersedes the “cockpit-subscriptions” package that was previously shipped by Cockpit.

The cockpit-integration-tests package had been an experiment, was never used in Fedora CI, and requires additional files from Cockpit’s upstream git tree to work.

Try it out

Cockpit 162 is available now:

Oxidizing Fedora: Try Rust and its applications today

Posted by Fedora Magazine on February 21, 2018 08:00 AM

In recent years, it has become increasingly important to develop software that minimizes security vulnerabilities. Memory management bugs are a common cause of these vulnerabilities. To that end, the Mozilla community has spent the last several years building the Rust language and ecosystem which focuses primarily on eliminating those bugs. And Rust is available in Fedora today, along with a few applications in Fedora 27 and higher, as seen below.

Introducing Rust

The Rust programming language bills itself as “a systems programming language that runs blazingly fast, prevents segfaults, and guarantees thread safety.” But what does that mean?

Unlike popular programming languages such as Python, Ruby, and Java, Rust is compiled to machine code instead of being executed in a runtime environment. This means all code written in Rust must be processed long before execution. The Rust compiler checks the code as it compiles to ensure that problems are eliminated in the compiled code. Examples of such problems are:

  • Data races: contention between threads accessing the same memory location while one is trying to write there
  • Memory access violations: attempts to access/manipulate data in memory that aren’t permitted or no longer exist

Of course, checks alone aren’t enough to make this work. Rust features a long list of language enhancements specifically tailored to develop “safe” code in the first place.

If you want to learn more about the language, the Rust website offers excellent documentation. You can also read about Rust in this previous Magazine article.

Rust crates

Rust modules are also known as crates. The main index of crates is crates.io. Developers manage crates and their build and runtime dependencies using the cargo tool.

Today, Fedora includes over 300 Rust crates in Rawhide, which at the time of this writing will become Fedora 28. Over 120 Rust crates are included in Fedora 27. This selection includes nearly all of the popular crates that work with the Rust stable compiler on Linux. And even more crates will become available in Fedora over time!

Rust applications

Today, Fedora includes a few interesting Rust-based applications. All of these are available in Fedora 27 and newer and can be installed via DNF. Here are some examples:


ripgrep is a command line tool in the tradition of grep. It is optimized for searching large directories of files using parallelism out of the box. It recursively searches your current directory by default. This utility automatically skips files matching a pattern in your .gitignore files (this option can be disabled). ripgrep supports file type filtering and the full gamut of Unicode, while remaining fast. It can even automatically search compressed files. In most cases, it is faster than grep, ag, git grep, ucg, pt and sift.

For example, a simple comparison of searching through an 8GB file with ripgrep and then GNU grep:

$ env LC_ALL=C time rg -n -w 'Sherlock [A-Z]\w+' OpenSubtitles2018.raw.en | wc -l
2.16user 0.33system 0:02.50elapsed 99%CPU (0avgtext+0avgdata 8151736maxresident)k
0inputs+0outputs (0major+127911minor)pagefaults 0swaps
$ env LC_ALL=C time egrep -n -w 'Sherlock [A-Z]\w+' OpenSubtitles2018.raw.en | wc -l
7.98user 1.21system 0:09.21elapsed 99%CPU (0avgtext+0avgdata 2624maxresident)k
752inputs+0outputs (3major+223minor)pagefaults 0swaps

GNU grep takes 9.2 seconds whereas ripgrep takes only 2.5 seconds. Impressive, isn’t it? If you are a user of any of the mentioned tools, you might want to look at the feature comparison table of ack, ag, git-grep, GNU grep and ripgrep. To install ripgrep, run this command using sudo:

sudo dnf install ripgrep


exa is a modern replacement for ls. It uses colors for information by default, which helps you distinguish properties such as file type and ownership. It also has extra features not present in the original ls, such as viewing git status for a directory, or recursively listing files through sub-directories with a tree view.

To install the exa utility, use this command:

sudo dnf install exa


Tokei is a tool that analyzes code in a project and offers project statistics. It shows you exactly how many lines of code, comments, and blank lines you have. It was originally inspired by the Perl script cloc. However, cloc is slower and more error-prone, as it encounters cases that cause it to miscount code as comments and vice versa. One of Tokei's biggest selling points is speed. The following example shows how Tokei performs on Fedora's source of Firefox 58. This codebase has over 175,000 files containing over 26 million lines of code.

$ time tokei firefox-58.0.1/
 Language            Files        Lines         Code     Comments       Blanks
 Ada                    10         2840         1681          560          599
 Assembly              497       281559       235805        15038        30716
 Autoconf              417        59137        43862         7533         7742
 BASH                    3          342          257           43           42
 Batch                  48         4005         3380          101          524
 C                    3761      2567794      1864710       402801       300283
 C Header            14258      3034649      1830164       782437       422048
 CMake                  72        10811         7263         2009         1539
 C#                      9         1615          879          506          230
 C Shell                 2           72           34           14           24
 CoffeeScript            4           64           34           12           18
 C++                 11055      5812950      4449843       607276       755831
 C++ Header             92        41014        32622         4627         3765
 CSS                  1401       123014        95702         8047        19265
 D                       1           34            8           22            4
 Dockerfile             76         1983         1290          320          373
 Emacs Lisp              2          338          258           38           42
 Elm                     2          542          399           29          114
 Fish                    2          152           94           26           32
 GLSL                 2952       144792        57711        68029        19052
 Go                      2          485          314          101           70
 Handlebars             17          212          211            0            1
 Happy                   3         2008         2008            0            0
 HTML                62132      3479735      2955995       140901       382839
 Java                 2872       511312       324521       120016        66775
 JavaScript          55028      5576166      3572186      1199464       804516
 JSON                 1078       803571       803571            0            0
 JSX                     6          886          706           62          118
 Makefile              723        46698        25789        12197         8712
 Markdown              572        62395        62395            0            0
 Module-Definition      52         5118         3865         1173           80
 MSBuild                 3          223          165           48           10
 Objective C            60         4055         2889          527          639
 Objective C++         238        73816        54479         8071        11266
 Org                     2           54           42            0           12
 Pascal                  5         1569         1122          210          237
 Perl                   96        21520        15188         2987         3345
 PHP                     2          864          440          284          140
 Prolog                  1           17           15            0            2
 Protocol Buffers       24         5184         1988         2466          730
 Python               4165       787017       592691        66138       128188
 R                       1           38           12           18            8
 Rakefile                1           11            9            0            2
 ReStructuredText      388        51423        51423            0            0
 Ruby                    4          181          153            5           23
 Rust                 3250      1095452       833476       163020        98956
 Sass                    6          215          157           16           42
 Scala                   1          195          164            2           29
 Scons                   1           25           18            1            6
 Shell                 652        91256        64023        15520        11713
 SVG                  3885       152642       126545        18540         7557
 Swift                   1            9            7            0            2
 TeX                     2        11081         6860         3236          985
 Plain Text           2992      1444524      1444524            0            0
 TOML                  445        10738         8291         1102         1345
 TypeScript             21        32983        28256         4544          183
 Vim Script              5            5            5            0            0
 XML                  2259       225666       204439         6691        14536
 YAML                  154        34415        31155          560         2700
 Total              175813     26621471     19846093      3667368      3108010
22.36user 4.79system 0:07.61elapsed 356%CPU (0avgtext+0avgdata 136184maxresident)k
1864920inputs+0outputs (0major+48004minor)pagefaults 0swaps

Here’s the same exercise using the original cloc utility:

$ time cloc firefox-58.0.1/
  220532 text files.
  209900 unique files.                                          
Unescaped left brace in regex is deprecated here (and will be fatal in Perl 5.30), passed through in regex; marked by <-- HERE in m/^(.*?){ <-- HERE [^$]/ at /usr/bin/cloc line 4850.
Unescaped left brace in regex is deprecated here (and will be fatal in Perl 5.30), passed through in regex; marked by <-- HERE in m/^(.*?){ <-- HERE [^$]/ at /usr/bin/cloc line 4850.
   49681 files ignored.

github.com/AlDanial/cloc v 1.72  T=627.95 s (276.9 files/s, 40329.3 lines/s)
Language                             files          blank        comment           code
C++                                  10964         755278         602036        4450883
JavaScript                           52752         795983        1200658        3557155
HTML                                 58973         374714         133514        2819513
C                                     3684         299543         406289        1857360
C/C++ Header                         14119         415207         776222        1805495
Rust                                  3145          97425         160172         822947
JSON                                  1228            516              0         802139
Python                                3886         124724         142265         495765
Java                                  2866          66713         120321         323965
Assembly                               469          30686          32627         217460
XML                                   2190          14459           6621         202914
INI                                   8252          53608            159         202694
Bourne Shell                           672          24242          26559         151447
IDL                                   1295          15749              0         119280
XHTML                                 2399          10646           4857         100383
CSS                                   1053          18805           7891          92729
Objective C++                          222          11245           8032          54356
NAnt script                           1378           8371              0          48827
Markdown                               569          17225              4          44998
MSBuild script                          28              1              0          44320
GLSL                                  1827          12943          42011          42461
m4                                      77           4151            827          35890
YAML                                   296           3082            703          34810
make                                   634           7630          10330          27522
Perl                                   103           3632           3690          16208
DTD                                    176           3698           4696          14297
CMake                                   72           1539           2009           7263
TeX                                      2            985           3236           6860
Windows Module Definition               48             54           1161           3617
DOS Batch                               44            492             87           3200
SKILL                                    4             68              2           2419
HLSL                                    33            409            285           2045
Protocol Buffers                        24            730           2472           1982
Windows Resource File                   48            442            575           1864
Objective C                             37            459            514           1823
yacc                                     3            173             85           1750
Ada                                     10            599            560           1681
XSLT                                    26            168            142           1437
Pascal                                   8            260            504           1405
Cython                                   1             59            158           1310
Dockerfile                              74            367            315           1266
Groovy                                  14            249            316           1194
lex                                      4            237             82           1074
diff                                    14            530           2038            963
C#                                       9            230            506            879
MATLAB                                  11            162            147            874
JSX                                      6            118             62            706
Bourne Again Shell                      22            126            196            676
Jam                                     27            170            379            586
Korn Shell                               5             83            165            526
Expect                                   6            105            164            506
PHP                                      2            140            288            436
Elm                                      2            114             29            399
Ant                                      2             27            107            389
Go                                       2             70            101            314
TypeScript                              13             73             72            268
Lisp                                     2             42             38            258
Handlebars                              12              1              0            199
Mako                                     3             14              0            168
Scala                                    1             29              2            164
Ruby                                     4             25              4            163
awk                                      2             41              8            154
Sass                                     4             36             15            144
Haxe                                     2             25              5            137
Vuejs Component                          2              6              0            122
Visual Basic                             1             17             15             84
sed                                      7             16             27             73
PowerShell                               2             17            110             46
SAS                                      1             14             22             32
C Shell                                  2             13              7             28
CoffeeScript                             3             13              8             25
Prolog                                   1              2              0             15
R                                        1              8             18             12
MUMPS                                    3              2              0              9
D                                        1              4             22              8
Mathematica                              2              1              0              7
Swift                                    1              2              0              7
Freemarker Template                      3              0              0              6
Stylus                                   1              2              0              5
vim script                               1              0              0              1
SUM:                                173892        3179844        3707542       18437397
266.51user 306.34system 10:28.37elapsed 91%CPU (0avgtext+0avgdata 446552maxresident)k
6704096inputs+0outputs (12major+44888834minor)pagefaults 0swaps

On average, tokei takes 8 seconds whereas cloc takes 12 minutes. To install tokei, use this command:

sudo dnf install tokei


Ternimal is a program that draws a glowing, animated lifeform in the terminal using Unicode block symbols. It’s not a typical Linux utility, but more like an experiment in digital art. It does have niche applications, though, such as benchmarking terminal emulators or making cool or scary SSH greeting messages by combining it with timeout.

You can customize almost every aspect of the rendered “animal” with command line arguments. Currently, the only documentation available for those is the (well commented) source code and some examples. Ternimal is written in pure Rust and carefully profiled and optimized. It’s quite resource efficient and can render fluid, complex animations with minimal CPU usage.

To install Ternimal, run this command:

sudo dnf install ternimal

Fedora Rust SIG

Over the last year, a lot of work has gone into smoothly integrating the Rust ecosystem into Fedora. Fedora now has a special interest group (SIG) for supporting the Rust ecosystem. The Fedora Rust SIG is the general steward of Rust in Fedora, including crates and application packaging.

The Rust SIG’s approach towards packaging and supporting the Rust ecosystem is different from some SIGs in Fedora. As Rust was a brand new ecosystem, the Rust SIG was treading on new ground. The members decided early on to make this a truly cross-distribution effort, and to harmonize packaging of Rust code across Linux distributions using the RPM package manager. The SIG worked closely with members of Mageia and openSUSE to ensure the tooling worked for everyone. As a consequence, today you can produce Fedora-, Mageia-, and openSUSE-compliant packaging of Rust crates and applications using rust2rpm in any of those distros.


Of course, none of this would have been possible without the efforts of several people:

  • Without the assistance of Michael Schröder from openSUSE, many of the fundamental pieces we needed in the dependency resolver stack would not be possible. Florian Festi from the RPM team deserves special thanks for reviewing all of this work and making sure it is all sane in RPM.
  • A lot of the refinement of Rust compiler packaging and finding/fixing weird bugs related to that can be credited to Josh Stone from Red Hat and Rémi Verschelde from Mageia. Both of them maintain the core Rust stack in their respective distributions (Fedora/EPEL and Mageia, respectively) and regularly share work with each other to ensure Rust and Cargo are available in the best form possible.
  • Adam Miller (Fedora Release Engineering), Dusty Mabe (Project Atomic), Kushal Das (Fedora Cloud), Patrick Uiterwijk (Fedora Release Engineering), and Randy Barlow (Fedora Release Engineering) deserve a ton of credit for revamping the infrastructure to help support the new features that were needed to bring a first-class experience with the Rust ecosystem to Fedora.
  • Last but not least, the Rust community has been fantastic. Engaging with them is a great experience and Rust and its ecosystem exist thanks to them and their efforts.

Photo by James Sutton on Unsplash.

Episode 83 - XKCD + CVE = XKCVE

Posted by Open Source Security Podcast on February 21, 2018 12:08 AM
Josh and Kurt talk about the XKCD CVE comic and a flight simulator stealing credentials.

<iframe allowfullscreen="" height="90" mozallowfullscreen="" msallowfullscreen="" oallowfullscreen="" scrolling="no" src="http://html5-player.libsyn.com/embed/episode/id/6283065/height/90/theme/custom/autoplay/no/autonext/no/thumbnail/yes/preload/no/no_addthis/no/direction/backward/render-playlist/no/custom-color/87A93A/" style="border: none;" webkitallowfullscreen="" width="100%"></iframe>

Show Notes

F27-20180217 updated isos Released

Posted by Ben Williams on February 20, 2018 11:40 PM

The Fedora Respins SIG is pleased to announce the latest release of updated Fedora 27 Live ISOs, carrying the 4.15.3-300 kernel.

This set of updated ISOs will save about 894 MB of updates after a new install.

Build Directions: https://fedoraproject.org/wiki/Livemedia-creator-_How_to_create_and_use_a_Live_CD

We would also like to thank Fedora QA for running the following tests on our ISOs: https://openqa.fedoraproject.org/tests/overview?distri=fedora&version=27&build=FedoraRespin-27-updates-20180204.0&groupid=1

These can be found at http://tinyurl.com/live-respins. We would also like to thank the following IRC nicks for helping test these ISOs: dowdle and Southern_Gentlem.

Fedora 27 : selinux and getfattr attributes.

Posted by mythcat on February 20, 2018 11:07 PM
In this tutorial I show you how to use the getfattr command to read the extended attributes of filesystem objects, including the SELinux security attribute.
One such attribute is security.selinux; others include:

  • security.capability - stores Linux capabilities for the related file; it applies to binaries that are granted one or more capabilities via this attribute.
  • security.ima - for the Integrity Measurement Architecture (IMA), stores a hash or digital signature.
  • security.evm - similar to security.ima, the Extended Verification Module (EVM) stores a hash/HMAC or digital signature here (the difference from IMA is that it protects the metadata of the file, not the contents).

Now, about security.selinux:
You can use the getfattr command, for example, to perform specific SELinux tasks:

# getfattr -m security.selinux -d /etc/passwd
getfattr: Removing leading '/' from absolute path
# file: etc/passwd
# getfattr -m security.selinux -d /etc/shadow
# getfattr -m security.selinux -d /var/www
Both the getfattr and setfattr commands are provided by the attr package (extended attribute utilities for POSIX systems).
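The same attribute can also be read programmatically. Here is a minimal sketch using Python’s os.getxattr (a Linux-only call); selinux_label is a hypothetical helper name, not part of any library:

```python
import os

def selinux_label(path):
    """Return the SELinux label stored in the security.selinux extended
    attribute of *path*, or None when the attribute is absent (for
    example, on a system without SELinux)."""
    try:
        raw = os.getxattr(path, "security.selinux")  # Linux-only call
        return raw.rstrip(b"\0").decode()            # value is NUL-terminated
    except OSError:
        return None

print(selinux_label("/etc/passwd"))
```

On an SELinux-enabled Fedora system this prints something like a four-field label; elsewhere it prints None.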

Hacking at EPFL Toastmasters, Lausanne, tonight

Posted by Daniel Pocock on February 20, 2018 11:39 AM

As mentioned in my earlier blog, I am giving a talk about Hacking at the Toastmasters club at EPFL tonight. Please feel free to join us, and remember to turn off your mobile device or leave it at home; you never know when it might ring or become part of a demonstration.

Annobin – Storing Extra Information in Binaries

Posted by RHEL Developer on February 20, 2018 11:00 AM


Compiled files, often called binaries, are a mainstay of modern computer systems. But it is often hard for system builders and users to find out more than just very basic information about these files. The Annobin project exists as a means to answer questions like:

  • How was this binary built?
  • What testing was performed on the binary?
  • What sources were used to make the binary?

The Annobin project is an implementation of the Watermark specification, which details how to record extra information in a binary. One important feature of this specification is that it includes an address range for the stored information. This makes it possible to record the fact that one part of a binary was compiled with one set of options and another part was compiled with a different set of options.

How It Works

The information is stored in a binary as a series of ELF Notes, and held in a special section. ELF Notes were chosen because they are a well-defined structure, recognizable to any tool that manipulates ELF files, and because they do not get stripped out of files when debug information is removed. The section containing the notes is also marked as not being loadable, so it does not take up any space in the run-time image of the program.
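For illustration, the on-disk shape of an ELF note — three 32-bit words (name size, descriptor size, type) followed by the NUL-terminated name and the descriptor, each padded to a 4-byte boundary — can be sketched in Python. The "GA$example" name and both helper functions are made up for this demo:

```python
import struct

def build_elf_note(name: bytes, desc: bytes, note_type: int) -> bytes:
    """Serialize one ELF note: <namesz, descsz, type> header words,
    then the NUL-terminated name and the descriptor, 4-byte padded."""
    name += b"\0"  # the name field is NUL-terminated
    pad = lambda b: b + b"\0" * (-len(b) % 4)
    return struct.pack("<III", len(name), len(desc), note_type) + pad(name) + pad(desc)

def parse_elf_note(blob: bytes):
    """Decode the note built above back into (name, desc, type)."""
    namesz, descsz, note_type = struct.unpack_from("<III", blob, 0)
    off = 12
    name = blob[off:off + namesz].rstrip(b"\0")
    off += (namesz + 3) & ~3  # skip padding after the name
    return name, blob[off:off + descsz], note_type

# 0x100 is the OPEN note type used by Annobin notes (see the assembler
# examples later in this post).
note = build_elf_note(b"GA$example", b"", 0x100)
print(parse_elf_note(note))  # (b'GA$example', b'', 256)
```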

The Watermark specification is designed so that when binary files are linked together the notes can just be concatenated, and they will remain valid. The specification also includes a set of rules for merging the notes, in order to reduce their size, if that becomes an issue for the user.

The notes can be generated by anything, although in the case of the Annobin project they are created by a plugin to GNU Compiler Collection (GCC). The plugin records most of the notes when it starts up by scanning the GCC command line and the compilation state. But it also inserts itself into the compilation process, so that it can monitor changes to how individual functions are compiled, and if relevant, it can record those changes too.

To extract the notes from a compiled binary the readelf program is used. This decodes the information and displays it in a human readable form.

How To Use It

To enable the Annobin plugin, use the GCC command line option: -fplugin=annobin

If GCC cannot find the plugin, then it may be necessary to add the -iplugindir option as well: -iplugindir=<path/to/dir/containing/annobin>

Note: for Fedora package maintainers – the Annobin plugin is enabled automatically if you are using the standard rpm build macros.

This should be all that is necessary to start recording information in a binary. In order to see if the plugin is working, the readelf program can be used to examine the notes:

readelf --notes --wide <file>

Most binary files already contain other types of notes, so in order to find the ones created by Annobin, look for ones whose “Owner” field starts with the letters “GA”:

Owner                                          Data size                   Description
GA$<version>3p4                      0x00000010  OPEN  Applies to region from 0x7da to 0x838
GA$<tool>gcc 7.2.1 20170915  0x00000000  OPEN  Applies to region from 0x7da to 0x838

Older versions of readelf have trouble understanding the notes, so the output might look like this:

Owner                             Data size         Description
GA$3p4                           0x00000010   Unknown note type: (0x00000100)
GA$gcc 7.2.1 20170915  0x00000000  Unknown note type: (0x00000100)

The Annobin project includes some example scripts that demonstrate how these notes might be used to perform various checks. The scripts are documented in the Annobin info file and inside the scripts themselves. Here is a quick overview:


Tries to determine which tool compiled the binary. Uses the notes if possible, but tries several other methods as well.


Checks the binary to see if it has been built with object files that have different ABIs (and hence might not be compatible).


Checks the binary to see if it has been built with the expected set of hardening options.

These are just examples. Other scripts can be written and other notes can be recorded in binaries.

How to Build It

The sources are available in compressed tarball form from here: https://nickc.fedorapeople.org/annobin-X.X.tar.xz

Where X.X is the latest version number (currently 3.4).

Alternatively, the very latest sources can be found in the Annobin git repository git://sourceware.org/git/annobin.git. Annobin also exists as a pre-built rpm in the Fedora distribution (from Fedora 27 onwards) and can be installed with the command: dnf install annobin

The sources are divided up into several sub-directories:

  • plugin – The sources for the GCC plugin.
  • scripts – The example scripts.
  • tests – A testsuite for the plugin and scripts.
  • docs – Documentation.
  • config – Files necessary to configure the sources.

Only the plugin actually needs to be built, and the usual “configure; make” sequence should suffice. The plugin has several dependencies, although the only special one is that it must be built with a version of GCC that supports plugins and provides the header files that they need.

How To Extend It

The Watermark specification is designed to be extensible. Arbitrary notes can be added by anyone, or by any tool. They can be added at the time the binary is created, or at a later date. The easiest way is to create an assembler file with the note(s) to be added, and then assemble it to an object file. The file can then be included in the final link of the binary, or added to it by using the objcopy program (with its --merge-notes option).

A note in the assembler source file might look something like this:

.section .gnu.build.attributes
.dc.l .Lname_end - .Lname_start  # length of name field
.dc.l 0                          # length of description field
.dc.l 0x100                      # type = OPEN
.Lname_start:
.asciz "GA$<your-text-here>"     # name field
.Lname_end:

Or, if the note needs to cover a specific address range:

.section .gnu.build.attributes
.dc.l .Lname_end - .Lname_start  # length of name field
.dc.l 16                         # length of description field
.dc.l 0x100                      # type = OPEN
.Lname_start:
.asciz "GA$<your-text-here>"     # name field
.Lname_end:
.quad start_symbol               # description field
.quad end_symbol

Future Steps

The Annobin project is still in development.  Future plans include:

  • Adding the ability for the assembler to insert annotation notes of its own. This will allow notes to be recorded for files that are not compiled by GCC (for example, assembler source files or files compiled with LLVM).
  • Adding the ability to record source code hashes. During compilation each input file (header and source code) is hashed (using SHA-256?) and its name and hash value are stored in the compiled binary. A consumer can then use the stored hash values to verify that the source code they have is the same source code that was used to compile the binary.



The post Annobin – Storing Extra Information in Binaries appeared first on RHD Blog.

Fedora 27 Release Party at Mexico City

Posted by Fedora Community Blog on February 20, 2018 08:30 AM

On December 8, 2017, the ambassadors in Mexico City, Efren Robledo (srkraken) and Alberto Rodríguez (bt0dotninja), hosted a Fedora 27 release party. The party took place at UAM Azcapotzalco, in the basic sciences and engineering division. We had three main activities: an “Introducing Fedora 27” talk, a Q&A session, and a short trivia session with some gifts.

Mexico City F27 release party poster


Introducing Fedora 27

A short but dynamic talk by Efren about what’s new in Fedora 27 from the perspective of the casual user, the developer, and the system administrator. He also did a quick but very illustrative installation of Fedora 27.

Introducing Fedora 27 talk

SrKraken Talk

Q&A Session

The audience, mostly engineering students and teachers, asked many questions about various use cases, including scientific computing, web services, and comparisons of different Linux distributions. The session ended with details on creating a FAS account and a tour of WCIDFF and the Fedora Developer Portal.

Q&A session



We prepared Fedora / Linux / FOSS-related trivia with questions like:

  • What country has the first Tux monument?
  • What are Fedora’s four foundations?
  • Which page can guide you for becoming a Fedora contributor?

We gave away “I <3 Fedora” t-shirts; here are the winners:

Trivia winners


We are very glad about this release party because Fedora is becoming very popular on the campus. We have at least four computer labs dedicated to teaching computer science, networks, and electronics topics using Fedora 27, and this kind of event helps arouse curiosity about the Fedora Project and engage new contributors.

I would like to thank all the people who came this first time, and our speaker Efren Robledo (srkraken) for his great effort and commitment, and for giving us a great talk about the Fedora Project in general and this release in particular. I really enjoyed organizing this release party and I really hope this event becomes a tradition. See you at the F28 release party.

The post Fedora 27 Release Party at Mexico City appeared first on Fedora Community Blog.

Accessing libvirt VMs via telnet

Posted by Lukas "lzap" Zapletal on February 20, 2018 12:00 AM

Accessing libvirt VMs via telnet

There are many “tricks” floating around for connecting to a VM when you have a networking issue and ssh is not available. The idea is to use the serial console to get shell access. Here is how to do this properly with a RHEL 7 host and guest.

First, create a VM with its serial console configured as a remote TCP server. There are multiple options; I find a TCP server in ‘telnet’ mode the most flexible configuration because most scripting languages have the protocol built in. You can use virt-manager or virt-install to do that:

  <serial type="tcp">
    <source mode="connect" host="" service="4555"/>
    <protocol type="telnet"/>
    <target port="0"/>
  </serial>

Boot the VM and then enable getty:

$ systemctl enable serial-getty@ttyS0.service
$ systemctl start serial-getty@ttyS0.service

That’s all, access the console interactively:

$ telnet localhost 4555

To access ‘raw’ console (protocol type in the XML snippet above), use netcat or similar tool. Other options in libvirt are logfile, UDP, pseudo TTY, named pipe, unix socket or null. You get the idea.
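In place of netcat, the ‘raw’ console mode can also be scripted with a plain TCP socket. A minimal sketch (read_console_banner is a hypothetical helper; the host and port match the XML snippet above, and telnet mode would additionally require handling IAC negotiation bytes):

```python
import socket

def read_console_banner(host="localhost", port=4555, max_bytes=4096, timeout=5.0):
    """Connect to a VM serial console exported over TCP and return the
    first bytes the guest sends (e.g. a getty login prompt). Assumes
    the 'raw' protocol type; for 'telnet' you would also have to
    answer/strip IAC negotiation sequences."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.settimeout(timeout)
        return sock.recv(max_bytes)
```

With the VM from the snippet above running, calling read_console_banner() should return the getty login prompt.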

When creating multiple serial devices, only the first one (ttyS0) allows root login by default. To enable the second one, do:

$ echo ttyS1 >> /etc/securetty

That’s all for today.

RS-232-to-USB Connection

Posted by Alvaro Castillo on February 19, 2018 06:08 PM


In the following article, we will look at how to connect out of band from Linux to network devices such as a switch or a router.

What is an out-of-band connection?

It is a link between one device (the host) and another device we want to access, such as a router or a switch, using a special cable like an RS-232, a USB-to-MicroUSB, or an RS-232-to-USB adapter. This connection does not require any IP addressing and can only acce...

EMEA Ambassadors: 2017 Year in Review

Posted by Fedora Community Blog on February 19, 2018 08:30 AM

With 2018 now in full swing, people, companies, and organizations are taking stock not only of what worked during the past year, but also of budding trends and approaches to handling daily business. We can’t let this chance pass by, knowing that it could help us in our undertakings this year.

All through 2017, the Fedora community in the EMEA region was active promoting Fedora at local events, especially at release parties. It was a joy to read the event reports.

Key Highlights

Fedora Women Day 2017

Photo credit: Bara Bühnová


The Fedora Women Day aims to inspire, educate, and connect women and people from underrepresented communities interested in open source software, including the Fedora Project. Join us for presentations; information and discussions about contributing to open source and Fedora, career opportunities in open source and how to pursue them; and networking opportunities, including connecting with female contributors in open source communities.

By and large, there were a few in the region.


FOSDEM is a two-day event organised by volunteers to promote the widespread use of free and open source software. Taking place in the beautiful city of Brussels (Belgium), FOSDEM is widely recognised as the largest such conference in Europe.

It was a very busy conference. This edition featured 610 speakers, 669 events, and 55 tracks (24 speakers from Fedora). During the event, the live streaming page updated every few minutes to show you what was currently scheduled in each room.

Find Fedora at FOSDEM 2017!

<iframe class="wp-embedded-content" data-secret="CbiDQBHpcR" frameborder="0" height="338" marginheight="0" marginwidth="0" sandbox="allow-scripts" scrolling="no" security="restricted" src="https://fedoramagazine.org/find-fedora-at-fosdem-2017/embed/#?secret=CbiDQBHpcR" title="“Find Fedora at FOSDEM 2017!” — Fedora Magazine" width="600"></iframe>

Notable mention: The sponsors, volunteers, planners in fact everyone who made it there is a winner.

PyCon SK 2017: Fedora was there!

PyCon SK is a community-organized conference for the Python programming language. After successful editions in more than 40 countries, PyCon came to Slovakia in 2016. The second PyCon SK was held in Bratislava from the 10th to the 12th of March 2017. In short: it was a success.

Fedora was at PyCon SK 2017

<iframe class="wp-embedded-content" data-secret="8JXKuPcfhl" frameborder="0" height="338" marginheight="0" marginwidth="0" sandbox="allow-scripts" scrolling="no" security="restricted" src="https://communityblog.fedoraproject.org/fedora-pycon-sk-2017/embed/#?secret=8JXKuPcfhl" title="“Fedora was at PyCon SK 2017” — Fedora Community Blog" width="600"></iframe>

Notable mention: Jona Azizaj. You are just awesome!

Top Goals for 2018

  1. More events in African region: Apart from release events, there was little to no action on the ‘major’ event front for us. Ambassadors on the continent need to link up with EMEA.
  2. Encourage report writing: So we have something to fall back on and relive the moments and lessons.
  3. Better tracking of basic metrics: For example, involvement within the community, and a wiki page for things data.


In wrapping up, much has been done and there’s still much to do, so this is a call to all of us to put in more, step up from 2017 and let’s get it done. I want to thank every Ambassador in the region. Special mentions to Mitzelos, Nemanja, Sylvia, Jona, Justin, Giannis, Jiri, Hax, Rhea, Miro, and every event owner and volunteer.

PS: See you all at the next ambassadors meeting!








The post EMEA Ambassadors: 2017 Year in Review appeared first on Fedora Community Blog.

Slice of Cake #25

Posted by Brian "bex" Exelbierd on February 19, 2018 08:30 AM

A slice of cake

Last week as the FCAIC I:

  • Lots of paperwork and finances. It’s fiscal year-end so it is important that we process as much of our planned spending as possible. Anything we don’t land on time has to come off the top of any budget we are allocated for next year. The fiscal year ends on 28 February.
  • Spent some time with a colleague, Pavel Valena, talking about ruby and how to implement AsciiDoc crossref support in AsciiBinder. The short version is that crossrefs break because of the way AsciiBinder calls AsciiDoctor. This calling sequence can’t be changed without destroying the value proposition of AsciiBinder, so we brainstormed ideas for how to work around it. I think we have a good plan, now to write some code.
  • Lots of one-on-one conversations queuing up cool stuff for the new year. Also lots of preparation for being out of town next week and solely focused on documentation.

À la mode

  • Occupational Health and Safety training is critical. Doing it with translated materials is challenging, as they were translated but could not really be localized because of the heterogeneous nature of the target audience.
  • Picked up my Czech equivalent of a W2. This was way more exciting to me than this sentence reads to you. :)
  • Attended my first concert of the year, Milky Chance, in Prague. I am not normally a fan of live music, so it was interesting to do this again.

Cake Around the World

I’ll be traveling some and hope you’ll ping me for coffee if you’re nearby. If you’re considering attending and want to collaborate on a talk, let’s … talk :).

  • Fedora Docs and Fedora Mindshare FADs from 26 February - 7 March in Seville, Spain and Bolzano, Italy
  • Red Hat Summit from 8-10 May in San Francisco, CA, USA
  • OSCAL from 19-20 May in Tirana, Albania
  • Open Source Summit Japan (OSS Japan) from 20-22 June in Tokyo, Japan
  • LinuxCon + ContainerCon + CloudOpen (LC3) from 25-27 June in Beijing, China
  • DevConf.us from 17-19 August in Boston, MA, USA
  • Open Source Summit Europe (OSS Europe) from 22-24 October in Edinburgh, United Kingdom

Note: My attendance at a few of the events is still tentative, but I expect most will happen.

Learn to code with Thonny — a Python IDE for beginners

Posted by Fedora Magazine on February 19, 2018 08:00 AM

Learning to program is hard. Even when you finally get your colons and parentheses right, there is still a big chance that the program doesn’t do what you intended. Commonly, this means you overlooked something or misunderstood a language construct, and you need to locate the place in the code where your expectations and reality diverge.

Programmers usually tackle this situation with a tool called a debugger, which allows running their program step-by-step. Unfortunately, most debuggers are optimized for professional usage and assume the user already knows the semantics of language constructs (e.g. function call) very well.

Thonny is a beginner-friendly Python IDE, developed at the University of Tartu, Estonia, which takes a different approach: its debugger is designed specifically for learning and teaching programming.

Although Thonny is suitable for even total beginners, this post is meant for readers who have at least some experience with Python or another imperative language.

Getting started

Thonny has been included in the Fedora repositories since Fedora 27. Install it with sudo dnf install thonny or with a graphical tool of your choice (such as Software).

When first launching Thonny, it does some preparations and then presents an empty editor and the Python shell. Copy the following program text into the editor and save it into a file (Ctrl+S).

n = 1
while n < 5:
    print(n * "*")
    n = n + 1

Let’s first run the program in one go. To do this, press F5 on the keyboard. You should see a triangle made of asterisks appear in the shell pane.

A simple program in Thonny


Did Python just analyze your code and understand that you wanted to print a triangle? Let’s find out!

Start by selecting “Variables” from the “View” menu. This opens a table which will show us how Python manages the program’s variables. Now run the program in debug mode by pressing Ctrl+F5 (or Ctrl+Shift+F5 in XFCE). In this mode Thonny makes Python pause before each step it takes. You should see the first line of the program get surrounded with a box. We’ll call this the focus; it indicates the part of the code Python is going to execute next.

Thonny debugger focus


The piece of code you see in the focus box is called an assignment statement. For this kind of statement, Python is supposed to evaluate the expression on the right and store its value under the name shown on the left. Press F7 to take the next step. You will see that Python focused on the right part of the statement. In this case the expression is really simple, but for generality Thonny presents the expression evaluation box, which allows turning expressions into values. Press F7 again to turn the literal 1 into the value 1. Now Python is ready to do the actual assignment — press F7 again and you should see the variable n with value 1 appear in the variables table.

Thonny with variables table


Continue pressing F7 and observe how Python moves forward with really small steps. Does it look like something which understands the purpose of your code or more like a dumb machine following simple rules?
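The “dumb machine” nature of those small steps is also visible in the bytecode Python compiles the first assignment to; a quick look with the standard dis module (everything here is stock Python, run outside Thonny):

```python
import dis

# Compile the first line of the triangle program and disassemble it:
# even `n = 1` becomes two separate steps, a load of the constant
# followed by a store into the name -- the same evaluate-then-assign
# order Thonny animates with its focus box.
code = compile("n = 1", "<example>", "exec")
dis.dis(code)
```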

Function calls

A function call is a programming concept which often causes a great deal of confusion to beginners. On the surface there is nothing complicated: you give a name to a piece of code and refer to it (call it) somewhere else in the code. Traditional debuggers show us that when you step into the call, the focus jumps into the function definition (and later magically back to the original location). Is that the whole story? Do we need to care?

It turns out the “jump model” is sufficient only for the simplest functions. Understanding parameter passing, local variables, returning, and recursion all benefit from the notion of a stack frame. Luckily, Thonny can explain this concept intuitively without sweeping important details under the carpet.

Copy the following recursive program into Thonny and run it in debug mode (Ctrl+F5 or Ctrl+Shift+F5).

def factorial(n):
    if n == 0:
        return 1
    else:
        return factorial(n-1) * n

print(factorial(4))


Press F7 repeatedly until you see the expression factorial(4) in the focus box. When you take the next step, you see that Thonny opens a new window containing the function code, another variables table and another focus box (move the window to see that the old focus box is still there).

Thonny stepping through a recursive function


This window represents a stack frame, the working area for resolving a function call. Several such windows on top of each other form the call stack. Notice the relationship between the argument 4 at the call site and the entry n in the local variables table. Continue stepping with F7 and observe how new windows are created on each call and destroyed when the function code completes, and how the call site gets replaced by the return value.
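Those stacked windows map one-to-one onto Python stack frames. As a sketch, the growing call stack can also be observed with the standard inspect module; the depths parameter is an addition made here purely for illustration:

```python
import inspect

def factorial(n, depths=None):
    """Recursive factorial that also records the Python call-stack
    depth on every invocation. Each recursive call adds exactly one
    frame, matching the windows Thonny stacks on top of each other."""
    if depths is not None:
        depths.append(len(inspect.stack()))
    if n == 0:
        return 1
    return factorial(n - 1, depths) * n

depths = []
print(factorial(4, depths))  # 24
print(depths)                # five entries, each one deeper than the last
```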

Values vs. references

Now let’s run an experiment inside the Python shell. Start by typing in the statements shown in the screenshot below:

Thonny shell showing list mutation


As you can see, we appended to list b, but list a also got updated. You may know why this happened, but what’s the best way to explain it to a beginner?

When teaching lists to my students I tell them that I have been lying about Python’s memory model. It is actually not as simple as the variables table suggests. I tell them to restart the interpreter (the red button on the toolbar), select “Heap” from the “View” menu and run the same experiment again. If you do this, you see that the variables table doesn’t contain the values anymore — they actually live in another table called “Heap”. The role of the variables table is actually to map the variable names to addresses (or ID-s) which refer to rows in the heap table. As assignment changes only the variables table, the statement b = a copied only the reference to the list, not the list itself. This explains why we see the change via both variables.
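The same experiment can be replayed as a plain script, with id() standing in for the ID column of Thonny’s heap table:

```python
# A minimal re-run of the shell experiment: `b = a` copies the
# reference, not the list, so both names point at the same heap object.
a = [1, 2, 3]
b = a
b.append(4)

print(a)               # [1, 2, 3, 4] -- a "changed" too
print(a is b)          # True: one object, two names
print(id(a) == id(b))  # True: same ID, i.e. the same row in the heap table
```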

Thonny in heap mode
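The shell experiment can be condensed to a few lines; id() exposes the addresses that the variables table maps names to (a sketch of the idea, independent of Thonny):

```python
# Assignment copies the reference, not the list: both names end up
# mapping to the same heap object, so mutating via b is visible via a.
a = [1, 2, 3]
b = a                   # copies the reference only
b.append(4)             # mutates the one shared list

print(a)                # [1, 2, 3, 4] -- a "changed" too
print(id(a) == id(b))   # True: same object id, i.e. same heap row
```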

(Why do I postpone telling the truth about the memory model until the topic of lists? Does Python store lists differently compared to floats or strings? Go ahead and use Thonny’s heap mode to find out! Tell me in the comments what you think!)

If you want to understand the reference system more deeply, copy the following program into Thonny and small-step (F7) through it with the heap table open.

def do_something(lst, x):
    lst.append(x)  # illustrative body: mutates the list the caller passed in

a = [1,2,3]
n = 4
do_something(a, n)

Even if the “heap mode” shows us the authentic picture, it is rather inconvenient to use. For this reason, I recommend you now switch back to normal mode (unselect “Heap” in the View menu), but remember that the real model includes variables, references and values.


The features I touched on in this post were the main reason for creating Thonny. It’s easy to form misconceptions about both function calls and references, but traditional debuggers don’t really help in reducing the confusion.

Besides these distinguishing features, Thonny offers several other beginner-friendly tools. Please look around on Thonny’s homepage to learn more!

SwissPost putting another nail in the coffin of Swiss sovereignty

Posted by Daniel Pocock on February 18, 2018 10:17 PM

A few people have recently asked me about the SwissID, as SwissPost has just been sending spam emails out to people telling them "Link your Swiss Post user account to SwissID".

This coercive new application of technology demands users' email addresses and mobile phone numbers "for security". A web site coercing people to use text messages "for security" has quickly become a red flag for most people, and many blogs have already covered why it is only an illusion of security, putting your phone account at risk so companies can profit from another vector for snooping on you.

SwissID is not the only digital identity solution in Switzerland but as it is run by SwissPost and has a name similar to another service it is becoming very well known.

In 2010 they began offering a solution which they call SuisseID (notice the difference? They are pronounced the same way.) based on digital certificates and compliant with Swiss legislation. Public discussion focussed on the obscene cost with little comment about the privacy consequences and what this means for Switzerland as a nation.

Digital certificates often embed an email address in the certificate.

With SwissID, however, they have a web site that looks like little more than vaporware, giving no details at all about whether certificates are used. It appears they are basically promoting an app that is designed to harvest the email addresses and phone numbers of any Swiss people who install it, lulling them into that folly by using a name that looks like their original SuisseID. If it looks like phishing, if it feels like phishing, and if it smells like phishing to any expert who takes a brief sniff of their FAQ, then what else is it?

The thing is, the original SuisseID runs on a standalone smartcard, so it doesn't need your mobile phone number, doesn't need permissions to all the data in your phone, and isn't limited to working in areas with mobile phone signal.

The emails currently being sent by SwissPost tell people they must "Please use a private e-mail address for this purpose" but they don't give any information about the privacy consequences of creating such an account or what their app will do when it has access to read all the messages and contacts in your phone.

The actions you can take that they didn't tell you about

  • You can post a registered letter to SwissPost and tell them that for privacy reasons, you are immediately retracting the email addresses and mobile phone numbers they currently hold on file and that you are exercising your right not to give an email address or mobile phone number to them in future.
  • If you do decide you want a SwissID, create a unique email address for it and only use that email address with SwissPost so that it can't be cross-referenced with other companies. This email address is also like a canary in a coal mine: if you start receiving spam on that email address then you know SwissPost/SwissID may have been hacked or the data has been leaked or sold.
  • Don't install their app and if you did, remove it and you may want to change your mobile phone number.

Oddly enough, none of these privacy-protecting ideas were suggested in the email from SwissPost. Whose side are they on?

Why should people be concerned?

SwissPost, like every postal agency, has seen traditional revenues drop and so they seek to generate more revenue from direct marketing and they are constantly looking for ways to extract and profit from data about the public. They are also a huge company with many employees: when dealing with vast amounts of data in any computer system, it only takes one employee to compromise everything: just think of how Edward Snowden was able to act alone to extract many of the NSA's most valuable secrets.

SwissPost is going to great lengths to get accurate data on every citizen and resident in Switzerland, including deploying an app to get your mobile phone number and demanding an email address when you use their web site. That also allows them to cross-reference with your IP addresses.

  • Any person or organization who has your email address or mobile number may find it easier to get your home address.
  • Any person or organization who has your home address may be able to get your email address or mobile phone number.
  • When you call a company from your mobile phone and their system recognizes your phone number, it becomes easier for them to match it to your home address.
  • If SwissPost and the SBB successfully convince a lot of people to use a SwissID, some other large web sites may refuse to allow access without getting you to link them to your SwissID and all the data behind it too. Think of how many websites already try to coerce you to give them your mobile phone number and birthday to "secure" your account, but worse.

The Google factor

The creepiest thing is that over seventy percent of people in Switzerland are apparently using Gmail addresses, and these will be a dependency of their registration for SwissID.

Given that SwissID is being promoted as a solution compliant with ZertES legislation that can act as an interface between citizens and the state, the intersection with such a powerful foreign actor as Gmail is extraordinary. For example, if people are registering to vote in Switzerland's renowned referendums and their communication is under the surveillance of a foreign power like the US, that is a mockery of democracy and it makes the allegations of Russian election hacking look like child's play.

Switzerland's referendums, decentralized system of Government, part-time army and privacy regime are all features that maintain a balance between citizen and state: by centralizing power in the hands of SwissID and foreign IT companies, doesn't it appear that the very name SwissID is a mockery of the Swiss identity?

Yellow in motion

No canaries were harmed in the production of this blog.

Java on Port 443

Posted by Adam Young on February 18, 2018 02:54 AM

I’ve been working on setting up a Java-based SAML provider. This means that the application needs to handle requests and responses over HTTPS. And, since this is often deployed in data centers where non-standard ports are blocked, HTTPS really needs to be supported on the proper port, which is 443. Here is the range of options.

Let's assume the app is being served by Tomcat, although this goes for any HTTP server, especially the interpreter-based ones.

You have two choices.

  1. Run Tomcat to listen and serve on port 443 directly
  2. run a proxy in front of it.

For proxies, you have three easy choices, and many others, if you are running on Fedora/RHEL/CentOS.

  1. iptables: listen on port 443 and forward to the local port where Tomcat listens for HTTPS
  2. Apache, forwarding either HTTP or AJP
  3. HAProxy, forwarding HTTP

Each of those has configuration-specific issues. I am not going to go deep into them here.

Let's return to the case where you want Tomcat to listen and respond directly on port 443.

Your first, and worst, option is to run as root. Only root is able to listen on ports below 1024 on a default Linux setup.

Apache (and the others) does something like this. But it uses Unix-specific mechanisms to drop privileges. So when you run ps, you can see that the HTTPD process is running as apache, nobody or httpd, depending on your distro. Basically, the process starts as root, listens on port 443, and then tells the kernel to downgrade its user ID to a less privileged one. It might change groups too, depending on how it's coded.

Java could potentially do this, but it would take a JNI call to make the appropriate system call. Tomcat can’t really handle that. It also prevents you from re-opening a closed connection. While Apache tends to fork a new process to handle that problem, Tomcat is not engineered that way. You might be coding yourself into a corner.

It turns out that the application does not need access to everything that root does. And this is a pattern that is not restricted to network listeners. Thus, a few kernel versions ago, “capabilities” were added to the kernel. This seems like a better solution. Specifically, our application needs CAP_NET_BIND_SERVICE:

Bind a socket to Internet domain privileged ports
(port numbers less than 1024).

Can we add this to a Tomcat app?

Let's do a little test. Instead of Tomcat, we can use something simpler: the EchoServer code used in a Princeton computer science class. Download EchoServer.java, In.java and Out.java.

Compile using

javac EchoServer.java

And run using

java EchoServer 4444

In another window, you can telnet to the server and type in a string, which will be echoed back to you.

$ telnet localhost 4444
Trying ::1...
Connected to localhost.
Escape character is '^]'.

If you Ctrl C the echo server, you will close the connection.

OK, what happens if we try this on a port under 1024? Let's try. First, edit EchoServer.java so it is listening on port 400, not 4444.

$ diff -u EchoServer.java.orig EchoServer.java
--- EchoServer.java.orig 2018-02-17 18:09:42.846674768 -0500
+++ EchoServer.java 2018-02-17 18:09:57.211684501 -0500
@@ -26,7 +26,7 @@
 public static void main(String[] args) throws Exception {
 // create socket
- int port = 4444;
+ int port = 400;
 ServerSocket serverSocket = new ServerSocket(port);
 System.err.println("Started server on port " + port);

Recompile and run:

$ java EchoServer 400
Exception in thread "main" java.net.BindException: Permission denied (Bind failed)
 at java.net.PlainSocketImpl.socketBind(Native Method)
 at java.net.AbstractPlainSocketImpl.bind(AbstractPlainSocketImpl.java:387)
 at java.net.ServerSocket.bind(ServerSocket.java:375)
 at java.net.ServerSocket.<init>(ServerSocket.java:237)
 at java.net.ServerSocket.<init>(ServerSocket.java:128)
 at EchoServer.main(EchoServer.java:30)
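The same kernel check can be reproduced outside Java; here is a quick Python sketch of the failure mode (assuming it runs as an unprivileged user):

```python
# Binding below port 1024 without CAP_NET_BIND_SERVICE (or root) fails
# with EACCES, which Java surfaces as the BindException above.
import socket

def try_bind(port):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind(("127.0.0.1", port))
        return "ok"
    except PermissionError:
        return "permission denied"
    finally:
        s.close()

print(try_bind(0))    # "ok": any user may take an ephemeral port
print(try_bind(400))  # denied for an unprivileged user
```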

How can we add CAP_NET_BIND_SERVICE?  We have the setcap utility:

setcap – set file capabilities

However… it turns out that this is set on an executable. What executable? It can’t be a shell script, as the capabilities are dropped when the shell executes the embedded interpreter. We would have to set it on the Java executable itself. This is, obviously, a dangerous approach, as it means ANY Java program can listen on any port under 1024, but let's see if it works.

First we need to find the Java executable:

[ayoung@ayoung541 echo]$ which java
[ayoung@ayoung541 echo]$ ls -al `which java`
lrwxrwxrwx. 1 root root 22 Feb 15 09:10 /usr/bin/java -> /etc/alternatives/java
[ayoung@ayoung541 echo]$ ls -al /etc/alternatives/java
lrwxrwxrwx. 1 root root 72 Feb 15 09:10 /etc/alternatives/java -> /usr/lib/jvm/java-1.8.0-openjdk-

Can we use this directly? Let's see:

$ /usr/lib/jvm/java-1.8.0-openjdk- EchoServer 
Exception in thread "main" java.net.BindException: Permission denied (Bind failed)
 at java.net.PlainSocketImpl.socketBind(Native Method)
 at java.net.AbstractPlainSocketImpl.bind(AbstractPlainSocketImpl.java:387)
 at java.net.ServerSocket.bind(ServerSocket.java:375)
 at java.net.ServerSocket.<init>(ServerSocket.java:237)
 at java.net.ServerSocket.<init>(ServerSocket.java:128)
 at EchoServer.main(EchoServer.java:30)

Looks OK. Let's try setting the capability:

[ayoung@ayoung541 echo]$ sudo /sbin/setcap cap_net_bind_service=+ep /usr/lib/jvm/java-1.8.0-openjdk-

[ayoung@ayoung541 echo]$ /usr/lib/jvm/java-1.8.0-openjdk- EchoServer 
/usr/lib/jvm/java-1.8.0-openjdk- error while loading shared libraries: libjli.so: cannot open shared object file: No such file or directory
[ayoung@ayoung541 echo]$ java EchoServer 
java: error while loading shared libraries: libjli.so: cannot open shared object file: No such file or directory

Something does not like that capability. We can unset it and get the same result as before.

[ayoung@ayoung541 echo]$ sudo /sbin/setcap cap_net_bind_service=-ep /usr/lib/jvm/java-1.8.0-openjdk-
[ayoung@ayoung541 echo]$ java EchoServer 
Exception in thread "main" java.net.BindException: Permission denied (Bind failed)
 at java.net.PlainSocketImpl.socketBind(Native Method)
 at java.net.AbstractPlainSocketImpl.bind(AbstractPlainSocketImpl.java:387)
 at java.net.ServerSocket.bind(ServerSocket.java:375)
 at java.net.ServerSocket.<init>(ServerSocket.java:237)
 at java.net.ServerSocket.<init>(ServerSocket.java:128)
 at EchoServer.main(EchoServer.java:30)

It seems I am not the first person to hit this, and a step-by-step fix is laid out in the answer here.

Once I add exactly the path:

 $ cat /etc/ld.so.conf.d/java.conf

And run

[ayoung@ayoung541 echo]$ java EchoServer 
Started server on port 400

OK, that works.

Would you want to do that? Probably not. If you did, you would probably want a special, limited JDK available only to the application.

It is also possible to build a binary that kicks off the Java process, add the capability to that, and further limit what could call this code.  There is still the risk of someone running it with a different JAVA_PATH and getting different code in place, and using that for a privilege elevation. The only secure path would be to have a custom classloader that read the Java code from a segment of the file; static linking of a Jar file, if you will. And that might be too much. However, all these attacks are still possible with the Java code the way it is set up now, just that we would expect a system administrator to lock down what code could be run after configuring, say, an HTTPD instance as a reverse proxy.

Fleet Commander is looking for a GSoC student to help us take over the world

Posted by Alberto Ruiz on February 18, 2018 12:14 AM

Fleet Commander has seen quite a lot of progress recently, which I should blog about soon. For those unaware, Fleet Commander is an effort to make GNOME great for IT administrators in large deployments, allowing them to deploy desktop and application configuration profiles across hundreds of machines with ease through a web administration UI based on Cockpit. It is mostly implemented in Python.

One important aspect of large deployments is their identity management systems, which handle large numbers of users, groups and hosts. On the free software end, FreeIPA is the project that we’ve integrated Fleet Commander with. FreeIPA is an administrative interface and set of APIs that integrates an LDAP directory, DNS and other related services together. Another way to describe FreeIPA is as Linux's counterpart to Microsoft's Active Directory.
And that’s precisely what the GSoC idea we want a student for is about: we think the best way to encourage GNOME usage in large organizations is to have tools that ease migration. Many organizations may have an existing Microsoft Windows deployment managed by Active Directory, so we want Fleet Commander to be able to use Active Directory, as well as FreeIPA, as the identity management system and data store for the profile data (by using Group Policy Objects).

This project would be mostly implemented in Python, and it will require talking to AD’s LDAP server and CIFS/Samba storage; we are fairly confident that it can be achieved during the GSoC term.

If you are looking for a fun GSoC project, you’re skilled in Python, and you are interested in becoming a GNOME contributor by helping it reach a larger user base (taking it one step closer to world domination) while making some money in the process, you should apply!

We’re hanging around in the #fleet-commander IRC channel in irc.freenode.net if you want to approach us to get a better understanding of the idea, look for ogutierrez, fidencio and aruiz if you have any questions.

Fedora Atomic Workstation for development

Posted by Matthias Clasen on February 16, 2018 11:47 PM

I’m frequently building GTK+. Since I am using Fedora Atomic Workstation now, I have to figure out how to do GTK+ development in this new environment. GTK+ may be a good example for the big middle ground of things that are not desktop applications, but also not part of the OS itself.


Last week I figured out how to use a buildah container to build release tarballs for GNOME modules, and I actually used that setup to produce a GTK+ release as well.

But for fixing bugs and other development, I generally need to run test cases and demo apps, like the venerable gtk-demo. Running these outside the container does not work, since the GTK+ libraries I built are linked against libraries that are installed inside the container and not present on the host, such as libvulkan. I could of course resort to package layering to install them on the host, but that would miss the point of using Atomic Workstation.

The alternative is running the demo apps inside the container, which should work – it's the same filesystem that they were built in. But they can’t talk to the compositor, since the Wayland socket is on the outside: /run/user/1000/wayland-0. I tried to work around this by making the socket visible in the container, but my knowledge of container tools and buildah is too limited to make it work. My apps still complain about not being able to open a display connection.

What now? I decided that while GTK+ is not a desktop application, I can treat my test apps like one and write a flatpak manifest for them. This way, I can use GNOME Builder's awesome Flatpak support to build and run them, like I already did for GNOME Recipes.

Here is a minimal flatpak manifest that works:

{
    "id" : "org.gtk.gtk-demo",
    "runtime" : "org.gnome.Sdk",
    "runtime-version" : "master",
    "sdk" : "org.gnome.Sdk",
    "command" : "gtk4-demo",
    "finish-args" : [],
    "modules" : [
        {
            "name" : "graphene",
            "buildsystem" : "meson",
            "builddir" : true,
            "sources" : [
                {
                    "type" : "git",
                    "url" : "https://github.com/ebassi/graphene.git"
                }
            ]
        },
        {
            "name" : "gtk+",
            "buildsystem" : "meson",
            "builddir" : true,
            "sources" : [
                {
                    "type" : "git",
                    "url" : "https://gitlab.gnome.org/GNOME/gtk.git"
                }
            ]
        }
    ]
}

After placing this JSON file into the toplevel directory of my GTK+ checkout, it appears as a new build configuration in GNOME Builder:

If you look closely, you’ll notice that I added another manifest, for gtk4-widget-factory. You can have multiple manifests in your tree, and GNOME builder will let you switch between them in the Build Preferences.

After all this preparation, I can now hit the play button and have my demo app run right from inside GNOME Builder. Note that the application is running inside a Flatpak sandbox, using the runtime that was specified in the Build Preferences, so it is cleanly separated from the OS. And I can easily build and run against different runtimes, to test compatibility with older GNOME releases.

This may be the final push that makes me switch to GNOME Builder for day-to-day development on Fedora Atomic Workstation: It just works!

On Python Shebangs

Posted by Michael Catanzaro on February 16, 2018 08:21 PM

So, how do you write a shebang for a Python program? Let’s first set aside the python2/python3 issue and focus on whether to use env. Which of the following is correct?

#!/usr/bin/env python

#!/usr/bin/python

The first option seems to work in all environments, but it is banned in popular distros like Fedora (and I believe also Debian, but I can’t find a reference for this). Using env in shebangs is dangerous because it can result in system packages using non-system versions of python. python is used in so many places throughout modern systems, it’s not hard to see how using #!/usr/bin/env in an important package could badly bork users’ operating systems if they install a custom version of python in /usr/local. Don’t do this.

The second option is broken too, because it doesn’t work in BSD environments. E.g. in FreeBSD, python is installed in /usr/local/bin. So FreeBSD contributors have been upstreaming patches to convert #!/usr/bin/python shebangs to #!/usr/bin/env python. Meanwhile, Fedora has begun automatically rewriting #!/usr/bin/env python to #!/usr/bin/python, but with a warning that this is temporary and that use of #!/usr/bin/env python will eventually become a fatal error causing package builds to fail.

So obviously there’s no way to write a shebang that will work for both major Linux distros and major BSDs. #!/usr/bin/env python seems to work today, but it’s subtly very dangerous. Lovely. I don’t even know what to recommend to upstream projects.

Next problem: python2 versus python3. By now, we should all be well-aware of PEP 394. PEP 394 says you should never write a shebang like this:

#!/usr/bin/env python

unless your python script is compatible with both python2 and python3, because you don’t know what version you’re getting. Your python script is almost certainly not compatible with both python2 and python3 (and if you think it is, it’s probably somehow broken, because I doubt you regularly test it with both). Instead, you should write the shebang like this:

#!/usr/bin/env python2
#!/usr/bin/env python3

This works as long as you only care about Linux and BSDs. It doesn’t work on macOS, which provides /usr/bin/python and /usr/bin/python2.7, but still no /usr/bin/python2 symlink, even though it’s now been six years since PEP 394. It’s hard to overstate how frustrating this is.

So let’s say you are WebKit, and need to write a python script that will be truly cross-platform. How do you do it? WebKit’s scripts are only needed (a) during the build process or (b) by developers, so we get a pass on the first problem: using /usr/bin/env should be OK, because the scripts should never be installed as part of the OS. Using #!/usr/bin/env python — which is actually what we currently do — is unacceptable, because our scripts are python2 and that’s broken on Arch, and some of our developers use that. Using #!/usr/bin/env python2 would be dead on arrival, because that doesn’t work on macOS. Seems like the option that works for everyone is #!/usr/bin/env python2.7. Then we just have to hope that the Python community sticks to its promise to never release a python2.8 (which seems likely).
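Whatever shebang a project settles on, a runtime guard inside the script can turn a wrong interpreter into a clear error instead of a confusing SyntaxError later on; a small sketch of the idea:

```python
# Fail fast if the interpreter's major version is not the expected one.
import sys

def version_ok(required_major):
    """True if the running interpreter's major version matches."""
    return sys.version_info[0] == required_major

if not version_ok(3):
    sys.exit("This script requires Python 3, got %d.%d" % sys.version_info[:2])
```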


Bodhi 3.3.0 released

Posted by Bodhi on February 16, 2018 07:49 PM


Features

  • Test gating status is now polled whenever an update is created or edited (#1514).
  • Check the state of updates when they are missing signatures during bodhi-push (#1781).
  • There is now a web interface that displays the status of running composes (#2022).
  • There is now an API for waiving test results (d52cc1a).
  • Update states are now documented (6f4a48a).
  • Testing documentation was written (f1f2d01).
  • A man page for bodhi-expire-overrides was written (e4402a3).
  • A man page for bodhi-manage-releases was written (84d0166).
  • Update status and request fields are now indexed for more performant searching
  • updateinfo.xml now includes the severity level on security updates (8c9c1bf).
  • Only request the global_component field for critpath PDC lookups (46f3588).
  • Newer updates are checked first by bodhi-check-policies (c894255).


Bug fixes

  • Ensure that issued_date and updated_date are always present in metadata (#2137).
  • A link describing markdown syntax was fixed (70895e5).

Development improvements

  • Some validation code was cleaned up to share code (9f17b6c).
  • The database now has a content type enum for containers (#2026).
  • Docblocks were written for more code.


The following developers contributed to Bodhi 3.3.0:

  • Matt Jia
  • Jonathan Lebon
  • Yadnyawalkya Tale
  • Patrick Uiterwijk
  • Till Maas
  • Ken Dreyer
  • Randy Barlow

Basic Alcatel OmniSwitch 6850 commands

Posted by Alvaro Castillo on February 16, 2018 07:00 PM

A guide to basic Alcatel OmniSwitch 6850 commands

Welcome to this new post, in which we will talk about working with Alcatel switches. This company has been in the telecommunications market for more than 13 years, alongside other devices such as IP cameras, tablets and smartphones.


While the main model is the OS6850, we can find the following additional suffixes, which highlight certain properties of that particular 6850 model, as well as identifying the number of...

glib2 will use native libs to print dates

Posted by Robert Antoni Buj Gelonch on February 16, 2018 02:13 PM

glib2 will use native libc functions to print dates once this change lands. Following the glibc update (a.k.a. libc6), it will be a great improvement in GNOME, but also in other desktop environments like Xfce and MATE, because they also use glib2 to display dates.

Translators should update their translations where needed, and developers should adapt their code (minor changes needed, or none).

glibc 2.27 & periphrastic genitive in date

Posted by Robert Antoni Buj Gelonch on February 16, 2018 01:44 PM

glibc 2.27 adds the missing support for printing dates using the periphrastic genitive form according to your locale, which is available in other libc implementations like the BSD libc.

To add support for your locale you should file a bug like this one, and review the date modifiers in translations:

  • %O* – don’t use the periphrastic genitive form according to your locale.
    • %OB: “abril” in the example
    • %Ob: “abr.” in the example
  • %B – full month name according to your locale (it’s for use with a periphrastic genitive form).
    • “d’abril” in the example
  • %b – abbreviated month name according to your locale (it’s for use with a periphrastic genitive form).
    • “d’abr.” in the example

For example, in a Python session using the ca_ES locale:

Python 3.6.4 (default, Feb  1 2018, 11:03:59) 
[GCC 8.0.1 20180127 (Red Hat 8.0.1-0.6)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from time import gmtime, strftime, mktime
>>> import locale
>>> locale.setlocale(locale.LC_TIME, "ca_ES.utf8")
>>> t = (2018, 4, 21, 20, 19, 18, 1, 48, 0)
>>> t = mktime(t)
>>> strftime("%A, %d %B de %Y a les %H:%M:%S", gmtime(t))
'dissabte, 21 d’abril de 2018 a les 19:19:18'
>>> strftime("%a %d %b de %Y", gmtime(t))
'ds. 21 d’abr. de 2018'
>>> strftime("%OB de %Y", gmtime(t))
'abril de 2018'
>>> strftime("%Ob de %Y", gmtime(t))
'abr. de 2018'

LVFS will block old versions of fwupd for some firmware

Posted by Richard Hughes on February 16, 2018 11:59 AM

The ability to restrict firmware to specific versions of fwupd and the existing firmware version was added to fwupd in version 0.8.0. This functionality was added so that you could prevent the firmware being deployed if the upgrade was going to fail, either because:

  • The old version of fwupd did not support the new hardware quirks
  • If the upgraded-from firmware had broken upgrade functionality

The former is solved by updating fwupd, the latter is solved by following the vendor procedure to manually flash the hardware, e.g. using a DediProg to flash the EEPROM directly. Requiring a specific fwupd version is used by the Logitech Unifying receiver update for example, and requiring a previous minimum firmware version is used by one (soon to be two…) laptop OEMs at the moment.

Although fwupd 0.8.0 was released over a year ago, it seems people are still downloading firmware with older fwupd versions. 98% of the downloads from the LVFS are initiated from gnome-software, with the other 2% of people using the fwupdmgr command line or manually downloading the .cab file from the LVFS with a browser.

At the moment, fwupd is being updated in Ubuntu xenial to 0.8.3, but it is still stuck at the long-obsolete 0.7.4 in Debian stable. Fedora, of course, is 100% up to date, with 1.0.5 in F27 and 0.9.6 in F26 and F25. Even RHEL 7.4 has 0.8.2, and RHEL 7.5 will have 1.0.1.

Detecting the fwupd version also gets slightly more complicated, as the user agent only gives us the ‘client version’ rather than the ‘fwupd version’ in most software. This means we have to use the minimum fwupd version required by the client when choosing whether it is safe to provide the file. GNOME Software version 3.26.0 was the first version to depend on fwupd ≥ 0.8.0, and so anything newer than that would be safe. This causes a slight problem, as Ubuntu will be shipping an old gnome-software 3.20.x and a new-enough fwupd 0.8.x and so will be blacklisted for any firmware that requires a specific fwupd version. Which includes the Logitech security update…

The user agent we get from gnome-software is gnome-software/3.20.1, so we can’t do anything very clever. I’m obviously erring on the side of not bricking a tiny amount of laptop hardware rather than making a lot of Logitech hardware secure on Ubuntu 16.04, given that the next LTS, 18.04, is out on April 26th anyway. This means people might start getting a "detected fwupd version too old" message on the console if they try updating on 16.04.

A workaround for xenial users might be if someone at Canonical could include this patch, which changes the user agent in the gnome-software package to gnome-software/3.20.1 fwupd/0.8.3, and I can add a workaround in the LVFS download code to parse that. Comments welcome.
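As a sketch of the parsing involved (illustrative code and function name, not the actual LVFS implementation):

```python
# Pull the fwupd version out of a product-token user agent such as
# "gnome-software/3.20.1 fwupd/0.8.3"; return None when it is absent,
# in which case the server must fall back to the client token.
def fwupd_version(user_agent):
    for token in user_agent.split():
        name, _, version = token.partition("/")
        if name == "fwupd" and version:
            return version
    return None

print(fwupd_version("gnome-software/3.20.1 fwupd/0.8.3"))  # 0.8.3
print(fwupd_version("gnome-software/3.20.1"))              # None
```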

A gift from the wild elephants to Kandiyapitawewa Nalanda

Posted by Danishka Navin on February 16, 2018 10:52 AM
Although I have taken part in various programmes around the country lately, I felt I really had to write about what happened the other day.

On July 19th I went to the Nalanda school in Kandiyapitawewa with a group of people I had never met before.

Leaving home at about 3.30 in the morning, we met up in Colombo with friends from the Born Free Foundation, an organisation that helps improve the living conditions of people troubled by wild elephants, and travelled along the Balangoda-Thanamalwila road, reaching the area belonging to the Hambegamuwa Divisional Secretariat just after 1 pm. As we arrived, the children who had been sitting their term-test exams were heading home, and some of the little ones were already on their way to meet us...

The smiles that greeted us as we entered the school

By this time the computer lab (originally built as a library) had not been given electricity, nor had the necessary internal wiring been done. At the ceremony held at the school the other day, it was said that although the government had granted 50,000 rupees for the building, the principal, the teachers, the students and the villagers, together with other outside helpers, finished the work themselves. Parents even collected and donated the bricks needed for the building.
Teachers trying to open an OLPC XO device for the first time...

From the moment the computers were brought to the Kandiyapitawewa Nalanda school on the afternoon of August 1st, the school's little ones made sure the computer lab was never left alone. Not only did several of them join in installing the Hanthana operating system, but the ones who stayed with us until about 9.30 at night practically had to be forced to go home, for their own safety.

All the children from the nearby houses came running to the school in the evening...

Everyone waiting for their turn to be the first to work...

Arithmetic games are not easy... counting on fingers when all else fails...

Hanthana Linux 17 (the latest release) being installed...

Just as we had hoped, the students worked cooperatively, learning together, from the very first day.

They did not hesitate to teach others what they had learned, or to ask and learn what someone else knew

මේ කාර්යයෙදි ලැප්ටොප් පරිගණක ලබා දුන්නෙ යොහාන් සුමතිපාල, ඔහු සිංහල කතා කිරීමට නොදන්නා වුවත් තමන්ගෙ රට ගැන කැක්කුමක් තියෙන දැනට ඇමරිකාවෙ වාසය කරන අවුරුදු 17ක සිසුවෙක්.  පසුගිය අගෝස්තු මස දෙවැනිදා පැවැත්වුණු උත්සවයට යොහාන් ඔහුගෙ මැණියන්, සහෝදරයා සමඟ සහෝදරිය සමඟ එක් වුනා. ඌව පලාත් අධ්‍යාපන කාර්යයාලයේ, වැල්ලවාය කලාප අධ්‍යාපන කාර්යයාලයේ නිළධරින් සහ අවට පාසල් වල විදුහල් පතිවරුන් මෙන්ම ගම්වාසීන් මෙම වැඩසටහනට එක්ව සිටියා.

කණ්ඩියපිටවැව නාලන්ද පාසලෙ දරුවන්ගෙන් සහ ගුරුවරුන්ගෙන් උණුසුම් පිළිගැණීමක්

පන්සිල් සමාදන්වෙමින්...

පිළිගැණීමෙ ගීතය

 යොහාන් සුමතිපාල  (වෛද්‍ය දීපානි ජයන්තා විසින් යොහාන්ගෙ අදහස් සිංහල බසට පෙරලනලදි)


ලොකු ලොකු අයගෙ හරබර කතා අතරතුරු පොඩිත්තන්ගෙ ලස්සන රැගුම් වැඩසටහන තවත් ලස්සන කලා.

පරිගණක විද්‍යාගාරයේ උපරිම ප්‍රයෝජනය ගුරු සිසු දෙපිරිස ලබා ගන්නා බව කැට තියල කියන්න පුලුවන්.. මොකද ලැබ් එක ඇරපු මුල්ම දවසේ මූලික වැඩසටහනෙන් පසුව සවස 2 සිට සවස 5.30 දක්වා ගුරුවරුන් සඳහා වැඩසටහනක් ක්‍රියාත්මක උනා.. නමුත් වැවට ගිහින් නාගෙන එද්දි ලොකු සර් නිකමට වගේ අහනව අපිට රෑටත් ටිකක් පුරුදු වෙන්න පුලුවන් නේද කියල. ඔහුගෙ උනන්දුව නිසාම අවට නිවෙස් වල සිටි  ගුරුවරු 5 දෙනෙක් මැදියම් රැය වෙනකන් දිවා කාලයෙ ඉගෙන ගත් දැ යලි පුහුණු වීමටත් දරුවන්ගෙ අධ්‍යාපන කටයුතු සඳහා හන්තාන පද්ධතියෙන් වැදගත්වන පාඩම් සොයා බැලීමටත් එක් වුනා.

කණ්ඩියපිටවැව නාලන්දා විදුලේ තොරතුරු තාක්‍ෂණ ප්‍රජාව...

අපිට අවශ්‍ය දේ අපිම ඉගෙන ගන්න ඔනා කියන හැගිමෙන් එක් කෙනෙක් ඉගෙන ගන්න දේ අනිකාටත් කියා දීමට ඔවුන් පසුබට උනෙ නැ.

පහුවදා උදැසන යලිත් පාසල්දරුවන්ට  කෙටි වැඩසටහනකින්  පළමුපුහුණු  වාරය නිම කළේ මේ අගෝස්තු නිවාඩුවෙ යළිත් හන්තාන කණ්ඩායමේ සාමාජිකයෙක් එහි යන දිනයක්දස්තිවරව  පවසමින්.  මෙතෙක් හන්තාන ලිනක්ස් කණ්ඩායම සිදු නොකල, එක් පාසලකට මාස  6ක් තිස්සේ වැඩසටහන් ක්‍රියාත්මක කිරීමක් මෙම පාසල ඉලක්ක කරගෙන සිදු කරන්නෙ යොහාන්ගෙ සහ born free මිතුරන්ගෙ, ගම්වාසින්ගෙ, පාසලෙ මහන්සිය අපතෙ නොයා, හැකි උපරිම ඵලදායිතාවයකින් යුතුව පරිගණක විද්‍යාගාරය භාවිතයට මග පෙන්වීමටයි.

මෙම පාසලට සාමාන්‍ය ලැප්ටොප් පරිගණක වලට අමතරව OLPC (One Laptop Per Child) ව්‍යාපෘතිය මඟින් XO පරිගණක දෙකක් ලැබුණු අතර අනෙක් ලැප්ටොප් පරිගණක මඟින්ම OLPC හි ඇති Sugar වැඩතලය ඇති නිසා ප්‍රාථමික සිසුන්ටත් මෙම පරිගණක විද්‍යාගාරයේ ඉඩක් ලැබෙනු ඇත.

දැන් මේ සටහන කිව්ව ඔබහිතනව ඇති මොන හරුපයක්ද, අද මාතෘකාවට දාල තියෙන්නෙ කියල. :D ඇත්තටම කියනව නම් මාතෘකාවෙ කිසි වැරද්දක් නැ...

මොකද මේ වැඩසටහන සිද්ධ උනේ අලි කරදර වලින් දුක්විදින ජනතාවගෙ ජීවන තත්ත්වය උසස් කිරීමට සහාය වන Born Free Foundation ආයතනය මේ ගමටත් ගිය නිසා සහ එම ආයතනය කරන වැඩ ගැන පැහැදුනු යොහාන් සුමතිපාල නාලන්දාවෙ දරුවන්ට පරිගණක ලබා දිමෙ කටයුත්තට අත ගැසීමත් සියල්ල සිදු උනේ වල් අලි නිසා නේද? මෙම අහන්නෙ born free ශ්‍රී ලංකා කණ්ඩයම. :)

Born Free මිතුරන් (නිර්මල, වෛද්‍ය දීපානි ජයන්තා, සමීර) 

මීට සවර කිහිපයකට පෙර විසකුරු සතුන් පිළිබඳ මෙම පාසලේදිම පැවැත්වුණු වැඩසටහනකදි සමීරට නයෙක් ගැහුව කියල පස්සෙයි ආරංචි උනෙ.. :-)
කොච්චර බඩගිනිවුනත් පොඩි උන්ටික රැ 9.30ට එලියට යනකන්, උපරිම සහාය දෙමින්, දවල්ටත් නොකා සහයට සිටි සමීර මිත්‍රයට තුති!

LibreOffice 6 on Fedora 27

Posted by Daniel Lara on February 16, 2018 10:27 AM
A quick tip for anyone who already wants to use LibreOffice 6 on their Fedora.

Method 1

We will use a Copr repo, the build from user Itamar.

Enable the Copr repo:

$ sudo dnf copr enable itamarjp/libreoffice6

Now install LibreOffice 6:

$ sudo dnf install libreoffice -y

Method 2

From the official site.

Download LibreOffice 6:

$ wget http://download.documentfoundation.org/libreoffice/stable/6.0.1/rpm/x86_64/LibreOffice_6.0.1_Linux_x86-64_rpm.tar.gz

And the Brazilian Portuguese language pack:

$ wget http://download.documentfoundation.org/libreoffice/stable/6.0.1/rpm/x86_64/LibreOffice_6.0.1_Linux_x86-64_rpm_langpack_pt-BR.tar.gz

Extract both:

$ tar -zxvf LibreOffice_6.0.1_Linux_x86-64_rpm.tar.gz

$ tar -zxvf LibreOffice_6.0.1_Linux_x86-64_rpm_langpack_pt-BR.tar.gz

Now let's install LibreOffice:

$ cd LibreOffice_6.0.1.1_Linux_x86-64_rpm

$ cd RPMS

$ sudo dnf install *.rpm

$ cd ../..

$ cd LibreOffice_6.0.1.1_Linux_x86-64_rpm_langpack_pt-BR

$ cd RPMS

$ sudo dnf install *.rpm

Done, it is installed.

Listen to the new Fedora podcast

Posted by Fedora Magazine on February 16, 2018 08:00 AM

The Fedora Marketing Team is proud to announce the Fedora Podcast. This ongoing series will feature interviews and talks with people who make the Fedora community awesome. These folks work on new technologies found in Fedora. Or they produce the distro itself. Some work on putting Fedora in the hands of users. There’s so much going on in Fedora, it takes a whole podcast series. The podcast will be released bi-weekly and already has seven episodes planned.

Episode #1

Matthew Miller, the Fedora Project Leader (FPL), talks about the Fedora Project, the Fedora community and other related topics. He touches on the history of Fedora, the relationship with Red Hat, community structure and more.

<iframe frameborder="no" height="300" scrolling="no" src="https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/tracks/400123170&amp;color=%23324c77&amp;auto_play=false&amp;hide_related=false&amp;show_comments=true&amp;show_user=true&amp;show_reposts=false&amp;show_teaser=true&amp;visual=true" style="width: 100%;" width="100%"></iframe>

The episode is also available here:

Future Fedora Podcast topics

  • Ambassadors: “The face of Fedora”: A discussion about the contributors that serve as the face of Fedora. They talk to the general public, organize events, and bring people into the community.
  • Mindshare: “The outreach leadership in Fedora”: Mindshare aims to help Fedora teams reach their targets in a more effective way. The goal is to unify and share their work through optimized, standardized communication.
  • Fedora Magazine: “The news portal of the Fedora Community”: The editors of the magazine share information about how the Magazine is made.
  • Atomic: “Fedora’s next-generation cloud product”: This team works to integrate new OS technology and tools from Project Atomic into Fedora. Here they talk about their amazing technologies.
  • Infrastructure: “The team behind the distro”: Who are the people behind the iron? These incredible people keep everything running, so the whole community can do their work.
  • Modularization: Fedora’s Modularity initiative makes it simple for packagers to create alternative versions of software. It also allows users to consume those streams easily. Listen in as they explain how it all works.

Subscribe to the podcast

You can subscribe to the podcast in Simplecast, follow the Fedora Project on Soundcloud, or periodically check the author’s site on fedorapeople.org.


This podcast is made with the following free software: GNU/Ring, Audacity, and espeak.

The following audio files are also used: Soft echo sweep by bay_area_bob, The Spirit of Nøkken by johnnyguitar01, and Fussion Sound by pilinox.


PHP version 7.1.15RC1 and 7.2.3RC1

Posted by Remi Collet on February 16, 2018 04:54 AM

Release Candidate versions are available in the remi-test repository for Fedora and Enterprise Linux (RHEL / CentOS) to allow more people to test them. They are available as Software Collections, for parallel installation, the perfect solution for such tests (x86_64 only), and also as base packages.

RPM of PHP version 7.2.3RC1 are available as SCL in remi-test repository and as base packages in the remi-php72-test repository for Fedora 25-27 and Enterprise Linux.

RPM of PHP version 7.1.15RC1 are available as SCL in remi-test repository and as base packages in the remi-test repository for Fedora 26-27 or remi-php71-test repository for Fedora 24-25 and Enterprise Linux.

PHP version 7.0 is now in security mode only, so no more RC will be released.

Installation: read the Repository configuration and choose your version.

Parallel installation of version 7.2 as Software Collection:

yum --enablerepo=remi-test install php72

Parallel installation of version 7.1 as Software Collection:

yum --enablerepo=remi-test install php71

Update of system version 7.2:

yum --enablerepo=remi-php72,remi-php72-test update php\*

Update of system version 7.1:

yum --enablerepo=remi-php71,remi-php71-test update php\*

Notice: version 7.2.2RC1 is also available in Fedora rawhide and version 7.1.15RC1 in updates-testing for Fedora 27, for QA.

EL-7 packages are built using RHEL-7.4.

The RC version is usually the same as the final version (no change accepted after the RC, except for security fixes).

Software Collections (php71, php72)

Base packages (php)

Java and Certmonger Continued

Posted by Adam Young on February 16, 2018 04:52 AM

Now that I know that I can do things like read the Keys from a Programmatic registered provider and properly set up SELinux to deal with it, I want to see if I can make this work for a pre-compiled application, using only environment variables.

I’ve modified the test code to just try and load a provider.

import java.security.KeyStore;
import java.security.PrivateKey;
import java.security.Provider;
import java.security.Security;
import java.util.Enumeration;

public class ReadNSSProps {

    public static char[] password = new char[0];

    public static void main(String[] args) throws Exception {

        // The SunPKCS11 provider is registered via java.security.properties,
        // so we can look it up by name instead of constructing it in code.
        Provider p = Security.getProvider("SunPKCS11-NSScrypto");
        KeyStore ks = KeyStore.getInstance("PKCS11", p);
        ks.load(null, password);

        // List every alias in the NSS database.
        for (Enumeration<String> aliases = ks.aliases(); aliases.hasMoreElements();) {
            System.out.println(aliases.nextElement());
        }

        KeyStore.ProtectionParameter protParam =
            new KeyStore.PasswordProtection(password);

        KeyStore.PrivateKeyEntry pkEntry = (KeyStore.PrivateKeyEntry)
            ks.getEntry("RHSSO", protParam);

        PrivateKey pkey = pkEntry.getPrivateKey();
        System.out.println(pkey);
    }
}

The pkcs11.cfg file is still pretty much the same:

# cat pkcs11.cfg 
name = NSScrypto
nssModule = keystore
nssDbMode = readOnly
nssLibraryDirectory = /lib64/
nssSecmodDirectory = /etc/opt/rh/rh-sso7/keycloak/standalone/keystore

Call the code like this:

java  -Djava.security.properties=$PWD/java.security.properties  ReadNSSProps

And…lots of output including a dump of the private key.
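For reference, a java.security.properties override file of the kind passed on that command line registers the SunPKCS11 provider statically, along these lines (a sketch; the provider index and the config path are assumptions, adjust them to your install):

```
# Loaded via -Djava.security.properties=...; registers the SunPKCS11
# provider statically, pointing at the pkcs11.cfg shown earlier.
# The index (10 here) must not collide with the providers already
# listed in $JAVA_HOME/jre/lib/security/java.security.
security.provider.10=sun.security.pkcs11.SunPKCS11 /path/to/pkcs11.cfg
```

Because the provider is registered this way, the pre-compiled application never needs a code change; only the JVM invocation does.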

Thanks to these two articles for pointing the way.

Next up is trying to use these to provide the keystore for HTTPS.

virgl caps - oops I messed up

Posted by Dave Airlie on February 16, 2018 12:11 AM
When I designed virgl I added a capability system to pass some info about the host GL to the guest driver along the lines of gallium caps. The design was at the virtio GPU level you have a number of capsets each of which has a max version and max size.

The virgl capset is capset 1 with max version 1 and size 308 bytes.

Until now we've happily been using version 1 at 308 bytes. Recently we decided we wanted to have a v2 at 380 bytes, and the world fell apart.

It turned out there is a bug in the guest kernel driver: it asks the host for a list of capsets and lets guest userspace retrieve them. The guest userspace has its own copy of the struct.

The flow is:

1. The guest mesa driver gives the kernel a caps struct to fill out for capset 1.
2. The kernel driver asks the host over virtio for the latest capset 1 info: max size and version.
3. The host gives it the max_size and version for capset 1.
4. The kernel driver asks the host to fill out malloced memory of max_size with the caps struct.
5. The kernel driver copies the returned caps struct to userspace, using the size of the returned host struct.

The bug is in the last step: it uses the size of the returned host struct, which ends up corrupting the guest in the scenario where the host has a capset 1 v2, size 380, but the guest is still running old userspace which only understands capset v1, size 308.

The 380 bytes gets memcpy over the 308 byte struct and boom.

Now we can fix the kernel to not do this, but we can't upgrade every kernel in an existing VM. So if we allow the virglrenderer process to expose a v2 all older sw will explode unless it is also upgraded which isn't really something you want in a VM world.

I came up with some virglrenderer workarounds, but due to another bug where qemu doesn't reset virglrenderer when it should, there was no way to make it reliable, and things like kexec old kernel from new kernel would blow up.

I decided in the end to bite the bullet and just make capset 2 be a repaired one. Unfortunately this needs patches in all 4 components before it can be used.

1) virglrenderer needs to expose capset 2 with the new version/size to qemu.
2) qemu needs to allow the virtio-gpu to transfer capset 2 as a virgl capset to the host.
3) The guest kernel needs fixing to make sure we copy the minimum of the host caps and the guest caps into the guest userspace driver, and then it needs to provide a way for guest userspace to know the fixed version is in place.
4) The guest userspace needs to check if the guest kernel has the fix, and then query capset 2 first, falling back to querying capset 1.

After talking to a few other devs in virgl land, we realized we could probably just never add a new version of capset 2, and instead grow the struct endlessly.

The guest driver would fill out the struct it wants to use with its copy of the default minimum values.
It would then call the kernel ioctl to copy over the host caps. The kernel ioctl would copy the minimum of the host caps size and the guest caps size.

In this case if the host has a 400 byte capset 2, and the guest still only has 380 byte capset 2, the new fields from the host won't get copied into the guest struct
and it will be fine.

If the guest has the 400 byte capset 2, but the host only has the 380 byte capset 2, the guest would preinit the extra 20 bytes with its default values (0 or whatever) and the kernel would only copy 380 bytes into the start of the 400 bytes and leave the extra bytes alone.

Now I just have to go write the patches and confirm it all.

Thanks to Stephane at google for creating the patch that showed how broken it was, and to others in the virgl community who noticed how badly it broke old guests! Now to go write the patches...

spectre-meltdown-checker in Fedora and EPEL repositories

Posted by Reto Gantenbein on February 15, 2018 09:14 PM

The recently disclosed Spectre and Meltdown CPU vulnerabilities are some of the most dramatic security issues in recent computer history. Fortunately, even six weeks after public disclosure, sophisticated attacks exploiting these vulnerabilities are not yet commonly observed. Fortunately, because the hardware and software vendors are still struggling to provide appropriate fixes.

If you happen to run a Linux system, an excellent tool for tracking your vulnerability as well as the already active mitigation strategies is the spectre-meltdown-checker script originally written and maintained by Stéphane Lesimple.

Within the last month I set myself the goal of bringing this script to Fedora and EPEL, so it can be easily consumed by Fedora, CentOS and RHEL users. Today the spectre-meltdown-checker package was finally added to the EPEL repositories, after having been available in the Fedora stable repositories for a week already.

On Fedora, all you need to do is:

dnf install spectre-meltdown-checker

After enabling the EPEL repository on CentOS this would be:

yum install spectre-meltdown-checker

The script, which should be run by the root user, will report:

    • If your processor is affected by the different variants of the Spectre and Meltdown vulnerabilities.
    • If your processor microcode tries to mitigate the Spectre vulnerability, or if you run a microcode which is known to cause stability issues.
    • If your kernel implements the currently known mitigation strategies, and if it was compiled with a compiler which hardens it even more.
    • And finally, if you’re (still) affected by some of the vulnerability variants.

On my laptop this currently looks like this (note that I’m not running the latest stable Fedora kernel yet):

    # spectre-meltdown-checker                                                                                                                                
    Spectre and Meltdown mitigation detection tool v0.33                                                                                                                      
    Checking for vulnerabilities on current system                                       
    Kernel is Linux 4.14.14-200.fc26.x86_64 #1 SMP Fri Jan 19 13:27:06 UTC 2018 x86_64   
    CPU is Intel(R) Core(TM) i5-5200U CPU @ 2.20GHz                                      
    Hardware check                            
    * Hardware support (CPU microcode) for mitigation techniques                         
      * Indirect Branch Restricted Speculation (IBRS)                                    
        * SPEC_CTRL MSR is available:  YES    
        * CPU indicates IBRS capability:  YES  (SPEC_CTRL feature bit)                   
      * Indirect Branch Prediction Barrier (IBPB)                                        
        * PRED_CMD MSR is available:  YES     
        * CPU indicates IBPB capability:  YES  (SPEC_CTRL feature bit)                   
      * Single Thread Indirect Branch Predictors (STIBP)                                                                                                                      
        * SPEC_CTRL MSR is available:  YES    
        * CPU indicates STIBP capability:  YES                                           
      * Enhanced IBRS (IBRS_ALL)              
        * CPU indicates ARCH_CAPABILITIES MSR availability:  NO                          
        * ARCH_CAPABILITIES MSR advertises IBRS_ALL capability:  NO                                                                                                           
      * CPU explicitly indicates not being vulnerable to Meltdown (RDCL_NO):  UNKNOWN    
      * CPU microcode is known to cause stability problems:  YES  (Intel CPU Family 6 Model 61 Stepping 4 with microcode 0x28)                                                
    The microcode your CPU is running on is known to cause instability problems,         
    such as intempestive reboots or random crashes.                                      
    You are advised to either revert to a previous microcode version (that might not have
    the mitigations for Spectre), or upgrade to a newer one if available.                
    * CPU vulnerability to the three speculative execution attacks variants
      * Vulnerable to Variant 1:  YES 
      * Vulnerable to Variant 2:  YES 
      * Vulnerable to Variant 3:  YES 
    CVE-2017-5753 [bounds check bypass] aka 'Spectre Variant 1'
    * Mitigated according to the /sys interface:  NO  (kernel confirms your system is vulnerable)
    > STATUS:  VULNERABLE  (Vulnerable)
    CVE-2017-5715 [branch target injection] aka 'Spectre Variant 2'
    * Mitigated according to the /sys interface:  YES  (kernel confirms that the mitigation is active)
    * Mitigation 1
      * Kernel is compiled with IBRS/IBPB support:  NO 
      * Currently enabled features
        * IBRS enabled for Kernel space:  NO 
        * IBRS enabled for User space:  NO 
        * IBPB enabled:  NO 
    * Mitigation 2
      * Kernel compiled with retpoline option:  YES 
      * Kernel compiled with a retpoline-aware compiler:  YES  (kernel reports full retpoline compilation)
      * Retpoline enabled:  YES 
    > STATUS:  NOT VULNERABLE  (Mitigation: Full generic retpoline)
    CVE-2017-5754 [rogue data cache load] aka 'Meltdown' aka 'Variant 3'
    * Mitigated according to the /sys interface:  YES  (kernel confirms that the mitigation is active)
    * Kernel supports Page Table Isolation (PTI):  YES 
    * PTI enabled and active:  YES 
    * Running as a Xen PV DomU:  NO 
    > STATUS:  NOT VULNERABLE  (Mitigation: PTI)
    A false sense of security is worse than no security at all, see --disclaimer

The script also supports a mode which outputs the result as JSON, so that it can easily be parsed by any compliance or monitoring tool:

    # spectre-meltdown-checker --batch json 2>/dev/null | jq
    [
      {
        "NAME": "SPECTRE VARIANT 1",
        "CVE": "CVE-2017-5753",
        "VULNERABLE": true,
        "INFOS": "Vulnerable"
      },
      {
        "NAME": "SPECTRE VARIANT 2",
        "CVE": "CVE-2017-5715",
        "VULNERABLE": false,
        "INFOS": "Mitigation: Full generic retpoline"
      },
      {
        "NAME": "MELTDOWN",
        "CVE": "CVE-2017-5754",
        "VULNERABLE": false,
        "INFOS": "Mitigation: PTI"
      }
    ]

For those who are (still) using a Nagios-compatible monitoring system, spectre-meltdown-checker can also be run as an NRPE check:

    # spectre-meltdown-checker --batch nrpe 2>/dev/null ; echo $?
    Vulnerable: CVE-2017-5753

I just mailed Stéphane, and he will soon release version 0.35 with many new features and fixes. As soon as it is released I’ll submit a package update, so that you’re always up to date with the latest developments.

Certmonger, SELinux and Keystores in random locations

Posted by Adam Young on February 15, 2018 03:45 PM

In my last post, SELinux was reporting AVCs when certmonger tried to access an NSS database in a non-standard location. To get rid of the AVC, and get SELinux to allow the operations, we need to deal with the underlying cause of the AVC.

Bottom Line Up Front:

Run these commands:

    [root@sso standalone]# semanage fcontext -a -t cert_t $PWD/"keystore(/.*)?"
    [root@sso standalone]# restorecon -R -v keystore

Thanks to OZZ for that.

Here’s how I got there.


The original error was:

    type=AVC msg=audit(1518668324.903:6506): avc:  denied  { write } for  pid=15310 comm="certmonger" name="cert9.db" dev="vda1" ino=17484324 scontext=system_u:system_r:certmonger_t:s0 tcontext=unconfined_u:object_r:etc_t:s0 tclass=file

Since I created the NSS database without a relabel or other operation, it is still in its default form. Looking at the whole subdirectory:

    [root@sso standalone]# ls -Z keystore
    -rw-------. root root unconfined_u:object_r:etc_t:s0   cert8.db
    -rw-------. root root unconfined_u:object_r:etc_t:s0   cert9.db
    -rw-------. root root unconfined_u:object_r:etc_t:s0   key3.db
    -rw-------. root root unconfined_u:object_r:etc_t:s0   key4.db
    -rw-------. root root unconfined_u:object_r:etc_t:s0   pkcs11.txt
    -rw-------. root root unconfined_u:object_r:etc_t:s0   secmod.db

Compare with a properly configured system

Let’s contrast this with an NSS database that is properly labeled. For example, on my IPA server, where SELinux is enforcing, I can look at certmonger and see where it is tracking files.

    $ ssh cloud-user@idm.ayoung.rdusalab 
    Last login: Wed Feb 14 22:53:20 2018 from
    [cloud-user@idm ~]$ sudo -i
    [root@idm ~]# getcert list
    Number of certificates and requests being tracked: 9.
    Request ID '20180212165505':
    	status: MONITORING
    	stuck: no
    	key pair storage: type=NSSDB,location='/etc/httpd/alias',nickname='Server-Cert',token='NSS Certificate DB',pinfile='/etc/httpd/alias/pwdfile.txt'
    	certificate: type=NSSDB,location='/etc/httpd/alias',nickname='Server-Cert',token='NSS Certificate DB'

So looking at:

    [root@idm ~]# ls -Z /etc/httpd/alias
    -rw-r-----. root apache unconfined_u:object_r:cert_t:s0  cert8.db
    -rw-r-----. root apache unconfined_u:object_r:cert_t:s0  cert8.db.orig
    -rw-------. root root   unconfined_u:object_r:cert_t:s0  install.log
    -rw-------. root root   system_u:object_r:ipa_cert_t:s0  ipasession.key
    -rw-r-----. root apache unconfined_u:object_r:cert_t:s0  key3.db
    -rw-r-----. root apache unconfined_u:object_r:cert_t:s0  key3.db.orig
    lrwxrwxrwx. root root   system_u:object_r:cert_t:s0      libnssckbi.so -> /usr/lib64/libnssckbi.so
    -rw-------. root apache unconfined_u:object_r:cert_t:s0  pwdfile.txt
    -rw-r-----. root apache unconfined_u:object_r:cert_t:s0  secmod.db
    -rw-r-----. root apache unconfined_u:object_r:cert_t:s0  secmod.db.orig

The interesting value here is cert_t. From man ls:

    Display security context so it fits on most displays. Displays only mode, user, group, security context and file name.

The security context is unconfined_u:object_r:cert_t:s0, which is in user:role:type:level format. What we want to do, then, is change the type on our NSS database files. We could use chcon to test out the change temporarily, and then semanage fcontext to make the change permanent.


Let’s get a method in place to make changes and confirm they happen. I use two terminals. In one I’ll type commands, while in the second I’ll use tail -f to watch changes to the log.

    [root@sso ~]# tail -f  /var/log/audit/audit.log | grep AVC

Once I request a cert, I will see a line like this added to the output:

    type=AVC msg=audit(1518708370.985:6639): avc:  denied  { write } for  pid=16459 comm="certmonger" name="cert8.db" dev="vda1" ino=17484343 scontext=system_u:system_r:certmonger_t:s0 tcontext=unconfined_u:object_r:etc_t:s0 tclass=file

In the coding window, I can run commands like this to trigger output from the log:

    [root@sso standalone]# ipa-getcert  request  -w -d dbm:$PWD/keystore -D $HOSTNAME -K RHSSO/$HOSTNAME -n RHSSO
    New signing request "20180215152610" added.
    [root@sso standalone]# getcert stop-tracking  -i 20180215152610
    Request "20180215152610" removed.


Now that I have a baseline, I’m going to try chcon to ensure that I have the type correct.

    [root@sso standalone]# sudo chcon -t cert_t keystore keystore/*
    [root@sso standalone]# ls -Z keystore
    -rw-------. root root unconfined_u:object_r:cert_t:s0  cert8.db
    -rw-------. root root unconfined_u:object_r:cert_t:s0  cert9.db
    -rw-------. root root unconfined_u:object_r:cert_t:s0  key3.db
    -rw-------. root root unconfined_u:object_r:cert_t:s0  key4.db
    -rw-------. root root unconfined_u:object_r:cert_t:s0  pkcs11.txt
    -rw-------. root root unconfined_u:object_r:cert_t:s0  secmod.db

Let’s run the test again:


    # ipa-getcert  request  -w -d dbm:$PWD/keystore -D $HOSTNAME -K RHSSO/$HOSTNAME -n RHSSO
    New signing request "20180215153108" added.

This produces no new output from our log. We also see that the cert is being tracked.

    [root@sso standalone]# getcert list
    Number of certificates and requests being tracked: 1.
    Request ID '20180215153108':
    	status: MONITORING


Let’s try this again, but with SELinux enforcing. First, clean up from our last run:

    [root@sso standalone]# getcert stop-tracking  -i 20180215153108
    Request "20180215153108" removed.
    [root@sso standalone]# getcert list
    Number of certificates and requests being tracked: 0.

And now:

    [root@sso standalone]# getenforce 
    [root@sso standalone]# setenforce 1
    [root@sso standalone]# getenforce 
    [root@sso standalone]# ipa-getcert  request  -w -d dbm:$PWD/keystore -D $HOSTNAME -K RHSSO/$HOSTNAME -n RHSSO
    New signing request "20180215153334" added.
    [root@sso standalone]# getcert list
    Number of certificates and requests being tracked: 1.
    Request ID '20180215153334':
    	status: MONITORING

And the only thing we see in our log is a warning about switching enforcement.

    type=USER_AVC msg=audit(1518708789.490:6646): pid=2501 uid=81 auid=4294967295 ses=4294967295 subj=system_u:system_r:system_dbusd_t:s0-s0:c0.c1023 msg='avc:  received setenforce notice (enforcing=1)  exe="?" sauid=81 hostname=? addr=? terminal=?'


OK, so let’s make this change permanent. First, restore the labels so we know we are having the desired effect.

    [root@sso standalone]# restorecon -R -v keystore
    restorecon reset /etc/opt/rh/rh-sso7/keycloak/standalone/keystore context unconfined_u:object_r:cert_t:s0->unconfined_u:object_r:etc_t:s0
    restorecon reset /etc/opt/rh/rh-sso7/keycloak/standalone/keystore/pkcs11.txt context unconfined_u:object_r:cert_t:s0->unconfined_u:object_r:etc_t:s0
    restorecon reset /etc/opt/rh/rh-sso7/keycloak/standalone/keystore/cert9.db context unconfined_u:object_r:cert_t:s0->unconfined_u:object_r:etc_t:s0
    restorecon reset /etc/opt/rh/rh-sso7/keycloak/standalone/keystore/key4.db context unconfined_u:object_r:cert_t:s0->unconfined_u:object_r:etc_t:s0
    restorecon reset /etc/opt/rh/rh-sso7/keycloak/standalone/keystore/secmod.db context unconfined_u:object_r:cert_t:s0->unconfined_u:object_r:etc_t:s0
    restorecon reset /etc/opt/rh/rh-sso7/keycloak/standalone/keystore/cert8.db context unconfined_u:object_r:cert_t:s0->unconfined_u:object_r:etc_t:s0
    restorecon reset /etc/opt/rh/rh-sso7/keycloak/standalone/keystore/key3.db context unconfined_u:object_r:cert_t:s0->unconfined_u:object_r:etc_t:s0
    [root@sso standalone]# ls -Z keystore
    -rw-------. root root unconfined_u:object_r:etc_t:s0   cert8.db
    -rw-------. root root unconfined_u:object_r:etc_t:s0   cert9.db
    -rw-------. root root unconfined_u:object_r:etc_t:s0   key3.db
    -rw-------. root root unconfined_u:object_r:etc_t:s0   key4.db
    -rw-------. root root unconfined_u:object_r:etc_t:s0   pkcs11.txt
    -rw-------. root root unconfined_u:object_r:etc_t:s0   secmod.db

Now use semanage to make the change persist:

    [root@sso standalone]# semanage fcontext -a -t cert_t $PWD/"keystore(/.*)?"
    [root@sso standalone]# restorecon -R -v keystore
    restorecon reset /etc/opt/rh/rh-sso7/keycloak/standalone/keystore context unconfined_u:object_r:etc_t:s0->unconfined_u:object_r:cert_t:s0
    restorecon reset /etc/opt/rh/rh-sso7/keycloak/standalone/keystore/pkcs11.txt context unconfined_u:object_r:etc_t:s0->unconfined_u:object_r:cert_t:s0
    restorecon reset /etc/opt/rh/rh-sso7/keycloak/standalone/keystore/cert9.db context unconfined_u:object_r:etc_t:s0->unconfined_u:object_r:cert_t:s0
    restorecon reset /etc/opt/rh/rh-sso7/keycloak/standalone/keystore/key4.db context unconfined_u:object_r:etc_t:s0->unconfined_u:object_r:cert_t:s0
    restorecon reset /etc/opt/rh/rh-sso7/keycloak/standalone/keystore/secmod.db context unconfined_u:object_r:etc_t:s0->unconfined_u:object_r:cert_t:s0
    restorecon reset /etc/opt/rh/rh-sso7/keycloak/standalone/keystore/cert8.db context unconfined_u:object_r:etc_t:s0->unconfined_u:object_r:cert_t:s0
    restorecon reset /etc/opt/rh/rh-sso7/keycloak/standalone/keystore/key3.db context unconfined_u:object_r:etc_t:s0->unconfined_u:object_r:cert_t:s0

Do another listing to check the current state of the files:

    [root@sso standalone]# ls -Z keystore
    -rw-------. root root unconfined_u:object_r:cert_t:s0  cert8.db
    -rw-------. root root unconfined_u:object_r:cert_t:s0  cert9.db
    -rw-------. root root unconfined_u:object_r:cert_t:s0  key3.db
    -rw-------. root root unconfined_u:object_r:cert_t:s0  key4.db
    -rw-------. root root unconfined_u:object_r:cert_t:s0  pkcs11.txt
    -rw-------. root root unconfined_u:object_r:cert_t:s0  secmod.db

One last time, stop tracking the existing cert and request a new one:

    [root@sso standalone]# getcert stop-tracking  -i 20180215153334
    Request "20180215153334" removed.
    [root@sso standalone]# ipa-getcert  request  -w -d dbm:$PWD/keystore -D $HOSTNAME -K RHSSO/$HOSTNAME -n RHSSO
    New signing request "20180215154055" added.
    [root@sso standalone]# getcert list
    Number of certificates and requests being tracked: 1.
    Request ID '20180215154055':
    	status: MONITORING
    	stuck: no
    	key pair storage: type=NSSDB,location='dbm:/etc/opt/rh/rh-sso7/keycloak/standalone/keystore',nickname='RHSSO',token='NSS Certificate DB'
    	certificate: type=NSSDB,location='dbm:/etc/opt/rh/rh-sso7/keycloak/standalone/keystore',nickname='RHSSO',token='NSS Certificate DB'

    Fedora/RISC-V: Runnable stage 4 disk images

    Posted by Richard W.M. Jones on February 15, 2018 03:15 PM

    We’ve now got:

    1. An autobuilder.
    2. A multithreaded QEMU.
    3. A Fedora RPMs repository.
    4. A bootable disk image.

    It’s unpolished and minimal at the moment, but what you can do today (if you have a Fedora 27+ x86_64 host):

    1. Enable the rjones/riscv copr and install riscv-qemu.
    2. Download stage4-disk.img and bbl, and uncompress the disk image.
    3. Run this command:
      qemu-system-riscv64 \
          -nographic -machine virt -m 2G -smp 4 \
          -kernel bbl \
          -append "console=ttyS0 ro root=/dev/vda init=/init" \
          -device virtio-blk-device,drive=hd0 \
          -drive file=stage4-disk.img,format=raw,id=hd0 \
          -device virtio-net-device,netdev=usernet \
          -netdev user,id=usernet
    4. Inside the guest drop a repo file into /etc/yum.repos.d containing:
    5. Use tdnf --releasever 27 install ... to install more packages.

    util-linux v2.32 -- what's new?

    Posted by Karel Zak on February 15, 2018 01:37 PM
    This release (rc1 now) is without dramatic changes and game-changing improvements.

    We have again invested our time and love to make cal(1) more usable. The most visible change is the possibility to specify the calendar system.

    The current (backward-compatible) default is to use the Gregorian calendar, and the Julian calendar for dates before September 1752 (the British Empire calendar reform). Unfortunately, this default is pretty frustrating if you want to use cal(1) for dates before 1752 and you don't want to follow the UK calendar.

    The new command line option --reform={Julian,Gregorian,iso,1752,...} allows you to specify the calendar system or reform date explicitly. The only reform currently supported is the UK reform of 1752. In later versions we will probably add support for other reforms, as the reform date is very region-specific (for example 1584 in my country, 1873 in Japan, 1926 in Turkey, etc.).
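As a quick sketch of the new option (assuming a cal from util-linux v2.32 or later; the snippet is guarded so it degrades gracefully on older systems):

```shell
# Compare the two reform modes for September 1752; the option values come
# from the release notes above, but availability depends on util-linux >= 2.32.
if command -v cal >/dev/null && cal --help 2>&1 | grep -q -- '--reform'; then
    cal --reform 1752 9 1752              # UK reform: 3-13 September are missing
    out=$(cal --reform gregorian 9 1752)  # proleptic Gregorian: a full month
else
    out="cal with --reform not available here"
fi
echo "$out"
```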

    The Linux kernel supports multi-line log messages. Unfortunately, dmesg(1) support for this feature was insufficient. Now dmesg(1) provides better support, and the new command line option --force-prefix prints the facility, level or timestamp information on each line of a multi-line message.

    The command fallocate(1) --dig-holes has been significantly improved and is faster and more effective now (thanks to Vaclav Dolezal).

    The command lscpu(1) provides more details about ARM CPUs now.

    The command lsmem(1) supports memory zones now.

    The command lsns(8) provides NETNSID and NSFS columns now. The ip(8) command allows you to create a network namespace and to add a logical name and ID for the namespace, and all of this is now visible via lsns(8). For example, copied & pasted from our regression tests:

            NS TYPE NPROCS   PID USER  NETNSID    NSFS                            COMMAND
    4026532001 net     281     1 root  unassigned                                 /usr/lib/systemd/systemd --switched-root --system --deserialize 24
    4026532400 net       1   795 rtkit unassigned                                 /usr/libexec/rtkit-daemon
    4026532590 net       1  6707 root           0 /run/netns/LSNS-TEST-NETNSID-NS dd if=tests/output/lsns/FIFO-NETNSID bs=1 count=2 of=/dev/null

    where dd(1) is running in the net namespace, and the namespace is mounted by nsfs on /run/netns/LSNS-TEST-NETNSID-NS. See https://raw.githubusercontent.com/karelzak/util-linux/master/tests/ts/lsns/netnsid for more details on how to use ip(8) to set up this namespace.

    The command rtcwake(8) has been improved to wait for stdin to settle down before entering a system sleep. This is important on systems where wireless USB devices (mouse, keyboard, ...) generate "noise" for a fraction of a second after rtcwake(8) execution.

    The library libblkid has been extended to support LUKS2, Micron mpool, VDO and the Atari partition table.

    Thanks to all 43 contributors!

    The next release, v2.33, is planned for May 2018 (yes, the goal is to have 3-4 releases per year rather than 2 releases as in past years).

    Java and Certmonger

    Posted by Adam Young on February 15, 2018 06:04 AM

    Earlier this week, I got some advice from John Dennis on how to set up the certificates for a Java based web application. The certificates were to be issued by the Dogtag instance in a Red Hat Identity Management (RH IdM) install. However, unlike the previous examples I’ve seen, this one did some transforms from the certificate files, into PKCS12 and then finally into the keystore. It looks like this:

    ipa-getcert request -f /etc/pki/tls/certs/rhsso-cert.pem -k /etc/pki/tls/private/rhsso-key.pem -I rhsso -K RHSSO/`hostname` -D `hostname`
    openssl pkcs12 -export -name rhsso -passout pass:FreeIPA4All -in /etc/pki/tls/certs/rhsso-cert.pem -inkey /etc/pki/tls/private/rhsso-key.pem -out rhsso.p12
    keytool -importkeystore -srckeystore rhsso.p12 -srcstoretype PKCS12 -srcstorepass FreeIPA4All -destkeystore keycloak.jks -deststorepass FreeIPA4All -alias rhsso
    keytool -keystore keycloak.jks -import -file /etc/ipa/ca.crt -alias ipa-ca
    cp keycloak.jks /etc/opt/rh/rh-sso7/keycloak/standalone/

    Aside from the complications of this process, it also means that the application will not be updated when Certmonger automatically renews the certificate, leading to potential down time. I wonder if there is a better option.
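One partial mitigation worth noting (a sketch, not part of the original setup): certmonger can run a post-save command whenever it renews a certificate, via getcert's -C option, so the conversion pipeline above could be scripted and re-run automatically. The script name below is hypothetical; the paths and password are the example values from the commands above.

```shell
# Sketch of a renewal hook (untested assumption); certmonger would invoke it
# via something like: getcert start-tracking -i rhsso -C /path/to/this/script
cat > rebuild-keycloak-keystore.sh <<'EOF'
#!/bin/sh
set -e
# Re-run the PEM -> PKCS12 -> JKS conversion after certmonger renews the cert.
openssl pkcs12 -export -name rhsso -passout pass:FreeIPA4All \
    -in /etc/pki/tls/certs/rhsso-cert.pem \
    -inkey /etc/pki/tls/private/rhsso-key.pem -out /tmp/rhsso.p12
keytool -importkeystore -srckeystore /tmp/rhsso.p12 -srcstoretype PKCS12 \
    -srcstorepass FreeIPA4All -destkeystore /tmp/keycloak.jks \
    -deststorepass FreeIPA4All -alias rhsso -noprompt
cp /tmp/keycloak.jks /etc/opt/rh/rh-sso7/keycloak/standalone/
EOF
chmod +x rebuild-keycloak-keystore.sh
```

The application would still need a restart or reload to pick up the new keystore, so this only softens the problem.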

    Keystore Formats

    The latest couple releases of Java have supported a wider array of Keystore formats.

    from /usr/lib/jvm/java-1.8.0-openjdk-$VERSION.b14.fc27.x86_64/jre/lib/security/java.security

    # Default keystore type.
    # Controls compatibility mode for the JKS keystore type.
    # When set to 'true', the JKS keystore type supports loading
    # keystore files in either JKS or PKCS12 format. When set to 'false'
    # it supports loading only JKS keystore files.

    So it appears that one step above is unnecessary: we could use a PKCS-12 file instead of the native Java KeyStore. However, Certmonger does not manage PKCS-12 files either, so that is not a complete solution.
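If you do go the PKCS-12 route, the default keystore type can be switched in java.security itself (a sketch; keystore.type.compat is the property the quoted comment above is describing):

```ini
# sketch: in java.security, make PKCS12 the default keystore type
# while still allowing existing JKS files to be read
keystore.type=pkcs12
keystore.type.compat=true
```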


    But what about PKCS-11?

    One thing that is tricky is that you are rarely going to find much about creating PKCS-11 files: instead, you find ways to work with them via various tools. Why is that? PKCS-11 is not a file format per se; it is a standard.

    The PKCS#11 standard specifies an application programming interface (API), called “Cryptoki,” for devices that hold cryptographic information and perform cryptographic functions. Cryptoki follows a simple object based approach, addressing the goals of technology independence (any kind of device) and resource sharing (multiple applications accessing multiple devices), presenting to applications a common, logical view of the device called a “cryptographic token”.

    From the standard.

    In other words, PKCS-11 is an API for talking to various forms of storage for cryptographic information, specifically asymmetric keys.

    Asymmetric keys come in pairs: one is public, the other is kept private. The PKCS-11 API helps enforce that. Instead of extracting a private key from a database in order to encrypt or decrypt data, the data is moved into the container and signed internally. The private key never leaves the container.

    That is why we have two standards: PKCS-11 and PKCS-12. PKCS-12 is the standard way to safely extract a key and transport it to another location.

    Ideally, the PKCS-11 token is a hardware device. For example, a Yubikey device. Many computers come with Hardware Security Modules (HSMs) built in for just this purpose.

    The Mozilla project developed a cryptography library to work with these standards. It used to be called Netscape Security Services, but has since been retconned to Network Security Services. Both, you notice, share the acronym NSS. To be clear, this is separate from the Name Service Switch API, which is also called NSS. I seem to recall having written this all before.

    The Firefox browser, and related programs like Thunderbird, can fetch and store cryptographic certificates and keys in a managed database. This is usually called an NSS database, and it is accessed via PKCS-11, specifically so they have a single API to use if the site wants to do something more locked down, like use an HSM.

    OK, so this is a long way of saying that, maybe it is possible to use an NSS database as the Java Keystore.

    NSS Database

    First, let’s create a scratch NSS database:

    [root@sso ~]# cd /etc/opt/rh/rh-sso7/keycloak/standalone/
    [root@sso standalone]# mkdir keystore
    [root@sso standalone]# certutil -d dbm:$PWD/keystore -N
    Enter a password which will be used to encrypt your keys.
    The password should be at least 8 characters long,
    and should contain at least one non-alphabetic character.
    Enter new password: 
    Re-enter password: 

    Now let’s request a cert. Because this NSS database is in a custom location, SELinux is going to block Certmonger from talking to it. For now, I’ll set the machine in permissive mode to let the request go through.

    [root@sso standalone]# setenforce permissive
    [root@sso standalone]# ipa-getcert  request  -w -d dbm:$PWD/keystore -D $HOSTNAME -K RHSSO/$HOSTNAME -n RHSSO
    New signing request "20180215041951" added.
    [root@sso standalone]# getcert list
    Number of certificates and requests being tracked: 1.
    Request ID '20180215041951':
    	status: MONITORING
    	stuck: no
    	key pair storage: type=NSSDB,location='dbm:/etc/opt/rh/rh-sso7/keycloak/standalone/keystore',nickname='RHSSO',token='NSS Certificate DB'
    	certificate: type=NSSDB,location='dbm:/etc/opt/rh/rh-sso7/keycloak/standalone/keystore',nickname='RHSSO',token='NSS Certificate DB'
    	CA: IPA
    	issuer: CN=Certificate Authority,O=AYOUNG.RDUSALAB
    	subject: CN=sso.ayoung.rdusalab,O=AYOUNG.RDUSALAB
    	expires: 2020-02-16 04:19:46 UTC
    	dns: sso.ayoung.rdusalab
    	principal name: RHSSO/sso.ayoung.rdusalab@AYOUNG.RDUSALAB
    	key usage: digitalSignature,nonRepudiation,keyEncipherment,dataEncipherment
    	eku: id-kp-serverAuth,id-kp-clientAuth
    	pre-save command: 
    	post-save command: 
    	track: yes
    	auto-renew: yes

    SELinux reworking will come at the end.

    OK, we should be able to list the certs in the database:

    [root@sso standalone]# certutil -L -d dbm:$PWD/keystore 
    Certificate Nickname                                         Trust Attributes
    RHSSO                                                        ,,   
    [root@sso standalone]# 

    Let’s try it in Java. First, I need the compiler:

    sudo yum install java-1.8.0-openjdk-devel

    Now… it turns out the Java libraries are not, by default, allowed to deal with NSS. We need a configuration file, and we can create the provider dynamically. For NSS we need: security.provider.10=sun.security.pkcs11.SunPKCS11. The following code seems to succeed:

    import java.security.KeyStore;
    import java.security.Provider;
    import java.util.Enumeration;
    import sun.security.pkcs11.SunPKCS11;
    public class ReadNSS{
        public static char[] password = new char[0];
        public static void main(String[] args) throws Exception{
            String configName = "/etc/opt/rh/rh-sso7/keycloak/standalone/pkcs11.cfg";
            Provider p = new SunPKCS11(configName);
            KeyStore ks = KeyStore.getInstance("PKCS11", p); //p is the provider created above
            ks.load(null, password);
            for (Enumeration<String> aliases = ks.aliases(); aliases.hasMoreElements();){
                System.out.println(aliases.nextElement());
            }
        }
    }

    With the corresponding config file:

    name = NSScrypto
    nssModule = keystore
    nssDbMode = readOnly
    nssLibraryDirectory = /lib64/
    nssSecmodDirectory = /etc/opt/rh/rh-sso7/keycloak/standalone/keystore

    Compile and run

    [root@sso standalone]# javac ReadNSS.java 
    [root@sso standalone]# java ReadNSS

    We can list the keys. Ideally, I would pass that provider information on the command line, though.


    It does look like there is a way to create a database that Java can use as a KeyStore. The question now is whether Tomcat- and JBoss-based web apps can use this mechanism to manage their HTTPS certificates.

    SELinux

    What should the SELinux rule be?

    type=AVC msg=audit(1518668385.358:6514): avc:  denied  { unlink } for  pid=15316 comm="certmonger" name="key4.db-journal" dev="vda1" ino=17484326 scontext=system_u:system_r:certmonger_t:s0 tcontext=system_u:object_r:etc_t:s0 tclass=file
    	Was caused by:
    		Missing type enforcement (TE) allow rule.
    		You can use audit2allow to generate a loadable module to allow this access.

    But that generates:

    #============= certmonger_t ==============
    #!!!! WARNING: 'etc_t' is a base type.
    allow certmonger_t etc_t:file { create setattr unlink write };

    Which, if I read it right, allows Certmonger to create, write and unlink any etc_t file. We want something more targeted.
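One more targeted option, as a sketch (consistent with the cert_t relabeling of the keystore directory shown earlier in this page), would be to label the custom NSS DB directory cert_t so certmonger's existing policy applies, rather than widening access to etc_t. Guarded, since semanage may not be installed:

```shell
# Sketch: persistently label the custom keystore directory as cert_t
# (assumes policycoreutils with semanage; degrades to a message otherwise)
DIR='/etc/opt/rh/rh-sso7/keycloak/standalone/keystore'
if command -v semanage >/dev/null && command -v restorecon >/dev/null; then
    out=$({ semanage fcontext -a -t cert_t "${DIR}(/.*)?"; \
            restorecon -Rv "$DIR"; } 2>&1 || true)
    out="${out:-relabel attempted}"
else
    out="semanage/restorecon not available here"
fi
echo "$out"
```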

    How to deal with this is in the next post.

    Fedora Atomic Workstation: Building flatpaks

    Posted by Matthias Clasen on February 14, 2018 11:10 PM

    In order to use my new Atomic Workstation for real, I need to be able to build things locally, including creating flatpaks.


    One of the best tools for the job (building flatpaks) is GNOME Builder. I had already installed the stable build from flathub, but Christian told me that the nightly build is way better for flatpak building, so I went to install it from here.

    Getting GNOME Builder

    This highlights one of the nice aspects of flatpak: it is fundamentally decentralized. While flathub serves as a convenient one-stop-shop for many apps, it is entirely possible to have other remotes. Flathub is not privileged at all.

    It is also perfectly possible to have both the stable gnome-builder from flathub and the nightly installed at the same time.
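As a sketch of what that looks like on the command line (the remote name and .flatpakrepo URL here are assumptions; check the nightly build's own install instructions):

```shell
# Hedged sketch: add a second remote alongside flathub and list what is
# configured; guarded because flatpak may not be installed here, and the
# repo URL is an assumption.
REPO=https://nightly.gnome.org/gnome-nightly.flatpakrepo
if command -v flatpak >/dev/null; then
    out=$(flatpak remote-add --user --if-not-exists gnome-nightly "$REPO" 2>&1; \
          flatpak remotes --user 2>&1)
    out="${out:-no remotes listed}"
else
    out="flatpak not installed here"
fi
echo "$out"
```

Installing org.gnome.Builder from that second remote then gives you the nightly copy next to the flathub one.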

    The only limitation is that only one of them will get to be presented as ‘the’ GNOME Builder by the desktop, since they use the same app id.  You can change between the installed versions of an application using the flatpak cli:

    flatpak make-current --user org.gnome.Builder master

    Building flatpaks

    Now on to building flatpaks! Naturally, my testcase is GNOME Recipes. I have a git checkout of it, so I proceeded to open it in GNOME Builder, started a build … and it failed, with a somewhat cryptic error message about chdir() failing :-(

    After quite a bit of head-scratching and debugging, we determined that this happens because flatpak is doing builds in a sandbox as well, and it is replacing /var with its own bind mount to do so. This creates a bit of confusion with the /home -> /var/home symlink that is part of the Atomic Workstation image. We are still trying to determine the best fix for this, you can follow along in this issue.

    Since I am going to travel soon, I can’t wait for the official fix, so I came up with a workaround: remove the /home -> /var/home symlink, create a regular /home directory in its place, and change /etc/fstab to mount my home partition there instead of /var/home. One reason why this is ugly is that I am modifying the supposedly immutable OS image. How? By removing the immutable attribute with chattr -i /. Another reason why it is ugly is that this has to be repeated every time a new image gets installed (regardless of whether it is via an update or via package layering).

    But, with this workaround in place, there is no longer a troublesome symlink to cause trouble for flatpak, and my build succeeds. Once it is built, I can run the recipes flatpak with one click on the play button in builder.

    Neat! I am almost ready to take Atomic Workstation on the road.

    Fedora 27 : The strace tool for debug.

    Posted by mythcat on February 14, 2018 10:21 PM
    Today I tested a great tool named strace, from here.
    This tool helps with diagnostics, debugging and monitoring interactions between processes and the Linux kernel.

    For example, you can test this tool with the ls command:
    - to trace only a specific system call, use the strace -e option as shown below:
    $ strace -e open ls > /dev/null
    - to display a summary with the time, call and error counts for each system call, use -c:
    $ strace -c ls > /dev/null
    - to save the trace execution to a file:
    $ strace -o output.txt ls
    - to display and save the strace for a given process id:
    $ strace -p 1725 -o process_id_trace.txt
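The options above can also be combined, for example counting syscalls while writing the summary to a file (a sketch, guarded since strace may not be installed):

```shell
# Count the syscalls made by ls and save the summary table to a file;
# degrades to a message where strace is missing or ptrace is restricted.
if command -v strace >/dev/null; then
    strace -c -o summary.txt ls >/dev/null 2>&1 || true
    out=$(head -n 3 summary.txt 2>/dev/null || true)
else
    out="strace not installed here"
fi
out="${out:-no summary produced}"
echo "$out"
```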

    You can see more examples on the official webpage.

    What is the best online dating site and the best way to use it?

    Posted by Daniel Pocock on February 14, 2018 05:25 PM

    Somebody recently shared this with me, this is what happens when you attempt to access Parship, an online dating site, from the anonymous Tor Browser.

    Experian is basically a private spy agency. Their website boasts about how they can:

    • Know who your customers are regardless of channel or device
    • Know where and how to reach your customers with optimal messages
    • Create and deliver exceptional experiences every time

    Is that third objective, an "exceptional experience", what you were hoping for with their dating site honey trap? You are out of luck: you are not the customer, you are the product.

    When the Berlin wall came down, people were horrified at what they found in the archives of the Stasi. Don't companies like Experian and Facebook gather far more data than this?

    So can you succeed with online dating?

    There are only three strategies that are worth mentioning:

    • Access sites you can't trust (which includes all dating sites, whether free or paid for) using anonymous services like Tor Browser and anonymous email addresses. Use fake photos and fake all other data. Don't send your real phone number through the messaging or chat facility in any of these sites because they can use that to match your anonymous account to a real identity: instead, get an extra SIM card that you pay for and top-up with cash. One person told me they tried this for a month as an experiment, expediently cutting and pasting a message to each contact to arrange a meeting for coffee. At each date they would give the other person a card that apologized for their completely fake profile photos and offering to start over now they could communicate beyond the prying eyes of the corporation.
    • Join online communities that are not primarily about dating and if a relationship comes naturally, it is a bonus.
    • If you really care about your future partner and don't want your photo to be a piece of bait used to exploit and oppress them, why not expand your real-world activities?

    Contribute at the Fedora Test Day for kernel 4.15

    Posted by Fedora Magazine on February 14, 2018 04:03 PM

    The kernel team is working on final integration for kernel 4.15. This version was just recently released, and will arrive soon in Fedora. It includes many security fixes. As a result, the Fedora kernel and QA teams have organized a test day for Thursday, February 22. Refer to the wiki page for links to the test images you’ll need to participate.

    How do test days work?

    A test day is an event where anyone can help make sure changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.

    To contribute, you only need to be able to do the following things:

    • Download test materials, which include some large files
    • Read and follow directions step by step

    The wiki page for the kernel test day has a lot of good information on what and how to test. After you’ve done some testing, you can log your results in the test day web application. If you’re available on or around the day of the event, please do some testing and report your results.

    Happy testing, and we hope to see you on test day.

    Moving a portal

    Posted by Alexander Larsson on February 14, 2018 03:45 PM

    Portals are a fundamental concept in flatpak. They are the way a sandboxed application can access information and services from the host in a safe, controlled way.

    Most of the portals in use are implemented by a module called xdg-desktop-portal, with backend implementations for Gtk+ and KDE. Many of the portals in it, such as the important file chooser portal, rely on a low-level portal called the document portal. It is a combined D-Bus and FUSE service that controls access to files with fine-grained per-application permissions.

    The snap developers are interested in using portals for snap packages, which is great for application developers as they only have to target a single API. However, historically the document portal was shipped as part of flatpak, which is suddenly a major problem.

    To fix this we had to move the document portal from flatpak to xdg-desktop-portal, and I’m happy to announce that with today’s releases of xdg-desktop-portal and flatpak we have now achieved this.

    Packagers need to be careful about this when updating to the new versions so that only one copy of the document portal is installed. The stable version of flatpak (0.10.4) can be built with or without the document portal, depending on what version of the desktop portal you have. The unstable flatpak release doesn’t have the document portal at all, and requires you to use the new desktop portal.

    dtrace for linux; Oracle does the right thing

    Posted by Mark J. Wielaard on February 14, 2018 10:13 AM

    At Fosdem we had a talk on dtrace for linux in the Debugging Tools devroom.

    Not explicitly mentioned in that talk, but certainly the most exciting thing, is that Oracle is doing a proper linux kernel port:

     commit e1744f50ee9bc1978d41db7cc93bcf30687853e6
     Author: Tomas Jedlicka <tomas.jedlicka@oracle.com>
     Date: Tue Aug 1 09:15:44 2017 -0400
     dtrace: Integrate DTrace Modules into kernel proper
     This changeset integrates DTrace module sources into the main kernel
     source tree under the GPLv2 license. Sources have been moved to
     appropriate locations in the kernel tree.

    That is right, dtrace dropped the CDDL and switched to the GPL!

    The user space code dtrace-utils and libdtrace-ctf (a combination of GPLv2 and UPL) can be found on the DTrace Project Source Control page. The NEWS file mentions the license switch (and that it is built upon elfutils, which I personally was pleased to find out).

    The kernel sources (GPLv2+ for the core kernel and UPL for the uapi) are slightly harder to find because they are inside the uek kernel source tree, but following the above commit you can easily get at the whole linux kernel dtrace directory.

    Update: There is now a dtrace-linux-kernel.git repository with all the dtrace commits rebased on top of recent upstream linux kernels.

    The UPL is the Universal Permissive License, which according to the FSF is a lax, non-copyleft license that is compatible with the GNU GPL.

    Thank you Oracle for making everyone’s life easier by waving your magic relicensing wand!

    Now there is lots of hard work to do to actually properly integrate this. And I am sure there are a lot of technical hurdles when trying to get this upstreamed into the mainline kernel. But that is just hard work. Which we can now start collaborating on in earnest.

    Like systemtap and the Dynamic Probes (dprobes) before it, dtrace is a whole system observability tool combining tracing, profiling and probing/debugging techniques. Something the upstream linux kernel hackers don’t always appreciate when presented as one large system. They prefer having separate small tweaks for tracing, profiling and probing which are mostly separate from each other. It took years for the various hooks, kprobes, uprobes, markers, etc. from systemtap (and other systems) to get upstream. But these days they are. And there is now even a byte code interpreter (eBPF) in the mainline kernel as originally envisioned by dprobes, which systemtap can now target through stapbpf. So with all those techniques now available in the linux kernel it will be exciting to see if dtrace for linux can unite them all.

    i915 driver Panel Self Refresh (PSR) status update

    Posted by Hans de Goede on February 14, 2018 08:57 AM
    Hi All,

    First of all, thank you to everyone who has been sending me PSR test results; I've received well over 100 reports!

    Quite a few testers have reported various issues when enabling PSR; three often-reported issues are:

    • flickering

    • black screen

    • cursor / input lag

    The Intel graphics team has been working on a number of fixes which make PSR work better in various cases. Note we don't expect this to fix it everywhere, but it should get better and work on more devices in the near future.

    This is good news, but the bad news is that this means all the tests people have so very kindly done for me will need to be redone once the new, improved PSR code is ready for testing. I will do a new blogpost (and email people who have sent me test reports) when the new PSR code is ready for people to (re-)test (sorry).



    Wireless@SGx for Fedora and Linux users

    Posted by Harish Pillay 9v1hp on February 14, 2018 03:22 AM

    Eight years ago, I wrote about the use of Wireless@SGx being less than optimal some years ago.

    I must acknowledge that there have been efforts to improve the access (and speeds), to the extent that earlier this week I was able to use a Wireless@SGx hotspot to be on two conference calls using bluejeans.com and zoom.info. It worked so well that for the two hours I was on, there was hardly an issue.

    I tweeted about this and kudos must be sent to those who have laboured to make this work well.

    The one thing I would want the Wireless@SG people to do is to provide a full(er) set of instructions for access including Linux environments (Android is Linux after all).

    I am including a part of my 2010 post here for the configuration aspects (on a Fedora desktop):

    The information is trivial. This is all you need to do:

    	- Network SSID: Wireless@SGx
    	- Security: WPA Enterprise
    	- EAP Type: PEAP
    	- Sub Type: PEAPv0/MSCHAPv2

    and then put in your Wireless@SG username@domain and password. I could not remember my iCell id (I have not used it for a long time) so I created a new one – sgatwireless@icellwireless.net. They needed me to provide my cellphone number to SMS the password. Why do they not provide a web site to retrieve the password?

    Now from the info above, you can set this up on a Fedora machine (would be the same for Red Hat Enterprise Linux, Ubuntu, SuSE etc) as well as any other modern operating system.
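With NetworkManager, those same settings translate into a keyfile along these lines (a sketch, untested; the file would live under /etc/NetworkManager/system-connections/, and the identity and password are placeholders):

```ini
# sketch: Wireless@SGx.nmconnection (values from the list above;
# identity/password are placeholders you must replace)
[connection]
id=Wireless@SGx
type=wifi

[wifi]
ssid=Wireless@SGx
mode=infrastructure

[wifi-security]
key-mgmt=wpa-eap

[802-1x]
eap=peap;
phase2-auth=mschapv2
identity=username@domain
password=yourpassword
```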

    I had to recreate a new ID (it appears that iCell is no longer a provider) and apart from that, everything else is the same.

    Thank you for using our tax dollars well, IMDA.

    Fencing RHV or oVirt nested hypervisors

    Posted by Maxim Burgerhout on February 14, 2018 12:00 AM

    My setup

    I have this nice AMD Ryzen Threadripper with 64GiB of RAM. What I want to do is run RHV 4.1 in virtual machines, and be able to play around with nested virtualization, HA and fencing.

    Problem is that my virtual machines will live on libvirt, and VMs on libvirt do not have IPMI interfaces.

    The solution

    Enter virtualbmc! Stemming from the OpenStack project, virtualbmc is a small Python program that allows you to connect a virtual BMC interface to some of your VMs.

    The way it works is as follows:

    You enable my copr, like njah:

    $ sudo dnf copr enable wzzrd/virtualbmc

    and then you install the actual program, like njah:

    sudo dnf install python2-virtualbmc python-virtualbmc-doc

    Virtualbmc will create a virtual BMC device for each VM you configure it for. If you, like me, want to have all BMC interfaces on the gateway of your virtual network, you need to choose a unique port for each virtual BMC interface. More below.

    Configuring virtualbmc

    Now you have to add virtual BMC interfaces to the VMs you want to control. For this, you need to pass the name of your VM, an address to bind to (optionally), a port to listen on, and a username and password to control the BMC.

    As said, I want all virtual BMCs to live on my virtual network gateway, so I create the virtual BMC devices like njah:

    $ sudo vbmc add rhv-node-01.deployment6.lan --address --port 7001 --username root --password foobar
    $ sudo vbmc add rhv-node-02.deployment6.lan --address --port 7002 --username root --password foobar

    My hypervisor VMs (the so-called L1 machines) are obviously called rhv-node-01.deployment6.lan and rhv-node-02.deployment6.lan.

    Starting the virtual BMCs

    After creating the interfaces, we can see them, like njah:

    $ sudo vbmc list
    |         Domain name         | Status |    Address    | Port |
    | rhv-node-01.deployment6.lan |  down  | | 7001 |
    | rhv-node-02.deployment6.lan |  down  | | 7002 |

    And we can then start them to be used in RHV:

    sudo vbmc start rhv-node-01.deployment6.lan
    sudo vbmc start rhv-node-02.deployment6.lan

    Which gives us njah:

    $ sudo vbmc list
    |         Domain name         |   Status  |    Address    | Port |
    | rhv-node-01.deployment6.lan |  running  | | 7001 |
    | rhv-node-02.deployment6.lan |  running  | | 7002 |
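Before wiring these into RHV, it may be worth sanity-checking a virtual BMC from the host. A sketch (the ipmitool invocation is an assumption, the address is a placeholder for your virtual network gateway, and the port is per node):

```shell
# Hedged sketch: query one virtual BMC over IPMI lanplus; -R/-N keep the
# check quick, and the guard degrades to a message where ipmitool is absent.
BMC_HOST="${BMC_HOST:-127.0.0.1}"   # placeholder: use your gateway address
if command -v ipmitool >/dev/null; then
    out=$(ipmitool -I lanplus -H "$BMC_HOST" -p 7001 -U root -P foobar \
          -R 1 -N 1 power status 2>&1 || true)
    out="${out:-no response}"
else
    out="ipmitool not installed here"
fi
echo "$out"
```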

    Configuring power management for the RHV nodes

    Now, start your VMs, and in RHVM, go into the power management interface for each hypervisor, and do the following:

    • Add a power management interface of type IPMILAN.

    • As the IP, pass the address we used above (or your equivalent)

    • As the username and password, pass the values we used above

    • Now here comes the tricky bit: in the options field, for each hypervisor, add:


    Change the port to reflect the port you have configured for this VM.

    It took me quite a bit of time to figure out the right options here. Hope this saves someone else a bunch of time!

    Happy testing!

    Episode 82 - RSA, TLS, Chrome HTTP, and PCI

    Posted by Open Source Security Podcast on February 13, 2018 11:56 PM
    Josh and Kurt talk about the problems of textbook RSA implementations, the upcoming changes in TLS, and the insecurity of HTTP in Chrome.

    <iframe allowfullscreen="" height="90" mozallowfullscreen="" msallowfullscreen="" oallowfullscreen="" scrolling="no" src="http://html5-player.libsyn.com/embed/episode/id/6257462/height/90/theme/custom/autoplay/no/autonext/no/thumbnail/yes/preload/no/no_addthis/no/direction/backward/render-playlist/no/custom-color/6e6a6a/" style="border: none;" webkitallowfullscreen="" width="100%"></iframe>

    Show Notes