Fedora People

LibreOffice 5.4 allows signing with GnuPG

Posted by Eduardo Villagrán Morales on November 20, 2017 01:00 AM

Fedora 27 was released a few days ago. Among its new features is LibreOffice 5.4, which now allows signing documents with a GnuPG key.

  1. Create a document in LibreOffice (Writer, Calc, Draw, etc.) and save it

  2. In the File menu, select Digital Signatures -> Digital Signatures...

  3. Press Sign …

Python testing: Asserting raw byte output with pytest

Posted by Toshio Kuratomi on November 18, 2017 07:55 PM

The Code to Test

When writing code that can run on both Python2 and Python3, I’ve sometimes found that I need to send and receive bytes to stdout. Here’s some typical code I might write to do that:

# byte_writer.py
import sys
import six

def write_bytes(some_bytes):
    # some_bytes must be a byte string
    if six.PY3:
        stdout = sys.stdout.buffer
    else:
        stdout = sys.stdout
    stdout.write(some_bytes)

if __name__ == '__main__':
    write_bytes(b'\xff')

In this example, my code needs to write a raw byte to stdout. To do this, it uses sys.stdout.buffer on Python3 to circumvent the automatic encoding/decoding that occurs on Python3’s sys.stdout. So far so good. Python2 expects bytes to be written to sys.stdout by default so we can write the byte string directly to sys.stdout in that case.
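To see why that indirection is needed, here is what Python3 does if you hand a byte string to the text-mode sys.stdout directly (a quick illustrative interpreter session):

>>> import sys
>>> sys.stdout.write(b'\xff')
Traceback (most recent call last):
  ...
TypeError: write() argument must be str, not bytes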

The First Attempt: Pytest newb, but willing to learn!

Recently I wanted to write a unittest for some code like that. I had never done this in pytest before so my first try looked a lot like my experience with nose or unittest2: override sys.stdout with an io.BytesIO object and then assert that the right values showed up in sys.stdout:

# test_byte_writer.py
import io
import sys

import mock
import pytest
import six

from byte_writer import write_bytes

@pytest.fixture
def stdout():
    real_stdout = sys.stdout
    fake_stdout = io.BytesIO()
    if six.PY3:
        sys.stdout = mock.MagicMock()
        sys.stdout.buffer = fake_stdout
    else:
        sys.stdout = fake_stdout

    yield fake_stdout

    sys.stdout = real_stdout

def test_write_byte(stdout):
    write_bytes(b'a')
    assert stdout.getvalue() == b'a'

This gave me an error:

[pts/38@roan /var/tmp/py3_coverage]$ pytest              (07:46:36)
_________________________ test_write_byte __________________________

stdout = 

    def test_write_byte(stdout):
        write_bytes(b'a')
>       assert stdout.getvalue() == b'a'
E       AssertionError: assert b'' == b'a'
E         Right contains more items, first extra item: 97
E         Use -v to get the full diff

test_byte_writer.py:27: AssertionError
----------------------- Captured stdout call -----------------------
a
===================== 1 failed in 0.03 seconds =====================

I could plainly see from pytest’s “Captured stdout” output that my test value had been printed to stdout. So it appeared that my stdout fixture just wasn’t capturing what was printed there. What could be the problem? Hmmm…. Captured stdout… If pytest is capturing stdout, then perhaps my fixture is getting overridden by pytest’s internal facility. Let’s google and see if there’s a solution.

The Second Attempt: Hey, that’s really neat!

Wow, not only did I find that there is a way to capture stdout with pytest, I found that you don’t even have to write your own fixture to do it: you can just hook into pytest’s builtin capfd fixture. Cool, that should be much simpler:

# test_byte_writer.py
import io
import sys

from byte_writer import write_bytes

def test_write_byte(capfd):
    write_bytes(b'a')
    out, err = capfd.readouterr()
    assert out == b'a'

Okay, that works fine on Python2 but on Python3 it gives:

[pts/38@roan /var/tmp/py3_coverage]$ pytest              (07:46:41)
_________________________ test_write_byte __________________________

capfd = 

    def test_write_byte(capfd):
        write_bytes(b'a')
        out, err = capfd.readouterr()
>       assert out == b'a'
E       AssertionError: assert 'a' == b'a'

test_byte_writer.py:10: AssertionError
===================== 1 failed in 0.02 seconds =====================

The assert looks innocuous enough. So if I were an insufficiently paranoid person I might be tempted to think that this was just stdout using Python native string types (bytes on Python2 and text on Python3), so the solution would be to use a native string here ("a" instead of b"a"). However, where the correctness of anyone else’s bytes <=> text string code is concerned, I subscribe to the philosophy that you can never be too paranoid. So….

The Third Attempt: I bet I can break this more!

Rather than make the seemingly easy fix of switching the test expectation from b"a" to "a" I decided that I should test whether some harder test data would break either pytest or my code. Now my code is intended to push bytes out to stdout even if those bytes are non-decodable in the user’s selected encoding. On modern UNIX systems this is usually controlled by the user’s locale. And most of the time, the locale setting specifies a UTF-8 compatible encoding. With that in mind, what happens when I pass a byte string that is not legal in UTF-8 to write_bytes() in my test function?

# test_byte_writer.py
import io
import sys

from byte_writer import write_bytes

def test_write_byte(capfd):
    write_bytes(b'\xff')
    out, err = capfd.readouterr()
    assert out == b'\xff'

Here I adapted the test function to attempt writing the byte 0xff (255) to stdout. In UTF-8, this is an illegal byte (ie: by itself, that byte cannot be mapped to any unicode code point) which makes it good for testing this. (If you want to make a truly robust unittest, you should probably standardize on the locale settings (and hence, the encoding) to use when running the tests. However, that deserves a blog post of its own.) Anyone want to guess what happens when I run this test?

[pts/38@roan /var/tmp/py3_coverage]$ pytest              (08:19:52)
_________________________ test_write_byte __________________________

capfd = 

    def test_write_byte(capfd):
        write_bytes(b'\xff')
        out, err = capfd.readouterr()
>       assert out == b'\xff'
E       AssertionError: assert '�' == b'\xff'

test_byte_writer.py:10: AssertionError
===================== 1 failed in 0.02 seconds =====================

On Python3, we see that the undecodable byte is replaced with the unicode replacement character. Pytest is likely running the equivalent of b"Byte string".decode(errors="replace") on stdout. This is good when capfd is used to display the Captured stdout call information to the console. Unfortunately, it is not what we need when we want to check that our exact byte string was emitted to stdout.
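You can reproduce that replacement behavior in a Python3 interpreter. This is only an illustration of the equivalent decode call, not pytest’s actual code:

>>> b'\xff'.decode('utf-8')
Traceback (most recent call last):
  ...
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte
>>> b'\xff'.decode('utf-8', errors='replace')
'\ufffd'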

With this change, it also becomes apparent that the test isn’t doing the right thing on Python2 either:

[pts/38@roan /var/tmp/py3_coverage]$ pytest-2            (08:59:37)
_________________________ test_write_byte __________________________

capfd = 

    def test_write_byte(capfd):
        write_bytes(b'\xff')
        out, err = capfd.readouterr()
>       assert out == b'\xff'
E       AssertionError: assert '�' == '\xff'
E         - �
E         + \xff

test_byte_writer.py:10: AssertionError
========================= warnings summary =========================
test_byte_writer.py::test_write_byte
  /var/tmp/py3_coverage/test_byte_writer.py:10: UnicodeWarning: Unicode equal comparison failed to convert both arguments to Unicode - interpreting them as being unequal
    assert out == b'\xff'

-- Docs: http://doc.pytest.org/en/latest/warnings.html
=============== 1 failed, 1 warnings in 0.02 seconds ===============

In the previous version, this test passed. Now we see that the test was passing because Python2 evaluates u"a" == b"a" as True. However, that’s not really what we want to test; we want to test that the byte string we passed to write_bytes() is the actual byte string that was emitted on stdout. The new data shows that instead, the test is converting the value that got to stdout into a text string and then trying to compare that. So a fix is needed on both Python2 and Python3.
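That Python2 quirk is easy to see in the two interpreters side by side (illustrative sessions):

>>> u'a' == b'a'    # Python2
True
>>> u'a' == b'a'    # Python3
False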

These problems are down in the guts of pytest. How are we going to fix them? Will we have to seek out a different strategy that lets us capture stdout, overriding pytest’s builtin?

The Fourth Attempt: Fortuitous Timing!

Well, as it turns out, the pytest maintainers merged a pull request four days ago which implements a capfdbinary fixture. capfdbinary is like the capfd fixture that I was using in the above example but returns data as byte strings instead of as text strings. Let’s install it and see what happens:

$ pip install --user git+git://github.com/pytest-dev/pytest.git@6161bcff6e3f07359c94a7be52ad32ecb8822142
$ mv ~/.local/bin/pytest ~/.local/bin/pytest-2
$ pip3 install --user git+git://github.com/pytest-dev/pytest.git@6161bcff6e3f07359c94a7be52ad32ecb8822142

And then update the test to use capfdbinary instead of capfd:

# test_byte_writer.py
import io
import sys

from byte_writer import write_bytes

def test_write_byte(capfdbinary):
    write_bytes(b'\xff')
    out, err = capfdbinary.readouterr()
    assert out == b'\xff'

And with those changes, the tests now pass:

[pts/38@roan /var/tmp/py3_coverage]$ pytest              (11:42:06)
======================= test session starts ========================
platform linux -- Python 3.5.4, pytest-3.2.5.dev194+ng6161bcf, py-1.4.34, pluggy-0.5.2
rootdir: /var/tmp/py3_coverage, inifile:
plugins: xdist-1.15.0, mock-1.5.0, cov-2.4.0, asyncio-0.5.0
collected 1 item                                                    

test_byte_writer.py .

===================== 1 passed in 0.01 seconds =====================

Yay! Mission accomplished.


Emoji input with ibus-uniemoji

Posted by Fedora-Blog.de on November 17, 2017 06:44 PM

If you want to work with emojis quickly and easily, you don't necessarily have to use the Typing Booster keyboard layouts; you can just as well use ibus-uniemoji.

To do so, of course, the corresponding package must first be installed with

su -c'dnf install ibus-uniemoji'

Afterwards, the iBus daemon must be restarted so that ibus-uniemoji is recognized. This is easily done with

ibus restart

ibus-uniemoji can then be added in the iBus settings under "Input Method". To do this, in the add dialog, first click the three dots at the end of the list and then select uniemoji under "Other".

If you use GNOME, open Settings, go to "Region & Language", and under "Input Sources" first click the three dots at the end of the list, then select the input source "Other (uniemoji)" under "More"; alternatively, simply type "emoji" into the search field.

To insert an emoji into a text field, first switch the input source to uniemoji. Then simply type the name of the emoji followed by Enter. For example, "light" + Enter for 💡.

Great new changes coming to nbdkit

Posted by Richard W.M. Jones on November 17, 2017 02:15 PM

Eric Blake has been doing some great stuff for nbdkit, the flexible plugin-based NBD server.

  • Full parallel request handling.
    You’ve always been able to tell nbdkit that your plugin can handle multiple requests in parallel from a single client, but until now that didn’t actually do anything (only parallel requests from multiple clients worked).
  • An NBD forwarding plugin, so if you have another NBD server which doesn’t support a feature like encryption or new-style protocol, then you can front that server with nbdkit which does.

As well as that, he’s fixed lots of small NBD protocol compliance bugs, so hopefully we’re now much closer to the protocol spec (we always check that we interoperate with qemu’s nbd client, but it’s nice to know that we’re also complying with the spec). He also fixed a potential DoS where nbdkit would try to handle very large writes, which would delay a thread in the server indefinitely.


Also this week, I wrote an nbdkit plugin for handling the weird Xen XVA file format. The whole thread is worth reading because 3 people came up with 3 unique solutions to this problem.


Fedora 27: changes in httpd and php

Posted by Remi Collet on November 17, 2017 08:42 AM

The Apache HTTP server and PHP configuration have changed in Fedora 27; here are some explanations.

1. Switching the Apache HTTP server to event mode

Since the first days of the distribution, the servers have used the prefork MPM.

For obvious performance reasons, we chose to follow the upstream project's recommendations and use the event MPM by default.

This change is also required to take full advantage of the features of the HTTP/2 protocol via mod_http2.

2. The problem of mod_php

The mod_php module is only supported when the prefork MPM is used.

In the PHP documentation, we can read:

Warning: We do not recommend using a threaded MPM in production with Apache 2.

And, indeed, we already have some bug reports about crashes in this configuration.

So it doesn't make sense to keep mod_php by default.

Furthermore, this module has some annoying limitations:

  • being integrated into the web server, it shares the server's memory, which may have negative security impacts
  • only a single version of the module can be loaded

3. Using FastCGI

For many years, we have been working to make PHP execution as flexible as possible, using various combinations, without configuration changes:

  • httpd + mod_php
  • httpd + php-fpm (when mod_php is disabled or missing and with a running php-fpm server)
  • nginx + php-fpm

The FPM way has become the recommended default configuration for safe PHP execution:

  • support of multiple web servers (httpd, nginx, lighttpd)
  • frontend isolation for security
  • multiple backends
  • micro-services architecture
  • containers (docker)
  • multiple versions of PHP

4. FPM by default

As of Fedora 27, mod_php ZTS (multi-threaded) is still provided but disabled, so FastCGI is now used by default.

To avoid breaking existing configurations during the distribution upgrade, and to have a working server after installation, we chose to implement some solutions, probably temporary ones:

  • the php package has an optional dependency on the php-fpm package, so it is now installed by default
  • the httpd service has a dependency on the php-fpm service, so it is started automatically

5. Known issues

5.1. Configuration change

After a configuration change, or after a new extension installation, it is now required to restart the php-fpm service.

5.2. Configuration files

With mod_php, it is common to use the php_value or php_flag directives in the Apache HTTP server configuration or in some .htaccess file.

It is now required to use the php_value or php_flag directives in the FPM pool configuration file, or to use some .user.ini file in the application directory.
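For example, a hypothetical per-pool override might look like this (the file name and values are illustrative; the php_value[]/php_flag[] syntax is the FPM pool counterpart of the old Apache directives):

; illustrative settings in an FPM pool file such as /etc/php-fpm.d/www.conf
php_value[memory_limit] = 256M
php_flag[display_errors] = off
; php_admin_value/php_admin_flag set values that applications cannot override
php_admin_value[error_log] = /var/log/php-fpm/www-error.log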

6. Switching back to mod_php

If you really want to keep using mod_php (temporarily), this is still possible, in either of two ways:

  • Switch back to prefork MPM in the /etc/httpd/conf.modules.d/00-mpm.conf file
 LoadModule mpm_prefork_module modules/mod_mpm_prefork.so
 #LoadModule mpm_worker_module modules/mod_mpm_worker.so
 #LoadModule mpm_event_module modules/mod_mpm_event.so
  • Enable the module in the /etc/httpd/conf.modules.d/15-php.conf file. Warning: this configuration is not supported, and no bug reports will be accepted.
 # ZTS module is not supported, so FPM is preferred
 LoadModule php7_module modules/libphp7-zts.so

After this change, the php-fpm package can be removed.
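Either way, you can verify which MPM the server actually runs with a quick check (assuming the httpd binary is in your path):

httpd -V | grep -i mpm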

7. Conclusion

Fedora 27 now uses a modern configuration, matching the upstream projects' recommendations. Security and performance are improved.

Any change may raise some small issues, and a lot of gnashing of teeth, but we will try to take care of any difficulties and improve what needs improving in the next updates, or in the next Fedora versions.

I plan to update this entry according to feedback.

Using a Yubikey for SSH

Posted by Eduardo Villagrán Morales on November 17, 2017 01:20 AM

In this post we will see how to use the Yubikey we have configured with a GPG key to log in via SSH.

Preparing the system

  1. Prepare gpg-agent to support SSH. We must create the file $HOME/.gnupg/gpg-agent.conf with the following content:

    enable-ssh-support
    pinentry-program /usr/bin/pinentry
    default-cache-ttl 600
    default-cache-ttl-ssh 1800 …

News about the new Fedora 27.

Posted by mythcat on November 16, 2017 08:28 PM
Today I did a simple installation of Fedora 26, which went without problems.
After that, the software updater came with the option to upgrade all of the system software to the new Fedora 27.
The Fedora team deserves appreciation for the work done to create this new version.
Those who have been following the design team over the course of Fedora's development will see evolution and quality in the new Fedora design.
Most of the software I use installed without errors, and everything seems to work very well.
Here are some screenshots taken during the installation process:

My Fedora (linux) box upgraded to 27

Posted by Luigi Votta on November 16, 2017 07:08 PM
Today, a few days after the new Fedora release (27), I used the CLI to upgrade my PC. But I made a little mistake: I forgot to check free disk space.
So while
"sudo dnf system-upgrade download --releasever=27"

was downloading, at some point the system warned that free disk space was at a minimum (~1.1 GB). So during the process I deleted some files, obtaining about 8 GB free.
Nevertheless, at the end of the command many packages were marked as not installable because of insufficient space.
Tried
"sudo dnf system-upgrade reboot" 

and the system rebooted twice, but without upgrading. Re-launching the previous dnf system-upgrade download command resulted in packages being skipped as already downloaded (but incomplete).
So after manually removing /var/cache/PackageKit/27 and repeating the process, this time everything went fine.

Check free disk space, please! (Perhaps in the GUI this doesn't happen!?)

Thank you all!

Fun with Le Potato

Posted by Laura Abbott on November 16, 2017 07:00 PM

At Linux Plumbers, I ended up with a Le Potato SBC. I hadn't really had time to actually boot it up until now. They support a couple of distributions which seem to work fine if you flash them on. I mostly like SBCs for having actual hardware to test on so my interest tends to be how easily can I get my own kernel running.

Most of the support is not upstream right now but it's headed there. The good folks at BayLibre have been working on getting the kernel support upstream and have a tree available for use until then.

The bootloader situation is currently less than ideal. All the images run with the vendor-provided u-Boot, which is a few years out of date and carries a bunch of out-of-tree patches. This is unfortunately common for many boards. There wasn't much information about u-Boot, so I asked on the forums. I got a very prompt and helpful response that u-Boot upstreaming is also in progress. The first series looks like it's been reviewed and also comes with a very detailed README on how to actually build and install. This is important because you have to do some work to actually pick up the vendor firmware ('libre').

So here's roughly what I did to get my own code running. I'll note that this is just for something to output on serial. Make sure you have an SD card handy:

  • Download mainline u-Boot
  • Apply the base series
  • Follow the instructions in the README for compiling the base u-Boot ("u-boot compilation" section). I should note that I didn't feel like grabbing a bare metal toolchain so I just used the package Fedora provides for cross compilation. (CROSS_COMPILE=aarch64-linux-gnu-) YMMV.

The "Image creation" steps have a few gotchas, which I'll summarize:

  • wget the 4.8 toolchains. Before I asked on the forums about u-boot, I experimented with compiling the u-boot from the BSP with a newer toolchain. This was a bit of a nightmare so I just went ahead and used their suggested toolchain.
  • The toolchains are 32-bit binaries so you need to install 32-bit libs (dnf install glibc.i686 libstdc++.i686 zlib.i686)
  • The vendor u-boot expects the toolchains to be in your path so set them accordingly.
  • Clone the vendor u-boot
  • Compile the 'vendor u-boot' (make gxl_p212_v1_defconfig && make)
  • Go back to your mainline u-Boot.
  • Run all the commands up to the dd commands (I put them in a shell script). Note that the line with acs_tool.pyc needs to be prefixed with python.
  • Run the dd command, setting the dev as appropriate.

You now have an SD card with u-boot and firmware on it. Of course you still need a kernel.

  • Clone the tree
  • make ARCH=arm64 defconfig
  • For a rootfs, I set CONFIG_INITRAMFS_SOURCE to point to my buildroot environment I use with QEMU.
  • make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu
  • Based on comments on the forums, I converted the kernel to a uImage which u-boot understands:

    path/to/u-boot/tools/mkimage -A arm64 -O linux -C none -T kernel -a 0x01080000 -e 0x01080000 -d path/to/kernel/arch/arm64/boot/Image uImage

Fedora does provide mkimage in the uboot-tools package, but given that we're compiling u-boot anyway, I went ahead and used the binary from that build.

  • Insert the sdcard with u-boot to your computer and mount it (first partition should be FAT16)
  • Copy the uImage and arch/arm64/boot/dts/amlogic/meson-gxl-s905x-libretech-cc.dtb to the SD card

Your SD card should now have the kernel and devicetree on it. If all has gone well, you should be able to insert it and get to the u-boot prompt. Based on the comments on the forums, I did

  • loadaddr=1080000
  • fdt_high=0x20000000
  • fatload mmc 1:1 ${loadaddr} uImage
  • fatload mmc 1:1 ${fdt_high} meson-gxl-s905x-libretech-cc.dtb
  • bootm ${loadaddr} - ${fdt_high};

And it worked. Obviously this is a pretty simple setup, but it shows that you can get something custom going if you want. I might try and throw in a version of Fedora on there to experiment with the multimedia hardware. I doubt this board will get official support unless the u-boot firmware situation improves.

On Password Management: Firefox Yes, Chrome No

Posted by Adam Young on November 16, 2017 03:51 PM

Summary: Firefox allows me to store passwords locally, encrypted, and password protected. Chrome wants to store passwords on line, which is not acceptable.

A recent, noticeable slowdown in Firefox (since alleviated) caused me to move more of my work over to Chrome. An essential part of this includes logging in to sites that I have password protected. My usual way to make a password is to run

uuidgen -r

And copy the result into the password field.  Since the output looks like this:

df413d28-957d-4d17-88f4-39c08b6eb81c

You can imagine that I don’t want to type that in every single time.
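Instead, I pipe the output straight into the clipboard; a small convenience, assuming the xclip utility is installed:

uuidgen -r | xclip -selection clipboard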

And, no, I did not just give you my password.  From the uuidgen man page:

There are two types of UUIDs which uuidgen can generate: time-based UUIDs and random-based UUIDs. By default uuidgen will generate a random-based UUID if a high-quality random number generator is present. Otherwise, it will choose a time-based UUID. It is possible to force the generation of one of these two UUID types by using the -r or -t options.

OPTIONS
-r, --random
Generate a random-based UUID. This method creates a UUID consisting mostly of random bits. It requires that the operating system have a high quality random number generator, such as /dev/random.

When I use Firefox, I store these in my local store.  To be precise, Firefox manages an NSS database.  This is a local, encrypted store that can be password protected.  I make use of that password protection.  That is the essential feature.

My local password to access my NSS Database is long, unique, and took me a long time to memorize, and I do not want it shared on other systems.  It is only used local to my primary work station.  Not a perfect setup, but better than the defaults.

I looked for a comparable feature in Chromium and found this under Manage Passwords:

Auto Sign-in
Automatically sign in to websites using stored credentials. If disabled, you will be asked for confirmation every time before signing in to a website.
Access your passwords from any device at passwords.google.com

So, no.  I do not want to share my password data with a third party.  I do not trust Google any more than I have to.  I do not want to send my password data over the wire for storage.

It is a constant battle to limit exposure.  Social media is a huge part of my life, and I don’t really like that.  I want to keep it as much under my own control as possible, or to split my exposure among different players.  Google owes me nothing more than my Google Play Music, as that is all I have paid them for.  Everything else they use for their own means, and I know and accept that.  But it means I have no reason to trust them, or any other social site.  I do not use off-machine password managers.  If I did, it would be one I installed and ran myself on a system that I paid for.

Not Google.


Security is from Mars, Developers are from Venus… or ARE they?

Posted by Red Hat Security on November 16, 2017 03:00 PM

It is a tale as old as time. Developers and security personnel view each other with suspicion. The perception is that a vast gulf of understanding and ability lies between the two camps. “They can’t possibly understand what it is to do my job!” is a surprisingly common statement tossed about. Both groups blame the other for being the source of all of their ills. It has been well-known that fixing security bugs early in the development lifecycle not only helps eliminate exposure to potential vulnerabilities, but it also saves time, effort, and money. Once a defect escapes into production it can be very costly to remediate.

Years of siloing and specialization have driven deep wedges between these two critical groups. Both teams have the same goal: to enable the business. They just take slightly different paths to get there and have different expertise and focus. In the last few decades we’ve all been forced to work more closely together, with movements like Agile reminding everyone that we’re all ultimately there to serve the business and the best interest of our customers. Today, with the overwhelming drive to move to a DevOps model, to get features and functionality out into the hands of our customers faster, we must work better together to make the whole organization succeed.

Through this DevOps shift in mindset (Development and Operations working more closely on building, deploying, and maintaining software), both groups have influenced each other’s thinking. Security has started to embrace the benefits of things like iterative releases and continuous deployments, while our coder-counterparts have expanded their test-driven development methods to include more automation of security test cases and have become more mindful of things like the OWASP Top 10 (the Open Web Application Security Project). We are truly on the brink of a DevSecOps arena where we can have fruitful collaboration from the groups that are behind the engine that drives our respective companies. Those that can embrace this exciting new world are poised to reap the benefits.

Red Hat Product Security is pleased to partner with our friends over in the Red Hat Developer Program. Our peers there are driving innovation in the open source development communities and bringing open source to a new generation of software engineers. It is breathtaking to see the collaboration and ideas that are emerging in this space. We’re also equally pleased that security is not just an afterthought for them. Developing and composing software that considers “security by design” from the earliest stages of the development lifecycle helps projects move faster while delivering innovative and secure solutions. They have recently kicked-off a new site topic that focuses on secure programing and we expect it to be a great resource within the development community: Secure Programming at the Red Hat Developer Program.

In this dedicated space of our developer portal you’ll find a wealth of resources to help coders code with security in mind. You’ll find blogs from noted luminaries. You’ll find defensive coding guides, and other technical materials that will explain how to avoid common coding flaws that could develop into future software vulnerabilities. You’ll also be able to directly engage with Red Hat Developers and other open source communities. This is a great time to establish that partnership and “reach across the aisle” to each other. So whether you are interested in being a better software engineer and writing more secure code, or are looking to advocate for these techniques, Red Hat has a fantastic set of resources to help guide you toward a more secure future!


Sending netdata metrics through syslog-ng to Elasticsearch

Posted by Peter Czanik on November 16, 2017 09:01 AM

netdata is a system for distributed real-time performance and health monitoring. You can use syslog-ng to collect and filter data provided by netdata and then send it to Elasticsearch for long-term storage and analysis. The aim is to send both metrics and logs to an Elasticsearch instance, and then access it via Kibana. You could also use Grafana for visualization, but that is not covered in this blog post.

I would like to thank Fabien Wernli for his help in writing this HowTo.

Before you begin

This workflow uses two servers. Server A is the application server where metrics and logs are collected and sent to Server B, which hosts the Elasticsearch and Kibana instances. I use CentOS 7 in my examples but steps should be fairly similar on other platforms, even if package management commands and repository names are different.


In the example command lines and configurations, servers will be referred to by the names “servera” and “serverb”. Replace them with their IP addresses or real host names to reflect your environment.

Installation of applications

First we install all necessary applications. Once all components are up and running, we will configure them to work nicely together.

Installation of Server A

Server A runs netdata and syslog-ng. As netdata is quite a new product and develops quickly, it is not yet available in official distribution package repositories. There is a pre-built generic binary available, but installing from source is easy.

  1. Install a few development-related packages:
yum install autoconf automake curl gcc git libmnl-devel libuuid-devel lm_sensors make MySQL-python nc pkgconfig python python-psycopg2 PyYAML zlib-devel
  2. Clone the netdata git repository:
git clone https://github.com/firehol/netdata.git --depth=1
  3. Change to the netdata directory, and start the installation script as root:
cd netdata
./netdata-installer.sh
  4. When prompted, hit Enter to continue.

The installer script not only compiles netdata but also starts it and configures systemd to start netdata automatically. When installation completes, the installer script also prints information about how to access, stop, or uninstall the application.

By default, the web server of netdata listens on all interfaces on port 19999. You can test it at http://servera:19999/.

For other platforms or up-to-date instructions on future netdata versions, check https://github.com/firehol/netdata/wiki/Installation#1-prepare-your-system.

  5. Once netdata is up and running, install syslog-ng.

Enable EPEL and add a repository with the latest syslog-ng version, with Elasticsearch support enabled:

yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
cat <<EOF | tee /etc/yum.repos.d/czanik-syslog-ng312-epel-7.repo
[czanik-syslog-ng312]
name=Copr repo for syslog-ng312 owned by czanik
baseurl=https://copr-be.cloud.fedoraproject.org/results/czanik/syslog-ng312/epel-7-x86_64/
type=rpm-md
skip_if_unavailable=True
gpgcheck=1
gpgkey=https://copr-be.cloud.fedoraproject.org/results/czanik/syslog-ng312/pubkey.gpg
repo_gpgcheck=0
enabled=1
enabled_metadata=1
EOF

6. Install syslog-ng and Java modules:

# yum install syslog-ng syslog-ng-java

7. Make sure that libjvm.so is available to syslog-ng (for additional information, check: https://www.balabit.com/blog/troubleshooting-java-support-syslog-ng/):

echo /usr/lib/jvm/jre/lib/amd64/server > /etc/ld.so.conf.d/java.conf
ldconfig

8. Disable rsyslog, then enable and start syslog-ng:

systemctl stop rsyslog
systemctl enable syslog-ng
systemctl start syslog-ng

9. Once you are ready, you can check whether syslog-ng is up and running by sending it a log message and reading it back:

logger bla
tail -1 /var/log/messages

You should see a similar line on screen:

Nov 14 06:07:24 localhost.localdomain root[39494]: bla

Installation of Server B

Server B runs Elasticsearch and Kibana.

  1. Install JRE:
yum install java-1.8.0-openjdk.x86_64
  2. Add the Elasticsearch repository:
cat <<EOF | tee /etc/yum.repos.d/elasticsearch.repo
[elasticsearch-5.x]
name=Elasticsearch repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
EOF
  3. Install, enable, and start Elasticsearch:
yum install elasticsearch
systemctl enable elasticsearch.service
systemctl start elasticsearch.service
  4. Install, enable, and start Kibana:
yum install kibana
systemctl enable kibana.service
systemctl start kibana.service

Configuring applications

Once we have installed all the applications, we can start configuring them.

Configuring Server A

  1. First, configure syslog-ng. Replace its original configuration in /etc/syslog-ng/syslog-ng.conf with the following configuration:
@version:3.12
@include "scl.conf"

source s_system {
    system();
    internal();
};

source s_netdata {
  network(
    transport(tcp)
    port(1234)
    flags(no-parse)
    tags(netdata)
  );
};

parser p_netdata {
  json-parser(
    prefix("netdata.")
  );
};

filter f_netdata {
  match("users", value("netdata.chart_type"));
};

destination d_elastic {
  elasticsearch2 (
    cluster("elasticsearch")
    client_mode("http")
    index("syslog-${YEAR}.${MONTH}")
    time-zone(UTC)
    type("syslog")
    flush-limit(1)
    server("serverB")
    template("$(format-json --scope rfc5424 --scope nv-pairs --exclude DATE --key ISODATE)")
    persist-name(elasticsearch-syslog)
  )
};

destination d_elastic_netdata {
  elasticsearch2 (
    cluster("syslog-ng")
    client_mode("http")
    index("netdata-${YEAR}.${MONTH}.${DAY}")
    time-zone(UTC)
    type("netdata")
    flush-limit(512)
    server("serverB")
    template("${MSG}")
    persist-name(elasticsearch-netdata)
  )
};

log {
  source(s_netdata);
  parser(p_netdata);
  filter(f_netdata);
  destination(d_elastic_netdata);
};

log {
    source(s_system);
    destination(d_elastic);
};

This configuration sends netdata metrics and also all syslog messages to Elasticsearch directly.

  2. If you want to collect some of the logs locally as well, keep the relevant parts of the original configuration, or write your own rules.
  3. You need to change the server name from ServerB to something matching your environment. The f_netdata filter shows one possible way of filtering netdata metrics before storing them in Elasticsearch. Adapt it to your environment.
  4. Next, configure netdata. Open your configuration (/etc/netdata/netdata.conf), and replace the [backend] statement with the following snippet:
[backend]
        enabled = yes
        type = json
        destination = localhost:1234
        data source = average
        prefix = netdata
        update every = 10
        buffer on failures = 10
        timeout ms = 20000
        send charts matching = *

The “send charts matching” setting here serves a similar role to “f_netdata” in the syslog-ng configuration. You can use either of them, but syslog-ng provides more flexibility.
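For instance, to forward only CPU and user metrics instead of everything, you could narrow the pattern; an illustrative value (check the netdata backend documentation for the exact matching syntax):

        send charts matching = system.cpu* users.*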

  5. Finally, restart both netdata and syslog-ng so the configurations take effect. Note that if you used the above configuration, you will no longer see logs arriving in local files. You can check your logs once the Elasticsearch server part is configured.

Configuring Server B

Elasticsearch controls how data is stored and indexed using index templates. The following two templates will ensure netdata and syslog data have the correct settings.

  1. First, copy and save them to files on Server B:
  • netdata-template.json:
{
    "order" : 0,
    "template" : "netdata-*",
    "settings" : {
      "index" : {
        "query": {
          "default_field": "_all"
        },
        "number_of_shards" : "1",
        "number_of_replicas" : "0"
      }
    },
    "mappings" : {
      "netdata" : {
        "_source" : {
          "enabled" : true
        },
        "dynamic_templates": [
          {
            "string_fields": {
              "mapping": {
                "type": "keyword",
                "doc_values": true
              },
              "match_mapping_type": "string",
              "match": "*"
            }
          }
        ],
        "properties" : {
          "timestamp" : {
            "format" : "epoch_second",
            "type" : "date"
          },
          "value" : {
            "index" : true,
            "type" : "double",
            "doc_values" : true
          }
        }
      }
    },
    "aliases" : { }
}
  • syslog-template.json:
{
    "order": 0,
    "template": "syslog-*",
    "settings": {
      "index": {
        "query": {
          "default_field": "MESSAGE"
        },
        "number_of_shards": "1",
        "number_of_replicas": "0"
      }
    },
    "mappings": {
      "syslog": {
        "_source": {
          "enabled": true
        },
        "dynamic_templates": [
          {
            "string_fields": {
              "mapping": {
                "type": "keyword",
                "doc_values": true
              },
              "match_mapping_type": "string",
              "match": "*"
            }
          }
        ],
        "properties": {
          "MESSAGE": {
            "type": "text",
            "index": "true"
          }
        }
      }
    },
    "aliases": {}
}

2. Once you have saved them, you can use the REST API to push them to Elasticsearch:

curl -XPUT 0:9200/_template/netdata -d@netdata-template.json
curl -XPUT 0:9200/_template/syslog  -d@syslog-template.json
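To verify that Elasticsearch accepted them, you can read the templates back (a quick sanity check):

curl 0:9200/_template/netdata?pretty
curl 0:9200/_template/syslog?pretty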


3. You can now edit your Elasticsearch configuration file and enable binding to an external interface so it can receive data from syslog-ng. Open /etc/elasticsearch/elasticsearch.yml and set the network.host parameter:

network.host:
  - [serverB_IP]
  - 127.0.0.1

Of course, replace [serverB_IP] with the actual IP address.

4. Restart Elasticsearch so the configuration takes effect.

5. Finally, edit your Kibana configuration (/etc/kibana/kibana.yml), and append the following few lines to the file:

server.port: 5601
server.host: "[serverB_IP]"
server.name: "A_GREAT_TITLE_FOR_MY_LOGS"
elasticsearch.url: "http://127.0.0.1:9200"

As usual, replace [serverB_IP] with the actual IP address.

6. Restart Kibana so the configuration takes effect.

Testing

You should now be able to log in to Kibana on port 5601 of Server B. You should set up your indexes on first use, and then you are ready to query your logs. If it does not work, here is a list of possible problems:

  • “serverb” has not been rewritten to the proper IP address in configurations.
  • SELinux is running (for testing, “setenforce 0” is enough, but for production, make sure that SELinux is properly configured).
  • The firewall is blocking network traffic.

Further reading

I gave here only minimal instructions to get started with netdata, syslog-ng, and Elasticsearch. You can learn more on their respective documentation pages.

If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or you can even chat with us. For a long list of possibilities, check our contact page at https://syslog-ng.org/contact-us/. On Twitter, I am available as @PCzanik.

The post Sending netdata metrics through syslog-ng to Elasticsearch appeared first on Balabit Blog.

Fedora 27 Atomic Host is available on multiple architectures

Posted by Fedora Magazine on November 16, 2017 08:00 AM

The Fedora 27 Atomic Host now supports multiple architectures! Along with the x86_64 architecture, Atomic Host is also available on 64-bit ARM (aarch64) and PowerPC Little Endian (ppc64le). Both aarch64 and ppc64le architectures receive Atomic OSTree updates in the same way x86_64 does.

Atomic Host provides an immutable operating system tree (OSTree) with a rollback facility. To learn more, see the Project Atomic website.

Download the Fedora 27 Atomic Host

Fedora Atomic Host for aarch64 and ppc64le is available in both ISO and cloud image formats.

Installation

You can install Fedora 27 Atomic Host where you want it — on bare metal, in a virtual machine, or in the cloud. If you have some aarch64 or ppc64le hardware, take Atomic Host for a spin. One quick way is to run virt-install on one of these platforms.

The virt-install command line tool creates a virtual machine based on the QEMU emulator, with KVM hardware acceleration managed by libvirt on top. To get started using virt-install in graphical or console mode, install the virtualization group of packages using sudo:

sudo dnf install @virtualization

Next, start the libvirtd service if not already running:

sudo systemctl start libvirtd

Now use virt-install to run the Atomic Host image:

sudo virt-install --name fedora-27-atomic --ram 2048 --vcpus 2 --disk path=/var/lib/libvirt/images/Fedora-Atomic-27..qcow2 --os-type linux --os-variant fedora26 --network bridge=virbr0 --graphics vnc,listen=127.0.0.1,port=5901 --cdrom /var/lib/libvirt/images/init.iso --noautoconsole

The example assumes the Fedora Atomic qcow2 image and init.iso file are both available in the /var/lib/libvirt/images/ directory. The file init.iso is a metadata ISO that provides critical data when Atomic Host boots. Check out this article on the Project Atomic website to create it. You might need to make other adjustments to the virt-install command line above, depending on your requirements.
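If you do not have an init.iso yet, a minimal cloud-init NoCloud seed can be built roughly like this (a sketch: the user-data/meta-data contents and the genisoimage invocation follow the usual NoCloud conventions, but see the linked article for the authoritative steps):

# minimal cloud-init files; contents are illustrative
cat > meta-data <<EOF
instance-id: atomic-host-001
local-hostname: atomic-host
EOF
cat > user-data <<EOF
#cloud-config
password: atomic
ssh_pwauth: True
EOF
# the 'cidata' volume label is what cloud-init's NoCloud data source looks for
genisoimage -output init.iso -volid cidata -joliet -rock user-data meta-data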

Limitations with aarch64

64-bit ARM is available for a variety of hardware. Installation should work as expected on server class hardware. However:

  • On aarch64 machines, the Atomic Host ISO can be installed only in UEFI mode.
  • Installation is not yet supported on single board computers, such as the RPi3 or Pine64.

Features

Atomic Host provides various features such as:

  • Immutable Host
  • Package Layering – Install additional package(s) on top of base Atomic Host using package layering:
    sudo rpm-ostree install elfutils
  • Running containers – Run a container image available locally or from a registry. For example, running a hello-world container image from the docker.io registry on a ppc64le machine:
    sudo docker run --rm docker.io/ppc64le/hello-world
  • Atomic Update and Rollback – Atomic Host receives updates in the form of a new Atomic OSTree:
    sudo rpm-ostree upgrade

    Since each operation is atomic, it is also possible to undo changes with the rollback feature:

    sudo rpm-ostree rollback

Getting help

Go ahead and try it out! For any queries, contact the Atomic team.

Cockpit 156

Posted by Cockpit Project on November 16, 2017 08:00 AM

Cockpit is the modern Linux admin interface. We release regularly. Here are the release notes from version 156.

Redesign main navigation and support mobile browsing

The top-level dashboard menus and their second-level menus have been redesigned to clarify the structure, be more extensible, and use less vertical screen space:

desktop mode

This version introduces proper support for use on mobile phones and tablets. As part of this, the login screen was rearranged in mobile mode so that the on-screen keyboard does not overlap the user/password input fields:

mobile mode with menu   mobile login page

Applications on the Apps page now show a link to the project home page, if provided in the AppStream metadata:

apps homepage link

Thanks Benjamin Deering for this feature!

Support alternate Kerberos keytabs

If present, Cockpit will now use the /etc/cockpit/krb5.keytab Kerberos keytab instead of the default system keytab /etc/krb5.keytab.

See the Single Sign On documentation for details.

Maintain an /etc/issue(5) file with current Cockpit status

Cockpit now maintains a /run/cockpit/issue file containing instructions on how to enable Cockpit (when it is disabled), or the Cockpit URL (when it is enabled).

The next util-linux 2.32 release will support /etc/issue.d/*.issue drop-ins. Distributions or users can make use of this by installing an /etc/issue.d/01-cockpit.issue symlink to the above file.
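That symlink can be created with something like the following (assuming a util-linux release with issue.d support):

sudo ln -s /run/cockpit/issue /etc/issue.d/01-cockpit.issue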

Use event-driven refresh of oVirt virtual machine data instead of polling

This makes the page much more reactive and reduces its resource usage.

Thanks to Marek Libra for this improvement!

Try it out

Cockpit 156 is available now.

PyConf Hyderabad 2017

Posted by Kushal Das on November 16, 2017 04:25 AM

At the beginning of October, I attended a new PyCon in India, PyConf Hyderabad (no worries, they are working on the name for next year). I was super excited about this conference, mainly because of the chance to meet more Python developers from India. We are a large country, and we certainly need more local conferences :)

We reached the conference hotel a day before the event started, along with Py. The first day of the conference was workshop day; we reached the venue in time to say hi to everyone, and I met the conference team and many old friends. It was good to see that folks had traveled from all across the country to volunteer for the conference. Of course, we had a fair number of dgplug folks there :)

On the conference day, Anwesha and I set up the PSF booth and talked to the attendees. During the lightning talk session, Sayan and Anwesha introduced PyCon Pune, and they also opened up registration during the lightning talks :). I attended Chandan Kumar’s talk about his journey into upstream projects. I have to admit that I feel proud to see all the work he has done.

Btw, I forgot to mention that lunch at PyConf Hyderabad was the best conference food ever. They had some amazing biryani :).

The last talk of the day was my keynote, titled Free Software movement & current days. Anwesha and I wrote an article on the history of Free Software a few months back, and the talk was based on that article. This was also the first time I spoke about the Freedom of the Press Foundation (I attended my first conference as an FPF staff member).

The team behind the conference did some amazing groundwork to make this conference happen. It was a good opportunity to meet the community and make new friends.

Using a Yubikey with git

Posted by Eduardo Villagrán Morales on November 16, 2017 01:20 AM

Today we will see how to use GPG from git to sign commits and tags. What is detailed here works both for GPG keys stored on disk and for GPG keys stored on a Yubikey.

Defining the GPG key to use

We must tell git which GPG key to use and …

todo.txt done

Posted by Adam Young on November 15, 2017 10:51 PM

While I like the functionality of the todo.txt structure, I do not like the fact that done tasks stay in my todo list in perpetuity, but I also don’t want to lose them.  So, I’ve made a simple hack that moves done items to a done file.  Here’s the code:

#!/bin/sh
# Append completed tasks (lines starting with "x") to done.txt...
awk '/^x/ {print $0}' ~/Dropbox/todo/todo.txt >> ~/Dropbox/todo/done.txt
# ...then rewrite todo.txt to keep only the unfinished tasks.
awk '!/^x/ {print $0}' ~/Dropbox/todo/todo.txt > ~/Dropbox/todo/todo2.txt
mv ~/Dropbox/todo/todo2.txt ~/Dropbox/todo/todo.txt


I call it todo_done.sh.

I copied my original to /tmp/pre in order to test and make sure I have a backup.  After running todo_done.sh I get:


$ diff -u /tmp/pre/todo.txt ~/Dropbox/todo/todo.txt
--- /tmp/pre/todo.txt 2017-11-15 17:46:21.794510999 -0500
+++ /home/ayoung/Dropbox/todo/todo.txt 2017-11-15 17:46:24.584515043 -0500
@@ -7,7 +7,6 @@
 2017-10-02 Expenses
 2017-10-04 Containerize hammer
 2017-10-06 Complete steam setup 
-x 2017-10-12 Trrc time resource reduce cost 
 2017-10-12 Whiteboard training 
 2017-10-14 Subscription manager extensions for skis or products? 
 2017-10-15 Workcenter is made up of 4 things: machine, man, method, measures.

and

$ diff -u /tmp/pre/done.txt ~/Dropbox/todo/done.txt 
--- /tmp/pre/done.txt 2017-11-15 17:46:17.914505377 -0500
+++ /home/ayoung/Dropbox/todo/done.txt 2017-11-15 17:46:24.580515037 -0500
@@ -26,3 +26,4 @@
 x 2017-10-19 Drs appt? 
 x 2017-11-02 Letter of Support
 x 2017-11-15 2017-09-27 LinkedIn TJX
+x 2017-10-12 Trrc time resource reduce cost

Linking hackerspaces with OpenDHT and Ring

Posted by Daniel Pocock on November 15, 2017 07:57 PM

Francois and Nemen at the FIXME hackerspace (Lausanne) weekly meeting are experimenting with the Ring peer-to-peer softphone:

Francois is using Raspberry Pi and PiCam to develop a telepresence network for hackerspaces (the big screens in the middle of the photo).

The original version of the telepresence solution is using WebRTC. Ring's OpenDHT potentially offers more privacy and resilience.

Some reading

Posted by Laura Abbott on November 15, 2017 07:00 PM

Like all good developers, I do not know everything and will happily admit this. I've spent some time recently reading a couple of books to help fill in some gaps in my knowledge.

I've complained previously about disliking benchmarking. More generally, I'm not really a fan of performance analysis. I always feel like I get stuck at coming up with an approach to "it's going slower, why" beyond the basics. I watched a video of Brendan Gregg's talk from kernel recipes, and ended up going down the black hole[1] of reading his well-written blog. He does a fantastic job of explaining performance analysis concepts as well as the practical tools to do the analysis. He wrote a book several years ago and I happily ordered it. The book explains how to apply the USE method to performance problems across the system. This was helpful to me because it provides a way to generate a list of things to check and how to check them. It addresses the "stuck" feeling I get when dealing with performance problems. The book also provides a good high level overview of operating systems concepts. I'm always looking for references for people who are interested in kernels but don't know where to start and I think this book could fill a certain niche. Even if this book has been out for several years now, I was very excited to discover it.

I consider networking the biggest black hole of mystery in the kernel. I've never been a network or sysadmin for anything except my own Linux machines. Most of my networking debugging involves just googling for the correct command to type. I ended up buying a copy of Volume I of TCP/IP Illustrated. This is the canonical text and it's quite dense. For my style though, it's been helpful for grasping concepts. I have a better idea of exactly how packets flow and what exactly various networking functions (e.g. VPN) actually do. It's not very useful for practical experience though so I want to find some tasks to apply some of the skills I've learned. Maybe I'll write more if I find something interesting.


  [1] I suffer from https://xkcd.com/214/ syndrome for all internet content.

Linux Developer Conference Brazil

Posted by Athos Ribeiro on November 15, 2017 02:59 PM
Last weekend I attended the first Linux Developer Conference Brazil at Universidade Estadual de Campinas (Unicamp). It was an event focused on the upstream development of low-level components related to Linux, such as gcc, systemtap, and the Linux kernel itself. The event was organized by a few contributors to some of these upstream projects, like the kernel, and was sponsored by companies that work directly with them (among them openSUSE and Collabora).

We’re Organizing Flatpak Workshop

Posted by Jiri Eischmann on November 15, 2017 02:43 PM

Several years ago, we organized two workshops focused on packaging software for Fedora. They were a success: both workshops were full, and several participants became package maintainers in Fedora. Packaging apps as Flatpaks has become popular lately, so we decided to offer a workshop focused on flatpaking apps.

flatpak-logo

This time it’s not focused on the community around Fedora, because you can create and run flatpaks in pretty much any Linux distribution. The workshop will again be run by experts in the field (Carlos Soriano, Felipe Borges, Jan Grulich,…). The morning part will consist of talks on Flatpak introduction, specifics for Qt and GTK+ based apps, portals which integrate apps with the system, and at the end you will also learn how to get your app onto Flathub, the centralized Flatpak repository. In the afternoon you’ll have an opportunity to flatpak an application of your choice with the help of the instructors. That’s why we’d appreciate it if you specified what app you’d like to flatpak, so that we can prepare for it. But it’s completely optional.

Date: Wed, November 29, 2017
Time: 10:00-17:00
Venue: Neptunium and Plutonium Rooms in Red Hat Czech, Purkyňova 111, 612 00 Brno
Capacity: 20 participants
Registration: please fill out this form
Language: primarily in English
Admission: free
Prerequisites: knowledge of Linux at the level of an advanced user, your own laptop with flatpak and flatpak-builder (>=0.8) installed; GNOME Builder is not necessary, but may come in handy.


Upgrading Fedora 26 to Fedora 27

Posted by Fedora Magazine on November 15, 2017 08:00 AM

Fedora 27 was just officially released. You’ll likely want to upgrade your system to the latest version of Fedora. Fedora offers a command-line method for upgrading Fedora 26 to Fedora 27. The Fedora 27 Workstation also has a graphical upgrade method.

Upgrading Fedora 26 Workstation to Fedora 27

Soon after release time, a notification appears to tell you an upgrade is available. You can click the notification to launch the GNOME Software app. Or you can choose Software from GNOME Shell.

Choose the Updates tab in GNOME Software and you should see a window like this:

If you don’t see anything on this screen, try using the reload tool at the top left. It may take some time after release for all systems to be able to see an upgrade available.

Choose Download to fetch the upgrade packages. You can continue working until you reach a stopping point, and the download is complete. Then use GNOME Software to restart your system and apply the upgrade. Upgrading takes time, so you may want to grab a coffee and come back to the system later.

Using the command line

If you’ve upgraded from past Fedora releases, you are likely familiar with the dnf upgrade plugin. This method is the recommended and supported way to upgrade from Fedora 26 to Fedora 27. Using this plugin will make your upgrade to Fedora 27 simple and easy.

1. Update software and back up your system

Before you do anything, you will want to make sure you have the latest software for Fedora 26 before beginning the upgrade process. To update your software, use GNOME Software or enter the following command in a terminal.

sudo dnf upgrade --refresh

Additionally, make sure you back up your system before proceeding. For help with taking a backup, see the backup series on the Fedora Magazine.

2. Install the DNF plugin

Next, open a terminal and type the following command to install the plugin:

sudo dnf install dnf-plugin-system-upgrade

3. Start the update with DNF

Now that your system is up-to-date, backed up, and you have the DNF plugin installed, you can begin the upgrade by using the following command in a terminal:

sudo dnf system-upgrade download --releasever=27

This command will begin downloading all of the upgrades for your machine locally to prepare for the upgrade. If you have issues when upgrading because of packages without updates, broken dependencies, or retired packages, add the --allowerasing flag when typing the above command. This will allow DNF to remove packages that may be blocking your system upgrade.
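For example (only needed if the plain download command fails for one of the reasons above):

sudo dnf system-upgrade download --releasever=27 --allowerasing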

4. Reboot and upgrade

Once the previous command finishes downloading all of the upgrades, your system will be ready for rebooting. To boot your system into the upgrade process, type the following command in a terminal:

sudo dnf system-upgrade reboot

Your system will restart after this. Many releases ago, the fedup tool would create a new option on the kernel selection / boot screen. With the dnf-plugin-system-upgrade package, your system reboots into the current kernel installed for Fedora 26; this is normal. Shortly after the kernel selection screen, your system begins the upgrade process.

Now might be a good time for a coffee break! Once it finishes, your system will restart and you’ll be able to log in to your newly upgraded Fedora 27 system.

Upgrading Fedora: Upgrade complete!

Upgrading the Fedora Atomic Host

Fedora contributor and Atomic engineer Dusty Mabe has written a complete set of upgrade instructions at the Project Atomic website. Keep in mind that although the Fedora 26 Atomic Host continues to be updated, it is no longer regularly tested or formally released. Because the Atomic Host makes upgrades painless and low-risk (thanks to rollback), all Atomic Host users are recommended to upgrade to Fedora 27.

The short version of the upgrade process follows. These instructions assume you have sufficient space to rebase the new ostree. First, add the remote for the Fedora 27 Atomic Host:

sudo ostree remote add --set=gpgkeypath=/etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-27-primary fedora-atomic-27 https://kojipkgs.fedoraproject.org/atomic/27

Now upgrade:

sudo rpm-ostree rebase fedora-atomic-27:fedora/27/x86_64/atomic-host

Once you reboot, check the status of your ostree:

rpm-ostree status

You should see your Fedora 27 Atomic Host deployed.
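If the new deployment misbehaves, the rollback mentioned above is a single command followed by a reboot; this reverts to the previous deployment (here, the Fedora 26 one):

sudo rpm-ostree rollback
sudo systemctl reboot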

Resolving upgrade problems

On occasion, there may be unexpected issues when you upgrade your system. If you experience any issues, please visit the DNF system upgrade wiki page for more information on troubleshooting in the event of a problem.

If you are having issues upgrading and have third-party repositories installed on your system, you may need to disable these repositories while you are upgrading. For support with repositories not provided by Fedora, please contact the providers of the repositories.

Different CloudForms Catalogs for Different Groups

Posted by Adam Young on November 15, 2017 02:37 AM

One of the largest value propositions of DevOps is the concept of Self Service provisioning. If you can remove human interaction from resource allocation, you can reduce both the response time and the likelihood of error in configuration. Red Hat CloudForms has a self service feature that allows a user to select from predefined services. You may wish to show different users different catalog items. This might be for security reasons, such as the set of credentials required and provided, or merely to reduce clutter and focus the end user on specific catalog items. Perhaps some items are still undergoing testing and are not ready for general consumption.

Obviously, these predefined services may not match your entire user population.

I’ve been working on setting up a CloudForms instance where members of different groups see different service catalogs. Here is what I did.

Tags are the primary tool used to match up users with their service catalogs. Specifically, a user will only see a catalog item if their group definition matches the Provisioning Scope tag of the catalog item. While you can give some catalog items a Provisioning Scope of All, you probably want to scope other items down to their target audience.

I have a demonstration setup based on IdM and CloudForms integration. When users log in to the CloudForms appliance, one of the user groups managed in LDAP is used to select their CloudForms group. The CloudForms group has a modified Provisioning Scope tag that is used to select items from the service catalog.

I also have a top level tenant named “North America” that is used to manage the scope of the tags later on. I won’t talk through setting this up, as most CloudForms deployments have something set as a top level tenant.

I’m not going to go through the steps to create a new catalog item. There are other tutorials that go through this in detail.

My organization is adding support for statisticians. Specifically, we need to provide VMs designed to support a customized version of the R programming environment. All users that need these systems will be members of the stats group in IdM, and we want to be able to tag these instances with the stats Provisioning Scope as well. The user is also in the cloudusers group, which is required to provide access to the CloudForms appliance.

We start by having our sample user log in to the web UI. This has the side effect of prepopulating the user and group data. We could do this manually, but this way is less error prone, if a bit more of a hassle.

My user currently has only a single item in her service catalog: the PostgreSQL appliance we make available to all developers. This allows us to have a standard development environment for database work.

Log out and log back in as an administrator.  Here comes the obscure part.

Provisioning Scope tags are limited to a set of valid values. By default, these values are All or EVMGroup-user_self_service; the second value matches a group of the same name. In order to add an option, we need to modify the tag category associated with this tag.

  1. As an administrator, click on your user name in the top right corner of the screen and select the Configuration option from the dropdown.
  2. Select your region; in my case this is Region 1.
  3. Across the top of the screen you will see Settings Region 1 and a series of tabs, most of which bear the name of your tenant (those of you who know my long-standing issue with this term are probably grinning at my discomfort). Since my top level tenant is “North America,” I have a tab called North America Tags, which I select. Select accordingly.
  4. Next to Category, select “Provisioning Scope” from the dropdown to see my existing set of custom tag values for Provisioning Scope. Click on <New Entry> to add a new value, which I will call stats. I also use stats for the description.
  5. Click the Add button to the right. See below.

Now we can edit the newly defined “R Project” service to limit it to this provisioning scope.

  1. Navigate to Services->Catalogs->Catalog Items.
  2. Select the “R Project” Service.
  3. Click on the Policy dropdown and select “Edit Tags”.
  4. Click on the dropdown to the right of “Select a customer tag to assign” (it is probably set to “Auto Approve -Max CPU *”) and scroll down to Provisioning Scope.
  5. The dropdown to its right defaults to “<Select a Value to Assign>”. Select it and scroll down to the new value; for me, this is stats. The new item will be added to the list.
  6. Click the Save button in the lower right of the screen.

Your list should look like this:

Finally, create the association between this provisioning scope and the stats group.

  1. From the dropdown on the top right of the screen that has your username, select Configuration.
  2. Expand the Access Control accordion.
  3. Select Groups.
  4. From the Configuration dropdown, select “Add a new Group”.
  5. Select a Role for the user. I use EvmRole-user_self_service.
  6. Select a Project/Tenant for the user.
  7. Click on the checkbox labeled “Look Up External Authentication Groups”.
  8. A new field appears called “User to Look Up.” I am going to use the “statuser” I created for this example, and click Retrieve.
  9. The dropdown under LDAP Groups for User is now populated. I select stats.

To assign the tag for this group:

  1. Scroll down to the bottom of the page.
  2. Find and expand the “Provisioning Scope” tag.
  3. Select “stats”.
  4. Click the Add button in the bottom right corner of the page.

See Below.

Now when statuser logs in to the self service web UI, they see both of the services provided:


One big caveat that has messed me up a few times: a user only has one group active at a time. If a user is a member of two groups, CloudForms selects one of them as the active group. Services assigned only to a non-active group will not show up in the service catalog. In my case, I had a group called cloudusers, and since all users were members of that group, they would only see the Provisioning Scope, and thus the catalog items, for cloudusers and not for the stats group.

The Self Service web UI allows the user to switch their active group to any of the other groups to which they are assigned.

The best option is to maintain a one-to-many relationship between groups and users: constrain most users to a single group to avoid confusion.

This has been a long post. The web UI for CloudForms requires a lot of navigation, and the concepts needed to make this work required more explanation than I had originally planned. As I get more familiar with CloudForms, I’ll try to show how these kinds of operations can be automated from the command line, converted to Ansible playbooks, and thus checked in to version control.
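As a first taste of that automation, here is a rough sketch of reading the service catalogs over the CloudForms (ManageIQ) REST API with curl. The hostname and credentials are placeholders, and the endpoint path is my assumption from the upstream ManageIQ API documentation, not something shown in this walkthrough:

# List service catalogs as JSON (hypothetical host and credentials)
curl -k --user admin:smartvm https://cloudforms.example.com/api/service_catalogs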

I’ve also been told that, for simple use cases, it is possible to just put the user groups into separate tenants, and they will see different catalogs.  While that does not allow a single item to be in both catalogs, it is significantly easier to set up.

A Big Thank You to Laurent Domb for editing and corrections.

Fedora 27 Released

Posted by Fedora Indonesia on November 15, 2017 02:13 AM
The latest version of Fedora, Fedora 27, has been released; see it for yourself on the official Fedora site at https://getfedora.org. Download ISO: not every mirror has the Fedora 27 ISO yet, so a temporary workaround is to download directly from http://mirror.liquidtelecom.com/fedora/fedora/linux/releases/27/Spins/x86_64/iso/ or http://mirrors.kernel.org/fedora/releases/27/ or to use the torrent links. As for download speed, I tried downloading over a 40 Mbps Indihome line, both direct download … Continue reading "Rilis Fedora 27"

Using a GnuPG key on a Yubikey from Evolution

Posted by Eduardo Villagrán Morales on November 15, 2017 01:20 AM

Now that the Yubikey is ready, we will configure the Evolution mail client to use the GPG key it holds.

  1. Identify the hash of the GPG key to use

    gpg2 --list-secret-key
    sec#  rsa4096 2011-11-23 [SC]
          95E30BC35BC1FB86E67B8F73F0DDFF1DD24CB499
    uid           [  absoluta ] Eduardo Andrés Villagrán Morales <evillagr@fedoraproject.org>
    

    In this case the identifier …

Episode 70 - The security of Intel ME

Posted by Open Source Security Podcast on November 14, 2017 11:54 PM
Josh and Kurt talk about Intel ME, Equifax salary history, and IoT.



Show Notes



Fedora VM Flickering? Here's A Fix!

Posted by Darryl L. Pierce on November 14, 2017 08:41 PM
Today I installed VirtualBox on my MacBook Pro so that I could run a virtual instance of Fedora 27. After the install finished, though, I found that the guest screen was repeatedly flickering. Not sure what the problem was, I started googling and found a forum that pointed me to Wayland as the culprit.

I logged out of Gnome, logged back in but first selected the settings gear and told GDM I wanted "Gnome on Xorg". Once I was logged in the flicker was gone.

I'm not entirely sure what the problem is, but at least at this point it's working.

Writing Installer Images Directly With WebUSB

Posted by Nathaniel McCallum on November 14, 2017 08:31 PM

Chrome 61 recently released support for the WebUSB JavaScript API. This allows direct access to USB devices from websites. Somebody should build a website that takes distribution ISOs and writes them directly to USB mass storage devices. This would significantly improve one of the most difficult and error prone steps when installing a Linux distribution such as Fedora.

Fedora 27 virt-builder images

Posted by Richard W.M. Jones on November 14, 2017 03:48 PM

Fedora 27 has just been released, and I’ve just uploaded virt-builder images so you can try it right away:

$ virt-builder -l | grep fedora-27
fedora-27                aarch64    Fedora® 27 Server (aarch64)
fedora-27                armv7l     Fedora® 27 Server (armv7l)
fedora-27                i686       Fedora® 27 Server (i686)
fedora-27                ppc64      Fedora® 27 Server (ppc64)
fedora-27                ppc64le    Fedora® 27 Server (ppc64le)
fedora-27                x86_64     Fedora® 27 Server
$ virt-builder fedora-27 \
      --root-password password:123456 \
      --install emacs \
      --selinux-relabel \
      --size 30G
$ qemu-system-x86_64 \
      -machine accel=kvm:tcg \
      -cpu host -m 2048 \
      -drive file=fedora-27.img,format=raw,if=virtio &

Fedora 27 is out!!!

Posted by Alberto Rodriguez (A.K.A bt0) on November 14, 2017 03:48 PM

Today Fedora 27 was released; check the complete announcement here:

Announcing the release of Fedora 27


Igalia is Hiring

Posted by Michael Catanzaro on November 14, 2017 03:04 PM

Igalia is hiring web browser developers. If you think you’re a good candidate for one of these jobs, you’ll want to fill out the online application accompanying one of the postings. We’d love to hear from you.

We’re especially interested in hiring a browser graphics developer. We realize that not many graphics experts also have experience in web browser development, so it’s OK if you haven’t worked with web browsers before. Low-level Linux graphics experience is the more important qualification for this role.

Igalia is not just a great place to work on cool technical projects like WebKit. It’s also a political and social project: an egalitarian, worker-owned cooperative where everyone has an equal vote in company decisions and receives equal pay. It’s been around for 16 years, so it’s also not a startup. You can work remotely from wherever you happen to be, or from our office in A Coruña, Spain. You won’t have a boss, but you will be expected to work well with your colleagues. It’s not the right fit for everyone, but there’s nowhere I’d rather be.

Announcing the release of Fedora 27

Posted by Fedora Magazine on November 14, 2017 02:00 PM

The Fedora Project proudly announces the release and general availability of the Fedora 27 Workstation and Fedora 27 Atomic editions. Fedora 27 incorporates thousands of improvements from both the Fedora Community and various upstream software projects.

You can download Fedora 27 Workstation and the Fedora 27 Atomic Host right now from getfedora.org. Alternatively — for users already running Fedora — you can use the operating system itself to upgrade to Fedora 27. You can also download the Fedora 27 Beta Modular Server.

Fedora Workstation

The Workstation edition of Fedora 27 features GNOME 3.26. In the new release, both the Display and Network configuration panels have been updated, along with an overall refresh of the Settings panel’s appearance. The system search now shows more results at once, including system actions.

GNOME 3.26 also features color emoji support, folder sharing in Boxes, and numerous improvements in the Builder IDE tool. Many thanks to the GNOME community for their work on these features. For more information refer to the upstream GNOME 3.26 Release Notes.

The new release also features LibreOffice 5.4. The latest version of LibreOffice offers new functions and improvements in Writer and Calc, as well as EMF+ vector images import. You also can now use OpenPGP keys to sign ODF documents.

The new release is also available via Fedora Media Writer. The latest version allows you to create bootable SD cards with Fedora for ARM devices such as Raspberry Pi. Support for Windows 7 and screenshot handling have been improved. The utility also notifies you when a new release of Fedora is available. You can read more about using Fedora Media Writer here.

Fedora Atomic Host

Fedora Atomic 27 now defaults to a simpler container storage setup. Furthermore, it offers containerized Kubernetes, flannel and etcd. These changes bring flexibility for users to choose different versions of Kubernetes, or to not use Kubernetes at all. This release ships with the latest rpm-ostree, now with support for base package overrides. Cockpit is also updated to the latest version. It includes support for Cockpit Dashboard installation on Atomic Host via RPM package layering.
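As an illustration of that package layering, installing the Cockpit Dashboard on an Atomic Host comes down to one layered install followed by a reboot; a sketch, not taken from the announcement itself:

sudo rpm-ostree install cockpit-dashboard
sudo systemctl reboot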

What about Fedora Server?

The Fedora Server is being retooled in line with our modularity efforts. These changes allow Fedora Server users to experience a more modular operating system. The benefits include managing multiple components on different lifecycles, as opposed to upgrading the entire system every release or two to stay current. The Modularity documentation site provides more information about this exciting new concept. The Fedora 27 Modular Server Beta release is available today, with a Final release scheduled to follow roughly a month later.

Fedora variants and 32-bit images

Check out one of our popular variants, including KDE Plasma, Xfce, and other desktop environments, as well as images for ARM devices like the Raspberry Pi 2 and 3:

Note that 32-bit Live and installation media are not available for Workstation, Labs, or Spins. Network installation media are available for 32-bit Fedora Workstation and the Everything variant.

Release Notes

To read more about changes in Fedora 27, consult the release notes at the new Fedora Documentation site.

Fedora 27 Released!

Posted by Charles-Antoine Couret on November 14, 2017 01:42 PM

On this Tuesday, November 14, 2017, the Fedora Project is proud to announce the release of the GNU/Linux distribution Fedora 27.

This version of Fedora focuses mainly on three areas: the graphics stack, hardware support, and Fedora.next.

Note that, to save time and resources, this is the first version of Fedora to ship without an Alpha release. This was made possible by improved quality-assurance procedures for development versions.


Graphics stack

GNOME once again takes pride of place with version 3.26. It is essentially a polish and stability release, with:

  • The top bar becoming transparent when no window is maximized;
  • New, smoother animations when resizing or moving windows;
  • Global search working on system actions (such as Power Off) and displaying more results at once;
  • A complete redesign of the Settings interface;
  • The Disks application finally able to resize partitions, Calendar supporting recurring events, and Web supporting synchronization through Firefox Sync;
  • The Boxes virtualization tool able to automatically download and launch a free RHEL;
  • Performance improvements for a few applications and for GNOME in general.

The Yumex graphical package manager has been replaced by dnfdragora, which offers Qt, GTK+, and ncurses interfaces. Yumex development stopped a year ago, ending an application that gave ten years of good and loyal service and even managed the migration from yum to dnf. A peculiarity of dnfdragora is that it is built on rpmdragora, which comes from Mageia.


Hardware support

Fedora now offers a single image for the AArch64 architecture (64-bit ARM), matching the approach already used for boards with an ARMv7. For now this image supports the following boards:

  • Pine64 (and its variants)
  • Raspberry Pi 3 (64-bit mode)
  • 96boards HiKey
  • 96boards Dragonboard 410c
  • ARM Juno

The set of supported boards will grow over time, as will the availability of customized versions of Fedora.

Still on the hardware side, Fedora has worked on better support for Intel Bay Trail and Cherry Trail SoCs (essentially Pentium, Celeron, and Atom chips in laptops and tablets). The work consisted of improving battery monitoring (current consumption, remaining battery time, whether the machine is charging or not) and audio support. Touchscreens and accelerometers will also be better detected, and therefore usable by the system and applications.

Fedora 27 can finally run on computers that have a 32-bit UEFI with a 64-bit CPU. This consists of installing a 32-bit GRUB (loaded by the UEFI itself), which in turn loads a 64-bit kernel and userspace. This rather atypical configuration required work on GRUB, Anaconda, and the EFI utilities. Fedora thus becomes installable on machines such as the Asus Transformer T100TA, the HP Stream 7, the Dell Venue 8 Pro 5830, and Apple's first Intel Macintoshes.


Fedora.next

The Base Runtime has been split into Platform and Host: the former covers userspace and the system base, while the latter handles only hardware support. In short, the second part contains the kernel, the bootloader, firmware, and some drivers. Within the Modularity effort, the goal of this change is to decouple hardware support from the rest of the system, offering separate, autonomous life cycles. Users gain flexibility, such as running the latest hardware support with a slightly older rest of Fedora, or the other way around. Eventually, you could have hardware support from Fedora 27 with a userspace from Fedora 28, or the reverse, depending on the use case.

The Fedora Server edition receives the first official work on Modularity, which was previously tested through the special Boltron edition during Fedora 26. The objective is to implement Modularity in an official Fedora image, rather than in a side image as Boltron was. This will let system administrators take the project in hand more broadly, so as to gather as much feedback as possible. It will also make it possible to observe how Modularity behaves over the complete Fedora 27 life cycle.

As with Fedora 26, I invite you to consult the Modularity documentation and its YouTube channel to learn more on the subject. Because of this major change, the Server edition will be available one month after the other editions.

And as usual, Fedora 27 holds plenty of other surprises to discover.

The French-speaking community

The association


Borsalinux-fr is the association that promotes Fedora in the French-speaking world. For a few years now we have seen a gradual decline in the number of paid-up members and of volunteers to take on the activities entrusted to the association.

We are therefore launching a call to join us and help out.

The association owns the official website of the French-speaking Fedora community, regularly organizes promotional events such as the Rencontres Fedora, and takes part in most major free software events, mainly across France.

If you like Fedora and want our work to continue, you can:

  • Join the association: membership fees help us produce goodies, travel to events, and pay for equipment;
  • Take part in the forum and the mailing lists, help rework the documentation, or represent the association at various French-speaking events;
  • Design goodies;
  • Organize events like the Rencontres Fedora in your city.

We would be delighted to welcome you and to help you get started. Any contribution, however small, is appreciated.

If you would like an overview of our activity, you can join our weekly meetings every Monday at 8:30 PM (Paris time) on IRC (channel #fedora-meeting-1 on Freenode).

Documentation

Since June 2017, a major cleanup effort has been under way on the French-language Fedora documentation, to catch up on the five years of backlog accumulated on the subject.

The least one can say is that a great deal of work has been done: nearly fifty articles corrected and brought up to date. Many thanks to Charles-Antoine Couret, Nicolas Berrehouc, Édouard Duliège, and the other contributors and reviewers for their work.

The team meets every Monday evening after 9 PM (Paris time) on IRC (channel #fedora-doc-fr on Freenode) to advance the documentation collaboratively. The rest of the week, work happens on the mailing lists.

If you have ideas for articles or corrections to make, or a technical skill to pass on, don't hesitate to take part.

Links

End of PHP 7.2 FTBFS marathon

Posted by Remi Collet on November 14, 2017 10:10 AM

QA is a very important part of my daily work, and since PHP 7.2 is available in Fedora rawhide, we have to ensure everything works as expected with this new version.

 

As already explained, Koschei is our QA tool, used to monitor the full PHP stack, including ~60 extensions and ~500 libraries.

After the initial build of PHP 7.2.0RC3 in rawhide (September 29th) we had around one hundred FTBFS packages (Failed To Build From Source).

Today everything is OK: all FTBFS packages have been fixed.

1. Extensions

Most PHP extensions are now compatible with PHP 7.2, except:

  • XDebug, but version 2.6.0-dev works and a beta should be released soon
  • Timecop; this has been reported upstream, and a fix is being sought.

2. Mcrypt

Lots of packages were broken because they sadly still rely on the old, deprecated mcrypt extension.

Although I have been fighting for years to be able to remove it (see about libmcrypt and php-mcrypt), we still need it, so I have created the new php-pecl-mcrypt package from the PECL cemetery. This is obviously only a temporary solution: the extension is deprecated, unmaintained, and should die.
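Until it does die, restoring the extension on Fedora is just a normal package install; a sketch using the package named above:

sudo dnf install php-pecl-mcrypt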

3. Upstream patches

Most PHP projects treat fixes for a new PHP version as standard bugfixes, which means they can land in a simple minor version without requiring any major change.

Some projects have already made the few minor changes needed, but have not yet released a new version including these changes. So the work here was mostly about finding those fixes and applying them to the Fedora packages.

4. Pull Requests

Most projects do not yet monitor PHP 7.2 (it is not enabled in Travis), so they were not really aware of the changes needed.

So, of course, the first task was to report the failures upstream, usually along with a possible fix (PR).

Some are already merged; some are still awaiting review.

5. Skip some

For a very few packages, since no really good fix exists yet, we had to temporarily skip some tests with 7.2. Most involve the session changes, which break unit tests (session_start() failing) without any real impact on actual usage.

6. Common errors

The most common errors requiring a fix are:

  • count() on non-countable values (and NULL is not countable); see the sketch after this list
  • stricter prototype checking: the fix for #73987 was originally applied in 7.1.2RC1, then reverted as it introduced a small BC break
  • object is a reserved keyword
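A quick way to see the first of these on a PHP 7.2 command line (the warning text below is from my recollection of 7.2 and may differ slightly between point releases):

php -r 'var_dump(count(null));'
# Warning: count(): Parameter must be an array or an object that implements Countable
# int(0)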

7. Conclusion

We are ready for PHP 7.2 in Fedora and, as usual, we did it the Fedora way: upstream first.

I also consider having most extensions and libraries ready an important criterion for adoption of the new version by users.

What’s new in Fedora 27 Workstation

Posted by Fedora Magazine on November 14, 2017 10:01 AM

Fedora 27 Workstation is the latest release of our free, leading-edge operating system. You can download it from the official website here right now. There are several new and noteworthy changes in Fedora Workstation.

GNOME 3.26

This Fedora Workstation release, as always, brings with it the latest, greatest GNOME desktop environment. Notable features in this release include:

  • Revamped Settings control panel
  • Enhanced Search that also returns system actions in an improved layout
  • Color Emoji in the Characters app
  • Better visual transitions for windows, and app thumbnails in the Overview
  • …and more

The GNOME 3.26 release notes online contain a much larger list of improvements. There are even improvements beyond those in the release notes. These include more secure image thumbnails and more gaming hardware support.

Pipewire

PipeWire is a new subsystem that vastly improves audio/video handling under Linux. Eventually it will support the full range of ways users currently use PulseAudio and JACK, and provide similar handling for video. It’s designed to be a new core foundation for audiovisual I/O in Linux apps. In Fedora 27 Workstation, PipeWire provides screen capture and screencast recording in GNOME Shell. In future releases its capabilities will expand.

Builder

The new release of GNOME Builder comes with numerous improvements as well. Examples include the debugger, the overall design, symbol search, and word completion. This release also enhances inline documentation for a better developer experience.

LibreOffice 5.4

Fedora 27 Workstation also includes the latest stable release of the LibreOffice productivity suite. This full-featured suite makes it easy for you to create and collaborate on documents, presentations, spreadsheets, and more. The full set of features is in the upstream release notes, but here is a partial list:

  • Writer now has better support for .dotx, .dotm, and .rtf files
  • Calc remembers your CSV export settings for later sessions
  • Writer has a better style toolbar to make document formatting a snap
  • Calc adds new options for more granular cell protection
  • Writer handles export of bullets and numbering much better
  • Calc has new cell comment commands

RHEL Developer Subscription access in Boxes

For a while now, Red Hat has offered a no-cost developer subscription to Red Hat Enterprise Linux. (Full disclosure for those living under rocks: Red Hat sponsors the Fedora Project.) In Fedora 27 Workstation, the GNOME Boxes application will let you access your subscription to easily download and install RHEL for a guest box. This is a nice streamlining feature for developers who want to test their work for enterprise deployment.

Fedora Media Writer

The Fedora Media Writer is the easiest way to get a new release of the Workstation Live onto a convenient USB key. The latest release allows you to create bootable SD cards with Fedora for ARM devices such as Raspberry Pi. It also brings improved support for Windows 7 and screenshot handling. The utility also notifies you when a new release of Fedora is available. You can install it, as with many other apps and add-ons, in the Software app.

Other notes

This is but a small fraction of the new features of Fedora 27. Fedora also lets you install thousands of software apps provided by our community. Many of these apps and utilities have also received updates since Fedora 26. When you upgrade, you’ll receive them automatically.

Fedora 27 is available now for download.

Fedora 26->27 Atomic Host Upgrade Guide

Posted by Dusty Mabe on November 14, 2017 12:00 AM
cross posted with this Project Atomic blog post

Introduction

This week we put out the first release of Fedora 27 Atomic Host. Some quick notes:

  • In Fedora 27 Atomic Host we removed kubernetes from the base OSTree. See Appendix A: Upgrading Systems with Kubernetes for more information.
  • For Fedora 27 we are currently sticking with the non-unified repo approach as opposed to a unified repo. TL;DR: nothing is changing for now, but we expect to implement a unified repo as described here during the F27 release cycle.

New badge: Fedora 27 Release Partygoer !

Posted by Fedora Badges on November 13, 2017 08:36 PM
Fedora 27 Release Partygoer
Attended a local release party to celebrate the launch of Fedora 27!

Eben Moglen is no longer a friend of the free software community

Posted by Matthew Garrett on November 13, 2017 05:42 PM
(Note: While the majority of the events described below occurred while I was a member of the board of directors of the Free Software Foundation, I am no longer. This is my personal position and should not be interpreted as the opinion of any other organisation or company I have been affiliated with in any way)

Eben Moglen has done an amazing amount of work for the free software community, serving on the board of the Free Software Foundation and acting as its general counsel for many years, leading the drafting of GPLv3 and giving many forceful speeches on the importance of free software. However, his recent behaviour demonstrates that he is no longer willing to work with other members of the community, and we should reciprocate that.

In early 2016, the FSF board became aware that Eben was briefing clients on an interpretation of the GPL that was incompatible with that held by the FSF. He later released this position publicly with little coordination with the FSF, which was used by Canonical to justify their shipping ZFS in a GPL-violating way. He had provided similar advice to Debian, who were confused about the apparent conflict between the FSF's position and Eben's.

This situation was obviously problematic - Eben is clearly free to provide whatever legal opinion he holds to his clients, but his very public association with the FSF caused many people to assume that these positions were held by the FSF and the FSF were forced into the position of publicly stating that they disagreed with legal positions held by their general counsel. Attempts to mediate this failed, and Eben refused to commit to working with the FSF on avoiding this sort of situation in future[1].

Around the same time, Eben made legal threats towards another project with ties to FSF. These threats were based on a license interpretation that ran contrary to how free software licenses had been interpreted by the community for decades, and was made without any prior discussion with the FSF. This, in conjunction with his behaviour over the ZFS issue, led to him stepping down as the FSF's general counsel.

Throughout this period, Eben disparaged FSF staff and other free software community members in various semi-public settings. In doing so he harmed the credibility of many people who have devoted significant portions of their lives to aiding the free software community. At Libreplanet earlier this year he made direct threats against an attendee - this was reported as a violation of the conference's anti-harassment policy.

Eben has acted against the best interests of an organisation he publicly represented. He has threatened organisations and individuals who work to further free software. His actions are no longer to the benefit of the free software community and the free software community should cease associating with him.

[1] Contrary to the claim provided here, Bradley was not involved in this process.

(Edit to add: various people have asked for more details of some of the accusations here. Eben is influential in many areas, and publicising details without the direct consent of his victims may put them at professional risk. I'm aware that this reduces my credibility, and it's entirely reasonable for people to choose not to believe me as a result. I will add that I said much of this several months ago, so I'm not making stuff up in response to recent events)

comment count unavailable comments

What I have found interesting in Fedora during the week 45 of 2017

Posted by Fedora Community Blog on November 13, 2017 02:18 PM

My highlights from the past week:

Fedora 27 Modular Server Beta & Fedora 27 Final are considered Gold

On Thursday, November 9th, 2017, we finally concluded that the Fedora 27 Final compose as well as the Fedora 27 Modular Server Beta compose were ready to be released. The release date for both composes is November 14th at 14:00 UTC. For more information, check the meeting minutes from the Go/No-Go meeting.

Name of the Fedora Server release

Recently there was a discussion about what name we should use for the Modular release of the Fedora Server edition. At the Server SIG meeting last week, a decision was made to use Fedora Modular Server. The main reason this name was chosen is to avoid end-user confusion.

Fedora 27 Modular Server release validation

The Beta release of the Fedora 27 Modular Server was made under considerable pressure. To make sure we do not step away from our quality standards, our QA team (namely Adam Williamson) has put together test matrices for the Modular Server and has aligned Modular Server testing with the way we validate the standard release.

Changes Policy & Fedora Release Life Cycle

The No More Alphas Change implemented some changes in the way we plan and deliver Fedora releases. During the Fedora 27 release we improvised a bit to find the balance between the changes we needed to make and the overall stability of the release cycle. Finally, as the release is almost shipped, we have merged all the changes into the Changes Policy & Fedora Release Life Cycle, together with some tweaks, and proposed new versions of these documents. The drafts of the Changes Policy and the Fedora Release Life Cycle are now available for review. Once the review is done, I will bring these drafted documents to FESCo for final approval. If you are interested in this, please contribute to the discussion on the devel@ mailing list.

Autumn elections

The preparation phase of the Autumn 2017 Elections is still in progress. This election cycle follows the schedule approved by the Fedora Council, and we are now collecting questions in our Election questionnaire. Anyone who would like to ask our candidates for FESCo, Council, or Mindshare a question can contribute to the questionnaire. The collection of questions ends in one week, on November 20th, 2017 at 23:59:59 UTC.

The post What I have found interesting in Fedora during the week 45 of 2017 appeared first on Fedora Community Blog.

Gnome apps migrated to flathub

Posted by Alexander Larsson on November 13, 2017 02:05 PM

Last week I finally migrated the last app from the gnome stable application flatpak repo to flathub. The old repo is now deprecated and will not get any new builds.

In the future, all stable flatpak builds of gnome apps will be on flathub only, so make sure you add it as a remote.
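Adding the remote is a one-liner; this is the standard command from the flathub setup instructions:

flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo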

There are a lot of apps on flathub now, have a look. And if your application is not there but you’re interested in adding it, please check out the flathub docs.

#PeruRumboGSoC2018 – Session 1

Posted by Julita Inca Chiroque on November 13, 2017 08:56 AM

Our first session started last Sunday at UIGV. It was quite difficult to find a laboratory with Linux computers open on a Sunday, but thanks to the UIGV we could use the classroom at the Faculty of Tourism. We had 7 hours of Linux basics, from 10:00 am to 6:00 pm. I started the training with GNU/Linux basics, Fedora, GNOME, Linux generalities, and Linux commands.

The group was made up of students from different universities, so the first part was an introduction to what they had heard about GSoC, GNOME, Fedora, and Linux. The students brought their laptops to the session, and an important detail was missing in the classroom this time: an extension cord. So we placed the students in the corners where the plug points were. I gathered the list of their blogs and git accounts as they set up their Fedora environments, and I also lectured about Linux history and the basics of GNU/Linux. Some works of the students, so far -> Carlos, Giohanny 1, 2, Johan, Franz, Fiorella, Rommel, Solanch, Lizbeth 1, 2 and Cristian.

Thanks to GNOME for sponsoring our lunch, taken around 1:00 pm; we ate “chifa”. More basic commands, how to connect on IRC (newcomers channels), and the use of VI were also covered while the sunlight slowly faded around 6:00 pm. Thanks to all the attendees, to Randy as one of our trainers, and to Solanch for her help. Great job! Special thanks to GNOME, Fedora, and the Linux Foundation for helping us in education! 🙂


Filed under: FEDORA, GNOME, τεχνολογια :: Technology Tagged: #PeruRumboGSoC2018, comandos básicos Linux, fedora, GNOME, gnome 3, GSoC, Julita Inca, Julita Inca Chiroque, Lima, linux, Linux Foundation, UIGV

ListenBrainz community gardening and user statistics

Posted by Justin W. Flory on November 13, 2017 08:30 AM
On the data refrain: Contributing to ListenBrainz

This post is part of a series of posts where I contribute to the ListenBrainz project for my independent study at the Rochester Institute of Technology in the fall 2017 semester. For more posts, find them in this tag.


My progress with ListenBrainz slowed, but I am resuming the pace of contributing and advancing on my independent study timeline. This past week, I finished assigned tasks around contributor-related documentation: a Code of Conduct, contributor guidelines, and a pull request template. I also began researching user statistics and found some already created. I wrote one of my own, but need to learn more about Google BigQuery to go further.

Paving the contributor pathway

Making it easier for people to contribute to ListenBrainz with helpful contributing guidelines

Earlier, I identified weaknesses for the ListenBrainz contributor pathway and found ways we could improve the pathway. This started with the development environment documentation. Now, I helped draft first revisions of our contributor guidelines, Code of Conduct reference, and pull request templates. Together, these three documents have two goals.

  1. Make it easier to contribute to ListenBrainz
  2. Have a better experience and have fun contributing!

Adding these documents addresses both goals, and the GitHub community profile also highlights them as deliverables. After getting feedback and seeing what others think, we will make more revisions later (with some trial runs).

Back to SELinux context flags

Recently, I set my desktop back up and installed Docker for the first time on this machine; however, the development environment still failed to start. When I ran the script, it eventually errored out on a permission denial. The web server image for ListenBrainz was failing.

After debugging, I noticed that I had missed the SELinux volume tags for the ListenBrainz web server images in my original pull request, #257. When I created the pull request, I may have had cached data that let my laptop run the development environment without a problem. In either case, it was an easy fix, and I knew what the issue was as soon as it happened. Therefore, I submitted a new fix in #290.
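For context, the fix amounts to the z/Z suffix Docker accepts on volume mounts, which relabels the mounted content so containers may read it under SELinux. A generic illustration, not the actual ListenBrainz compose entry:

docker run --rm -v "$PWD":/code:z alpine ls /code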

Writing new user statistics

The most interesting part of my independent study is working with the music data to build and generate interesting statistics. I finally began exploring the existing statistics in ListenBrainz. The statistic queries use BigQuery standard SQL. BigQuery helps rapidly scan and scale data queries to help with performance (I have a lot to learn about BigQuery).

Two types of statistics

Additionally, ListenBrainz generates two types of statistics:

  1. Site-wide statistics
  2. User statistics

Site-wide statistics are metrics non-specific to a single user. There is only one site-wide query now. It counts how many artists were ever submitted to this ListenBrainz instance and returns an integer. There’s room for expansion in site-wide statistics.

On the other hand, user statistics are metrics specific to a single user. There’s a fair number already, like the top artists and songs in a time period and the number of artists you’ve listened to. These are a little more complete and offer more expansion for doing cool front-end work with something like D3.js.

Writing user statistics

Of course, I had to try writing my own. One helpful query I thought of was getting a count of the songs a user listened to over a time period (e.g. “you listened to 500 songs this week!”). I haven’t tested it yet, but I have it in a local branch and hope to test it with real data soon.

def get_play_count(musicbrainz_id, time_interval=None):

    # Optionally restrict the count to a recent time window
    filter_clause = ""
    if time_interval:
        filter_clause = ("AND listened_at >= "
                         "TIMESTAMP_SUB(CURRENT_TIME(), "
                         "INTERVAL {})".format(time_interval))

    query = """SELECT COUNT(release_msid) as listen_count
               FROM {dataset_id}.{table_id}
               WHERE user_name = @musicbrainz_id
               {time_filter_clause}
               LIMIT {limit}
            """.format(
                dataset_id=config.BIGQUERY_DATASET_ID,
                table_id=config.BIGQUERY_TABLE_ID,
                time_filter_clause=filter_clause,
                limit=config.STATS_ENTITY_LIMIT,
            )

    # Bind the user name as a query parameter rather than
    # interpolating it into the SQL, so the value is safely escaped
    parameters = [
        {
            'type': 'STRING',
            'name': 'musicbrainz_id',
            'value': musicbrainz_id
        }
    ]

    return stats.run_query(query, parameters)

Researching Google BigQuery

My next step for the independent study is researching Google BigQuery. Now that I have gone through the existing statistics and understand how ListenBrainz generates them, a grasp of Google BigQuery is essential to writing effective queries. When I become more comfortable with the tooling and how it works, I want to map out a plan of statistics to generate and measure.

Until then, the hacking continues! As always, keep the FOSS flag high…

The post ListenBrainz community gardening and user statistics appeared first on Justin W. Flory's Blog.

Additional configuration of a Yubikey as a smartcard

Posted by Eduardo Villagrán Morales on November 13, 2017 03:00 AM

We have already seen how to add a GnuPG key to a Yubikey as a smartcard. Now let's look at how to configure some additional parameters.

Changing the key PIN and the admin PIN

Yubikey 4 devices come from the factory with two PINs configured, one for using the key (123456) and another for the operations …

Building your own OpenStreetMap tile server

Posted by Didier Fabert (tartare) on November 12, 2017 04:34 PM

The final result

After a discussion with Jean-Yvon Landrac (Orolia SAS) at State of the Map France 2017 in Avignon, I felt like building my own map server. Since I am neither a Debian guy nor an Ubuntu guy, the challenge was to install it on CentOS 7 or Fedora.

To build my own OpenStreetMap (OSM) tile server, using RPMs makes the task much easier.

It will be based on:

  • postgresql, with the postgis and hstore extensions, to host the OSM data
  • Apache and mod_tile, to serve the tiles to the browser
  • renderd, the rendering orchestrator
  • mapnik, the library that builds the images from the data stored in the database
  • osm-carto, the official OSM stylesheet
  • carto, the stylesheet compiler
  • osmctools, to manipulate OSM data (notably to merge two OSM data extracts)
  • osm2pgsql, to load the OSM data extracts into the database

Architecture of a tile server

Plan for a fair amount of disk space: the database will be roughly 500GB for Europe, roughly 85GB for the whole of France, or 11GB for just the two regions used here.

The VPS used here has 16GB of RAM and 2 vCPUs.

Installation

CentOS 7 only

  • Many of the packages used come from the epel repository, so we start by making sure it is installed.
    sudo yum install epel-release
  • The RPM packages containing the OSM tools come from my COPR repositories, so we create the repository file /etc/yum.repos.d/tartare.repo
    sudo tee /etc/yum.repos.d/tartare.repo << 'EOF'
    [tartare-mapnik]
    name=Copr repo for mapnik owned by tartare
    baseurl=https://copr-be.cloud.fedoraproject.org/results/tartare/mapnik/epel-7-$basearch/
    type=rpm-md
    skip_if_unavailable=True
    gpgcheck=1
    gpgkey=https://copr-be.cloud.fedoraproject.org/results/tartare/mapnik/pubkey.gpg
    repo_gpgcheck=0
    enabled=1
    enabled_metadata=1
    
    [tartare-python-mapnik]
    name=Copr repo for python-mapnik owned by tartare
    baseurl=https://copr-be.cloud.fedoraproject.org/results/tartare/python-mapnik/epel-7-$basearch/
    type=rpm-md
    skip_if_unavailable=True
    gpgcheck=1
    gpgkey=https://copr-be.cloud.fedoraproject.org/results/tartare/python-mapnik/pubkey.gpg
    repo_gpgcheck=0
    enabled=1
    enabled_metadata=1
    
    [tartare-mod_tile]
    name=Copr repo for mod_tile owned by tartare
    baseurl=https://copr-be.cloud.fedoraproject.org/results/tartare/mod_tile/epel-7-$basearch/
    type=rpm-md
    skip_if_unavailable=True
    gpgcheck=1
    gpgkey=https://copr-be.cloud.fedoraproject.org/results/tartare/mod_tile/pubkey.gpg
    repo_gpgcheck=0
    enabled=1
    enabled_metadata=1
    
    [tartare-osmosis-bin]
    name=Copr repo for osmosis-bin owned by tartare
    baseurl=https://copr-be.cloud.fedoraproject.org/results/tartare/osmosis-bin/epel-7-$basearch/
    type=rpm-md
    skip_if_unavailable=True
    gpgcheck=1
    gpgkey=https://copr-be.cloud.fedoraproject.org/results/tartare/osmosis-bin/pubkey.gpg
    repo_gpgcheck=0
    enabled=1
    enabled_metadata=1
    
    [tartare-osmctools]
    name=Copr repo for osmctools owned by tartare
    baseurl=https://copr-be.cloud.fedoraproject.org/results/tartare/osmctools/epel-7-$basearch/
    type=rpm-md
    skip_if_unavailable=True
    gpgcheck=1
    gpgkey=https://copr-be.cloud.fedoraproject.org/results/tartare/osmctools/pubkey.gpg
    repo_gpgcheck=0
    enabled=1
    enabled_metadata=1
    
    [tartare-openstreetmap-carto]
    name=Copr repo for openstreetmap-carto owned by tartare
    baseurl=https://copr-be.cloud.fedoraproject.org/results/tartare/openstreetmap-carto/epel-7-$basearch/
    type=rpm-md
    skip_if_unavailable=True
    gpgcheck=1
    gpgkey=https://copr-be.cloud.fedoraproject.org/results/tartare/openstreetmap-carto/pubkey.gpg
    repo_gpgcheck=0
    enabled=1
    enabled_metadata=1
    EOF
    

Fedora only

    • We enable my COPR repositories
sudo dnf copr enable tartare/mod_tile 
sudo dnf copr enable tartare/osmosis-bin
sudo dnf copr enable tartare/openstreetmap-carto 

For both distributions
If you want to install onto separate partitions, some preparation is needed. The /var/lib/pgsql partition should preferably be on an SSD.

Partition                      2 regions   France   Europe
/var/lib/mod_tile              50GB
/var/lib/openstreetmap-carto   3GB
/var/lib/pgsql                 20GB        100GB    500GB
  1. Create the 3 partitions and format them as xfs (or ext4)
  2. Create the mount points
    sudo mkdir /var/lib/pgsql /var/lib/mod_tile /var/lib/openstreetmap-carto
  3. Update the partition table by editing the /etc/fstab file (see the example after this list)
  4. Mount the partitions
    sudo mount -a
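For step 3, the fstab entries could look like this (the device names are hypothetical; adapt them to your actual disks):

/dev/vdb1  /var/lib/pgsql                xfs  defaults  0 0
/dev/vdc1  /var/lib/mod_tile             xfs  defaults  0 0
/dev/vdc2  /var/lib/openstreetmap-carto  xfs  defaults  0 0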

Install the postgresql database with the postgis extension

sudo yum -y install postgresql-server postgresql-contrib postgis postgis-utils

Then the OSM tools and a few dependencies

sudo yum install mapnik mapnik-utils python2-mapnik mod_tile osm2pgsql \
                 osmctools openstreetmap-carto PyYAML
sudo yum install npm wget git unzip java

Now install the carto utility, via npm

sudo npm -g install carto

The mod_tile package creates a system user, osm, but it has no shell (system account = /sbin/nologin shell). Out of laziness, er, simplicity, we give it a shell, add it to the wheel group, and assign it a password, for the duration of the installation only.

usermod -s /bin/bash osm
usermod -aG wheel osm
passwd osm

Create the volatile directory that hosts the renderd socket

sudo systemd-tmpfiles --create

Initializing the database

Initializing postgresql is trivial

sudo postgresql-setup initdb
sudo systemctl enable postgresql httpd
sudo systemctl start postgresql httpd

We now fine-tune our database by editing the /var/lib/pgsql/data/postgresql.conf file:

  • shared_buffers = 128MB
  • maintenance_work_mem = 256MB

Temporarily, to optimize the import time of our French extract, you can also change the value below. It must, however, be set back to on once the import is finished (a sketch of reverting it follows the restart command below).

  • autovacuum = off

Apply the changes

sudo systemctl restart postgresql
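Once the import described later in this article has finished, remember to turn autovacuum back on and apply the change again; a minimal sketch, assuming the setting sits on its own line exactly as written above:

sudo sed -i 's/^autovacuum = off/autovacuum = on/' /var/lib/pgsql/data/postgresql.conf
sudo systemctl restart postgresql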

Now initialize the database

sudo -u postgres createuser -s osm
sudo -u postgres createuser apache
sudo -u postgres createdb -E UTF8 -O osm gis
sudo -u postgres psql
\c gis
CREATE EXTENSION postgis;
CREATE EXTENSION hstore;
ALTER TABLE geometry_columns OWNER TO osm;
ALTER TABLE spatial_ref_sys OWNER TO osm;
\q

If database creation fails with the following error (ref http://stackoverflow.com)
createdb: database creation failed: ERROR: new encoding (UTF8) is incompatible with the encoding of the template database (SQL_ASCII)

sudo -u postgres psql 
psql (9.2.18)
Type "help" for help.

postgres=# UPDATE pg_database SET datistemplate = FALSE WHERE datname = 'template1';
UPDATE 1
postgres=# DROP DATABASE template1;
DROP DATABASE
postgres=# CREATE DATABASE template1 WITH TEMPLATE = template0 ENCODING = 'UNICODE';
CREATE DATABASE
postgres=# UPDATE pg_database SET datistemplate = TRUE WHERE datname = 'template1';
UPDATE 1
postgres=# \c template1
You are now connected to database "template1" as user "postgres".
template1=# VACUUM FREEZE;
VACUUM
template1=# \q

The osm-carto style

From here on, we will use the osm user account exclusively (for all the commands in the rest of this tutorial).

sudo su - osm

To generate tiles, you need a stylesheet (a bit like the CSS of web pages). The default OSM style is used.

About 2GB of free space is needed in /var for the shapefiles

cd /usr/share/openstreetmap-carto/
scripts/get-shapefiles.py
sudo ln -s /var/lib/openstreetmap-carto/style.xml /usr/share/openstreetmap-carto/style.xml
carto -a "3.0.0" project.mml > style.xml

Edit the /etc/renderd.conf file to point at our stylesheet

[default]
...
XML=/usr/share/openstreetmap-carto/style.xml

Importing the data

To make the task a little more complex, we want two regions (it's cool to live on a regional border):

  • PACA
  • Languedoc-Roussillon

osmconvert will do the merging.

Create the directory that will hold our planet extracts, and make sure the partition is large enough for the file(s)

sudo mkdir /home/osm
sudo chown osm:osm /home/osm
cd /home/osm

Download the two planet extracts, taking care to fetch first the md5 checksum and the state.txt file, which carries the extract's timestamp. That timestamp is used by the osmosis utility when updating the database. As a precaution, we apply the geofabrik site's timestamp to our extracts: a free backup of a piece of information essential to the update process.

cd /home/osm
mkdir provence-alpes-cote-d-azur
pushd provence-alpes-cote-d-azur
wget http://download.geofabrik.de/europe/france/provence-alpes-cote-d-azur-latest.osm.pbf.md5
wget http://download.geofabrik.de/europe/france/provence-alpes-cote-d-azur-updates/state.txt
wget http://download.geofabrik.de/europe/france/provence-alpes-cote-d-azur-latest.osm.pbf
popd

mkdir languedoc-roussillon
pushd languedoc-roussillon
wget http://download.geofabrik.de/europe/france/languedoc-roussillon-latest.osm.pbf.md5
wget http://download.geofabrik.de/europe/france/languedoc-roussillon-updates/state.txt
wget http://download.geofabrik.de/europe/france/languedoc-roussillon-latest.osm.pbf
popd

Verify them

pushd /home/osm/provence-alpes-cote-d-azur
md5sum -c provence-alpes-cote-d-azur-latest.osm.pbf.md5
popd
pushd /home/osm/languedoc-roussillon
md5sum -c languedoc-roussillon-latest.osm.pbf.md5
popd

and merge them

osmconvert languedoc-roussillon/languedoc-roussillon-latest.osm.pbf --out-o5m | osmconvert - provence-alpes-cote-d-azur/provence-alpes-cote-d-azur-latest.osm.pbf -o=sud-est.osm.pbf

We will use the osm2pgsql command to import the data:

  • the value of the -C parameter is about 2/3 of the RAM
  • the --slim option is mandatory if updates are planned
  • our stylesheet is given with the --style option
  • the name of our database is given with the -d option
  • the number of tasks to parallelize (where possible) is given with --number-processes, to be adapted to the number of available CPUs
  • we import our merged planet extract

It is preferable to run this command with nohup, which avoids killing the process if the connection drops unexpectedly.
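
The general pattern might be (a sketch; the log path is arbitrary):

nohup some-long-command > ~/import.log 2>&1 &
tail -f ~/import.log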

osm2pgsql --slim --hstore --style /usr/share/openstreetmap-carto/openstreetmap-carto.style --tag-transform-script /usr/share/openstreetmap-carto/openstreetmap-carto.lua -d gis -C 10000 --number-processes 2 /home/osm/sud-est.osm.pbf

osm2pgsql version 0.92.0 (64 bit id space)

Using lua based tag processing pipeline with script /usr/share/openstreetmap-carto/openstreetmap-carto.lua
Using projection SRS 3857 (Spherical Mercator)
Setting up table: planet_osm_point
Setting up table: planet_osm_line
Setting up table: planet_osm_polygon
Setting up table: planet_osm_roads
Allocating memory for dense node cache
Allocating dense node cache in one big chunk
Allocating memory for sparse node cache
Sharing dense sparse
Node-cache: cache=10000MB, maxblocks=160000*65536, allocation method=11
Mid: pgsql, scale=100 cache=10000
Setting up table: planet_osm_nodes
Setting up table: planet_osm_ways
Setting up table: planet_osm_rels

Reading in file: /home/osm/sud-est.osm.pbf
Using PBF parser.
Processing: Node(46593k 6.4k/s) Way(6822k 1.91k/s) Relation(39430 79.66/s)  parse time: 11438s
Node stats: total(46593329), max(4919729897) in 7336s
Way stats: total(6822494), max(501069714) in 3577s
Relation stats: total(39438), max(7336004) in 495s
Committing transaction for planet_osm_point
Committing transaction for planet_osm_line
Committing transaction for planet_osm_polygon
Committing transaction for planet_osm_roads
Setting up table: planet_osm_nodes
Setting up table: planet_osm_ways
Setting up table: planet_osm_rels
Using lua based tag processing pipeline with script /usr/share/openstreetmap-carto/openstreetmap-carto.lua
Setting up table: planet_osm_nodes
Setting up table: planet_osm_ways
Setting up table: planet_osm_rels
Using lua based tag processing pipeline with script /usr/share/openstreetmap-carto/openstreetmap-carto.lua

Going over pending ways...
        5962327 ways are pending

Using 2 helper-processes
Finished processing 5962327 ways in 6957 s

5962327 Pending ways took 6957s at a rate of 857.03/s
Committing transaction for planet_osm_point
Committing transaction for planet_osm_line
Committing transaction for planet_osm_polygon
Committing transaction for planet_osm_roads
Committing transaction for planet_osm_point
Committing transaction for planet_osm_line
Committing transaction for planet_osm_polygon
Committing transaction for planet_osm_roads

Going over pending relations...
        0 relations are pending

Using 2 helper-processes
Finished processing 0 relations in 0 s

Committing transaction for planet_osm_point
WARNING:  there is no transaction in progress
Committing transaction for planet_osm_line
WARNING:  there is no transaction in progress
Committing transaction for planet_osm_polygon
WARNING:  there is no transaction in progress
Committing transaction for planet_osm_roads
WARNING:  there is no transaction in progress
Committing transaction for planet_osm_point
WARNING:  there is no transaction in progress
Committing transaction for planet_osm_line
WARNING:  there is no transaction in progress
Committing transaction for planet_osm_polygon
WARNING:  there is no transaction in progress
Committing transaction for planet_osm_roads
WARNING:  there is no transaction in progress
Sorting data and creating indexes for planet_osm_roads
Sorting data and creating indexes for planet_osm_polygon
Sorting data and creating indexes for planet_osm_line
Sorting data and creating indexes for planet_osm_point
Copying planet_osm_roads to cluster by geometry finished
Creating geometry index on planet_osm_roads
Creating osm_id index on planet_osm_roads
Creating indexes on planet_osm_roads finished
All indexes on planet_osm_roads created in 476s
Completed planet_osm_roads
Copying planet_osm_point to cluster by geometry finished
Creating geometry index on planet_osm_point
Copying planet_osm_line to cluster by geometry finished
Creating geometry index on planet_osm_line
Creating osm_id index on planet_osm_point
Creating indexes on planet_osm_point finished
All indexes on planet_osm_point created in 1624s
Completed planet_osm_point
Creating osm_id index on planet_osm_line
Creating indexes on planet_osm_line finished
All indexes on planet_osm_line created in 2291s
Completed planet_osm_line
Copying planet_osm_polygon to cluster by geometry finished
Creating geometry index on planet_osm_polygon
Creating osm_id index on planet_osm_polygon
Creating indexes on planet_osm_polygon finished
All indexes on planet_osm_polygon created in 5913s
Completed planet_osm_polygon
Stopping table: planet_osm_nodes
Stopped table: planet_osm_nodes in 0s
Stopping table: planet_osm_ways
Building index on table: planet_osm_ways
Stopped table: planet_osm_ways in 10042s
Stopping table: planet_osm_rels
Building index on table: planet_osm_rels
Stopped table: planet_osm_rels in 97s
node cache: stored: 46593329(100.00%), storage efficiency: 52.06% (dense blocks: 1698, sparse nodes: 37792830), hit rate: 100.00%

Osm2pgsql took 34458s overall

Duration: about 9 hours and 35 minutes.

We generate the indexes:

cd /usr/share/openstreetmap-carto/
scripts/indexes.py | psql -d gis

The conductor of the rendering: renderd

We first check the configuration in /etc/renderd.conf:

[renderd]
num_threads=2

[mapnik]
plugins_dir=/usr/lib64/mapnik/input
font_dir=/usr/share/fonts

[default]
XML=/usr/share/openstreetmap-carto/style.xml

We adjust the num_threads parameter in the [renderd] section to match the number of vCPUs.

Then we try launching the daemon by hand, as the osm user, to check that everything works as expected:

/usr/sbin/renderd -f -c /etc/renderd.conf

We try downloading a tile to check that everything works: localhost/mod_tiles/0/0/0.png
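
From the server itself, that check might look like this (a sketch; the URL is the one above):

curl -o /tmp/tile.png http://localhost/mod_tiles/0/0/0.png
file /tmp/tile.png    # should report PNG image data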

If everything is correct, we get the expected tile image.

We stop our renderd instance (CTRL+C) and start the daemon properly:

sudo systemctl start renderd
sudo systemctl enable renderd
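
To check that the daemon came up cleanly, the usual systemd commands apply:

sudo systemctl status renderd
sudo journalctl -u renderd -e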

Pre-rendering the tiles (optional)

For the map to feel responsive, it is necessary to pre-render the tiles. Zoom level 10 should be enough; higher levels will be rendered on demand. Obviously, this too takes a huge amount of time.

The renderd daemon must be started and operational.
It is preferable to run this command with nohup, following the pattern shown earlier.

render_list -a -n 2 -Z 10

Zoom 01: min: 25.0 avg: 25.0 max: 25.0  over a total of    25.0s in    1 requests
Zoom 02: min: 36.2 avg: 36.2 max: 36.2  over a total of    36.2s in    1 requests
Zoom 03: min: 42.1 avg: 42.1 max: 42.1  over a total of    42.1s in    1 requests
Zoom 04: min:  3.4 avg: 47.9 max: 92.4  over a total of   191.6s in    4 requests
Zoom 05: min:  2.5 avg: 11.8 max: 84.1  over a total of   189.6s in   16 requests
Zoom 06: min:  1.6 avg:  6.7 max: 129.3 over a total of   426.7s in   64 requests
Zoom 07: min:  1.8 avg:  4.0 max: 222.0 over a total of  1025.7s in  256 requests
Zoom 08: min:  1.7 avg:  3.2 max: 12.4  over a total of  3313.9s in 1023 requests
Zoom 09: min:  1.0 avg:  3.7 max: 193.0 over a total of 15224.3s in 4090 requests
Zoom 10: min:  0.9 avg:  3.0 max: 123.8 over a total of 49596.2s in 16373 requests

The magic web page

If the installation was done from RPMs:

sudo cp /usr/share/doc/mod_tile-0.5/slippymap.html /var/www/html/index.html

Otherwise, copy the web page from the mod_tile source directory (/home/osm in this tutorial).

The result is visible at map.tartarefr.eu.
We change a few variables:

  • var lat: set the latitude of the center of the planet extracts used (43.81 for my extracts)
  • var lon: set the longitude of the center of the planet extracts used (4.64 for my extracts)
  • var newLayer: change the URL, because by default the /osm_tiles alias must be added for our server to serve tiles (/osm_tiles/${z}/${x}/${y}.png)

We can also add the following line before the init() function, to avoid pale pink tiles (the default tile served for a file that was not found). The browser's JavaScript will then make 5 attempts before declaring that the file does not exist.

OpenLayers.IMAGE_RELOAD_ATTEMPTS = 5;
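
For reference, the lat/lon edits can be scripted (a sketch; it assumes the stock page declares these variables as simple assignments, and the newLayer URL is easier to adjust by hand):

cd /var/www/html
sudo sed -i -e 's/var lat *=.*/var lat = 43.81;/' \
            -e 's/var lon *=.*/var lon = 4.64;/' index.html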

Updating the PostGIS database

The osmosis utility takes care of downloading the updates, and osm2pgsql loads them into the database. Still as the osm user, we initialize the process and then run it periodically (as a cron job).

  • We initialize the update for the first planet extract (the order in which the extracts are processed does not matter)
    export WORKDIR_OSM=/var/lib/mod_tile/.osmosis/provence-alpes-cote-d-azur
    mkdir -p ${WORKDIR_OSM}
    cd ${WORKDIR_OSM}
    osmosis --read-replication-interval-init workingDirectory=${WORKDIR_OSM}

    Same thing for the second one:

    export WORKDIR_OSM=/var/lib/mod_tile/.osmosis/languedoc-roussillon
    mkdir -p ${WORKDIR_OSM}
    cd ${WORKDIR_OSM}
    osmosis --read-replication-interval-init workingDirectory=${WORKDIR_OSM}
  • We change the update URLs (we use geofabrik)
    sed -i -e "/^baseUrl/ s;=.*$;=http://download.geofabrik.de/europe/france/provence-alpes-cote-d-azur-updates;" /var/lib/mod_tile/.osmosis/provence-alpes-cote-d-azur/configuration.txt
    sed -i -e "/^baseUrl/ s;=.*$;=http://download.geofabrik.de/europe/france/languedoc-roussillon-updates;" /var/lib/mod_tile/.osmosis/languedoc-roussillon/configuration.txt
  • We copy the state.txt file so that osmosis knows the starting point for the updates
    cp /home/osm/provence-alpes-cote-d-azur/state.txt /var/lib/mod_tile/.osmosis/provence-alpes-cote-d-azur/
    cp /home/osm/languedoc-roussillon/state.txt /var/lib/mod_tile/.osmosis/languedoc-roussillon/
    
  • We run the first update task manually to make sure everything works correctly
    export WORKDIR_OSM=/var/lib/mod_tile/.osmosis/provence-alpes-cote-d-azur
    cd ${WORKDIR_OSM}
    osmosis --read-replication-interval workingDirectory=${WORKDIR_OSM} \
            --write-xml-change /tmp/change.osm.gz
    osm2pgsql --append --slim --hstore \
              --style /usr/share/openstreetmap-carto/openstreetmap-carto.style \
              --tag-transform-script /usr/share/openstreetmap-carto/openstreetmap-carto.lua \
              -d gis -C 10000 --number-processes 2 \
              -e 10-20 -o /tmp/expire.list \
              /tmp/change.osm.gz
    render_expired --min-zoom=10 --delete-from=15 < /tmp/expire.list
    rm -f /tmp/change.osm.gz
    rm -f /tmp/expire.list
    
    export WORKDIR_OSM=/var/lib/mod_tile/.osmosis/languedoc-roussillon
    cd ${WORKDIR_OSM}
    osmosis --read-replication-interval workingDirectory=${WORKDIR_OSM} \
            --write-xml-change /tmp/change.osm.gz
    osm2pgsql --append --slim --hstore \
              --style /usr/share/openstreetmap-carto/openstreetmap-carto.style \
              --tag-transform-script /usr/share/openstreetmap-carto/openstreetmap-carto.lua \
              -d gis -C 10000 --number-processes 2 \
              -e 10-20 -o /tmp/expire.list \
              /tmp/change.osm.gz
    render_expired --min-zoom=10 --delete-from=15 < /tmp/expire.list
    rm -f /tmp/change.osm.gz
    rm -f /tmp/expire.list

Turning the commands from the last bullet into a scheduled task (an osm-update.sh update script) is left as an exercise; a minimal sketch follows the list. Just a few tips:

  • The scheduled tasks absolutely must run as the osm user.
  • I prefer writing the temporary files to /tmp, because it is a RAM-backed filesystem (faster), but they can go in any directory where the osm user has read/write access.
  • Each run of the commands above only loads a single day's worth of changes into the database. If several days have passed since the installation, the commands must be run once per day of lag to catch up. The simplest approach is to get the number of the latest update from the geofabrik site and compare it with the number in the state.txt file of the osmosis working directory (which is incremented by 1 on each update run). If many days have gone by, it is better to download all the updates and merge them into a single file (osmctools: osmconvert).
  • It is perfectly fine to apply an update that has already been loaded. It is even recommended when using the official minutely changes.
  • A pipe can be used between the osmosis and osm2pgsql tasks to avoid the temporary file, but I take advantage of that file to also update nominatim (out of scope for a tile server installation).
  • Tiles modified by the update are re-rendered if their zoom is between 10 and 14, and simply deleted if their zoom is 15 or higher (they will be rebuilt on demand).
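
A minimal osm-update.sh sketch, assembled from the commands above (script and log paths are assumptions):

#!/bin/bash
# osm-update.sh -- daily update job; paths, database name (gis) and zoom
# bounds follow this tutorial, everything else is an assumption.
set -e
for region in provence-alpes-cote-d-azur languedoc-roussillon ; do
    WORKDIR_OSM=/var/lib/mod_tile/.osmosis/${region}
    cd ${WORKDIR_OSM}
    # Fetch one replication interval as an XML change file
    osmosis --read-replication-interval workingDirectory=${WORKDIR_OSM} \
            --write-xml-change /tmp/change.osm.gz
    # Apply the changes and collect the list of expired tiles
    osm2pgsql --append --slim --hstore \
              --style /usr/share/openstreetmap-carto/openstreetmap-carto.style \
              --tag-transform-script /usr/share/openstreetmap-carto/openstreetmap-carto.lua \
              -d gis -C 10000 --number-processes 2 \
              -e 10-20 -o /tmp/expire.list \
              /tmp/change.osm.gz
    # Re-render zoom 10-14 tiles, delete zoom >= 15 (rebuilt on demand)
    render_expired --min-zoom=10 --delete-from=15 < /tmp/expire.list
    rm -f /tmp/change.osm.gz /tmp/expire.list
done

and a possible crontab entry for the osm user:

0 3 * * * /usr/local/bin/osm-update.sh >> ${HOME}/osm-update.log 2>&1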

Post installation

Now that everything is installed, configured and operational, we remove the osm user's shell, take it out of the wheel group, clear its password, and re-enable PostgreSQL's autovacuum. These commands are run as our normal user (the one used to install the RPMs).

sudo usermod -s /sbin/nologin osm
sudo passwd -d osm
sudo gpasswd -d osm wheel
sudo sed -i -e '/^autovacuum/ s/off/on/' /var/lib/pgsql/data/postgresql.conf
sudo systemctl restart postgresql
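
To double-check that autovacuum is back on:

sudo -u postgres psql -c 'SHOW autovacuum;'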


Exam Results and Pass List #PeruRumboGSoC2018

Posted by Julita Inca Chiroque on November 12, 2017 03:26 AM

Early this morning, students from different universities of Lima, Peru came to UNI to take an exam to prove their knowledge of programming and GNU/Linux.

52 students had registered in the previous days, and I started to look for a lab with 50 Linux machines, which was actually very challenging. We then decided to just use two classrooms and give a written exam through which we could gauge skill levels in programming on Linux. Only 17 showed up (maybe the Peruvian soccer match from last night, which determined whether we would go to the World Cup, was the reason it was so hard for them to wake up early).

According to our plan, we are looking for students who are almost ready for GSoC, because we are aware that six Sundays will not be enough to master these subjects: C, Python, JavaScript, Node.js, Linux, Git and documentation. We also noticed that people who already have these skills are working and not enrolled in university, or are retired, or simply want to rest on their Sundays.

However, there are interested students who may not yet be at an intermediate or advanced level in programming on Linux. That is why we consider it important to get a general view of the new group through the exam, so they can measure their academic progress at the end of the instructional period.

This is the first step toward making our students confident enough to apply to the GSoC program next April 2018. Our main objective in this phase is to build a good GitHub profile with blogs and content related to programming on Linux.

Thanks to the support of the Linux Foundation, each participant who finishes this program successfully will be supported with $500 in courses. The only way to prove completion is through documentation (Git / blog) of each session. Glad to have more women involved.

Tomorrow our sessions start at UIGV. We have authorization from 10 am to 4 pm, and I am going to use two extra hours, until 6 pm, to make sure they complete their documentation.

The list of participants who passed is only a reference; we are going to help them with food expenses. We are not turning away any student who wants to learn with us, so anyone can join, and the pacing will basically follow the level of the top 12 students. Also, continuous evaluation in every session will let us decide who is going to receive the sweatshirts from the Linux Foundation.

(Screenshot: list of participants who passed)

Thanks so much to LinuXatUNI, the trainers, and everyone who helps us as a local group.



Configuring a Yubikey with GnuPG on Fedora

Posted by Eduardo Villagrán Morales on November 12, 2017 03:00 AM

In this series we will look at various topics involving GnuPG and the Yubikey as a smartcard. The posts are specific to the Yubikey 4, but they might work with the Yubikey Neo with some modifications.

These examples were tested on:

  • Fedora 26
  • GnuPG 2.2.1
  • Yubikey 4 (v2.1)

Verifying the Yubikey

For the following examples …

Securing your centos/redhat/fedora VPS in 10 steps.

Posted by Didier Fabert (tartare) on November 11, 2017 05:33 PM

Prerequisites

From our client machine, we make sure a key pair (rsa is better than dsa) exists for our current user. It will be used to connect to our VPS without a password, yet securely. If there is none, we create one and give it a passphrase:

ssh-keygen -t rsa

We then use the SSH agent so that our private key's passphrase only has to be entered once; but if you prefer typing it every time…

ssh-add

Hardening

We can now take care of our VPS:

  1. As soon as the installation confirmation email arrives, connect to the VPS over SSH with the root login and the password provided.
    This session must not be closed until we can connect without a password (by key, that is) as the user that will be created (step 7).
  2. Immediately change the root password
    passwd
  3. Add a standard user and set a password
    useradd -m -s /bin/bash -c "<Nom> <Prenom>" <mon-user>
    passwd <mon-user>
  4. From our client machine, authorize our key on the server
    ssh-copy-id <mon-user>@<ip-de-mon-vps>
  5. Edit the SSH service configuration (typically the /etc/ssh/sshd_config file):
    • no longer allow the root superuser
      PermitRootLogin no
    • no longer allow password logins (accept only key-based logins)
      PasswordAuthentication no
    • restart the ssh service (fedora, centos7 or rhel7)
      systemctl restart sshd

      or, on distributions without systemd (centos6 or rhel6)

      service sshd restart
  6. In another shell, make sure our user can connect to the VPS without a password
    ssh <mon-user>@<ip-de-mon-vps>
  7. Now that the front door has been changed and our VPS is reachable from our client machine as a normal user, we can close the shell opened at the start of the procedure (this remains optional)
  8. Update the system
    yum update
  9. Set up a firewall.
    If the firewalld service is available, it is normally already installed:

    1. Get the default zone
      firewall-cmd --get-default-zone
      public
      

      Our default zone is called public.

    2. Check that the ssh service is allowed through the firewall
      firewall-cmd --zone=public --list-services
      dhcpv6-client ssh
    3. If the ssh service is not in the list, add it (permanently) and reload the firewall
      firewall-cmd --zone=public --permanent --add-service=ssh
      firewall-cmd --reload

    Otherwise, use the iptables service:

    1. If firewalld is installed but not working (thanks to hosting providers that cannot configure the proxmox technology correctly), remove it
      yum remove firewalld
    2. Install the iptables service
      yum install iptables-services
    3. Start and enable it
      systemctl start iptables
      systemctl enable iptables

      Or, on distributions without systemd (centos6 or rhel6)

      service iptables start
      chkconfig iptables on
      
    4. Put a minimal configuration in the /etc/sysconfig/iptables file: allow inbound traffic on the local loopback (localhost), pings (icmp protocol), and inbound traffic on already-established connections (state RELATED,ESTABLISHED), plus inbound traffic on port 22 (ssh); above all, drop everything else with a default policy of DROP. Outbound traffic, however, is allowed.
      # sample configuration for iptables service
      # you can edit this manually or use system-config-firewall
      # please do not ask us to add additional ports/services to this default configuration                                                                                             
      *filter 
      :INPUT DROP [0:0]
      :FORWARD DROP [0:0]
      :OUTPUT ACCEPT [0:0]
      -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
      -A INPUT -p icmp -j ACCEPT
      -A INPUT -i lo -j ACCEPT
      -A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
      COMMIT
      
    5. Restart the iptables service
      systemctl restart iptables

      Or, on distributions without systemd (centos6 or rhel6)

      service iptables restart
  10. Set up the fail2ban service for the ssh service:
    • Installation
      yum install fail2ban
    • Put a minimal configuration in place: override jail.conf via the jail.local file to ban the heavy offenders for a straight 48 hours, since there cannot be any mistyped passwords, given that only key-based logins are accepted.
      [DEFAULT]
      ignoreip = 127.0.0.1/8 <ip-de-mon-vps>
      bantime = 3600
      banaction = firewallcmd-ipset
      banaction_allports = firewallcmd-ipset
      backend = systemd
      sender = fail2ban@<mon-domaine>
      destemail = root
      action = %(action_mwl)s
      
      [sshd]
      enabled  = true
      maxretry = 5
      bantime  = 172800
      findtime = 172800
      

      or, if firewalld is not available

      [DEFAULT]
      ignoreip = 127.0.0.1/8 <ip-de-mon-vps>
      bantime = 3600
      banaction = iptables-multiport
      banaction_allports = iptables-allports
      backend = systemd
      sender = fail2ban@<mon-domaine>
      destemail = root
      action = %(action_mwl)s
      
      [sshd]
      enabled  = true
      maxretry = 5
      bantime  = 172800
      findtime = 172800
      
    • Start and enable the service
      systemctl start fail2ban
      systemctl enable fail2ban

      Or, on distributions without systemd (centos6 or rhel6)

      service fail2ban start
      chkconfig fail2ban on
      

There we go: the VPS is now secured, and the first ban notification emails for the persistent pests should not take long to arrive…
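
Two quick checks worth running along the way (a sketch; both are standard commands):

sudo sshd -t                        # validate sshd_config before restarting (step 5)
sudo fail2ban-client status sshd    # confirm the sshd jail is active (step 10)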

WordPress and CSP: unsafe-inline and unsafe-eval (script-src)

Posted by Didier Fabert (tartare) on November 11, 2017 11:07 AM

Or: how to remove unsafe-inline and unsafe-eval from script-src in the Content-Security-Policy HTTP header with WordPress, and get an A+ on securityheaders.io.

Fair warning: this is going to be long and painful… It proceeds by way of one small modification, a test, then another small modification. Yes, modification is singular, because only one parameter is changed between each test.

WordPress on its own is fairly clean overall, in the sense that it embeds very little inline JavaScript. There are therefore only four problems to solve:

  1. Disable the emojis, with a plugin so as not to touch the code
  2. Choose a theme that does not use inline JavaScript
  3. Use only plugins without inline JavaScript
  4. Add an exception for the admin area

It sounds simple put like that, but in practice the slightest plugin (especially image gallery ones) can end up violating the Content-Security-Policy. Moreover, to get the CSP violation reports, we will need a script that logs them all.

Prerequisites

  • We write ourselves a small, imperfect PHP script that will log the violations (a curl test follows this list). Given how little security this script offers, it is best to harden it and pool the report logging: Expect-Certificate-Transparency (ect) and Public-Key-Pins (pkp). Or simply to remove it once our tests are done.
    File /usr/share/wordpress/report.php
    <?php
    date_default_timezone_set('UTC');
    $LOGPATH = "/var/log/httpd";
    $ROWS = array(
      'violated-directive',
      'effective-directive',
      'blocked-uri',
      'document-uri',
      'line-number',
      'status-code',
      'referrer',
      'disposition',
      'original-policy',
      'script-sample'
    );
    
    // Timestamp for each report; read the raw JSON body sent by the browser
    $header = date("Y-m-d H:i:s");
    $raw = file_get_contents('php://input');
    
    // Reject missing, empty or oversized payloads
    if ( $raw === false or $raw === '' or strlen( $raw ) >= 2048 ) {
      exit( 1 );
    }
    
    $rows = json_decode( $raw );
    $message = $header . ' csp-report' . PHP_EOL;
    foreach( $ROWS as $row ) {
      $message .= "  $row: " . $rows->{'csp-report'}->{$row} . PHP_EOL;
    }
    $message .= PHP_EOL;
    
    $file = file_put_contents( $LOGPATH . "/report.log", $message, FILE_APPEND | LOCK_EX );
    
    echo "For reporting Content-Security-Policy violation";
    exit( 0 );
    
  • We start with a very restrictive definition in the Apache configuration.
    Header always set Content-Security-Policy "default-src 'self' data: ; script-src 'self' ; style-src 'self' 'unsafe-inline' ; font-src 'self' data: ; img-src 'self' data: ; report-uri https://blog.example.com/report.php"
    
  • We disable all WordPress plugins
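
Before relying on the browser, the logging endpoint can be exercised by hand (a sketch; the URL is the example one, and fields missing from the payload are simply logged empty):

curl -s -X POST -H 'Content-Type: application/csp-report' \
     --data '{"csp-report":{"violated-directive":"script-src","blocked-uri":"https://example.org","document-uri":"https://blog.example.com/"}}' \
     https://blog.example.com/report.php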

On reloading the page, the first reports will appear in the /var/log/httpd/report.log file.

First test

Violation reports

  • This is WordPress core's only Content-Security-Policy problem (outside the admin area): the emojis.
    2017-11-12 12:02:29 csp-report
      violated-directive: script-src https://www.tartarefr.eu
      effective-directive: 
      blocked-uri: self
      document-uri: https://www.tartarefr.eu/
      line-number: 12
      status-code: 
      referrer: 
      disposition: 
      original-policy: default-src https://www.tartarefr.eu data:; script-src https://www.tartarefr.eu; style-src https://www.tartarefr.eu 'unsafe-inline'; font-src https://www.tartarefr.eu data:; img-src https://www.tartarefr.eu data:; report-uri https://report.tartarefr.eu/report.php?type=csp
      script-sample: 
                            window._wpemojiSettings = {"baseUrl"...
    
  • In our theme's CSS, there is a dependency on https://fonts.googleapis.com
    2017-11-12 12:02:29 csp-report
      violated-directive: style-src https://www.tartarefr.eu 'unsafe-inline'
      effective-directive: 
      blocked-uri: https://fonts.googleapis.com
      document-uri: https://www.tartarefr.eu/
      line-number: 
      status-code: 
      referrer: 
      disposition: 
      original-policy: default-src https://www.tartarefr.eu data:; script-src https://www.tartarefr.eu; style-src https://www.tartarefr.eu 'unsafe-inline'; font-src https://www.tartarefr.eu data:; img-src https://www.tartarefr.eu data:; report-uri https://report.tartarefr.eu/report.php?type=csp
      script-sample:
    

Resolution

  • We re-enable (install and activate) the Disable Emojis plugin
  • We add https://fonts.googleapis.com to the style-src section of our Content-Security-Policy header and reload Apache.
    Header always set Content-Security-Policy "default-src 'self' data: ; script-src 'self' ; style-src 'self' 'unsafe-inline' https://fonts.googleapis.com ; font-src 'self' data: ; img-src 'self' data: ; report-uri https://report.example.com/report.php?type=csp"

Second test

Violation reports

    • This is the dependency for the font
      2017-11-12 12:06:52 csp-report
        violated-directive: font-src https://www.tartarefr.eu data:
        effective-directive: 
        blocked-uri: https://fonts.gstatic.com
        document-uri: https://www.tartarefr.eu/
        line-number: 
        status-code: 
        referrer: 
        disposition: 
        original-policy: default-src https://www.tartarefr.eu data:; script-src https://www.tartarefr.eu; style-src https://www.tartarefr.eu 'unsafe-inline' https://fonts.googleapis.com; font-src https://www.tartarefr.eu data:; img-src https://www.tartarefr.eu data:; report-uri https://report.tartarefr.eu/report.php?type=csp
        script-sample: 
      

Resolution

      • We add https://fonts.gstatic.com to the font-src section of our Content-Security-Policy header and reload Apache.
        Header always set Content-Security-Policy "default-src 'self' data: ; script-src 'self' ; style-src 'self' 'unsafe-inline' https://fonts.googleapis.com ; font-src 'self' data: https://fonts.gstatic.com ; img-src 'self' data: ; report-uri https://report.example.com/report.php?type=csp"

From this point on, no more violation reports come in, unless we try the admin area or a gallery plugin.

The admin area

The site's admin area does not work well with our Content-Security-Policy definition. We will keep adapting it, but in the end we will have to cheat in the script-src section and change the Content-Security-Policy HTTP header for the admin area.
We will therefore add to our Content-Security-Policy definition:

  • https://code.jquery.com to our Content-Security-Policy header, in the style-src section
  • https://code.jquery.com to our Content-Security-Policy header, in the img-src section
  • https://secure.gravatar.com to our Content-Security-Policy header, in the img-src section

For the rest, alas, there is no miracle recipe. The definition in Apache becomes this:

<Location "/wp-admin">
  ...
  Header always set Content-Security-Policy "default-src 'self' data: ; script-src 'self' 'unsafe-inline' 'unsafe-eval' ; style-src 'self' 'unsafe-inline' fonts.googleapis.com ; font-src 'self' fonts.gstatic.com data: ; img-src 'self' data: secure.gravatar.com ; report-uri https://report.example.com/report.php"
</Location>
Header always set Content-Security-Policy "default-src 'self' data: ; script-src 'self' ; style-src 'self' 'unsafe-inline' fonts.googleapis.com https://code.jquery.com ; font-src 'self' fonts.gstatic.com data: ; img-src 'self' data: https://code.jquery.com https://secure.gravatar.com ; report-uri https://report.example.com/report.php"

Testing the site on securityheaders.io

Obviously, we do not test the admin area …
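
The header itself can be checked from the command line (a sketch; substitute your own domain):

curl -sI https://www.example.com/ | grep -i '^content-security-policy'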

How a smart phone makes time irrelevant

Posted by Justin W. Flory on November 11, 2017 08:00 AM

It’s 2 p.m. and the weather is finally turning cold. On this brisk November day, an old professor steps out into the corner lobby of the college. The golden rays of the sun cast a warm, radiant glow, leaving a bright, inviting air. This small moment of time is meaningless in an infinite universe of possible moments.

Yet he stands and watches for perhaps five or ten minutes before taking his leave. During his observation, he was never interrupted by a digital device. Only the ever-present world filled that moment for him. The moment, like many others, is preserved in the mind the way a scientist meticulously stores his laboratory materials.

Smart phone world

The world of digital devices alters the modern experience of reality. Time no longer asserts priority or influence over a moment. In a moment, a smart phone takes a picture of something nice and preserves it in electrons. At any point in the future, you can spend a moment reflecting back on the captured one. It will never leave you, because you preserved a digital replica.

But the missed point is the relationship between analog and digital. Analog is the pure format – there is no conversion. All digitized items suffer quality loss when converted from analog to digital and back to analog. Sound, colors, brightness, warmth… all factors that are diminished in digital form.

Humans may invent experiences for ourselves to simulate and re-live a moment or some time, but it will only ever be a simulation. Without the ability to access time as a dimension, there is no way anything will ever be but a simulation created or influenced by modern-day humans.

Lossy compression of memory

So the smart phone age of the information era erodes time's hold on our attention. Just as a digital song starts analog, goes digital, and comes out analog again, we down-scale our memories in the conversion. It's a lossy compression. We hold a moment in our hands, measured in pixels, instead of the connection and passion that come from remembering a moment at full power.

But the solution isn't to abandon the digital world and cast the device aside. The solution is to promote and encourage a better balance between the digital and analog worlds. Compact lenses capture a moment, but the act of capturing doesn't have to end the moment. If your digital world is ever gnawing at your back, find time to pull out into the analog world for a bit.


Featured image by Justin W. Flory. Uses content by Iris Li from the Noun Project.


Installing Checkpoint SNX on Fedora

Posted by Eduardo Villagrán Morales on November 11, 2017 03:00 AM

For the following steps you will need:

  • the IP address or domain name of the SNX server
  • a username
  • a password

Once you have those, you will need to:

  1. Download the client installer

    wget http://blog.gotencool.com/download/snx_install.sh
    
  2. Install the client dependencies

    sudo dnf install pam.i686 libstdc++.i686 compat-libstdc++-33 …