Fedora People

Return on Risk Investment

Posted by Josh Bressers on January 24, 2017 08:25 PM
I found myself in a discussion earlier this week that worked its way into return on investment topics. Of course nobody could really agree on what the return was, which is sort of how these conversations often work out. It’s really hard to decide what the return on investment is for security features and products. It can be hard to even determine the cost sometimes, which should be the easy number to figure out.

All this talk got me thinking about something I’m going to call risk investment. The idea here is that you have a risk, which we’ll think about as the cost. You have an investment of some sort: it could be a product, training, maybe staff. This investment in theory reduces your risk in some measurable way. The reduction of the risk is the return on risk investment. We like to think about these things in the context of money, but risk doesn’t exactly work that way. Risk isn’t something that can often be measured easily. Even incredibly risky behaviors can work out fine, and playing it safe can end horribly. Rather than try to equate everything to money, what if we ignored that for the moment and just worried about risk?

First, how do you measure your risk? There isn’t a nice answer for this. There are plenty of security frameworks you can use, and plenty of methodologies: threat modeling, attack surface analysis, pen test reports, architecture reviews, automated scanning of products and infrastructure. There’s no single good answer to this question. I can’t tell you what your risk profile is; you have to decide how you’re going to measure this. What are you protecting? If it’s some sort of regulated data, there will be substantial cost in losing it, so this risk measurement is easy. It’s less obvious if you’re not operating in an environment where an incident has a direct cost. It’s even possible you have systems and applications that pose zero risk (yeah, I said it).

Assuming we have a way to determine risk, we now wonder how to measure the return on controlling risk. This is possibly trickier than deciding how to measure your risk. In many instances you can’t prove a negative; there’s no way to say your investment is preventing something from happening. Rather than measure how many times you didn’t get hacked, the right way to think about this is: if you were doing nothing, how would you measure your level of risk? We can refer back to our risk measurement method for that. Then we think about where we do have certain protections in place: what will an incident look like? How much less trouble will there be? If you can’t answer this, you’re probably in trouble. This is the important data point, though. When there is an incident, how do you think your countermeasures will help mitigate the damage? What was your investment in the risk?

And now this brings us to our Return on Risk Investment, or RORI as I’ll call it, because I can, and who doesn’t like acronyms? Here’s the thing to think about if you’re a security leader. If you have risk, which we all do, you must find some way to measure it. If you can’t measure something, you don’t understand it; if you can’t measure your risk, you don’t understand your risk. Once you have your method to understand what’s happening, make note of your risk measurement without any sort of security measures in place, your risk with ideal (not perfect; perfect doesn't exist) measures in place, and your risk with existing measures in place. That will give you an idea of how effective what you’re doing is. Here’s the thing to watch for: if your existing measures leave you close to the risk level of having no measures at all, that’s not a positive return. Those are things you should either fix or stop doing. Sometimes it’s OK to stop doing something that doesn’t really work. Security theater is real, it doesn’t work, and it wastes money. The trick is to find a balance that can show measurable risk reduction without breaking the bank.
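To make that concrete, here is a toy sketch with invented numbers; the scoring method and the ratio are just one way to read the three measurements, not a standard formula:

# Hypothetical risk scores from your own measurement method; higher is riskier.
risk_no_measures = 100   # doing nothing at all
risk_ideal = 20          # ideal (not perfect) measures in place
risk_existing = 90       # the measures you actually have today

# One possible return on risk investment: the fraction of the achievable
# risk reduction that your existing measures actually deliver.
rori = (risk_no_measures - risk_existing) / (risk_no_measures - risk_ideal)
print("RORI: {:.1%}".format(rori))  # 12.5% -- barely better than doing nothing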


How do you measure risk? Let me know: @joshbressers on Twitter.


My highest used Python function

Posted by Kushal Das on January 24, 2017 04:16 PM

I heard the above-mentioned lines many times in the past 10 years. Meanwhile, I have used Python to develop various kinds of applications: from web applications to system tools, small command line tools, and complex applications dealing with the operating system ABI. Using Python as a glue language has helped me many times in those cases. There is one custom function which I think is the most used function in the applications I wrote.

system function call

The following is the Python 3 version.

import subprocess

def system(cmd):
    """
    Invoke a shell command.
    :returns: A tuple of output, err message and return code
    """
    ret = subprocess.Popen(cmd, shell=True, stdin=subprocess.PIPE,
                           stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                           close_fds=True)
    out, err = ret.communicate()
    return out, err, ret.returncode

The function uses the subprocess module. It takes any shell command as input, executes the command, and returns any output, error text, and the exit code. Remember that in Python 3, out and err are bytes, not strings, so we have to decode them to standard strings before use.
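For example, a minimal usage sketch (the command here is just an illustration):

out, err, retcode = system("uname -r")
if retcode == 0:
    print(out.decode("utf-8").strip())  # decode bytes to str before use
else:
    print("Error:", err.decode("utf-8").strip())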

This small function enabled me to reuse any other already existing tool and build on top of it. Some developers will advise doing everything through a more Pythonic API, but in many cases no API is available. I found that using this function and then parsing the text output is much simpler than writing a library first. There is overhead in creating and running a new process, and there are times when it is a big overhead. But for simple tools or backend processes (which do not have to provide real-time output) this is helpful. You can use this to create a PoC application, and then you can profile the application to find all the pain points.
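As an illustration of that parse-the-text-output pattern, here is a small sketch; df and its --output flag are from GNU coreutils, and the columns chosen are just an example:

out, err, retcode = system("df --output=target,pcent /")
if retcode == 0:
    # Skip the header line, then split each row into mount point and usage.
    for line in out.decode("utf-8").splitlines()[1:]:
        target, pcent = line.split()
        print(target, "is", pcent, "used")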

Adding and building a module for Fedora

Posted by Karsten Hopp on January 24, 2017 02:58 PM

Modularity New-Module-Process

Adding and building a module for Fedora

This describes the process of adding a new module to the Fedora Modularity project, how to build it locally, and how to build it in the Fedora infrastructure.

Process and policy for how to add a module to Fedora

Adding a module repository is a manual process at the moment. Find someone from the release-engineering group to add the new repository and give you write access. Later on this will be automated by MBS, the Module Build Service, but that is still being worked on.

Writing a new modulemd file

A modulemd file is a yaml file that contains the module metadata, like description, license, and dependencies. The sample file in the upstream git repository of modulemd contains complete documentation of the required and optional yaml tags. The Modularity team uses a shorter modulemd file to test builds, but it can also be used as a base for new modules. Another good example is base-runtime.yml. Let's use the vim modulemd as the example for this document. It is in the /home/karsten/Modularity/modules/vim/ directory on my system.
document: modulemd
version: 1
data:
    summary: The best text editor and IDE
    description: The classic, extensible text editor, the legend.
    license:
        module: [ MIT ]
    dependencies:
        buildrequires:
            base_runtime: master
        requires:
            base_runtime: master
    references:                                                                 
        community: https://fedoraproject.org/wiki/Modularity                    
        documentation: https://fedoraproject.org/wiki/Fedora_Packaging_Guidelines_for_Modules                                                                   
        tracker: https://taiga.fedorainfracloud.org/project/modularity          
    profiles:
        default:
            rpms:
                - vim-enhanced
                - vim-common
                - vim-filesystem
        minimal:
            rpms:
                - vim-minimal
    api:
        rpms:
            - vim-common
    components:
        rpms:
            vim:
                rationale: Provides API for this module
                ref: f25                                                        
                buildorder: 10                                                  
            generic-release:
                rationale: build dependency
                ref: f25
            perl-Carp:
                rationale: build dependency
                ref: f25
            gpm:
                rationale: build dependency
                ref: f25
            perl-Exporter:
                rationale: build dependency
                ref: f25
           
All dependencies of vim need to be listed under components/rpms, except those that are already included in Base Runtime. Here's how you can get the list of vim dependencies that are not in Base Runtime:
wget https://raw.githubusercontent.com/asamalik/fake-base-runtime-module-image/master/packages/gen-core-binary-pkgs.txt
for i in `repoquery --requires --recursive --resolve --qf "%{SOURCERPM}\n" \
      vim-enhanced vim-minimal vim-common vim-filesystem \
    | sed -e "s/-[^-]*-[^-]*$//" | sort -n | uniq` ; do
    grep -wq $i gen-core-binary-pkgs.txt || echo $i
done

Verifying the syntax of the new modulemd file and bug fixing

Once the modulemd file is finished, it is a good idea to check if there are any errors in the yaml syntax. The check_modulemd program checks modulemd files for errors. You need to install some packages to use it:
python2-aexpect - dependency for python-avocado
python2-avocado - avocado testing framework
python2-modulemd - Module metadata manipulation library
python-enchant - spell checker library (needed only for check_modulemd.py)
hunspell-en-US - English dictionary (needed only for check_modulemd.py)
Then run
./run-checkmmd.sh /home/karsten/Modularity/modules/vim/vim.yaml
and check the output for errors:
Running: avocado run ./check_modulemd.py --mux-inject 'run:modulemd:/home/karsten/Modularity/modules/vim/vim.yaml'
JOB ID : 51581372fec0086a50d9be947ea06873b33dd0e5
JOB LOG : /home/karsten/avocado/job-results/job-2017-01-19T11.28-5158137/job.log
TESTS : 11
(01/11) ./check_modulemd.py:ModulemdTest.test_debugdump: PASS (0.16 s)
(02/11) ./check_modulemd.py:ModulemdTest.test_api: PASS (0.15 s)
(03/11) ./check_modulemd.py:ModulemdTest.test_components: PASS (0.16 s)
(04/11) ./check_modulemd.py:ModulemdTest.test_dependencies: WARN (0.02 s)
(05/11) ./check_modulemd.py:ModulemdTest.test_description: PASS (0.16 s)
(06/11) ./check_modulemd.py:ModulemdTest.test_description_spelling: PASS (0.16 s)
(07/11) ./check_modulemd.py:ModulemdTest.test_summary: PASS (0.16 s)
(08/11) ./check_modulemd.py:ModulemdTest.test_summary_spelling: WARN (0.02 s)
(09/11) ./check_modulemd.py:ModulemdTest.test_rationales: ERROR (0.04 s)
(10/11) ./check_modulemd.py:ModulemdTest.test_rationales_spelling: PASS (0.16 s)
(11/11) ./check_modulemd.py:ModulemdTest.test_component_availability: WARN (0.02 s)
RESULTS : PASS 7 | ERROR 1 | FAIL 0 | SKIP 0 | WARN 3 | INTERRUPT 0
TESTS TIME : 1.20 s
So this isn't quite right yet; let's have a look at the logfile mentioned in the output.
grep -i error /home/karsten/avocado/job-results/job-2017-01-19T11.28-5158137/job.log
....
TestError: Rationale for component RPM generic-release should end with a period: build dependency
It seems that rationales need to end with a period. Change all those lines so that they look like this:
vim:
    rationale: Provides API for this module.
    ref: f25
    buildorder: 10
generic-release:
    rationale: build dependency.
    ref: f25
gpm:
    rationale: build dependency.
    ref: f25
perl:
    rationale: build dependency.
    ref: f25
perl-Carp:
    rationale: build dependency.
    ref: f25
perl-Exporter:
    rationale: build dependency.
    ref: f25
Another run of check_modulemd.py completes without errors.

Building the module locally

The build_module script from the build-module repository on GitHub makes local module builds really easy. It sets up the environment and then builds a module and its components locally with mock. One requirement is to have docker installed and running on your system. It is also required that the name of the new modulemd file, the repository name of that module, and the name of the module itself match in order to use the build_module script. As build_module builds the latest commit in the master branch of the module git repository, changes need to be checked into git; a push to upstream (dist-git) is not required at this stage. The basic usage of build_module is
./build_module /home/karsten/Modularity/modules/vim/ /tmp/results
This will download a container with the Module Build Service and rebuild the dependencies that are listed in the modulemd file. This step can take quite some time, depending on the module and how many components need to be built. When build_module is done, there will be a number of rebuilt RPMs in /tmp/results/module-vim-master-*/results/:
cd /tmp/results/module-vim-master-*/results/
find . -name "*.rpm"
./vim-8.0.206-1.fc25.src.rpm
./vim-debuginfo-8.0.206-1.fc25.x86_64.rpm
./vim-X11-8.0.206-1.fc25.x86_64.rpm
./vim-filesystem-8.0.206-1.fc25.x86_64.rpm
./vim-enhanced-8.0.206-1.fc25.x86_64.rpm
./vim-minimal-8.0.206-1.fc25.x86_64.rpm
./vim-common-8.0.206-1.fc25.x86_64.rpm
./perl-Carp-1.40-365.fc25.src.rpm
./perl-Carp-1.40-365.fc25.noarch.rpm
./generic-release-25-1.src.rpm
./generic-release-notes-25-1.noarch.rpm
./generic-release-25-1.noarch.rpm
./perl-5.24.1-381.fc25.src.rpm
./perl-debuginfo-5.24.1-381.fc25.x86_64.rpm
./perl-Time-Piece-1.31-381.fc25.x86_64.rpm
./perl-Test-1.28-381.fc25.noarch.rpm
./perl-SelfLoader-1.23-381.fc25.noarch.rpm
./perl-Pod-Html-1.22.01-381.fc25.noarch.rpm
./perl-open-1.10-381.fc25.noarch.rpm
./perl-Net-Ping-2.43-381.fc25.noarch.rpm
./perl-Module-Loaded-0.08-381.fc25.noarch.rpm
./perl-Memoize-1.03-381.fc25.noarch.rpm
./perl-Math-Complex-1.59-381.fc25.noarch.rpm
./perl-Math-BigRat-0.2608.02-381.fc25.noarch.rpm
./perl-Math-BigInt-FastCalc-0.40-381.fc25.x86_64.rpm
./perl-Locale-Maketext-Simple-0.21-381.fc25.noarch.rpm
./perl-libnetcfg-5.24.1-381.fc25.noarch.rpm
./perl-IO-Zlib-1.10-381.fc25.noarch.rpm
./perl-IO-1.36-381.fc25.x86_64.rpm
./perl-ExtUtils-Miniperl-1.05-381.fc25.noarch.rpm
./perl-ExtUtils-Embed-1.33-381.fc25.noarch.rpm
./perl-Errno-1.25-381.fc25.x86_64.rpm
./perl-Devel-SelfStubber-1.05-381.fc25.noarch.rpm
./perl-Devel-Peek-1.23-381.fc25.x86_64.rpm
./perl-bignum-0.42-381.fc25.noarch.rpm
./perl-Attribute-Handlers-0.99-381.fc25.noarch.rpm
./perl-core-5.24.1-381.fc25.x86_64.rpm
./perl-utils-5.24.1-381.fc25.noarch.rpm
./perl-tests-5.24.1-381.fc25.x86_64.rpm
./perl-macros-5.24.1-381.fc25.x86_64.rpm
./perl-devel-5.24.1-381.fc25.x86_64.rpm
./perl-libs-5.24.1-381.fc25.x86_64.rpm
./perl-5.24.1-381.fc25.x86_64.rpm
./perl-Exporter-5.72-366.fc25.src.rpm
./perl-Exporter-5.72-366.fc25.noarch.rpm
./gpm-1.20.7-9.fc25.src.rpm
./gpm-debuginfo-1.20.7-9.fc25.x86_64.rpm
./gpm-static-1.20.7-9.fc25.x86_64.rpm
./gpm-devel-1.20.7-9.fc25.x86_64.rpm
./gpm-libs-1.20.7-9.fc25.x86_64.rpm
./gpm-1.20.7-9.fc25.x86_64.rpm
./module-build-macros-0.1-1.module_vim_master_20170119134619.src.rpm
./module-build-macros-0.1-1.module_vim_master_20170119134619.noarch.rpm

Putting the packages into a container

For this step you'll need to create an RPM repository of the new packages.
cd /tmp/results/module-vim-master-20170119120233/
mkdir vim-module-repo
cp results/*.rpm vim-module-repo
cd vim-module-repo
createrepo .
The /tmp/results/module-vim-master-20170119120233/vim-module-repo directory needs to be uploaded somewhere public so that Docker can access it. A good place for that is the fedorapeople.org webspace that each Fedora developer has.
scp -r /tmp/results/module-vim-master-20170119120233/vim-module-repo karsten.fedorapeople.org:public_html/
You'll also need a dnf/yum config file (/home/karsten/Modularity/modules/vim/vimmodule.repo) that points at this new repo:
cat /home/karsten/Modularity/modules/vim/vimmodule.repo
[vimmodule]
name=VIM module
failovermethod=priority
baseurl=https://karsten.fedorapeople.org/vim-module-repo/
enabled=1
metadata_expire=7d
gpgcheck=0
skip_if_unavailable=True
Now put everything into a Dockerfile. We're using Adam Samalik's fake-gen-core-module as there is no usable base-runtime module yet:
cat /home/karsten/Modularity/modules/vim/Dockerfile
FROM asamalik/fake-gen-core-module
ADD vimmodule.repo /etc/yum.repos.d/vimmodule.repo
RUN dnf -y update vim-minimal
RUN dnf -y install vim-enhanced

Building the module in Fedora infrastructure using a local module-build-service instance

This step uses a local module-build-service and other components in containers, and passes the results on to the Fedora staging infrastructure. A checkout of the module-build-service repository is required. Change into your local copy of this repository and run
docker-compose down
docker-compose up --build
This will start the module-build-service frontend and scheduler as well as fedmsg, and you'll see messages about module builds coming in over the Federated Message Bus. The local module-build-service will connect to the Product Definition Center (PDC) on the Modularity development server modularity.fedorainfracloud.org. At the moment, an account on modularity.fedorainfracloud.org is required for this step, as remote port forwarding needs to be established via ssh:
cat ~/.ssh/config
host MODULARITY-DEV
Hostname modularity.fedorainfracloud.org
user fedora
RemoteForward 3007 127.0.0.1:5001
Some changes need to be made to the local module-build-service (aka fm-orchestrator) git repository so that it allows building from a git repository on GitHub:
diff --git a/conf/config.py b/conf/config.py
index 97eed6e..2ddb61b 100644
--- a/conf/config.py
+++ b/conf/config.py
@@ -35,7 +35,7 @@ class BaseConfiguration(object):
PDC_URL = 'http://modularity.fedorainfracloud.org:8080/rest_api/v1'
PDC_INSECURE = True
PDC_DEVELOP = True
- SCMURLS = ["git://pkgs.stg.fedoraproject.org/modules/"]
+ SCMURLS = ["git://pkgs.stg.fedoraproject.org/modules/","https://github.com/KarstenHopp/"]

# How often should we resort to polling, in seconds
# Set to zero to disable polling
diff --git a/contrib/submit-build.json b/contrib/submit-build.json
index 0e312a5..e46bf8a 100644
--- a/contrib/submit-build.json
+++ b/contrib/submit-build.json
@@ -1,3 +1,3 @@
{
- "scmurl": "git://pkgs.stg.fedoraproject.org/modules/testmodule.git?#620ec77"
+ "scmurl": "git://github.com/KarstenHopp/vim-module.git?#cdbc4bf"
}
Now you need to get a Kerberos ticket for the Fedora staging environment. If you haven't already done so, add
STG.FEDORAPROJECT.ORG = {
    kdc = https://id.stg.fedoraproject.org/KdcProxy
}
to the realms section of your /etc/krb5.conf file. Then point your web browser at https://admin.stg.fedoraproject.org/accounts and log in with your Fedora credentials so that the account will get synced from the Fedora production environment. Run
kinit karsten@STG.FEDORAPROJECT.ORG
(replace 'karsten' with your FAS account) to get a Kerberos ticket. In another window, ssh into modularity.fedorainfracloud.org to establish the required port forwarding. Now you can run
python submit-build.py
from within the module-build-service git repo. You might need to log in again at id.stg.fedoraproject.org; just follow the instructions. The module build will then be submitted to the Fedora build servers, and log messages will show the progress on your screen.

Building the module

When your module is ready to be added to Fedora, you need write access to the module dist-git on pkgs.stg.fedoraproject.org, and you need to have pushed all your changes to the module git repository. You can build your module using two different methods. The first requires a special version of rpkg with module-build support: change the working directory to your local copy of your module repo and simply run
fedpkg module-build
The other method requires that you add the git URL of your latest module commit to the submit-build.json file in the module-build-service git repository and then run
python submit-build.py

The flatpak security model, part 3 – The long game

Posted by Alexander Larsson on January 24, 2017 11:27 AM

We saw previously that you can open up the Flatpak sandbox in order to run applications that are not built to be sandboxed. However, the long-term plan is to make it possible for applications to work sandboxed, and then slowly make this the default for most applications (while losing few or no features). This is a work in progress on many levels: we need work on Flatpak itself, but also on the Linux desktop platform and the applications themselves.

So, how do we reach this long-term goal?

Some things were mentioned in earlier parts. For example, once we have a realistic shot at sandboxing we need to make the permissions more visible in the UI, and make it easier to override them. We also need to drop X11 and move to Wayland, as well as finish the work on PulseAudio permissions.

However, the really important part is being able to have dynamic, fine-grained permissions. This is achieved with something we call Portals.

A Portal is a service that runs on the host, yet is accessible from the sandbox. This is OK because the interface it exposes has been designed to be “safe”.

So, what makes a portal safe?

Let’s start with a very simple portal, the Network Monitor portal. This service returns the network connection status (online/offline) and signals when it changes. You can already get the basic status from the kernel, even in the sandbox, but the portal can use NetworkManager to get additional information, like whether there is a captive portal active and whether the network is metered.

This portal looks at whether the calling app has network access, and if so allows it to read the current status, because this information could already be collected by the app manually (by replicating what NetworkManager does). The portal is merely a more efficient and easier way to do this.

The next example is the Open URI portal. The application sends a request with a URI that it wants to be shown. For instance you could use this for links the user clicked on in the application, but also to show your application documentation.

We don’t want the sandbox to be able to start apps with caller-controlled URIs in the background, because that would be an easy way to attack them. The way we make this safe is to make the operation interactive and cancellable: the portal shows a dialog, allowing the user to select the app to open the URI in, or (if the dialog was unexpected) to close it. All this happens outside the sandbox, which means that the user is always in control of what gets launched and when.
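As a rough illustration, here is a minimal sketch of what calling this portal can look like from the application side over D-Bus, using the python-dbus bindings (bus name, object path and method as documented for xdg-desktop-portal):

import dbus

bus = dbus.SessionBus()
# The portal service runs on the host; the sandbox is allowed to talk to it.
desktop = bus.get_object('org.freedesktop.portal.Desktop',
                         '/org/freedesktop/portal/desktop')
open_uri = dbus.Interface(desktop, 'org.freedesktop.portal.OpenURI')
# Arguments: parent window identifier (may be empty), the URI, and an
# options dictionary.
open_uri.OpenURI('', 'https://flatpak.org', dbus.Dictionary({}, signature='sv'))

The app-chooser dialog this triggers runs outside the sandbox, so the user stays in control of what, if anything, gets launched.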

A similar example is the FileChooser portal. The sandbox cannot see the user’s files, but it can request the user to pick a file. Once a file is chosen outside the sandbox, the application is granted access to it (and only it). In this case too, it is the interactivity and cancellability that make the operation safe.

Another form of portal is geolocation. This is safe because the portal can reliably identify the calling application, and it keeps a list of which applications are currently allowed access. If the application is not allowed it replies that it doesn’t know the location. Then a UI can be created in the desktop to allow the user to configure these permissions. For instance, there can be an application list in the control center, or there could be a notification that lets you grant access.

To sum up, portals are considered safe for one of these reasons:

  • The portal doesn’t expose any info you wouldn’t already know, or which is deemed unsafe
  • The operation is interactive and cancellable
  • Portals can reliably identify applications and apply dynamic rules

Theoretically, any kind of service designed in this way could be called a portal. For instance, one could call Wayland a UI portal. However, in practice portals are dbus services. In fact, by default Flatpak lets the sandbox talk to any service named org.freedesktop.portal.* on the session bus.

The portals mentioned above are part of the desktop portal, xdg-desktop-portal. It is a UI-less, desktop-independent service, but for all user interaction or policy it defers to a desktop-specific backend. There are currently backends for Gtk and (work in progress) KDE. For sandboxing to work, these need to be installed on the host system.

In addition to the previously listed portals, xdg-desktop-portal also contains:

  • Printing
  • User account information
  • Inhibiting suspend
  • Notifications
  • Proxy configuration
  • Screenshot request
  • Device access request

There is also a portal shipped with Flatpak itself, the Document portal. It is permission-based, and is what the FileChooser portal uses to grant access to files dynamically, on a file-by-file basis.

We are also planning to add more portals as needed. For instance we’d like to add a Share portal that lets you easily share content with registered handlers (for instance posting text to a Twitter or Facebook app).

Running SAS University Edition on Fedora 25

Posted by Adam Young on January 24, 2017 03:19 AM

My wife is a statistician. Over the course of her career, she’s done a lot of work coding in SAS, and, due to the expense of licensing, I’ve never been able to run that code myself. So, when I heard about SAS having a free version, I figured I would download it and have a look, maybe see if I could run something.

Like many companies, SAS went the route of shipping a virtual appliance, and they chose VirtualBox as the virtualization platform. However, when I tried to install and run the VM in VirtualBox, the build assumptions of the mechanism used to build the VirtualBox-specific module for the Linux kernel were not met, and the VM would not run.

Instead of trying to fix that situation, I investigated the possibility of running the virtual appliance via libvirt on the KVM setup already installed and configured on my Fedora system. It turns out it was pretty simple.

To start I went through the registration and download process from here. Once I had a login, I was able to download a file called unvbasicvapp__9411008__ova__en__sp0__1.ova.

What is an ova file? It turns out it is an uncompressed tar file.

$ tar -xf unvbasicvapp__9411008__ova__en__sp0__1.ova
$ ls
SAS_University_Edition.mf  SAS_University_Edition.ovf   SAS_University_Edition.vmdk  unvbasicvapp__9411008__ova__en__sp0__1.ova

Now I had to convert the disk image into something that would work for KVM.

$ qemu-img convert -O qcow2 SAS_University_Edition.vmdk SAS_University_Edition.qcow2

Then, I used the virt-manager GUI to import the VM. To be sure I met the constraints, I looked inside the SAS_University_Edition.ovf file. It turns out they ship a pretty modest VM: 1024 MB of memory and 1 virtual CPU. These are pretty easy constraints to meet, and I might actually up the amount of memory or CPUs in the VM in the future depending on the size of the data sets I end up playing around with. However, for now, this is enough to make things work.

Add a new VM from the file menu.

Import the existing image

Use the Browse Local button to browse to the directory where you ran the qemu-img convert command above.

Complete the rest of the VM creation. Defaults should suffice. Run the VM inside VM Manager.

Once the Boot process has completed, you should get enough information from the console to connect to the web UI.

Hitting the Web UI from a browser shows the landing screen.

Click Start… and start coding

Hello, World.

 

Inkscape 0.92 available in Fedora

Posted by Fedora Magazine on January 24, 2017 03:06 AM

Earlier this month, the Inkscape project released version 0.92 of the Inkscape vector graphics editor. Inkscape 0.92 is now also available for download from the official Fedora repositories for Fedora 24 and Fedora 25. If you already have Inkscape installed, you will receive the updated version the next time you update with DNF or the Software application.

Inkscape is a versatile, feature-rich vector graphics editor that can be used for a wide range of tasks, including UI mockups, icons, logos, and digital illustration. Inkscape uses the Scalable Vector Graphics (SVG) format as its primary source filetype, which is becoming more and more popular as a format for vector graphics on the web. Inkscape can also export to a wide range of different formats, including PNG and PDF.

What’s new in Inkscape 0.92

Despite the seemingly small version number bump from the previous 0.91 release of Inkscape, Inkscape 0.92 provides a range of new features and bugfixes, including mesh gradients, improved support for the SVG2 and CSS3 specs, brand new path effects, and a new Objects dialog for managing objects. The Inkscape 0.92 Release Notes have a full list and descriptions of all the new features and bugfixes.

Mesh Gradients

The flagship new feature in this updated version of Inkscape is support for creating and editing gradient meshes. In previous versions of Inkscape, creating complex shading involved multiple objects with gradients and blurs applied. With gradient meshes in Inkscape, it is now possible to create more complex shading effects with a single gradient.

A single inkscape rectangle object with a mesh gradient applied

With mesh gradients, it is now also possible to create single, detailed conical gradients, which can be used to simulate shiny metal discs or create a color wheel:

 

Objects Dialog

Inkscape 0.92 also introduces a new Objects dialog that will be useful for artists who have complicated drawings with many objects, grouped in many ways. This new dialog provides a tree view of all the objects in the document, allowing you to drill down and find the specific element you want to work on:

 

Be sure to check out the Release Notes and the Release Announcement for more details on these and the many other features in this new version of Inkscape.


F25-20170120 updated Lives Released

Posted by Ben Williams on January 23, 2017 03:24 PM

I am happy to announce new F25-20170120 Updated Lives.

(with Kernel 4.9.4)

With F25 we are now using Livemedia-creator to build the updated lives.

Also from now on we will only be releasing updated lives on even kernel releases.

To build your own please look at  https://fedoraproject.org/wiki/Livemedia-creator-_How_to_create_and_use_a_Live_CD

This new build of the F25 updated lives will save you 675M of updates after install on Workstation.

As always, the ISOs can be found at http://tinyurl.com/Live-respins2


Android permissions and hypocrisy

Posted by Matthew Garrett on January 23, 2017 07:58 AM
I wrote a piece a few days ago about how the Meitu app asked for a bunch of permissions in ways that might concern people, but which were not actually any worse than many other apps. The fact that Android makes it so easy for apps to obtain data that's personally identifiable is of concern, but in the absence of another stable device identifier this is the sort of thing that capitalism is inherently going to end up making use of. Fundamentally, this is Google's problem to fix.

Around the same time, Kaspersky, the Russian anti-virus company, wrote a blog post that warned people about this specific app. It was framed somewhat misleadingly - "reading, deleting and modifying the data in your phone's memory" would probably be interpreted by most people as something other than "the ability to modify data on your phone's external storage", although it ends with some reasonable advice that users should ask why an app requires some permissions.

So, to that end, here are the permissions that Kaspersky request on Android:
  • android.permission.READ_CONTACTS
  • android.permission.WRITE_CONTACTS
  • android.permission.READ_SMS
  • android.permission.WRITE_SMS
  • android.permission.READ_PHONE_STATE
  • android.permission.CALL_PHONE
  • android.permission.SEND_SMS
  • android.permission.RECEIVE_SMS
  • android.permission.RECEIVE_BOOT_COMPLETED
  • android.permission.WAKE_LOCK
  • android.permission.WRITE_EXTERNAL_STORAGE
  • android.permission.SUBSCRIBED_FEEDS_READ
  • android.permission.READ_SYNC_SETTINGS
  • android.permission.WRITE_SYNC_SETTINGS
  • android.permission.WRITE_SETTINGS
  • android.permission.INTERNET
  • android.permission.ACCESS_COARSE_LOCATION
  • android.permission.ACCESS_FINE_LOCATION
  • android.permission.READ_CALL_LOG
  • android.permission.WRITE_CALL_LOG
  • android.permission.RECORD_AUDIO
  • android.permission.SET_PREFERRED_APPLICATIONS
  • android.permission.WRITE_APN_SETTINGS
  • android.permission.READ_CALENDAR
  • android.permission.WRITE_CALENDAR
  • android.permission.KILL_BACKGROUND_PROCESSES
  • android.permission.RESTART_PACKAGES
  • android.permission.MANAGE_ACCOUNTS
  • android.permission.GET_ACCOUNTS
  • android.permission.MODIFY_PHONE_STATE
  • android.permission.CHANGE_NETWORK_STATE
  • android.permission.ACCESS_NETWORK_STATE
  • android.permission.ACCESS_LOCATION_EXTRA_COMMANDS
  • android.permission.ACCESS_WIFI_STATE
  • android.permission.CHANGE_WIFI_STATE
  • android.permission.VIBRATE
  • android.permission.READ_LOGS
  • android.permission.GET_TASKS
  • android.permission.EXPAND_STATUS_BAR
  • com.android.browser.permission.READ_HISTORY_BOOKMARKS
  • com.android.browser.permission.WRITE_HISTORY_BOOKMARKS
  • android.permission.CAMERA
  • com.android.vending.BILLING
  • android.permission.SYSTEM_ALERT_WINDOW
  • android.permission.BATTERY_STATS
  • android.permission.MODIFY_AUDIO_SETTINGS
  • com.kms.free.permission.C2D_MESSAGE
  • com.google.android.c2dm.permission.RECEIVE

Every single permission that Kaspersky mention Meitu having? They require it as well. And a lot more. Why does Kaspersky want the ability to record audio? Why does it want to be able to send SMSes? Why does it want to read my contacts? Why does it need my fine-grained location? Why is it able to modify my settings?

There's no reason to assume that they're being malicious here. The reasons that these permissions exist at all is that there are legitimate reasons to use them, and Kaspersky may well have good reason to request them. But they don't explain that, and they do literally everything that their blog post criticises (including explicitly requesting the phone's IMEI). Why should we trust a Russian company more than a Chinese one?

The moral here isn't that Kaspersky are evil or that Meitu are virtuous. It's that talking about application permissions is difficult and we don't have the language to explain to users what our apps are doing and why they're doing it, and Google are still falling far short of where they should be in terms of making this transparent to users. But the other moral is that you shouldn't complain about the permissions an app requires when you're asking for even more of them because it just makes you look stupid and bad at your job.


Developing Command Line Interpreters using python-cmd2

Posted by Kushal Das on January 23, 2017 07:09 AM

Many of you already know that I love command line applications, be it a simple command line tool or something more complex with a full command line interface/interpreter (CLI) attached to it. Back in my college days, I tried to write a few small applications in Java with broken CLI implementations. Later, when I started working with Python, I wanted to implement CLIs for various projects. Python already has a few great modules in the standard library, but I am going to talk about one external library which I prefer to use a lot. Sometimes even for fun :)

Welcome to python-cmd2

python-cmd2 is a Python module which is written on top of the cmd module of the standard library, and it can be used as a drop-in replacement. Throughout this tutorial, we will learn how to use it for simple applications.

Installation

You can install it using pip, or standard package managers.

$ pip install cmd2
$ sudo dnf install python3-cmd2

First application

#!/usr/bin/env python3

from cmd2 import Cmd


class REPL(Cmd):

    def __init__(self):
        Cmd.__init__(self)


if __name__ == '__main__':
    app = REPL()
    app.cmdloop()

We created a class called REPL and later called the cmdloop method from an object of that class. This will give us a minimal CLI. We can type ! followed by any bash command to execute it. Below, I called the ls command. You can also start the Python interpreter by using the py command.

$ python3 mycli.py 
(Cmd) 
(Cmd) !ls
a_test.png  badge.png  main.py	mycli.py
(Cmd) py
Python 3.5.2 (default, Sep 14 2016, 11:28:32) 
[GCC 6.2.1 20160901 (Red Hat 6.2.1-1)] on linux
Type "help", "copyright", "credits" or "license" for more information.
(REPL)

        py <command>: Executes a Python command.
        py: Enters interactive Python mode.
        End with ``Ctrl-D`` (Unix) / ``Ctrl-Z`` (Windows), ``quit()``, ``exit()``.
        Non-python commands can be issued with ``cmd("your command")``.
        Run python code from external files with ``run("filename.py")``
        
>>> 
(Cmd) 

You can press Ctrl+d to quit, or use the quit/exit commands.

Let us add some commands

But before that, we should add a better prompt. We can have a different prompt by changing the prompt variable of the Cmd class. We can also add a banner by adding text to the intro variable.

#!/usr/bin/env python3
from cmd2 import Cmd

class REPL(Cmd):
    prompt = "life> "
    intro = "Welcome to the real world!"

    def __init__(self):
        Cmd.__init__(self)


if __name__ == '__main__':
    app = REPL()
    app.cmdloop()
$ python3 mycli.py 
Welcome to the real world!
life> 

Any method inside our REPL class whose name starts with do_ will become a command in our tool. For example, we will add a loadaverage command to show the load average of our system. We will read the /proc/loadavg file on our Linux computers to find this value.

#!/usr/bin/env python3

from cmd2 import Cmd


class REPL(Cmd):
    prompt = "life> "
    intro = "Welcome to the real world!"

    def __init__(self):
        Cmd.__init__(self)

    def do_loadaverage(self, line):
        with open('/proc/loadavg') as fobj:
            data = fobj.read()
        print(data)

if __name__ == '__main__':
    app = REPL()
    app.cmdloop()

The output looks like:

$ python3 mycli.py 
Welcome to the real world!
life> loadaverage
0.42 0.23 0.24 1/1024 16516

life> loadaverage
0.39 0.23 0.24 1/1025 16517

life> loadaverage
0.39 0.23 0.24 1/1025 16517

If you do not know about the values in this file: the first three values are the system load averages over the last one, five, and fifteen minutes. Then we have the number of currently running processes against the total number of processes, and the final column shows the last process ID used. You can also see that TAB will autocomplete the commands in our shell, and we can go back to past commands by pressing the arrow keys. We can also press Ctrl+r to do a reverse search, like in the standard bash shell. This feature comes from the readline module.
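As a small aside, here is a sketch of reading those fields from Python; the field names are just illustrative:

from collections import namedtuple

# The five whitespace-separated fields of /proc/loadavg.
LoadAvg = namedtuple('LoadAvg', ['one', 'five', 'fifteen', 'entities', 'last_pid'])

with open('/proc/loadavg') as fobj:
    load = LoadAvg(*fobj.read().split())
print(load.one, load.five, load.fifteen)

Coming back to readline: we can use it some more, and add a history file to our tool.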

import os
import atexit
import readline
from cmd2 import Cmd

history_file = os.path.expanduser('~/.mycli_history')
if not os.path.exists(history_file):
    with open(history_file, "w") as fobj:
        fobj.write("")
readline.read_history_file(history_file)
atexit.register(readline.write_history_file, history_file)



class REPL(Cmd):
    prompt = "life> "
    intro = "Welcome to the real world!"

    def __init__(self):
        Cmd.__init__(self)

    def do_loadaverage(self, line):
        with open('/proc/loadavg') as fobj:
            data = fobj.read()
        print(data)

if __name__ == '__main__':
    app = REPL()
    app.cmdloop()

Taking input in the commands

We can use the positional argument in our do_ methods to accept input in our commands. Whatever input you pass to the command comes in as the line variable in our example, and we can use it to do anything. For example, we can take any URL as input and then check its status. We will use the requests module for this example. We also use the Cmd.colorize method to add colors to our output text. I have added one extra command to make the tool more useful.

#!/usr/bin/env python3

import os
import atexit
import readline
import requests
from cmd2 import Cmd

history_file = os.path.expanduser('~/.mycli_history')
if not os.path.exists(history_file):
    with open(history_file, "w") as fobj:
        fobj.write("")
readline.read_history_file(history_file)
atexit.register(readline.write_history_file, history_file)



class REPL(Cmd):
    prompt = "life> "
    intro = "Welcome to the real world!"

    def __init__(self):
        Cmd.__init__(self)

    def do_loadaverage(self, line):
        with open('/proc/loadavg') as fobj:
            data = fobj.read()
        print(data)

    def do_status(self, line):
        if line:
            resp = requests.get(line)
            if resp.status_code == 200:
                print(self.colorize("200", "green"))
            else:
                print(self.colorize(str(resp.status_code), "red"))

    def do_alternativefacts(self, line):
        print(self.colorize("Lies! Pure lies, and more lies.", "red"))

if __name__ == '__main__':
    app = REPL()
    app.cmdloop()

Building these little shells can be a lot of fun. The documentation has all the details, but you should start reading from the standard library cmd documentation. There is also a video from PyCon 2010.

Internationalization: a priority for the FSF

Posted by Jean-Baptiste Holcroft on January 22, 2017 11:00 PM

As Jehan points out on Linuxfr, the FSF has revised its list of priority projects, and internationalization is now part of it!

Why is this important?

As the priority project page on internationalization explains, when we localize a piece of software, we make it more accessible and extend its possible use cases around the world. Unlike proprietary software, which has to recoup the cost of each translation through customers actually making use of it, adding a language costs a free software project little or nothing.

Incidentally, in the list of free software published by the French government, the fact that a program is fully translated is highlighted.

The cost of internationalization

Taking this workload into account when designing the software is fundamental for a free software project. It requires maintaining dedicated code, build toolchains, and additional steps in the publication of each release.

The first step is to choose, from the very start, technologies that allow it, and compatible formats. For software installed on the desktop this is often perfectly standardized, but beware of web tools! Internationalizing websites is not possible with every framework, and having to change development tools is very costly!

Also remember to:

  • favor standard formats such as gettext's .po, which I find rather good!
  • choose a platform; for most needs, Weblate is a good choice!

And why not globalization?

In the Fedora Project, internationalization and localization are being merged under the generic term "globalization". Unless this is a translation false friend, globalizing is a simplification, a downward harmonization that I do not subscribe to.

I would rather speak of pluralization, a declension of software into various languages, cultures, and scripts, which seems closer to reality to me.

Episode 29 - The Security of Rogue One

Posted by Open Source Security Podcast on January 22, 2017 11:00 PM
Josh and Kurt discuss the security of the movie Rogue One! Spoiler: Security in the Star Wars universe is worse than security in our universe.

Download Episode

Show Notes


Mechanical Computer


Fedora as an OpenVZ guest: system-upgrade

Posted by Fedora-Blog.de on January 22, 2017 09:40 PM

Virtual server hosting with OpenVZ has one drawback: dnf system-upgrade does not work. Fortunately, in an OpenVZ guest running Fedora there is a very simple alternative.

As described in a post on the blog "nTh among all", system-upgrade boots into single user mode and aborts if no new kernel is installed, and that is exactly what does not happen in an OpenVZ guest.
However, dnf can still run distro-sync (which is part of a system-upgrade anyway), and that does exactly what is needed in this case.

The command

[lukas@vzGuest ~]$ sudo dnf --releasever 25 distro-sync

on a Fedora 24 system brings all packages up to the Fedora 25 level, as the command

[lukas@ovzGuest ~]$ lsb_release -a
LSB Version: :core-4.1-amd64:core-4.1-noarch:cxx-4.1-amd64:cxx-4.1-noarch:desktop-4.1-amd64:desktop-4.1-noarch:languages-4.1-amd64:languages-4.1-noarch:printing-4.1-amd64:printing-4.1-noarch
Distributor ID: Fedora
Description: Fedora release 25 (Twenty Five)
Release: 25
Codename: TwentyFive

proves.

My Linevast server was running Fedora 23. In that case, I first lifted the system to Fedora 24 and then to Fedora 25.

During this process, familiar notes and warnings may appear, such as

warning: /etc/ssh/sshd_config created as /etc/ssh/sshd_config.rpmnew
Warning: mlocate-updatedb.timer changed on disk. Run 'systemctl daemon-reload' to reload units.

These notes must, of course, be handled accordingly.

University Connect at SICSR Pune

Posted by Kushal Das on January 22, 2017 02:04 PM

Last Thursday, I visited the SICSR campus in Pune as part of the University Connect program from Red Hat Pune. This was the second event in the University Connect program. Rupali, the backbone of all community events in the local Red Hat office, is also the primary driving force of this program. Priyanka, Varsha, Prathamesh and I reached the office early in the morning and later went to the college in Rupali’s car. Sayak came to the venue directly from his house.

The event started with a brief inauguration ceremony. After that, Rupali took the stage and talked about the University Connect program and how Red Hat Pune is taking part in the local communities. I will let her write a blog post to explain it in detail :)

Next, I took the stage to talk about the journey of Red Hat and the various upstream projects we take part in, and discussed the various product lines Red Hat has. We discussed https://pagure.io and how many Indian students are contributing to that project with guidance from Pierre-Yves Chibon. Because Python is already a known language among the students, I used many Python projects as examples.

Priyanka took the stage after me; she and Sayak are alumni of SICSR, so it was nice for the students to see them speaking. She talked about how contributing to Open Source projects can change one’s career. She told stories from her own life and talked about various points which can help a student make their resume better.

Sayak then took the stage to talk about how to contribute to various open source projects. He talked about and showed examples from Mozilla, Fedora, and KDE. He also pointed to the answer Sayan wrote to a Quora question.

At the end of the day, we had around 20 minutes of open QA session.

FedBerry 25 on the Raspberry Pi 3

Posted by Scott Dowdle on January 22, 2017 02:24 AM

kojipkgs: what it is and how it works

Posted by Kevin Fenzi on January 21, 2017 07:34 PM

The last week or so I have spent a ton of time on kojipkgs (sorry again for any build failures this may have caused), so I thought it would be good to outline what it is, how it’s used, and the working setup we have now.

kojipkgs is a concept that koji has of a host/URL to go to to download packages. On small koji installs this host/URL can simply be the koji hub host. Or it can be a separate host or hosts, as long as it has access to all the packages koji wants. In practice this means it has to share an NFS mount or other shared storage with the koji hub.

For many years Fedora infrastructure had a kojipkgs instance that NFS-mounted the koji storage, ran an apache web server locally on port 8080, and then had a squid instance in front of that, listening on ports 80 and 443 and talking to apache for any items that were not in its cache. This worked out well because squid could cache commonly used objects, saving access to the NFS server and returning results more quickly. However, it also had drawbacks: it was a single point of failure, it was exposed to the outside world, and squid’s SSL code has not gotten anywhere near the same scrutiny that apache’s has.

A bit of a side note, but related: let’s talk about how koji does builds for a minute. If you tell koji to do an official build of some package, first it makes a mock chroot and downloads and installs the packages in the srpm-build koji group (you can see these groups and what packages are in them with a ‘koji list-groups’ command). Note that this install is done with the HOST dnf into the chroot. The src.rpm is then built from pkgs git and the lookaside cache, and then koji starts builds for any needed arches. These builds first download the src.rpm via a koji urllib2 call, then make a mock chroot and download and install all the packages in the ‘build’ group. Then rpmbuild is called, etc.

So, the week before last I updated the builders (as I often do) and also upgraded the kojipkgs01 squid server (running rhel7) to the latest versions. However, we started seeing download problems at build time. Sometimes (but rarely) dnf would just fail to download a package it needed and fail the build. I figured this would be a great time to try and remove the single point of failure of that one server.

I spun up a second kojipkgs instance (kojipkgs02) set up just like the first one (rhel7, squid, apache, NFS). Then I set up our two proxies in that main datacenter to handle kojipkgs requests and use haproxy to send them in to kojipkgs01/02. Then, a switch of DNS and it was switched over. Now, however, we hit a new problem. Sometimes the src.rpm download via koji was not working: it was silently downloading only part of the src.rpm and then failing the build. Additionally, the build download issues were still happening. I tried isolating to just one proxy, then just one backend, then various haproxy balancer setups so that requests that went to one backend stayed on that backend. I replaced both kojipkgs instances with Fedora 25 (a much newer squid version). Koji has the ability to pass several URLs to the mock setup it uses for builds, so we passed it multiple URLs to kojipkgs there, and that seemed to get rid of that problem, but the src.rpm download issue was still there.

Finally, we tried disabling squid’s “smp mode”. Suddenly everything started working nicely.

So at the end of this saga we have:

  • A nice highly available kojipkgs (2 proxies using 2 backends, so we can update, restart, or reboot a proxy/backend anytime we like without causing problems).
  • A bug report on librepo to retry downloads the number of times it claims to in dnf.conf default even if there’s only one baseurl and it was slow/timed out: https://bugzilla.redhat.com/show_bug.cgi?id=1414116
  • A koji ticket to make the src.rpm download more robust: https://pagure.io/koji/issue/290 and some PRs to change it to use requests and to validate the downloaded rpm.
  • I still need to file an upstream bug on squid about smp mode, but I don’t have much info to give them (there were no errors anywhere; just some small number of connections would hang and not finish).

A quick assessment of Rawhide, two months in

Posted by Charles-Antoine Couret on January 21, 2017 04:06 PM

As you know, two months ago I switched to Rawhide. For the first time I will live through the entire development cycle of a Fedora release. The goal, of course, is to discover things and to help detect problems earlier, so that Fedora 26 will be as stable as possible.

Problems encountered

No doubt about it, at this stage there is work to do! In two months I have run into a few rather significant bugs. I expected that, and there has been great progress compared to the situation a few years ago.

First, a Firefox bug I had shortly before the F25 release keeps following me: videos on web pages sometimes loop over one second of buffer, making playback impossible and forcing a Firefox restart.

Next, a bug that is still ongoing: GNOME Shell hits an error at launch, for no reason, forcing the session to be restarted. It is rather random, but if the session starts fine, the problem does not show up again until the next launch.

Then there was a bug that apparently also affected F25 and F24, where Firefox could not reach sites such as Google or Wikipedia. Apparently a problem with HTTPS over HTTP/2. It was fixed quickly, but could come back for Thunderbird and SeaMonkey in the future, because of the NSS update (Mozilla's security library) and the lack of a fix for those programs. How to approach this is under discussion.

A new bug appeared yesterday, which seems to come from the application itself: opening tabs in Epiphany turns the screen black or freezes the system. Rather annoying, but at least Epiphany is not my main browser, so it does not bother me too much.

Finally, one last bug has just been resolved: GNOME Shell, GDM, and other programs were crashing because of a version gap between Fedora's harfbuzz library and FreeType in RPM Fusion. The RPM Fusion maintainer quickly corrected it.

In short, five bugs I consider significant, encountered in two months. That is not a lot. Two of them also affected the stable Fedora release, by the way. It still takes a bit of patience and tinkering to work around them, but nothing insurmountable. :-)

However, there was a discussion about Rawhide a while ago. A packager saw fit to push a package that did not work on Rawhide into the repositories (because he had not tested it beforehand). According to him, Rawhide should be a dumping-ground repository where stability does not matter, because caring about it could slow down his development. I find this lack of respect towards testers harmful. Rawhide, by definition, should be usable, and a package should be tested by its maintainer first. That seems like the minimum to do.

Fortunately, this mentality has strongly receded. There are not many packagers left who still talk like that.

The changes

Naturally, the point of moving to Rawhide or to a test release is also to discover and enjoy the changes in advance. So far, no big upheavals. GNOME Shell and LibreOffice are one version ahead of Fedora 25 (but still in development). As often, their improvements come in small touches, and the whole feels more pleasant.

The biggest visible change comes from GNOME Builder, I think, a good part of whose interface has been redesigned. It is the IDE I have been using for a year now, and it is really evolving in the right direction, which is very pleasant.

Beyond that, we are only at the beginning of the development of what will become Fedora 26. The features are not all defined yet, let alone implemented. And upstream projects also have a lot to do; GNOME, for example, is not done changing (its release is in two months).

I hope you will enjoy this kind of post; I plan to write one monthly. When there are more visible changes, I will include screenshots. If venturing into Rawhide appeals to you, do not hesitate: there are very few testers but a lot of need! And it is thanks to this activity that stable Fedora is stable.

News: Linux kernel 4.9 LTS.

Posted by mythcat on January 20, 2017 11:09 PM
According to this email, Linux kernel 4.9 will be the next longterm supported kernel version.
From: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>

Might as well just mark it as such now, to head off the constant
questions. Yes, 4.9 is the next longterm supported kernel version.

Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>

News: PulseAudio 10.0 released.

Posted by mythcat on January 20, 2017 10:33 PM
Read about this news here.
  • Automatically switch Bluetooth profile when using VoIP applications
  • New module for prioritizing passthrough streams (module-allow-passthrough)
  • Fixed hotplugging support for USB surround sound cards
  • Separate volumes for Bluetooth A2DP and HSP profiles
  • memfd-based shared memory mechanism enabled by default
  • Removed module-xenpv-sink
  • Dropped dependency to json-c
  • When using systemd to start PulseAudio, pulseaudio.socket is always started first
  • Compatibility with OpenSSL 1.1.0
  • Clarified qpaeq licence

GitHub + Gmail — Filtering for Review Requests and Mentions

Posted by Tim Bielawa on January 20, 2017 07:43 PM

The Problem

I’ve been looking for a way to filter my GitHub Pull Request lists under the condition that a review is requested of me. The online docs didn’t show any filter options for this, so I checked out the @GitHubHelp twitter account. The answer was there on the front page — they don’t support filtering PRs by review-requested-by:me yet:


So what is one to do? I’m using Gmail so I began considering what filter options were available to me there. My objectives were to clearly label and highlight:

  •  PRs where review has been requested
  • Comments where I am @mention‘d

Interested in knowing more? Read on after the break for all the setup details.

Review Requested

Applying labels for PRs where a review is requested of me is a little hacky, but the solution I came up with works well enough. When your review is requested, you should receive an email from GitHub with a predictable message in it:

@kwoodson requested your review on: openshift/openshift-ansible#3130 Adding oc_version to lib_openshift..

That highlighted part there, requested your review on:, is the key.

In Gmail we’re going to add a new filter. You can reach the new filter menu through the settings interface or by hitting the subtle little down-triangle (▾) left of the magnifying glass (🔍) button in the search bar.

  • In the “Has the words” input box put (in quotes!): "requested your review on:" (You can pick a specific repo if you wish by including it in the search terms)
  • Press the Create filter with this search » link

  • Use the “Apply the label” option to create a new label, for example, “Review Requested”
  • You might want to check the “Also apply filter to X matching conversations” box
  • Create the new filter

Mentions

Labeling @mentions in Gmail is a little easier and less error-prone than the review-request filter. It also follows a similar process.

  1. Create a new filter
  2. In the “To” input box put: Mention <mention@noreply.github.com>
  3. Press the Create filter with this search » link
  4. Continue from step 4 in the previous example

 

 

A response to ‘Strong Encryption and Death’

Posted by Eric "Sparks" Christensen on January 20, 2017 07:10 PM

I recently read an article on the TriLUG blog mirror discussing access to data after the death of the owner.  I've given this a lot of thought as well and had previously come to the same conclusion as the original author of the article:

“I created a file called “deathnote.txt” which I then encrypted using GPG.  This will encrypt the file so that both Bob and Alice can read it (and I can too). I then sent it to several friends unrelated to them with instructions that, upon my death (but not before), please send this file to Bob and Alice.”

–Tarus

To be honest, I didn't actually go through with this project as there were just too many variables that I hadn't figured out.  There is a lot of trust involved, and it would take only a very small number of people (two) to really hose things up.  It's not that I wouldn't trust my "trusted friends" with the responsibility, but it potentially makes them targets, and two is just a really low threshold for an adversary to recover this information.

What really threw me was that the author also included a copy of his private key in case they couldn’t locate it on his computer to, I’m assuming here, access other data.  I have one word for this: NOPE!

Okay, short of the private key thing, what was proposed was quite logical.  Like I said above, I had a very similar idea a while back.  Springboarding from that idea, I’d like to propose another layer of security into this whole process.

Splitting up the data

So you have your encrypted blob of information that goes to person A when you kick off, but you don't want person A to have it before then.  Enlist some trusted friends and you have a means of providing the information to person A upon your demise.  But letting a single person, or even two people, control this information is dangerous.  What if you could split that data into further encrypted parts and hand those parts out to several friends?  Then no single person would hold all the information.  You'd likely want some overlap so that you wouldn't need ALL the friends to present the information (maybe a piece got lost, maybe a friend got hit by the same bus that you did, etc.), so we'd want to build in a little redundancy.

ssss

Shamir's Secret Sharing Scheme (ssss) is a neat piece of software that takes some information, encrypts it, and then breaks it into pieces. Redundancy can be added so that not all parts are required to reassemble the data (think RAID 5).

“In cryptography, a secret sharing scheme is a method for distributing a secret amongst a group of participants, each of which is allocated a share of the secret. The secret can only be reconstructed when the shares are combined together; individual shares are of no use on their own.”

–From the SSSS website

Implementing the solution

Because ssss can only share relatively small strings (less than 1024 bits), my "death" instructions would likely need to be stored whole as ciphertext, with the (symmetric) key being the shared object.
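
As a concrete sketch of that arrangement (the filename, passphrase, and share counts here are only illustrative, not from the article):

# Encrypt the instructions with a symmetric passphrase (gpg will prompt for it):
gpg --symmetric --cipher-algo AES256 deathnote.txt    # produces deathnote.txt.gpg

# Split the passphrase into 5 shares, any 3 of which can reconstruct it:
echo -n 'my-long-passphrase' | ssss-split -t 3 -n 5 -q

# Later, any 3 share holders paste their shares when prompted:
ssss-combine -t 3

Each friend then holds a single share, while person A holds only the ciphertext.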

The other piece of this solution is deciding whom to ask to hold the shared bits of the key.  It would likely be best if the individuals were not only trusted but also didn't know the others involved in the share.  That way there is a smaller chance that these individuals could get together to put the key back together.

Also, if person A is the one holding the cipher text, even if the individuals did find each other they would only have a key and not be able to decode the actual texts.

Conclusion

I’m quite happy that I read the original article and I hope to do the same thing that the author did before I kick the bucket.  I’m quite sure that there are other ways to do what Tarus and I wrote about and actual implementation will vary depending upon the individual, their technical level, and their personal privacy requirements.  This problem, though, is one that deserves to be solved as more and more of our information is kept digitally.


Locations are hard

Posted by Suzanne Hillman (Outreachy) on January 20, 2017 05:17 PM

Turns out that figuring out people’s locations is hard, especially if you want to try to reduce the amount of work someone has to do or if they are likely to be using a mobile phone.

For some reason, I’d thought that this was already a solved problem, so was somewhat surprised when feedback on a mockup made me question that assumption. After pinging Máirín Duffy to find out if we had access to a database of countries, and how they break down into cities/states/provinces/etc, she realized that we needed a longer discussion.

One thing she wondered was whether we actually need street addresses. Given that the goal is to help people find each other nearby, city is almost certainly sufficient for that purpose.

In many cases, especially on mobile, we would have access to GPS information. So, in that case, we can just show them where we think they are — with a map on which we overlay the city and country information and in which we will make sure that that information is appropriately accessible — and they can adjust as necessary.


On computers, we may have access to the information provided by the web browser, and from that we can similarly show them a map with their city and country information. In this case, we may end up being wildly inaccurate due to people using VPN connections.

In both mobile and computer cases, people may not want to share that level of detail. So, for this case, we would use their IP address information to guess, and display the same thing.


Finally, it is entirely possible that a connection error would prevent us from actually having location information. In that case, we would show a zoomed out map on a computer, with empty city and country fields. On mobile or if we cannot tell where they are, a blank map, with empty city and country fields.


In all cases, people can edit the country and city information to make it more accurate. The ‘city’ field will offer type-ahead suggestions which will include any division between city and country that is relevant. For example, if someone is detected as being in Buffalo, NY, but is actually in Boston, MA, we would offer them Boston, NY first, due to proximity, but also show Boston, MA. And anyone can continue typing to get more specificity, or select from a list of visible options. If, however, the country field is incorrect, they will need to change that before the city suggestions will be correct. As with the map location information, type-ahead suggestions need to be appropriately accessible to people who cannot use a mouse or cannot see the suggestions.

The problem with the type-ahead suggestions is that we still need access to a database which contains that information for each country. There are a couple of options, but that problem remains to be solved, and is a large part of making location information actually workable.

This was an unexpectedly complicated discussion, but I’m very glad we had it. For more information, please see issue #286.

Fedora badges: how to

Posted by Maria Leonova on January 20, 2017 04:47 PM

Fedora badges is a perfect place to start if you want to help out the Fedora Design Team. ‘I’m not a designer!’ ‘I can’t draw!’ ‘I’ve never opened Inkscape’ – you might say. And that is totally fine! Everybody can help out, and none of those reasons will stop you from designing your first badge (and getting badges for designing badges ;)).

So let’s look at how to get started! (all of these can be found in our presentation here)

  1. Badges resources

    Inkscape Download: https://inkscape.org/en/download/

    Fedora Badges: https://badges.fedoraproject.org/

    Fedora Badges Trac: https://fedorahosted.org/fedora-badges/

    Fedora Badges Design Resources Zip: https://fedorahosted.org/fedora-badges/attachment/wiki/DesignResources/FedoraBadgesResources.zip

2. Anatomy of a badge


As you can see, a badge consists of several elements, all of which will differ between badges based on how you categorize them.  More on those as we look at the Resources.

3. Resources

So now go ahead and download the Fedora Badges design resources

ATTENTION! VERY IMPORTANT! Prior to designing, check out the Style Guidelines!  A couple of things to keep in mind here:

  • background and ring colors: it is important to keep badges consistent – please categorize your badge and, based on that, choose colors from the palette. If you need help categorizing, ask on IRC in #fedora-design or during our bi-weekly badges meetings every other Wednesday, 7-8 US Eastern, in fedora-meeting-1 on irc.freenode.net.
  • palette (pp 12-13): if you need some other color, pick one from the palette. You can even download and install it on your computer to use straight from Inkscape. To import them, save the .gpl files to the ~/.config/inkscape/palettes/ directory (see the snippet just after this list).
  • fonts (pp 17-18): use Comfortaa and pay attention to do’s and don’ts listed there.
  • do’s and don’ts: it is very important to keep those in mind while designing, so all our badges are consistent and beautiful.
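
For instance, importing the palettes could look like this (the path inside the resources zip is an assumption; adjust it to wherever the .gpl files live after unpacking):

mkdir -p ~/.config/inkscape/palettes
# copy the Fedora badge palettes so Inkscape picks them up on its next start
cp FedoraBadgesResources/palettes/*.gpl ~/.config/inkscape/palettes/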

Another tip for consistency: once you've picked a badge, go look at ALL the badges here: https://badges.fedoraproject.org/explore/badges. If you are just starting, it's a great place for inspiration; you can see how similar badges have been categorized, and what imagery and patterns have been used. Download one of these badge artwork files and use it as a template or starting point for your badge design. To do that, simply click on a badge and go to its ticket. Usually the .svg can be downloaded from there.


4. Design

  • Look at similar badges on badges index.
  • Choose a concept for your badge. Look at similar elements, consider suggested concepts from the ticket, or come up with something yourself if you feel like it!
  • The easiest badges are conference and event badges. They all use the same colors: purple ring, grey background for conferences and dark blue for presenters. Use the template or even re-use last year’s badge and put your conference logo / year on it – Congratulations! You’re done!
  • Gather inspiration & resources. This means going on the internet and researching images and concepts. For example, if you want to draw a badger on a bike, you might want to search for a photo or an illustration of a person on a bike to use as a reference. No need to reinvent. This may not be necessary for the simpler badges.
  • Categorize your badge using the Style Guide or ask one of us for help.
  • Open the corresponding template, Save as… your filename and get designing! Here’s a link to some nice Inkscape tuts: Fedora and Inkscape. Keep it simple and pay extra attention to resizing stuff. You don’t want to change background size and positioning, so don’t move it around. That way all the badges look the same. When resizing other elements always hold CTRL to maintain proportions. Also don’t worry too much, we’ll review your badge and help if necessary.
  • Feel free to reuse and remix other badges elements. Also remember to SAVE! Save all the time 🙂
  • Once you’re done with the first draft, go to Export PNG image, select where to export, name your file and choose Export area – Page. Check that your badge is 256×256 and there! All done! Congratulations!
  • Upload png to the ticket and ask one of us to review your design.
  • Now work with a mentor to finish it and with a developer to push it.

Debugging a Flatpak application

Posted by Matthias Clasen on January 20, 2017 04:45 PM

Since I’ve been asking people to try the recipes app with Flatpak, I can’t complain too much if I get bug reports back. But how does one create a useful bug report when something goes wrong in a Flatpak sandbox? Some of the stacktraces I’ve seen have not been very useful, since they are lacking symbols.

This post is a quick attempt to spread some basics about Flatpak debugging.

Normally, you run your Flatpak app like this:

flatpak run org.gnome.Recipes

Well, that’s not quite true; the "normal" way to launch the Flatpak is just the same as launching a non-Flatpak app: click on the icon, or hit the Super key, type recipes, hit Enter. But let’s assume you’re launching flatpak from the commandline.

What happens behind the scenes here is that flatpak finds the metadata for org.gnome.Recipes, determines which runtime it needs, sets up the sandbox by mounting the app in /app and the runtime in /usr, does some more sandboxy stuff, and eventually launches the app.

First problem for bug reporting: we want to run the app under gdb to get a stacktrace when it crashes.  Here is how you do that:

flatpak run --command=sh org.gnome.Recipes

Running this command, you’ll end up with a shell prompt "inside" the recipes sandbox.  This is great, because we can now launch our app under gdb (note that the application gets installed in the /app prefix):

$ gdb /app/bin/recipes

Except… this fails because there is no gdb. Remember that we are inside the sandbox, so we can only run what is either shipped with the app in /app/bin or with the runtime in /usr/bin.  And gdb is not among either.

Thankfully, for each runtime, there is a corresponding sdk, which is just like the runtime, except it includes the stuff you need to develop and debug: headers, compilers, debuggers and other useful tools. And flatpak has a handy commandline option to use the sdk instead of the regular runtime:

flatpak run --devel --command=sh org.gnome.Recipes

The --devel option tells flatpak to use the sdk instead of the runtime and to do some other things that make debugging in the sandbox work.

Now for the last trick: I was complaining about stacktraces without symbols at the beginning. In rpm-based distributions, the debug symbols are split off into debuginfo packages. Flatpak does something similar and splits all the debug information of runtimes and apps into separate "runtime extensions", which by convention have .Debug appended to their name. So the debug info for org.gnome.Recipes is in the org.gnome.Recipes.Debug extension.

When you use the --devel option, flatpak automatically includes the Debug extensions for the application and runtime, if they are available. So, for the most useful stacktraces, make sure that you have the Debug extensions for the apps and runtimes in question installed.
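
Putting it all together, a debugging session could look like this (assuming the app and runtime come from a remote named gnome; use whatever remote you actually installed them from):

# install the debug symbols for the app (the runtime has a similar .Debug extension)
flatpak install gnome org.gnome.Recipes.Debug

# enter the sandbox with the sdk and the debug extensions mounted
flatpak run --devel --command=sh org.gnome.Recipes

# inside the sandbox, run the app under gdb
gdb /app/bin/recipes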

Hope this helps!

Most of this information was taken from the Flatpak wiki.

Desktop environments in my computer

Posted by Kushal Das on January 20, 2017 01:05 PM

I started my Linux journey with Gnome, as it was the default desktop environment in RHL. I took some time to find out about KDE; I guess I discovered it accidentally during a re-installation. It used to be fun to have a desktop that looked different and behaved differently from the norm. During my earlier years in college, while I was trying to find out more about Linux, using KDE marked me as a Linux expert. I was armed with the right syntax of the mount command to mount the Windows partitions, and the xmms-mp3 rpm. I spent most of my time in the terminal.

Initial KDE days for me

I started my FOSS contribution as a KDE translator, and it was also my primary desktop environment. Though I have to admit, I had never heard the term "DE" or "desktop environment" till 2005. Slowly, I started learning about the various differences, and also the history behind KDE and Gnome. I also felt that the KDE UI looked more polished. But I had one major issue. Sometimes, by mistake, I would change something in the UI, a wrong click or a wrong drag and drop, and I never managed to recover from those stupid mistakes. There was no way for me to go back to the default look and feel without deleting the whole configuration. You may find this really stupid, but my desktop usage knowledge was limited (and still is), due to my reliance on terminal-based applications. I am not sure about the exact date, but sometime during 2010 I became a full-time Gnome user. Not being able to mess around with my settings actually helped me in this case.

The days of Gnome

There aren’t many things to write about my usage of Gnome. I kept using whatever came through as the default Fedora Gnome theme. As I spend a lot of time in terminals, it was never a big deal. I was not sure if I liked Gnome Shell, but I kept using it. Meanwhile, I tried LXDE/XFCE for a few days but went back to the default Fedora UI of Gnome every time. This was the story till the beginning of June 2016.

Introduction of i3wm

After PyCon 2016, I had another two-day meet in Raleigh, the Fedora Cloud FAD. Adam Miller was my roommate during the four-day stay there. As he sat beside me in the meeting, I saw that his desktop looked different. When asked, Adam gave a small demo of i3wm. Later that night, he pointed me to his dotfiles, and I started my journey with a tiling window manager for the first time. I have made a few minor changes to the configuration over time. I also use a .Xmodmap file to make sure that my configuration stays sane even with my Kinesis Advantage keyboard.

The power of using the keyboard for most tasks is what pulled me towards i3wm. It is always faster than moving my hand to the trackball mouse I use. I run a few different applications, each on its own workspace, and I open the same application in the same workspace every time. It has hence become muscle memory to switch to any application as required. Till now, except for a few conference projectors, I never had to move back to Gnome for anything else. The RAM usage is also very low, as expected.
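
For the curious, that workspace habit boils down to a few lines in the i3 config file; this is a minimal sketch (the window class is an assumption, check yours with xprop):

set $mod Mod4
# launch a terminal
bindsym $mod+Return exec i3-sensible-terminal
# one key per workspace quickly becomes muscle memory
bindsym $mod+1 workspace 1
bindsym $mod+2 workspace 2
# always open the browser on workspace 2
assign [class="Firefox"] 2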

Though a few of my friends told me i3wm is difficult, I had a completely different reaction when I demoed it to Anwesha. She liked it immediately and started using it as her primary desktop. She finds it much easier to move between workspaces while working. I know she has already demoed it to many others at conferences. :)

The thing that has stayed the same over the years is my usage of the terminal. Learning about many more command line tools means my terminal now holds more tabs, and more tmux sessions on the servers.

The flatpak security model – part 2: Who needs sandboxing anyway?

Posted by Alexander Larsson on January 20, 2017 11:43 AM

The ability to run an application sandboxed is a very important feature of flatpak. However, it is not the only reason you might want to use flatpak. In fact, since currently very few applications work in a fully sandboxed environment, most of the apps you’d run are not sandboxed.

In the previous part we learned that by default the application sandbox is very limiting. If we want to run a normal application we need to open things up a bit.

Every flatpak application contains a manifest, called metadata. This file describes the details of the application, like its identity (app-id) and what runtime it uses. It also lists the permissions that the application requires.

By default, once installed, an application gets all the permissions that it requested. However, you can override the permissions each time you call flatpak run, or globally on a per-application basis by using flatpak override (see the manpages for flatpak-run and flatpak-override for details). The handling of application permissions is currently somewhat hidden in the interface, but the long term plan is to show permissions during installation and make it easier to override them.
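
For example (the app id here is only illustrative), permissions can be widened for a single run or changed persistently:

# grant read-write access to the home directory for one run
flatpak run --filesystem=home org.gnome.gedit

# persistently take away network access for this application
flatpak override --unshare=network org.gnome.gedit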

So, what kind of permissions are there?

First, apps need to be able to produce output and get input. To do this we have permissions that allow access to PulseAudio for sound and X11 and/or Wayland for graphical output and input. The way this works is that we just mount the unix domain socket for the corresponding service into the sandbox.

It should be noted that X11 is not very safe when used like this, you can easily use the X11 protocol to do lots of malicious things. PulseAudio is also not very secure, but work is in progress on making it better. Wayland however was designed from the start to isolate clients from each other, so it is pretty secure in a sandbox.

But, secure or not, almost all Linux desktop applications currently in existence use X11, so it is important that we are able to use it.

Another way for applications to integrate with the system is to use DBus. Flatpak has a filtering dbus proxy, which lets it define rules for what the application is allowed to do on the bus. By default an application is allowed to own its app-id and subnames of it (i.e. org.gnome.gedit and org.gnome.gedit.*) on the session bus. This means other clients can talk to the application, but it can only talk to the bus itself, not to any other clients.

It’s interesting to note this connection between the app-id and the dbus name. In fact, valid flatpak app-ids are defined to have the same form as valid dbus names, and when applications export files to the host (such as desktop files, icons and dbus service files), we only allow exporting files that start with the app-id. This ties very neatly into modern desktop app activation, where the desktop and dbus service files also have to be named after the dbus name. This rule ensures that applications can’t accidentally conflict with each other, but also that applications can’t attack the system by exporting a file that would be triggered by the user outside the sandbox.

There are also permissions for filesystem access. Flatpak always uses a filesystem namespace, because /usr and /app are never from the host, but other directories from the host can be exposed to the sandbox. The permissions here are quite fine-grained, ranging from access to all host files, down to your home directory only, or to individual directories. The directories can also be exposed read-only.

The default sandbox only has a loopback network interface and thus has no connection to the network, but if you grant network access then the app will get full network access. There is no partial network access, however; for instance, one would like to be able to set up a per-application firewall configuration. Unfortunately, it is quite complex and risky to set up networking, so we can’t expose it in a safe way for unprivileged use.

There are also a few more specialized permissions, like various levels of hardware device access and some other details. See man flatpak-metadata for the available settings.

All this lets us open up exactly what is needed for each application, which means we can run current Linux desktop applications without modifications. However, the long term goal is to introduce features so that applications can run without opening the sandbox. We’ll get to this plan in the next part.

Until then, happy flatpaking.

PHP version 5.6.30, 7.0.15 and 7.1.1

Posted by Remi Collet on January 20, 2017 06:07 AM

RPMs of PHP version 7.1.1 are available in the remi-php71 repository for Fedora 23-25 and Enterprise Linux (RHEL, CentOS).

RPMs of PHP version 7.0.15 are available in the remi repository for Fedora 25 and in the remi-php70 repository for Fedora 22-24 and Enterprise Linux (RHEL, CentOS).

RPMs of PHP version 5.6.30 are available in the remi repository for Fedora 22-24 and the remi-php56 repository for Enterprise Linux.

PHP version 5.5 has reached its end of life and is no longer maintained by the project.

These versions are also available as Software Collections.

These versions fix some security bugs, so updating is strongly recommended.

Version announcements:

Installation: use the Configuration Wizard and choose your version and installation mode.

Replacement of default PHP by version 7.1 installation (simplest):

yum-config-manager --enable remi-php71
yum update

Parallel installation of version 7.1 as Software Collection (x86_64 only):

yum install php71

Replacement of default PHP by version 7.0 installation (simplest):

yum-config-manager --enable remi-php70
yum update

Parallel installation of version 7.0 as Software Collection (x86_64 only):

yum install php70

Replacement of default PHP by version 5.6 installation (simplest):

yum-config-manager --enable remi-php56
yum update

Parallel installation of version 5.6 as Software Collection (x86_64 only):

yum install php56

And soon in the official updates:

To be noted:

  • EL7 RPMs are built using RHEL-7.2
  • EL6 RPMs are built using RHEL-6.8
  • a lot of new extensions are also available, see the PECL extension RPM status page

Information, read:

Base packages (php)

Software Collections (php56 / php70)

Fedora Atomic Working Group update from 2017-01-17

Posted by Kushal Das on January 20, 2017 04:18 AM

This is an update from Fedora Atomic Working Group based on the IRC meeting on 2017-01-17. 14 people participated in the meeting, the full log of the meeting can be found here.

OverlayFS partition

We decided to have a separate Docker partition in Fedora 26. The root partition sizing will also need to be fixed. All of the discussion can be read at the Pagure issue.

We also require help in writing the documentation for migration from Devicemapper -> Overlay -> Back.

How to compose your own Atomic tree?

Jason Brooks will update his document located at Project Atomic docs.

docker-storage-setup patches require more testing

There are pending patches which will require more testing before merging.

Goals and PRD of the working group

Josh Berkus is updating the goals and PRD documentation for the working group. Both short term and long term goals can be seen at this etherpad. The previous Cloud Working Group’s PRD is much longer than most of the other groups’ PRD, so we also discussed trimming the Atomic WG PRD.

Open floor discussion + other items

I updated the working group about a recent failure of the QCOW2 image on Autocloud. It appears that if we boot the images with only one VCPU, disable the chronyd service, and then reboot, there is no defined time for the ssh service to come up.

Misc talked about the hardware plan for FOSP, and later sent a detailed mail to the list on the same.

Antonio Murdaca (runcom) brought up the discussion about testing the latest Docker (1.13) and pushing it to F25. We decided to spend more time testing it and only then push it to Fedora 25; otherwise it may break Kubernetes/OpenShift. We will schedule a 1.13 testing week in the coming days.

Android apps, IMEIs and privacy

Posted by Matthew Garrett on January 19, 2017 11:36 PM
There's been a sudden wave of people concerned about the Meitu selfie app's use of unique phone IDs. Here's what we know: the app will transmit your phone's IMEI (a unique per-phone identifier that can't be altered under normal circumstances) to servers in China. It's able to obtain this value because it asks for a permission called READ_PHONE_STATE, which (if granted) means that the app can obtain various bits of information about your phone including those unique IDs and whether you're currently on a call.

Why would anybody want these IDs? The simple answer is that app authors mostly make money by selling advertising, and advertisers like to know who's seeing their advertisements. The more app views they can tie to a single individual, the more they can track that user's response to different kinds of adverts and the more targeted (and, they hope, more profitable) the advertising towards that user. Using the same ID between multiple apps makes this easier, and so using a device-level ID rather than an app-level one is preferred. The IMEI is the most stable ID on Android devices, persisting even across factory resets.

The downside of using a device-level ID is, well, whoever has that data knows a lot about what you're running. That lets them tailor adverts to your tastes, but there are certainly circumstances where that could be embarrassing or even compromising. Using the IMEI for this is even worse, since it's also used for fundamental telephony functions - for instance, when a phone is reported stolen, its IMEI is added to a blacklist and networks will refuse to allow it to join. A sufficiently malicious person could potentially report your phone stolen and get it blocked by providing your IMEI. And phone networks are obviously able to track devices using them, so someone with enough access could figure out who you are from your app usage and then track you via your IMEI. But realistically, anyone with that level of access to the phone network could just identify you via other means. There's no reason to believe that this is part of a nefarious Chinese plot.

Is there anything you can do about this? On Android 6 and later, yes. Go to settings, hit apps, hit the gear menu in the top right, choose "App permissions" and scroll down to phone. Under there you'll see all apps that have permission to obtain this information, and you can turn them off. Doing so may cause some apps to crash or otherwise misbehave, whereas newer apps may simply ask for you to grant the permission again and refuse to do so if you don't.
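
If you have adb and a device with USB debugging enabled, the same thing can be scripted; this is a sketch with a hypothetical package name:

# check whether the app holds the permission
adb shell dumpsys package com.example.selfieapp | grep READ_PHONE_STATE

# revoke the runtime permission on Android 6 or later
adb shell pm revoke com.example.selfieapp android.permission.READ_PHONE_STATE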

Meitu isn't especially rare in this respect. Over 50% of the Android apps I have handy request your IMEI, although I haven't tracked what they all do with it. It's certainly something to be concerned about, and there are big-name apps that do exactly the same thing. There's a legitimate question over whether Android should be making it so easy for apps to obtain this level of identifying information without more explicit informed consent from the user, but until Google do anything to make it more difficult, apps will continue making use of this information. Let's turn this into a conversation about user privacy online rather than blaming one specific example.


Translating languages, countries and more - thank you Unicode

Posted by Jean-Baptiste Holcroft on January 19, 2017 11:00 PM

Sometimes you have to translate the names of languages in software. And when you come across "Fulani", you are left a bit puzzled!

Fortunately, experts have already been down this road and have already done the translation work!

Making do

Wikipedia contributors, who back up almost everything with publications, are quite useful for specific words that already have a standardized translation. Fulani exists under the name Fula language, which leads to the French page Peul. Thank you, Wikipedia contributors 😏.

Checking/automating/speeding things up with the Unicode Common Locale Data Repository

The Unicode Consortium maintains an immense, up-to-date list, whose name Wikipedia contributors have rendered in French as « Répertoire de données de paramètres régionaux classiques ».

It contains all sorts of translations: languages, countries, units of measurement, days of the week, etc. On the CLDR website, download the latest public release, 30.0.3, then grab cldr-common-30.0.3.zip.

In the archive, go to the common/main subdirectory and open the XML file fr.xml, which contains what we are looking for. Unsurprisingly, "languages" holds the translations of language names, "scripts" the translations of writing systems, and so on.

Well, Peul, just like Fulani, has the code "ff" or "ful", so the translation is consistent.
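
That check can even be scripted. For instance, with xmllint run against the unpacked archive (the XPath follows the structure described above):

# print the French display name for the language code "ff"
xmllint --xpath 'string(//languages/language[@type="ff"])' common/main/fr.xml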

Developer friends, if you have this kind of vocabulary to translate, do not hesitate to build on Unicode!

The risk of not using Unicode: Romanian = Romansh

In the Dictionary application, which fetches content from the Wiktionary, there are lists of translated languages. Needing to translate into Romanian myself, I spotted two problems: the dictionary name is translated as "Romansh" instead of "Romanian", and lists of values already translated in Android are translated again (see the support table).

It is all in the bug report.

Happy translating!

Which movie most accurately forecasts the Trump presidency?

Posted by Daniel Pocock on January 19, 2017 07:31 PM

Many people have been scratching their heads wondering what the new US president will really do and what he really stands for. His alternating positions on abortion, for example, suggest he may simply be telling people what he thinks is most likely to win public support from one day to the next. Will he really waste billions of dollars building a wall? Will Muslims really be banned from the US?

As it turns out, several movies provide thought-provoking insight into what could eventuate. What's more, the two below bear a creepy resemblance to the Trump phenomenon and many of the problems in the world today.

Countdown to Looking Glass

On the classic cold war theme of nuclear annihilation, Countdown to Looking Glass is probably far more scary to watch on Trump eve than in the era when it was made. Released in 1984, the movie follows a series of international crises that have all come to pass: the assassination of a US ambassador in the middle east, a banking crisis and two superpowers in an escalating conflict over territory. The movie even picked a young Republican congressman for a cameo role: he subsequently went on to become speaker of the house. To relate it to modern times, you may need to imagine it is China, not Russia, who is the adversary but then you probably won't be able to sleep after watching it.

cleaning out the swamp?

The Omen

Another classic is The Omen. The star of this series of four horror movies, Damien Thorn, appears to have a history that is eerily reminiscent of Trump's: born into a wealthy family; a series of disasters befall every honest person he comes into contact with; he comes to control a vast business empire acquired by inheritance; and as he enters the world of politics in the third movie of the series, there is a scene in the Oval Office where he is flippantly advised that he shouldn't lose any sleep over any conflict of interest arising from his business holdings. Did you notice Damien Thorn and Donald Trump even share the same initials, DT?

LXQt Spin proposed for Fedora 26, new test build available

Posted by Christian Dersch on January 19, 2017 05:58 PM
Around Christmas we announced some initial effort for a Fedora LXQt remix/spin. After some weeks of testing and tuning, reworking translation packages and updating the whole of LXQt to 0.11.x (x>0), the LXQt SIG decided to propose the LXQt Spin for inclusion in Fedora 26.

The current selection of applications:

  • LXQt 0.11.x
  • PCManFM-Qt (LXQt file manager)
  • Ark (archiver, from KDE)
  • Dragon (media player, from KDE)
  • KCalc (calculator, from KDE)
  • KWrite (text editor, from KDE)
  • LXImage-Qt (image viewer)
  • Psi+ (XMPP client)
  • qBittorrent (torrent client)
  • Qlipper (clipboard tool)
  • qpdfview (pdf and ps viewer)
  • Quassel (IRC client)
  • QupZilla (web browser)
  • Trojita (IMAP mail client)
  • Yarock (music player)
The set of applications is not yet fixed; we've chosen some KDE applications as they are Qt5 based and integrate well while having a small dependency footprint. In cases where LXQt provides an application (e.g. the LXImage-Qt image viewer), that one has been selected.

For configuration we included the LXQt config tools (lxqt-config and obconf-qt) of course; in addition we added lxappearance to be able to change GTK themes too. The theme itself is the Breeze theme known from KDE; it looks nice and is also provided for GTK, so the user can have a unified look. By default we've chosen the Openbox window manager; in addition the spin will contain KWin for those who like to have compositing etc.

For software management we included dnfdragora, a graphical frontend for DNF that provides a Qt based GUI in our case (but as it uses the libyui abstraction layer, it can use GTK and curses too, as known from SUSE's YaST). It is not yet included in Fedora, but well on its way to arriving soon. Right now Kevin Kofler provides a COPR for it.

A new test build is available in the usual location, comments and ideas (like different applications which may fit better) should be shared in our project on pagure.

Episode 28 - RSA Conference 2017

Posted by Open Source Security Podcast on January 19, 2017 02:00 PM
Josh and Kurt discuss their involvement in the upcoming 2017 RSA conference: Open Source, CVEs, and Open Source CVE. Of course IoT and encryption manage to come up as topics.

Download Episode
<iframe frameborder="no" height="150" scrolling="no" src="https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/tracks/303432626&amp;auto_play=false&amp;hide_related=false&amp;show_comments=true&amp;show_user=true&amp;show_reposts=false&amp;visual=true" width="100%"></iframe>

Show Notes


Early module development with Fedora Modularity

Posted by Adam Samalik on January 19, 2017 12:01 PM

So you like the Fedora Modularity project – where we separate the lifecycle of software from the distribution – and you want to start with module development early? Maybe to have it ready for the modular Fedora 26 Server preview? Start developing your modulemd on your local system now, and have it ready for later when the Module Build Service is in production!

Defining your module

To have your module built, you need to start by writing a modulemd file, which is a definition of your module including the components, API, and all the information necessary to build your module, like the build root and a build order for the packages. Let’s have a look at an example Vim module:

document: modulemd
version: 1
data:
    summary: A ubiquitous text editor
    description: >
        Vim is a highly configurable text editor built to make creating and
        changing any kind of text very efficient.
    license:
        module:
            - MIT
        content: []
    xmd: ~
    dependencies:
        buildrequires:
            generational-core: master
        requires:
            generational-core: master
    references:
        community: http://www.vim.org/
        documentation: http://www.vim.org/docs.php
        tracker: https://github.com/vim/vim/issues
    profiles:
        default:
            rpms:
                - vim-enhanced
        minimal:
            rpms:
                - vim-minimal
        graphical:
            rpms:
                - vim-X11
    api:
        rpms:
            - vim-minimal
            - vim-enhanced
            - vim-X11
    filter:
        rpms: ~
    components:
        rpms:
            vim-minimal:
                rationale: The minimal variant of VIM, /usr/bin/vi.
            vim-enhanced:
                rationale: The enhanced variant of VIM.
            vim-X11:
                rationale: The GUI variant of VIM.
            vim-common:
                rationale: Common files needed by all VIM variants.
            vim-filesystem:
                rationale: The directory structure used by VIM packages.

Notice that there is no information about the name or version of the module. That’s because the build system takes this information from the git repository from which the module is built:

  • Git repository name == module name
  • Git repository branch == module stream
  • Commit timestamp == module version

You can also see my own FTP module for reference.

To build your own module, you need to create a Git repository with the modulemd file. The name of your repo and the file must match the name of your module:

$ mkdir my-module
$ touch my-module/my-module.yml

The core idea about modules is that they include all the dependencies in themselves. Well, except the base packages found in the Base Runtime API – which haven’t been defined yet. But don’t worry, you can use this list of binary packages in the meantime.

So the components list in your modulemd needs to include all the dependencies except the ones mentioned above. You can get a list of recursive dependencies for any package by using repoquery:

$ repoquery --requires --recursive --resolve PACKAGE_NAME

When you have this ready, you can start with building your module.

Building your module

To build a modulemd, you need to have the Module Build Service installed on your system. There are two ways of achieving that:

  1. Installing the module-build-service package with all its dependencies.
  2. Using a pre-built docker image and a helper script.

Both options provide the same result, so choose whichever you like better.

Option 1: module-build-service package

On Fedora rawhide, just install it by:

$ sudo dnf install module-build-service

I have also created a Module Build Service copr repo for Fedora 24 and Fedora 25:

$ sudo dnf copr enable asamalik/mbs
$ sudo dnf install module-build-service

To build your modulemd, run a command similar to the following:

$ mbs-manager build_module_locally file:////path/to/my-module?#master

The output will be a yum/dnf repository in the /tmp directory.

Option 2: docker image and a helper script

With this option you don’t need to install all the dependencies on your system, but it requires you to setenforce 0 before the build. :-(

You only need to clone the asamalik/build-module repository on GitHub and use the helper script as follows:

$ build_module ./my-module ./results

The output will be a yum/dnf repository in the path you have specified.

What’s next?

The next step would be installing your module on the Base Runtime and testing whether it works. But as we are doing this pretty early, there is no Base Runtime at the moment I’m writing this. However, you can try your module in a container using a pre-built fake Base Runtime image.

To handcraft your modular container, please follow the Developing and Building Modules guide on our wiki, which gives you all the necessary steps while showing you one way modular containers might be built in the future infrastructure!

DevConf.cz 2017

Are you visiting DevConf.cz in Brno? There is a talk about Modularity and a workshop where you can try building your own module as well. Both can be found in the DevConf.cz 2017 Schedule.

  • Day 1 (Friday) at 12:30 – Fedora Modularity – How does it work?
  • Day 2 (Saturday) at 16:00 – Fedora Modularity – Build your own module!

See you there!

 

spyder 3 for Fedora

Posted by nonamedotc on January 19, 2017 02:56 AM

Spyder 3 was released some time back, and the latest version, 3.1.0, was released yesterday. I have been working on updating Spyder to 3.x for some time now. Towards this effort, I got the following packages reviewed and included in Fedora - 

  1. python-QtPy
  2. python-QtAwesome
  3. python-flit
  4. python-entrypoints
  5. python-nbconvert
  6. python-pickleshare

In addition to this, the package python-ipykernel had to be reviewed. This was completed sometime towards the end of last year.

Now that all the packages are available (in different forms), I have put together a COPR repo where the spyder 3.1.0 package resides. I would like to get these packages tested before I submit it as a big update to Fedora 25.

COPR repo is here - nonamedotc/spyder3 - COPR repo

Of course, this repo can be directly enabled from a terminal -

dnf copr enable nonamedotc/spyder3

To install spyder along with ipython console from this repo, do

dnf install python{2,3}-{spyder,ipython}


Note: the ipython package provided by this repo is version 5.1.0 (since ipykernel needs ipython >= 4.0.0). This will necessitate removing the ipython package provided by the Fedora repo. I have already requested an update to ipython [1].


When spyder3 (the python3 version of spyder) is launched, there will be a pop-up complaining that rope is not installed. This is because we do not yet have a python3 version of rope. Ignoring that should not cause a major issue.

Obligatory screenshot -


 

Please test these packages and let me know if there are issues so that I can fix and submit an update. I am hoping to submit this as an update as soon as ipython is done.


[1] https://bugzilla.redhat.com/show_bug.cgi?id=1400383

 

The flatpak security model – part 1: The basics

Posted by Alexander Larsson on January 18, 2017 08:59 PM

This is the first part of a series talking about the approach flatpak takes to security and sandboxing.

First of all, a lot of people think of container technology like docker, rkt or systemd-nspawn when they think of linux sandboxing. However, flatpak is fundamentally different to these in that it is unprivileged.

What I mean is that all the above run as root, and to use them you either have to be root, or your access to it is equivalent to root. For instance, if you have access to the docker socket then you can get a full root shell with a command like:

docker run -t -i --privileged -v /:/host fedora chroot /host

Flatpak instead runs everything as the regular user.  To do this it uses a project called bubblewrap which is like a super-powered version of chroot, only you don’t have to be root to run it.

Bubblewrap can do more than just change the root; it lets you construct a custom filesystem mount tree for the process. Additionally, it lets you create namespaces to further isolate things from the host. For instance, if you use --unshare-pid then your process will not see any processes from outside the sandbox.
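
As a small illustration of what bubblewrap can do without root (the paths and the shell are arbitrary choices for this sketch, not what flatpak itself sets up):

bwrap --ro-bind /usr /usr \
      --symlink usr/lib64 /lib64 \
      --proc /proc \
      --dev /dev \
      --tmpfs /tmp \
      --unshare-pid \
      sh

# inside this shell, ps will only show the processes of the sandbox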

Now, chroot is a root-only operation. How can it be that bubblewrap lets you do the same thing but doesn’t require root privileges? The answer is that it uses unprivileged user namespaces.

Inside such a user namespace you get a lot of capabilities that you don’t have outside it, such as creating new bind mounts or calling chroot. However, in order to be allowed to use this you have to set up a few process limits. In particular you need to set a process flag called PR_SET_NO_NEW_PRIVS. This causes all forms of privilege escalation (like setuid) to be disabled, which means the normal ways to escape a chroot jail don’t work.

Actually, I lied a bit above. We do use unprivileged user namespaces if we can, but many distributions disable them. The reason is that user namespaces open up a whole new attack surface against the kernel, allowing an unprivileged user access to lots of code paths that were never hardened against unprivileged use. For instance CVE-2016-3135 was a local root exploit which used a memory corruption in an iptables call. This is normally only accessible by root, but user namespaces made it exploitable by unprivileged users.

If user namespaces are disabled, bubblewrap can be built as a setuid helper instead. This still only lets you use the same features as before, and in many ways it is actually safer this way, because only a limited subset of the full functionality is exposed. For instance you cannot use bubblewrap to exploit the iptables bug above because it doesn’t set up iptables (and if it did, it wouldn’t pass untrusted data to it).

Long story short, flatpak uses bubblewrap to create a filesystem namespace for the sandbox. This starts out with a tmpfs as the root filesystem, and in this we bind-mount read-only copies of the runtime on /usr and the application data on /app. Then we mount various system things like a minimal /dev, our own instance of /proc and symlinks into /usr from /lib and /bin. We also enable all the available namespaces so that the sandbox cannot see other processes/users or access the network.

On top of this we use seccomp to filter out syscalls that are risky. For instance ptrace, perf, and recursive use of namespaces, as well as weird network families like DECnet.

In order for the application to be able to write data anywhere we bind mount $HOME/.var/app/$APPID/ into the sandbox, but this is the only persistent writable location.

In this sandbox we then spawn the application (after having dropped all increased permissions). This is a very limited environment, and there isn’t much the application can do. In the next part of this series we’ll start looking into how things can be opened up to allow the app to do more.

Recipes for you and me

Posted by Matthias Clasen on January 18, 2017 08:31 PM

Since I’ve last written about recipes, we’ve started to figure out what we can achieve in time for GNOME 3.24, with an eye towards delivering a useful application. The result is this plan, which should be doable.

But: your help is needed. We need more recipe contributions from the GNOME community to have a well-populated initial experience. Everybody who contributes a recipe before 3.24 will get a little thank-you from us, so don’t delay…

The 0.8.0 release that I’ve just created already contains the first steps of this plan. One thing we decided is that we don’t have the time and resources to make the ingredients view useful by March, so the Ingredients tab is gone for now.

At the same time, there’s a new feature here, and that is the blue tile leading to the shopping list view:

The design for this page is still a bit up in the air, so you should expect this to change in the next releases. I decided to merge it already anyway, since I am impatient, and this view already provides useful functionality. You can print the shopping list:

Beyond this, I’ve spent some time on polishing and fixing bugs. One thing that I’ve discovered to my embarrassment earlier this week is that exporting recipes from the flatpak did not actually work. I had only ever tested this with an un-sandboxed local build.

Sorry to everyone who tried to export their recipe and was left wondering why it didn’t work!

We’ve now fixed all the bugs that were involved here, both in recipes and in the file chooser portal and in the portal infrastructure itself, and exporting recipes works fine with the current flatpak, which, as always, you can install from here:

https://alexlarsson.github.io/test-releases/gnome-recipes.flatpakref

One related issue that became apparent during this bug hunt is that things work less than perfectly if the portals are not present on the host system. Until that becomes less likely, I’ve added a bit of code to make the failure less mysterious, and give you some idea how to fix it:

I think recipes is proving its value as a test bed and early adopter for flatpak and portals. At this point, it is using the file chooser portal, the account information portal, the print portal, the notification portal, the session inhibit portal, and it would also use the sharing portal, if we had that already.

I shouldn’t close this post without mentioning that you will have a chance to hear a bit from Elvin about the genesis of this application in the FOSDEM design devroom. See you there!

Litecoin mining on "whatever"

Posted by Joerg Stephan on January 18, 2017 07:06 PM
I started mining litecoins for fun, so this is really not a post on what the best way would be or how to get rich.

This post is more about what happens if you use "what is lying around", so I installed "minerd" on my Raspberry and Banana Pi, and also looked at what phones were still in the desk and installed mining software from their respective app stores.
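
For reference, pointing minerd at a scrypt pool looks roughly like this (the pool URL and credentials are placeholders):

minerd -a scrypt -o stratum+tcp://pool.example.com:3333 -u worker -p password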

In the last 5 days I managed to get

0.000157304676 LTC

by using some devices, whereas the Pis have been running the whole time and additional hardware has been switched on and off occasionally.


  1. Banana Pi M1
    ARM Cortex-A7 dual-core (ARMv7-A) 1 GHz
    Mining average in 24 hrs: 0.8 KH/s
  2. Raspberry Pi 3 B
    64-bit ARMv8 quad-core CPU at 1.2 GHz
    Mining average in 24 hrs: 1.2 KH/s
  3. Microsoft LUMIA 640
    Quad-core 1.2 GHz Cortex-A7
    Mining average in 24 hrs: 1.3 KH/s
  4. Lenovo A390t
    Dual-core 1.0 GHz Cortex-A9
    Mining average in 24 hrs: 0.6 KH/s
Some devices that have never run a full 24 hrs:
  1. Thinkpad Lenovo T410i
    Intel(R) Core(TM) i5 CPU       M 430  @ 2.27GHz
    Mining peaks: 14 KH/s

Secure your Elasticsearch cluster and avoid ransomware

Posted by Peter Czanik on January 18, 2017 04:26 PM

Last week,  news came out that unprotected MongoDB databases are being actively compromised: content copied and replaced by a message asking for a ransom to get it back. As The Register reports: Elasticsearch is next.

Protecting access to Elasticsearch by a firewall is not always possible. But even in environments where it is possible, many admins are not protecting their databases. Even if you cannot use a firewall, you can secure connection to Elasticsearch by using encryption. Elasticsearch by itself does not provide any authentication or encryption possibilities. Still, there are many third-party solutions available, each with its own drawbacks and advantages.

X-pack (formerly: Shield) is the solution developed by Elastic.co, the company behind Elasticsearch. It is a commercial product (on first installation a 30-day trial license is installed) and offers many more possibilities than just securing your Elasticsearch cluster, including monitoring, reporting and alerting. Support for Elasticsearch 2.x is available in syslog-ng since version 3.7.

SearchGuard is developed by floragunn. It is a plugin for Elasticsearch offering encryption and authentication. All basic security features are open source and are available for free, enterprise features are available for a fee. Support is available in syslog-ng since version 3.9.1 when using the native Elasticsearch transport protocol. The SearchGuard component utilized by syslog-ng does not require a commercial license.
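
As a rough sketch (not an official configuration), a syslog-ng destination using the elasticsearch2() driver could look like the following; the cluster name, server and index are placeholders, and a SearchGuard deployment would use its dedicated client mode plus the TLS options described in the SearchGuard documentation:

destination d_elastic {
  elasticsearch2(
    client-mode("transport")   # SearchGuard setups use a dedicated client mode
    cluster("my-cluster")
    server("127.0.0.1")
    port("9300")
    index("syslog-${YEAR}.${MONTH}.${DAY}")
    type("messages")
  );
};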

Right now the HTTP client in syslog-ng does not support encrypted (HTTPS) connections. Proof-of-concept-level code is already available from Fabien Wernli (also known as Faxm0dem) on GitHub; hopefully it will be ready for general use soon.

As you can see, syslog-ng provides many different ways to connect securely to your Elasticsearch cluster. If you have not secured it yet and want to avoid paying a ransom, secure it now!

The post Secure your Elasticsearch cluster and avoid ransomware appeared first on Balabit Blog.

Improve your sleep by using Redshift on Fedora

Posted by Fedora Magazine on January 18, 2017 06:20 AM

The blue light emitted by most electronic devices is known for having a negative impact on our sleep. We could simply quit using each of our electronic devices after dark in an attempt to improve our sleep. However, since that is not really convenient for most of us, a better way is to adjust the color temperature of your screen according to your surroundings. One of the most popular ways to achieve this is with the Redshift utility. Jon Lund Steffensen, the creator of Redshift, describes his program in the following way:

Redshift adjusts the color temperature of your screen according to your surroundings. This may help your eyes hurt less if you are working in front of the screen at night.

The Redshift utility only works in the X11 session on Fedora Workstation. So if you’re using Fedora 24, Redshift will work with the default login session. However, on Fedora 25 the default session at login is Wayland, so you will have to use the GNOME Shell extension instead. Note, too, that the GNOME Shell extension also works with X11 sessions.

Redshift utility

Installation

Redshift is in Fedora’s repos, so all we have to do to install it is run this command:

sudo dnf install redshift

The package also provides a GUI. To use this, install redshift-gtk instead. Remember, though, that the utility only works on X11 sessions.

Using the Redshift utility

Run the utility from the command line with a command like the following:

redshift -l -23.6980:133.8807 -t 5600:3400

In the above command, -l -23.6980:133.8807 tells Redshift that our current location is 23.6980° S, 133.8807° E (southern latitudes are given as negative numbers). The -t 5600:3400 declares that during the day you want a colour temperature of 5600, and 3400 at night.

The temperature is proportional to the amount of blue light emitted: a lower temperature implies a lower amount of blue light. I prefer to use 5600K (6500K is neutral daylight) during the day, and 3400K at night (anything lower makes me feel like I’m staring at a tomato), but feel free to experiment with it.

If you don’t specify a location, Redshift attempts to use the Geoclue method to determine your location coordinates. If this method doesn’t work, you can find your coordinates on any of several websites and online maps.

Don’t forget to set Redshift as an autostart command, and to check Jon’s website for more information.
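
If you would rather not pass these flags on every start, Redshift also reads a configuration file. A minimal sketch matching the values above (assuming the INI-style format documented for current Redshift releases) could be saved as ~/.config/redshift.conf:

[redshift]
; day and night colour temperatures
temp-day=5600
temp-night=3400
location-provider=manual

[manual]
; negative latitude means the southern hemisphere
lat=-23.6980
lon=133.8807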

Redshift GNOME Shell extension

The utility does not work under the Wayland display server (which is standard in Fedora 25). Fortunately, there is a handy GNOME Shell extension that will do the same job. To install it, run the following commands:

sudo dnf copr enable mystro256/gnome-redshift
sudo dnf install gnome-shell-extension-redshift

After installing from the COPR repo, log out and back in to your Fedora Workstation, then enable the extension in the GNOME Tweak Tool. For more information, check the gnome-redshift COPR repo, or the GitHub repo.

After enabling the extension, a little sun (or moon) icon appears in the top right of your GNOME Shell. The extension also provides a settings dialog to tweak the redshift times and the temperature.


Related software

F.lux

Redshift can be seen as the open-source variant of F.lux, and there is now a Linux version of F.lux. You could consider using it if you don’t mind using closed-source software, or if Redshift doesn’t work properly.

Twilight for Android

Twilight is similar to Redshift, but for Android. It makes reading on your smartphone or tablet late at night more comfortable.

Redshift plasmoid

This is the Redshift GUI version for KDE. You can find more information on GitHub.

All systems go

Posted by Fedora Infrastructure Status on January 18, 2017 12:48 AM
New status good: Everything seems to be working. for services: Fedora Wiki, Fedora People, Zodbot IRC bot, The Koji Buildsystem, Darkserver, Tagger, Package Database, Fedora pastebin service, Blockerbugs, Badges, FedoraHosted.org Services, Mirror Manager, Koschei Continuous Integration, Ipsilon, Mirror List, Fedora Infrastructure Cloud, Account System, Package maintainers git repositories, Fedora websites, Documentation website, COPR Build System, Package Updates Manager, Ask Fedora, Fedora Packages App, FreeMedia, Fedora Messaging Bus, Fedora elections, Mailing Lists, Fedora Calendar, Kerberos

Major service disruption

Posted by Fedora Infrastructure Status on January 18, 2017 12:47 AM
New status major: network outage at main DC for services: Fedora Wiki, Fedora People, Zodbot IRC bot, The Koji Buildsystem, Darkserver, Tagger, Package Database, Fedora pastebin service, Blockerbugs, Badges, FedoraHosted.org Services, Mirror Manager, Koschei Continuous Integration, Ipsilon, Mirror List, Fedora Infrastructure Cloud, Account System, Package maintainers git repositories, Fedora websites, Documentation website, COPR Build System, Package Updates Manager, Ask Fedora, Fedora Packages App, FreeMedia, Fedora Messaging Bus, Fedora elections, Mailing Lists, Fedora Calendar, Kerberos

Mea Culpa: Fedora Elections

Posted by Stephen Smoogen on January 17, 2017 05:30 PM
As announced here, here, and here, the Fedora Election cycle for the start of the 25 release has finished. Congratulations to the winners. Now, if you look closely, there were fewer than 250 voters in any of the elections, out of multiple thousands of eligible voters.. and I was not one of them.

It is not as if the elections weren’t announced beforehand, at the start, and right before they ended. Yet somehow.. I missed every one of those emails. I caught various emails on NFS changing configurations, proposed changes to Fedora 26 and 27, and various retired packages.. but I completely spaced on the elections. I was actually drafting an email asking when they would be held when someone congratulated Kevin Fenzi on IRC about winning.

So, to the winners of this cycle of elections: congratulations. And to all the people who put in the hard work of running the elections (and having run several myself, I know it is a LOT of hard work), my sincere apologies for somehow missing it.

Elections Retrospective, January 2017

Posted by Fedora Community Blog on January 17, 2017 04:49 PM

The results are in! The Fedora Elections for the Fedora 25 release cycle of FESCo, FAmSCo and the Council concluded on Tuesday, January 17th. The results are posted on the Fedora Voting Application and announced on the mailing lists. You can also find the full list of winning candidates below, along with some interesting statistics in this January 2017 Elections Retrospective.

January 2017 Elections Retrospective Report

In this election cycle, voter turnout was above its average level. This is great news, as it shows increased interest from the Fedora community in its own affairs.

This election cycle was hit by some planning issues, as we were running the Elections over the Christmas 2016 period. At the beginning I was worried about turnout because of the holidays, but fortunately those worries proved unfounded, and from this point of view we are more than good.

Fedora Engineering Steering Committee (FESCo)

We had five vacant seats and seven nominations for the F25 cycle, with 267 voters casting their votes.

FESCo Winning Candidates (Votes)
Kevin Fenzi (nirik / kevin) [info] 1401
Adam Miller (maxamillion / maxamillion) [info] 1075
Jared Smith (jsmith / jsmith) [info] 988
Justin Forbes (jforbes / jforbes) [info] 735
Kalev Lember (kalev / kalev) [info] 691

Out of the five elected nominees, four (nirik, maxamillion, jsmith, and kalev) have been elected for a repeat term. One elected nominee (jforbes) has been elected for the first time.

Compared to the historical data, with 267 voters we are above the average of 215 voters.
[chart: fesco-elections-2017-01]
The following statistic shows how many people voted each day during the voting period.
[chart: fesco-elections-per-day-2017-01]
More FESCo statistics can be found in the voting application.

Fedora Council

We had one vacant seat and five nominations for the Fedora 25 cycle, with 260 voters casting their votes.

Council Winning Candidate (Votes)
Robert Mayr (robyduck) [info] 743

The Fedora Council came into existence in November 2014, hence we do not have much previous data. Historically, before we had a Council, there was a Board. On the chart below you can see the comparison between voter turnout for the Fedora Board elections and the Council elections. The average voter turnout across Council & Board elections is 223; for Council elections alone the average is 220.

[chart: council-elections-2017-01]

The profile for number of voters per day was similar to the one we saw for FESCo.

[chart: council-elections-per-day-2017-01]

More Council statistics can be found in the voting application.

Fedora Ambassadors Steering Committee (FAmSCo)

We had seven vacant seats and thirteen nominations for the Fedora 25 cycle, with 247 voters casting their votes.

FAmSCo Winning Candidates (Votes)
Robert Mayr (robyduck) [info] 1623
Jona Azizaj (jonatoni) [info] 1576
Gabriele Trombini (mailga) [info] 1274
Giannis Konstantinidis (giannisk) [info] 1168
Itamar Reis Peixoto (itamarjp) [info] 1110
Frederico Lima (fredlima) [info] 1010
Sylvia Sanchez (Kohane / lailah) [info] 964

Due to the effort spent over the last several years to convert FAmSCo to FOSCo, it is difficult to directly compare turnout data between elections. However, we can state that during this election cycle we hit the best turnout ever (as far as records are available). The average turnout for FAmSCo is 161 voters; this cycle we hit 247 voters.

[chart: famsco-elections-2017-01]

Again, we can see the same distribution of voters over the voting period as we have seen in FESCo and Council.

[chart: famsco-elections-per-day-2017-01]

More statistics can be found in the Voting application.

Special Thanks

Congratulations to the winning candidates, and thank you to all the candidates who ran this election! Community governance is core to the Fedora Project, and we couldn’t do it without your involvement and support.

A special thanks to bee2502 and jflory7 as well as to the members of the CommOps Team for helping organize another successful round of Elections!

And last but not least, thank YOU to all the Fedora community members who participated and voted this election cycle. Stay tuned for more Elections Retrospective articles after future Elections!

The post Elections Retrospective, January 2017 appeared first on Fedora Community Blog.

negativo17.org nvidia packages should now work out of the box on optimus setups

Posted by Hans de Goede on January 17, 2017 01:31 PM
In this blog post I promised I would get back to people who want to use the nvidia driver on an optimus laptop.

The set of xserver patches I blogged about last time have landed upstream and in Fedora 25 (in xorg-x11-server 1.19.0-3 and newer), allowing the nvidia driver packages to ship a xorg.conf snippet which makes the driver automatically work on optimus setups.

The negativo17.org nvidia packages now are using this, so if you install these, then the nvidia driver should just work on your laptop.

Note that you should only install these drivers if you actually have a supported (new enough) nvidia GPU. These drivers replace the libGL implementation, so installing them on a system without an nvidia GPU will cause things to break. This will be fixed soon by switching to libglvnd as the official libGL provider and having both mesa and the nvidia driver provide "plugins" for libglvnd. I've actually just completed building a new enough libglvnd and a libglvnd-enabled mesa for rawhide, so rawhide users will have libglvnd starting tomorrow.

Factory 2, Sprint 8 Report

Posted by Ralph Bean on January 16, 2017 10:06 PM

Happy New Year from the Factory 2.0 team.

Here's a reminder of our current priorities. We are:

  • Preparing elementary build infrastructure for the Fedora 26 Alpha release.
  • Deserializing pipeline processes that could be done more quickly in parallel.
  • Building a dependency chain database, so that we can build smarter rebuild automation and pipeline analytics.
  • Monitoring pipeline performance metrics, so that as we later improve things we can be sure we had an effect.

We are on track with respect to three of the four priorities: module build infrastructure will be ready before the F26 Alpha freeze. Our VMs are provisioned, we're working through the packaging rituals, and we'll be ready for an initial deployment shortly after devconf. Internally, our MVP of resultsdb and resultsdb-updater is working and pulling data from some early-adopter Platform Jenkins masters, and our internal performance measurement work is bearing fruit slowly but steadily: we have two key metrics updating automatically on our kibana dashboard, with two more in progress to be completed in the coming sprints.

We have made a conscious decision to put our work on the internal dependency chain database on hold. We're going to defer our deployment to production for a few months to ensure that our efforts don't collide with a separate release engineering project ongoing now.

Tangentially, we're glad to be assisting with the adoption of robosignatory for automatic rpm signing. It's an excellent example of upstream/downstream cooperation between the Fedora and RHEL services teams.

mbs-optimization, by jkaluza

This demo shows optimizations of module builds in the Module Build Service, comparing diagrams from the old and new versions of MBS.

Video: https://fedorapeople.org/groups/factory2/sprint-008//jkaluza-mbs-optimization.ogv

resultsdb-updater-pdc-updater-updates, by mprahl

This demo shows the changes in ResultsDB-Updater and how they are reflected in ResultsDB. Additionally, progress is shown on pdc-updater working internally.

Video: https://fedorapeople.org/groups/factory2/sprint-008//mprahl-resultsdb-updater-pdc-updater-updates.mp4

developers-instance, by fivaldi

In this video, I present a local developer's instance of MBS using docker-compose. The aim is to provide the simplest way to run a custom MBS instance for dev/testing purposes. At the end, I show how to submit a test module build.

Video: https://fedorapeople.org/groups/factory2/sprint-006//fivaldi-developers-instance.ogv

fedpkg-pdc-modulemd, by jkaluza

This demo shows the newly implemented "fedpkg module-build" command workflow and improved storage of module metadata in the PDC.

Video: https://fedorapeople.org/groups/factory2/sprint-006//jkaluza-fedpkg-pdc-modulemd.ogv

mbs-scheduler-and-resultsdb-prod, by mprahl

This demo briefly explains the changes in the Module Build Service scheduler to use fedmsg's "Hub-Consumer" approach. Additionally, ResultsDB is briefly shown in production with results populated from ResultsDB-Updater.

Video: https://fedorapeople.org/groups/factory2/sprint-006//mprahl-mbs-scheduler-and-resultsdb-prod.mp4

module-checks-in-taskotron, by threebean

In this demo, I show how we've wrapped elementary tests produced by the base-runtime team so that they can be executed and managed by the taskotron CI system in place in Fedora Infrastructure. Benefits include:

  • No lock-in to taskotron. Jenkins-job-builder could wrap the core test in a similar way.
  • An avocado-to-resultsdb translator was written, which will be generally useful in future sprints.

Work on taskotron-trigger to automatically respond to dist-git events was implemented and merged upstream, but is pending a release and deployment.

Video: https://fedorapeople.org/groups/factory2/sprint-006//threebean-module-checks-in-taskotron.ogv

What does security and USB-C have in common?

Posted by Josh Bressers on January 16, 2017 06:39 PM
I've decided to create yet another security analogy! You can’t tell, but I’m very excited to do this. One of my long-standing complaints about security is that there are basically no good analogies that make sense. We always try to talk about auto safety, or food safety, or maybe building security; how about pollution? There’s always some sort of existing real-world scenario we try to warp and twist in a way that lets us tell a security story that makes sense. So far they’ve all failed. The analogy always starts out strong, then something happens that makes everything fall apart. I imagine a big part of this is because security is really new, but it’s also really hard to understand. It’s just not something humans are good at understanding.

The other day this article was sent to me by @kurtseifried
How Volunteer Reviewers Are Saving The World From Crummy—Even Dangerous—USB-C Cables

The TL;DR is essentially the world of USB-C cables is sort of a modern day wild west. There’s no way to really tell which ones are good and which ones are bad, so there are some people who test the cables. It’s nothing official, they’re basically volunteers doing this in their free time. Their feedback is literally the only real way to decide which cables are good and which are bad. That’s sort of crazy if you think about it.

This really got me thinking though: it has a lot in common with our current security problems. We have a bunch of products and technologies. We don’t have a good way to tell if something is good or bad. There are some people who try to help with good information. But fundamentally most of our decisions are made with bad or incomplete data.

In the case of the cables, I see two practical ways out of this. The first is to have some sort of official testing lab: if something doesn’t pass testing, it can’t be sold. This makes sense; there are plenty of things on the market today that go through similar testing, and if the product fails, it doesn’t get sold. In this case the comparable analogies hold up. Auto safety, electrical safety, HDMI; there are plenty of organizations responsible for ensuring the quality and safety of certain products. The cables would be no different.

A possible alternative is to make sure every device is designed assuming bad cables are possible, and to deal with that situation in hardware. This would mean devices being smart enough not to draw too much power, or not to provide too much power; to know when some sort of failure mode is coming and disconnect. There are a lot of possibilities here, and to be perfectly honest, no device will be able to do this with 100% accuracy. More importantly though, no manufacturer will be willing to add this functionality because it would add cost, probably a lot of cost. It’s still a remote possibility though, and for the sake of the analogy, we’re going to go with it.

The first example, twisted to cybersecurity, would mean you need a nice way to measure security. There would be a lab or organization capable of doing the testing, then giving some sort of stamp of approval. This has proven to be a really hard thing to do in the past; the few attempts to do this have failed. I suspect it’s possible, just very difficult to do right. Today Mudge is doing some of this with the CITL, but other than that I’m not really aware of anything of substance. It’s a really hard problem to solve, but if anyone can do it right, it’s probably Mudge.

This then leads us to the second possibility which is sort of how things work today. There is a certain expectation that an endpoint will handle certain situations correctly. Each endpoint has to basically assume anything talking to it is broken in some way. All data transferred must be verified. Executables must be signed and safely distributed. The networks the data flows across can’t really be trusted. Any connection to the machine could be an attacker and must be treated as such. This is proving to be very hard though and in the context of the cables, it’s basically the crazy solution. Our current model of security is the crazy solution. I doubt anyone will argue with that.

This analogy certainly isn’t perfect, but the more I think about it the more I like it. I’m sure there are problems thinking about this in such a way, but for the moment, it’s something to think about at least. The goal is to tell a story that normal people can understand so we can justify what we want to do and why. Normal people don’t understand security, but they do understand USB cables.


Do you have a better analogy? Let me know @joshbressers on Twitter.

Episode 27 - Prove to me you are human

Posted by Open Source Security Podcast on January 16, 2017 03:45 PM
Josh and Kurt discuss NTP, authentication issues, network security, airplane security, AI, and Minecraft.

Download Episode
Listen on SoundCloud: https://api.soundcloud.com/tracks/302981179

Show Notes


Digest of Fedora 25 Reviews

Posted by Jiri Eischmann on January 16, 2017 02:35 PM

Fedora 25 has been out for two months, and it seems like a very solid release, maybe the best in the history of the distro. Feedback from the press and users has also been very positive. I took the time to put together a digest of the latest reviews:

Phoronix: Fedora 25 Is Quite Possibly My Most Favorite Release Yet

As a long-time Fedora fan and user going back to Fedora Core, Fedora 25 is quite possibly my most favorite Fedora release yet. With the state as of this week, it feels very polished and reliable and haven’t encountered any glaring bugs on any of my test systems. Thanks in large part due to the heavy lifting on ensuring GNOME 3.22 is a super-polished desktop release, Fedora 25 just feels really mature yet modern when using it.

Phoronix: Fedora 25 Turned Out Great, Definitely My Most Favorite Fedora Release

That’s the first time I’ve been so ambitious with a Fedora release, but in testing it over the past few weeks (and months) on a multitude of test systems, the quality has been excellent and by far is most favorite release going back to the Fedora Core days — and there’s Wayland by default too, as just the icing on the cake.

Distrowatch: Fedora 25 Review

Even when dealing with the various Wayland oddities and issues, Fedora 25 is a great distribution. Everything is reasonably polished and the default software provides a functional desktop for those looking for a basic web browsing, e-mail, and word processing environment. The additional packages available can easily turn Fedora into an excellent development workstation customized for a developer’s specific needs. If you are programming in most of the current major programming languages, Fedora provides you the tools to easily do so. Overall, I am very pleased using Fedora 25, but I am even more excited for future releases of Fedora as the various minor Wayland issues get cleaned up.

ZDNet: Fedora 25 Linux arrives with Wayland display support

Today, Fedora is once more the leading edge Linux distribution.

ArsTechnica: Fedora 25: With Wayland, Linux has never been easier (or more handsome)

Fedora 24 was very close to my favorite distro of the year, but with Fedora 25 I think it’s safe to say that the Fedora Project has finally nailed it. I still run a very minimal Arch install (with Openbox) on my main machine, but everywhere else—family and friends who want to upgrade, clients looking for a stable system and so on—I’ve been recommending Fedora 25.

…I have no qualms recommending both Fedora and Wayland. The best Linux distro of 2016 simply arrived at the last moment.

Hectic Geek: Fedora 25 Review: A Stable Release, But Slightly Slow to Boot (on rotational disks)

If you have a rotational disk, then Fedora 25 will be a little slow to boot and there is nothing you or I can do to fix it. But if you have an SSD, then you shall have no issues here. Other than that, I’m quite pleased with this release actually. Sure the responsiveness sucked the first time on, but as mentioned, it can be fixed, permanently. And the stability is also excellent.

Dedoimedo: And the best distro of 2016 is…

The author prefers Fedora 24 to 25, but Fedora is still the distro of the year for him:

Never once had I believed that Fedora would rise so highly, but rise it did. Not only is the 24th release a child of a long succession of slowly, gradually improving editions, it also washed away my hatred for Gnome 3, and I actually started using it, almost daily, with some fairly good results. Fedora 24 was so good that it broke Fedora. The latest release is not quite as good, but it is a perfectly sane compromise if you want to use the hottest loaf of modern technology fresh from the Linux oven.

OCS-Mag: Best GNOME distro of 2016

The same author, again not surprisingly, prefers 24, which in his opinion is the best GNOME distro:

Fedora 24 is a well-rounded and polished operating system, and with the right amount of proverbial pimping, its Gnome desktop offers a stylish yet usable formula to the common user, with looks and functionality balanced to a fair degree. But, let us not forget the extensions that make all this possible. Good performance, good battery life and everyday stuff aplenty should keep you happy and entertained. Among the Gnome bunch, it’s Funky Fedora that offers the best results overall. And thus we crown it the winner of the garden ornament competition of 2016.

The Register: Fedora 25: You’ve got that Wayland feelin’, oh, that Wayland feelin’

Fedora 25 WorkStation is hands down the best desktop Linux distro I tested in 2016. With Wayland, GNOME 3.22 and the excellent DNF package manager, I’m hard-pressed to think of anything missing. The only downside? Fedora lacks an LTS release, but now that updating is less harrowing, that’s less of a concern.

Bit Cannon: Finding an Alternative to Mac OS X

Wesley Moore was looking for an alternative to Mac OS X and his three picks were: Fedora, Arch Linux, and elementaryOS.

Fedora provided an excellent experience. I installed Fedora 25 just after its release. It’s built on the latest tech like Wayland and GNOME 3.22.

The Huffington Post: How To Break Free From Your Computer Operating System — If You Dare

Fedora is a gorgeous operating system, with a sleek and intuitive interface, a clean aesthetic, and it’s wicked fast.

ArsTechnica: Dell’s latest XPS 13 DE still delivers Linux in a svelte package

Not really a review of Fedora, but the author tried to install Fedora 25 on the new XPS13 and this is what he had to say:

As a final note, I did install and test both Fedora 25 and Arch on the new hardware and had no problems in either case. For Fedora, I went with the default GNOME 3.22 desktop, which, frankly, is what I think Dell should ship out of the box. It’s got far better HiDPI support than Ubuntu, and the developer tools available through Fedora are considerably more robust than most of what you’ll find in Ubuntu’s repos.

Looks like we’re on the right track and I’m sure Fedora 26 will be an even better release. We’ve got very interesting things in the works.


Fedora assembly IDE - SimpleASM.

Posted by mythcat on January 16, 2017 12:35 PM
This integrated development environment (IDE), named SimpleASM (SASM), lets you create applications in assembly language.
The good part for Linux users is that it is a cross-platform IDE for NASM, MASM, GAS and FASM, with syntax highlighting and a debugger.
I use FASM, so this helps me.
The debugger is gdb (the GNU Project Debugger), and the IDE supports working with many open projects.
If you want to use this IDE with Fedora, you can get it starting with Fedora 24.
The official web page is here.

Use Docker remotely on Atomic Host

Posted by Fedora Magazine on January 16, 2017 08:00 AM

Atomic Host from Project Atomic is a lightweight, container-based OS that can run Linux containers. It’s been optimized for use as a container run-time system for cloud environments. For instance, it can host a Docker daemon and containers. At times, you may want to run docker commands on that host and manage the server from elsewhere. This article shows you how to remotely access the Docker daemon of the Fedora Atomic Host, which you can download here. The entire process is automated by Ansible, which is a great tool when it comes to automating everything.

A note on security

We’ll secure the Docker daemon with TLS, since we’re connecting via the network. This process requires a client certificate and a server certificate. The OpenSSL package is used to create the certificate keys for establishing the TLS connection. Here, the Atomic Host runs the daemon, and our local Fedora Workstation acts as the client.
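
For the curious, the role used below automates steps equivalent to the standard Docker TLS setup, which begins by creating a certificate authority and a server key, roughly like this (an abridged sketch; replace IP_OF_ATOMIC_HOST with your server's address, and see Docker's documentation on protecting the daemon socket for the full sequence, including signing and the client certificate):

$ openssl genrsa -aes256 -out ca-key.pem 4096
$ openssl req -new -x509 -days 365 -key ca-key.pem -sha256 -out ca.pem
$ openssl genrsa -out server-key.pem 4096
$ openssl req -subj "/CN=IP_OF_ATOMIC_HOST" -sha256 -new -key server-key.pem -out server.csr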

Before you follow these steps, note that any process on the client that can access the TLS certs now has full root access on the server, because the client can do anything it wants on the server. Therefore, give cert access only to a specific client host that can be trusted. You should copy the client certificates only to a client host completely under your control. Even then, client machine security is critical.

However, this method is only one way to remotely access the daemon. Orchestration tools often provide more secure controls. The simple method below works for personal experimenting, but may not be appropriate for an open network.

Getting the Ansible role

Chris Houseknecht wrote an Ansible role that creates all the certs required, so you don’t need to run openssl commands manually. The role is provided in an Ansible role repository; clone it to your present working host.

$ mkdir docker-remote-access
$ cd docker-remote-access
$ git clone https://github.com/ansible/role-secure-docker-daemon.git

Create config files

Next, you must create an Ansible configuration file, an inventory, and a playbook file to set up the client and daemon. The following instructions create client and server certs on the Atomic Host. Then, they fetch the client certs to the local machine. Finally, they configure the daemon and the client so they can talk to each other.

Here is the directory structure you need. Create each of the files below as shown.

$ tree docker-remote-access/
docker-remote-access/
├── ansible.cfg
├── inventory
├── remote-access.yml
└── role-secure-docker-daemon

ansible.cfg

 $ vim ansible.cfg
[defaults]
inventory=inventory

inventory

 $ vim inventory
[daemonhost]
'IP_OF_ATOMIC_HOST' ansible_ssh_private_key_file='PRIVATE_KEY_FILE'

Replace IP_OF_ATOMIC_HOST in the inventory file with the IP of your Atomic Host. Replace PRIVATE_KEY_FILE with the location of the SSH private key file on your local system.

remote-access.yml

$ vim remote-access.yml
---
- name: Docker Client Set up
  hosts: daemonhost
  gather_facts: no
  tasks:
    - name: Make ~/.docker directory for docker certs
      local_action: file path='~/.docker' state='directory'

    - name: Add Environment variables to ~/.bashrc
      local_action: lineinfile dest='~/.bashrc' line='export DOCKER_TLS_VERIFY=1\nexport DOCKER_CERT_PATH=~/.docker/\nexport DOCKER_HOST=tcp://{{ inventory_hostname }}:2376\n' state='present'

    - name: Source ~/.bashrc file
      local_action: shell source ~/.bashrc

- name: Docker Daemon Set up
  hosts: daemonhost
  gather_facts: no
  remote_user: fedora
  become: yes
  become_method: sudo
  become_user: root
  roles:
    - role: role-secure-docker-daemon
      dds_host: "{{ inventory_hostname }}"
      dds_server_cert_path: /etc/docker
      dds_restart_docker: no
  tasks:
    - name: fetch ca.pem from daemon host
      fetch:
        src: /root/.docker/ca.pem
        dest: ~/.docker/
        fail_on_missing: yes
        flat: yes
    - name: fetch cert.pem from daemon host
      fetch:
        src: /root/.docker/cert.pem
        dest: ~/.docker/
        fail_on_missing: yes
        flat: yes
    - name: fetch key.pem from daemon host
      fetch:
        src: /root/.docker/key.pem
        dest: ~/.docker/
        fail_on_missing: yes
        flat: yes
    - name: Remove Environment variable OPTIONS from /etc/sysconfig/docker
      lineinfile:
        dest: /etc/sysconfig/docker
        regexp: '^OPTIONS'
        state: absent

    - name: Modify Environment variable OPTIONS in /etc/sysconfig/docker
      lineinfile:
        dest: /etc/sysconfig/docker
        line: "OPTIONS='--selinux-enabled --log-driver=journald --tlsverify --tlscacert=/etc/docker/ca.pem --tlscert=/etc/docker/server-cert.pem --tlskey=/etc/docker/server-key.pem -H=0.0.0.0:2376 -H=unix:///var/run/docker.sock'"
        state: present

    - name: Remove client certs from daemon host
      file:
        path: /root/.docker
        state: absent

    - name: Reload Docker daemon
      command: systemctl daemon-reload
    - name: Restart Docker daemon
      command: systemctl restart docker.service

Access the remote Atomic Host

Now, run the Ansible playbook:

$ ansible-playbook remote-access.yml

Make sure that TCP port 2376 is open on your Atomic Host. If you’re using OpenStack, add TCP port 2376 to your security rules. If you’re using AWS, add it to your security group.

Now, a docker command run as a regular user on your workstation talks to the daemon of the Atomic Host and executes the command there. You don’t need to manually ssh in or issue a command on your Atomic Host. This allows you to launch containerized applications remotely and easily, yet securely.
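
To check that everything is wired up, you can set the same environment variables the playbook added to ~/.bashrc (IP_OF_ATOMIC_HOST is the same placeholder used in the inventory file) and run any docker command:

$ export DOCKER_TLS_VERIFY=1
$ export DOCKER_CERT_PATH=~/.docker/
$ export DOCKER_HOST=tcp://IP_OF_ATOMIC_HOST:2376
$ docker info    # should print details of the remote daemon, not a local one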

If you want to clone the playbook and the config file, there is a git repository available here.


Image courtesy of Axel Ahoi — originally posted to Unsplash.