Fedora People

Canaries in a coal mine (apropos nothing)

Posted by Stephen Smoogen on May 24, 2017 10:29 PM

[This post is brought to you by Matthew Inman. Reading http://theoatmeal.com/comics/believe made me realize I don't listen enough, and Veritasium's https://www.youtube.com/watch?v=UBVV8pch1dM made me realize why thinking is hard. I am writing this to remind myself when I forget and jump on some phrase.]

Various generations ago, part of my family were coal miners, and some of their lore was still passed down many, many years later. One piece of that lore was about the proverbial canary. A lot of people like to think that they are being a canary when they bring up a problem that they believe will cause great harm: singing louder because they have run out of air.

That isn't what a canary does. The birds in the mines go silent when the air runs out. They may have died or be on the verge of dying. They got quieter and quieter, and what the miners listened for was the lack of noise from the birds, not more noise. Of course, it is very hard to hear the birds in the first place, because mines aren't quiet places. There is hammering, and shoveling, and footsteps echoing down long tunnels. So you might think: bring more birds. That just added more distractions, and miners would get into fights because the damn birds never shut up. So the birds were few and far between, and people would have to check on them every now and then to see if they were still kicking. Safer mines would have some old fellow stay near the bird, and if it died or passed out he would begin ringing a bell which could be heard down the hole.

So if analogies were 1:1, the time to worry is not when people are complaining a lot on a mailing list about some change. In fact, if everyone complains, you could interpret that as having too many birds and not enough miners, so go ahead. The time to worry would be when things have changed but no one complains. Then you probably really need to look at getting out of the mine (or, most likely, you will find it is too late).

However, analogies are rarely 1:1 or even 1:20. People are not birds, and you should pay attention when changes cause a lot of consternation. Listen to why the change is causing problems or pain. Take some time to process it, and see what can be done to either alter the change or find a way for the person who is in pain to get out of pain.

Best password management tool.

Posted by mythcat on May 24, 2017 06:40 PM
This suite of tools comes with many free features and one good premium option.
Password Tote provides secure password management through software and services on multiple platforms, and works very well with software downloads for Windows, Mac OS X, Safari, Chrome, Firefox, iOS (iPhone, iPod Touch, iPad), and Android.
You can download it from the downloads page.

Features outline (Free vs. Premium):

  • Website Access
  • Browser Extensions
  • Desktop Software
  • Mobile Software
  • Password Sharing
  • YubiKey Support

Price: Free, or $2.99 a month (or 2 years at a 16% savings) for Premium.

Description: the Free plan allows you to use the website version completely free and gives you access to fill your passwords from the browser extensions; it does not provide access to the desktop software or mobile phone software. Premium gives you access to your passwords from all versions of Password Tote, including the desktop software and mobile phone versions.

Synchronization between the browser extensions and the utilities is fast and does not confuse the user during navigation. Importing a CSV file holding dozens of passwords is also fast.
A very good aspect is the compromise solution for custom imports: a generic CSV file.
The utility generates this file and you can fill it with the necessary login data for your web sites.
The other CSV import options did not work for me; I guess they are incompatible with the files exported by the dedicated software.
I used it with a YubiKey and it worked very well. It's the only utility that allowed me to connect with a YubiKey; the other utilities demand a premium version.

How to enable YubiKeys with Password Tote:
  • First log in to your Password Tote account. 
  • Click Account, then Manage YubiKeys. You will arrive at the YubiKey Management page. 
  • Click Add YubiKey to register your YubiKey with your Password Tote account. 
  • Fill in the required details. If successful, your YubiKey will be displayed in the list as shown in the screen shot below.

Formatting a new extFAT USB on Fedora

Posted by Julita Inca Chiroque on May 24, 2017 06:31 PM

I have a new 64GB USB drive and it did not show up at first:

Thanks to this video I typed fdisk -l, and then I was able to see the 58.2 GB device.

I then tried to install the exFAT package with dnf -y install fuse-exfat, but it failed.
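For reference, the two commands above look roughly like this when run as root; note that fuse-exfat is not in the stock Fedora repositories (it is carried by the RPM Fusion free repository), which is the likely reason the plain install fails:

# list the disks the kernel can see; the new drive shows up as a 58.2 GB device
fdisk -l

# this needs the RPM Fusion free repository enabled, otherwise dnf cannot find the package
dnf -y install fuse-exfat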

What I did after many failed attempts was to set up the partition using the GUI:

Then you can see the new format as Ext4:

It is OK to have a little free space left unformatted. Now it is time to write to the USB drive:

Now we can see the USB device in the list of devices 😀



Filed under: FEDORA, GNOME, τεχνολογια :: Technology Tagged: device, extfat, extfat mount, fedora, format, GNOME, Julita Inca, Julita Inca Chiroque, mnt, mount, USB, USB flash drive

Getting started with helm on OpenShift

Posted by Adam Young on May 24, 2017 05:20 PM

After sitting in on a Helm-based lab at the OpenStack summit, I decided I wanted to try it out for myself on my OpenShift cluster.

Since helm is not yet part of Fedora, I used the upstream binary distribution. Inside the tarball was, among other things, a standalone binary named helm, which I moved to ~/bin (which is in my path).
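For the record, fetching and unpacking it looks something like this (the version number and URL are illustrative assumptions; check the upstream releases page for the current tarball):

curl -LO https://storage.googleapis.com/kubernetes-helm/helm-v2.4.2-linux-amd64.tar.gz
tar xzf helm-v2.4.2-linux-amd64.tar.gz
mv linux-amd64/helm ~/bin/

Once I had that in place: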

$ helm init
Creating /home/ayoung/.helm 
Creating /home/ayoung/.helm/repository 
Creating /home/ayoung/.helm/repository/cache 
Creating /home/ayoung/.helm/repository/local 
Creating /home/ayoung/.helm/plugins 
Creating /home/ayoung/.helm/starters 
Creating /home/ayoung/.helm/repository/repositories.yaml 
$HELM_HOME has been configured at /home/ayoung/.helm.

Tiller (the helm server side component) has been installed into your Kubernetes Cluster.
Happy Helming!

Checking on that Tiller install:

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                             READY     STATUS    RESTARTS   AGE
default       docker-registry-2-z91cq          1/1       Running   0          23h
default       registry-console-1-g4qml         1/1       Running   0          1d
default       router-5-4w3zt                   1/1       Running   0          23h
kube-system   tiller-deploy-3210876050-8gx0w   1/1       Running   0          1m

But trying a helm command line operation fails.

$ helm list
Error: User "system:serviceaccount:kube-system:default" cannot list configmaps in project "kube-system"

This looks like an RBAC issue. I want to assign the role ‘admin’ to the user “system:serviceaccount:kube-system:tiller” on the project “kube-system”

$ oc project kube-system
Now using project "kube-system" on server "https://munchlax:8443".
[ansible@munchlax ~]$ oadm policy add-role-to-user admin system:serviceaccount:kube-system:tiller
role "admin" added: "system:serviceaccount:kube-system:tiller"
[ansible@munchlax ~]$ ./helm list
[ansible@munchlax ~]$

Now I can follow the steps outlined in the getting started guide:

[ansible@munchlax ~]$ ./helm create mychart
Creating mychart
[ansible@munchlax ~]$ rm -rf mychart/templates/
deployment.yaml  _helpers.tpl     ingress.yaml     NOTES.txt        service.yaml     
[ansible@munchlax ~]$ rm -rf mychart/templates/*.*
[ansible@munchlax ~]$ 
[ansible@munchlax ~]$ 
[ansible@munchlax ~]$ vi mychart/templates/configmap.yaml
[ansible@munchlax ~]$ ./helm install ./mychart
NAME:   esteemed-pike
LAST DEPLOYED: Wed May 24 11:46:52 2017
NAMESPACE: kube-system
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME               DATA  AGE
mychart-configmap  1     0s
[ansible@munchlax ~]$ ./helm get manifest esteemed-pike

---
# Source: mychart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mychart-configmap
data:
  myvalue: "Hello World"
[ansible@munchlax ~]$ ./helm delete esteemed-pike
release "esteemed-pike" deleted

Exploring OpenShift RBAC

Posted by Adam Young on May 24, 2017 03:27 PM

OK, since I did it wrong last time, I’m going to try creating a user in OpenShift, and grant that user permissions to do various things. 

I’m going to start by removing the ~/.kube directory on my laptop and perform operations via SSH on the master node.  From my last session I can see I still have:

$ oc get users
NAME      UID                                    FULL NAME   IDENTITIES
ayoung    cca08f74-3a53-11e7-9754-1c666d8b0614               allow_all:ayoung
$ oc get identities
NAME               IDP NAME    IDP USER NAME   USER NAME   USER UID
allow_all:ayoung   allow_all   ayoung          ayoung      cca08f74-3a53-11e7-9754-1c666d8b0614

What OpenShift calls projects (perhaps taking the lead from Keystone?), Kubernetes calls namespaces:

$ oc get projects
NAME               DISPLAY NAME   STATUS
default                           Active
kube-system                       Active
logging                           Active
management-infra                  Active
openshift                         Active
openshift-infra                   Active
[ansible@munchlax ~]$ kubectl get namespaces
NAME               STATUS    AGE
default            Active    18d
kube-system        Active    18d
logging            Active    7d
management-infra   Active    10d
openshift          Active    18d
openshift-infra    Active    18d

According to the documentation here, I should be able to log in from my laptop, and all of the configuration files just get magically set up.  Let's see what happens:

$ oc login
Server [https://localhost:8443]: https://munchlax:8443 
The server uses a certificate signed by an unknown authority.
You can bypass the certificate check, but any data you send to the server could be intercepted by others.
Use insecure connections? (y/n): y

Authentication required for https://munchlax:8443 (openshift)
Username: ayoung
Password: 
Login successful.

You don't have any projects. You can try to create a new project, by running

oc new-project <projectname>

Welcome! See 'oc help' to get started.

Just to make sure I sent something, I typed in the password “test”, but it could have been anything.  The config file now has this:

$ cat ~/.kube
.kube/ .kube.bak/ 
[ayoung@ayoung541 ~]$ cat ~/.kube/config 
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://munchlax:8443
  name: munchlax:8443
contexts:
- context:
    cluster: munchlax:8443
    user: ayoung/munchlax:8443
  name: /munchlax:8443/ayoung
current-context: /munchlax:8443/ayoung
kind: Config
preferences: {}
users:
- name: ayoung/munchlax:8443
  user:
    token: 4X2UAMEvy43sGgUXRAp5uU8KMyLyKiHupZg7IUp-M3Q

I’m going to resist the urge to look too closely into that token thing.
I’m going to work under the assumption that a user can be granted roles in several namespaces. Let's see:

 $ oc get namespaces
 Error from server (Forbidden): User "ayoung" cannot list all namespaces in the cluster

Not a surprise.  But the question I have now is “which namespace am I working with?”  Let me see if I can figure it out.

$ oc get pods
Error from server (Forbidden): User "ayoung" cannot list pods in project "default"

and via kubectl

$ kubectl get pods
Error from server (Forbidden): User "ayoung" cannot list pods in project "default"

What role do I need to be able to get pods?  Let's start by looking at the head node again:

[ansible@munchlax ~]$ oc get ClusterRoles | wc -l
64
[ansible@munchlax ~]$ oc get Roles | wc -l
No resources found.
0

This seems a bit strange. ClusterRoles are not limited to a namespace, whereas Roles are. Why am I not seeing any roles defined?

Let's start with figuring out who can list pods:

oadm policy who-can GET pods
Namespace: default
Verb:      GET
Resource:  pods

Users:  system:admin
        system:serviceaccount:default:deployer
        system:serviceaccount:default:router
        system:serviceaccount:management-infra:management-admin
        system:serviceaccount:openshift-infra:build-controller
        system:serviceaccount:openshift-infra:deployment-controller
        system:serviceaccount:openshift-infra:deploymentconfig-controller
        system:serviceaccount:openshift-infra:endpoint-controller
        system:serviceaccount:openshift-infra:namespace-controller
        system:serviceaccount:openshift-infra:pet-set-controller
        system:serviceaccount:openshift-infra:pv-binder-controller
        system:serviceaccount:openshift-infra:pv-recycler-controller
        system:serviceaccount:openshift-infra:statefulset-controller

Groups: system:cluster-admins
        system:cluster-readers
        system:masters
        system:nodes

And why is this? What roles are permitted to list pods?

$ oc get rolebindings
NAME                   ROLE                    USERS     GROUPS                           SERVICE ACCOUNTS     SUBJECTS
system:deployer        /system:deployer                                                   deployer, deployer   
system:image-builder   /system:image-builder                                              builder, builder     
system:image-puller    /system:image-puller              system:serviceaccounts:default                        

I don’t see anything that explains why admin would be able to list pods there. And the list is a bit thin.

Another page advises I try the command

oc describe  clusterPolicy

But the output of that is voluminous. With a little trial and error, I discovered I could do the same thing using the kubectl command and get the output in JSON, which lets me inspect it.
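The query I ended up with was along these lines (a sketch; the resource and object names are what my cluster happened to use and may differ on yours):

kubectl get clusterpolicy default -o json

Here is a fragment of the output.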

         "roles": [
                {
                    "name": "admin",
                    "role": {
                        "metadata": {
                            "creationTimestamp": "2017-05-05T02:24:17Z",
                            "name": "admin",
                            "resourceVersion": "24",
                            "uid": "f063233e-3139-11e7-8169-1c666d8b0614"
                        },
                        "rules": [
                            {
                                "apiGroups": [
                                    ""
                                ],
                                "attributeRestrictions": null,
                                "resources": [
                                    "pods",
                                    "pods/attach",
                                    "pods/exec",
                                    "pods/portforward",
                                    "pods/proxy"
                                ],
                                "verbs": [
                                    "create",
                                    "delete",
                                    "deletecollection",
                                    "get",
                                    "list",
                                    "patch",
                                    "update",
                                    "watch"
                                ]
                            },

There are many more rules, but this one shows what I want: there is a policy role named "admin" that has a rule providing access to pods via the list verb, among others.

Let's see if I can make my ayoung account into a cluster-reader by adding the role to the user directly.

On the master

$ oadm policy add-role-to-user cluster-reader ayoung
role "cluster-reader" added: "ayoung"

On my laptop

$ kubectl get pods
NAME                       READY     STATUS    RESTARTS   AGE
docker-registry-2-z91cq    1/1       Running   3          8d
registry-console-1-g4qml   1/1       Running   3          8d
router-5-4w3zt             1/1       Running   3          8d

Back on master, we see that:

$  oadm policy who-can list pods
Namespace: default
Verb:      list
Resource:  pods

Users:  ayoung
        system:admin
        system:serviceaccount:default:deployer
        system:serviceaccount:default:router
        system:serviceaccount:management-infra:management-admin
        system:serviceaccount:openshift-infra:build-controller
        system:serviceaccount:openshift-infra:daemonset-controller
        system:serviceaccount:openshift-infra:deployment-controller
        system:serviceaccount:openshift-infra:deploymentconfig-controller
        system:serviceaccount:openshift-infra:endpoint-controller
        system:serviceaccount:openshift-infra:gc-controller
        system:serviceaccount:openshift-infra:hpa-controller
        system:serviceaccount:openshift-infra:job-controller
        system:serviceaccount:openshift-infra:namespace-controller
        system:serviceaccount:openshift-infra:pet-set-controller
        system:serviceaccount:openshift-infra:pv-attach-detach-controller
        system:serviceaccount:openshift-infra:pv-binder-controller
        system:serviceaccount:openshift-infra:pv-recycler-controller
        system:serviceaccount:openshift-infra:replicaset-controller
        system:serviceaccount:openshift-infra:replication-controller
        system:serviceaccount:openshift-infra:statefulset-controller

Groups: system:cluster-admins
        system:cluster-readers
        system:masters
        system:nodes

And now to remove the role:
On the master

$ oadm policy remove-role-from-user cluster-reader ayoung
role "cluster-reader" removed: "ayoung"

On my laptop

$ kubectl get pods
Error from server (Forbidden): User "ayoung" cannot list pods in project "default"

Modularity update – sprint 30

Posted by Adam Samalik on May 24, 2017 02:07 PM

The Fedora Modularity team already publishes sprint reports on the Modularity YouTube channel every two weeks. But this format might not always be suitable – for example, when watching on a phone with limited data. So I would like to start writing short reports about Modularity every two weeks, so people have more choice in how to stay updated.

What we did

  • We have the final list of modules we are shipping in F26 Boltron. The list shows Python 2 and Python 3 as not included, which is not entirely true. Even though we won’t be shipping them as separate modules due to various packaging reasons, they will be included in Boltron as part of the Base Runtime and shared-userspace.
  • One of them is shared-userspace, which is a huge module that contains common runtime and build dependencies with proven ABI stability over time. Lesson learned: building huge modules is hard. We might want to create smaller ones and join them together as a module stack.
  • To demonstrate multiple streams we will include NodeJS 6 as part of Boltron, and NodeJS 8 in Copr – built by its maintainer.
  • The DNF team has implemented a fully functional DNF version that supports modules.
  • We have changed the way we do demos on YouTube. Instead of posting a demo every two weeks of work per person, we will do a sprint review + user-focused demos as we go. I will also do my best with writing these posts. :-)

What’s next?

Modules:

  • clean up and make sure they deliver the use cases we promised
  • the same for containers if time allows

Documentation and community:

  • issue tracker for each module
  • revisiting documentation
  • revisiting how-to guides

Demos:

  • we would love to make a demo based on a working compose (if we get a qcow)

Also, I’m going to Zagreb to give a Modularity talk at DORS/CLUC next week. Come if you’re nearby! 😉

Those who helped turning the Higgs boson from theory to reality

Posted by Peter Czanik on May 24, 2017 08:30 AM

One of the most important discoveries of this decade was the Higgs boson. But researchers at High Energy Physics and Nuclear Physics laboratories and institutes would have been unable to find the Higgs boson without the IT staff maintaining the computer infrastructure that collects and analyzes the massive amount of data generated during their experiments. HEPiX is a community that brings these IT people together twice a year from around the world. This spring their event was hosted by the Wigner Research Centre for Physics in Budapest, which also plays a central role in CERN's IT infrastructure.

I was invited to HEPiX by Fabien Wernli, who works at CCIN2P3 in France, monitoring thousands of computers using syslog-ng. The syslog-ng application is developed here in Budapest, the city of the spring HEPiX workshop. Leaving the academic world behind over a decade ago, I really enjoyed talking to and listening to IT professionals working at academic institutions.

 

The CERN IT infrastructure

While not all HEPiX members work on data originating from CERN and the Large Hadron Collider (LHC), the heart of HEPiX seems to be CERN and the software tools used or developed there. Sites working on CERN data are organized into a tiered structure. All data from experiments are collected, stored and processed at CERN as the Tier-0 site. Different parts of data are forwarded to Tier-1 data centers, where they are processed further. And just like parts of a pyramid, Tier-2 and Tier-3 sites download data from here and do the actual analysis of data.

As I mentioned, the Wigner Research Centre for Physics in Budapest now plays a special role in the life of CERN: since 2012 the Wigner Data Center has hosted an extension of the Tier-0 data center of CERN. This is possible due to advances in networking: CERN and the Wigner DC are connected by three independent 100Gbit lines. In other words, this network can forward the content of almost ten DVD disks a second.

 

The conference

Maintaining this infrastructure requires an enormous amount of resources and work. It needs to be available around the clock, and be fast and efficient, while changing only gradually. The topics of the conference covered how these often contradictory requirements can be met.

The opening day of the HEPiX spring workshop focused on site reports describing new hardware and services as well as some of the research at the sites since the last meeting. The rest of the week covered topics related to large scale computing: storage, networking, virtualization. My favorite topics at the conference were security and basic IT services, as these were related to my field of interest: logging.

Logging came up in a number of talks. There are many Elasticsearch instances around at CERN and elsewhere. At CERN, these were consolidated recently under central management, and we learned how many of the problems were resolved by introducing access control and regular maintenance. We also received a quick introduction to how collaboration between sites and infrastructures on security works, via a Security Operations Center. Last but not least, I gave an introductory talk about syslog-ng, and Fabien Wernli presented how they use syslog-ng to monitor tens of thousands of machines at CCIN2P3, a Tier-1 site in France. During the conference I had a chance to talk to him as well.

 

Fabien Wernli and syslog-ng

We learned at HEPiX that CCIN2P3 provides important services to CERN as a Tier-1 site. What else is it working on?

We are a computing facility inside the IN2P3. The IN2P3 is one of the institutes of the French National Center for Scientific Research (French: Centre national de la recherche scientifique, CNRS). It groups all the scientists and staff who work on nuclear physics and particle physics. Our facility provides computing resources for all these labs. We work with a lot of different scientists, so we need computing power, storage and network. Over 85% of our resources are used by the LHC, because its experiments are so huge that they need a lot of data processing power. There are many smaller experiments as well. One that is currently growing, and will generate a lot of data, is LSST, the Large Synoptic Survey Telescope. It will take a picture of the whole sky every night, generating 150TB of data each time. That is not as much as the LHC, but quite a lot. Our facility will be one of the main tiers for this experiment, as it is for the LHC.

 

I see, you have a PhD in Astrophysics. Why did you become a Linux administrator?

When you have a PhD you do not become an expert in anything other than learning how to learn things. Astrophysics is something I was interested in for a long time, and the other thing I was interested in is computing. I have been a computer freak since I was a kid, and this path was more promising for a career. It was also easier to find a job without having to travel the whole planet all the time. When you have a family, you want to stay somewhere. I love computing and it was a good opportunity. When I worked at the observatory in Lyon, where I did my PhD, I also did a lot of Linux administration. There were only one or two people there doing Linux administration, and they did not administer the desktops. We were on our own, so I improved my Linux skills a lot.

 

And with this new LSST research you can be back at least partially to astrophysics.

That is the good thing about IN2P3, or CCIN2P3: we do our job for science, not to make money or any financial profit. I prefer that to industry, where you ultimately have to make money.

 

What are you doing at CCIN2P3?

My main function is system administration. Together with my colleagues we are ten admins, and my specialty is monitoring. All things monitoring: metrics, logs, analysis or anything related.

 

How did you first meet with syslog-ng? Why did you decide to use it?

When I arrived at CCIN2P3 there was already a central syslog server, and it was syslog-ng. A very old version, I think 2 or something. When I had to architect a new system to replace it, I looked around and syslog-ng looked the most promising, mainly due to three facts. The first one was documentation, which was great compared to competitors. It was in depth and versioned: I could look up documentation even for an old version. And the configuration examples you copied and pasted actually worked. The second is that it is portable. At that time we had Solaris, AIX and Linux, and it would compile or was available as a package almost everywhere. And the community was the third reason I chose it. The community is very friendly. There were people on IRC at that time, and the mailing list is helpful, a very good resource as well.

 

You have made many contributions to syslog-ng. Which are you most proud of?

Maybe I have made many, but those are small ones. The one I am probably the most proud of is the last one, the HTTPS destination to Elasticsearch. And maybe the many issues I opened. And I am even more proud that the issues I opened are actually addressed. So my convincing power seems to be OK 🙂

 

The post Those who helped turning the Higgs boson from theory to reality appeared first on Balabit Blog.

Rootconf/Devconf 2017

Posted by Ratnadeep Debnath on May 24, 2017 06:45 AM

This year's Rootconf was special, as it also hosted Devconf for the first time in India. The conference took place at the MLR Convention Centre, JP Nagar, Bangalore on 11-12 May, 2017. The event had 2 parallel tracks running: one for Rootconf and the other for Devconf. Rootconf, like other Hasgeek events, is a place where you get to see friends and make new ones, learn about what they are up to, and share what you have been working on.

There was a great line up of talks and workshops in this year's Rootconf/Devconf. Some of the talks that I found interesting were:

  • State of the open source monitoring landscape by Bernd Erk from Icinga
  • Deployment strategies with Kubernetes by Aditya Patawari
  • Pooja Shah speaking on their bot at Moengage to automate their CI/CD workflow
  • Running production APIs on spot instances by S Aruna
  • FreeBSD is not a Linux distribution by Philip Paeps
  • Automate your devops life with Openshift Pipelines by Vaclav Pavlin
  • Fabric8: an end-to-end development platform by Baiju
  • Making Kubernetes simple for developers using Kompose by Suraj Deshmukh
  • Workshop on Ansible by Praveen Kumar and Shubham Minglani
  • Deep dive into SELinux by Rejy M Cyriac

As one of the contributors to the CentOS Community Container Pipeline, I gave a talk about how the pipeline lets you build, test, and deliver the latest and safest container images, effortlessly. You can find the slides/demo for the talk here. The talk was well received, and people were interested in our project and wanted to use it. A huge shout out to the container pipeline team for making this project happen. I will share some of the questions asked about the pipeline, along with their answers:

  • Can I use container pipeline to deploy my applications to production?

    The answer is that it depends on your use case. Nevertheless, you can use the images, e.g., redis, postgresql, mariadb, etc. from the container pipeline, from registry.centos.org, and deploy them in production. If your application is Open Source, you can also build a container image for your application on the pipeline and consume the image in production. However, you should be ready to expect some delay for your project’s new container image to be delivered, as the container pipeline is also used by other projects. If you want your containerized application to be deployed to production ASAP, you might consider setting up the container pipeline on premises, or use something like OpenShift Pipelines.

  • How can I deploy container pipeline on premise?

    We deploy container pipeline in production using Ansible, and you can do that as well. To start, you can look into the provisions/ directory of our repository https://github.com/centos/container-pipeline-service

  • Can we use scanners other than container-pipeline or integrate them with other workflow?

    We can use the scanners by pulling them from registry.centos.org and calling them in any workflow to do the scanning piece.

  • What if the updated versions of rpms break my container image?

    In the current scenario we update the images if there is any change in a dependency image or an rpm update. But in the future there will be an option to disable automatic image rebuilds on updates. However, we'll notify the image maintainer about such updates, so that the maintainer can decide whether to rebuild the image or not.

  • Can we put images with a non-CentOS base image in the pipeline?

    For now, you can, but we do not encourage it, as you will be missing out on many of the valuable features of the container pipeline, e.g., package verify scanners, automatic image rebuilds on package updates, etc.

I also had a conversation with the DigitalOcean folks, where we discussed doing a blog post about the CentOS container pipeline on their blog. We also had Zeeshan and Bamacharan from our team answering queries about the pipeline at the Red Hat booth at Rootconf.

To sum up, it was a great conference, especially in terms of showcasing many of our projects from Red Hat: Fabric8, OpenShift Pipelines, the CentOS Container Pipeline, etc., and getting feedback from the community. We'll continue to reach out to the community and get them involved, so that we can develop great solutions for them.

Fedora Join meeting - 22 May 2017 - Summary and updates

Posted by Ankur Sinha "FranciscoD" on May 23, 2017 05:41 PM

We logged on to #fedora-meeting-3 for another constructive Fedora Join SIG meeting yesterday. There's quite a bit of work to be done, and quite a few ideas. These include classroom sessions, mentoring, and so on. The common theme here is to enable new contributors to pick up the required technical skills quicker, and in the process, integrate with the community faster too.

On this week's agenda were:

  • An update on the resurrection of the IRC classroom programme
  • Reviewing video platforms for Fedora classroom v2

Here's a wiki page that explains how one can use IRC.

An update on the resurrection of the IRC classroom programme

While work goes on to set up a brand new classroom programme, which we refer to as v2, we decided we could get the ball rolling with the classic IRC programme that was active a year or two ago. The advantage here is that all the infrastructure is already in place - just the one IRC channel - and since many IRC classroom sessions have happened in the past already, this is a time-tested system. All it needs is instructors, students, and a few community members to help with the admin bits.

Various community members have already volunteered to instruct sessions, so we already have a timeline set up. We intend to begin a few weeks after the Fedora 26 release, so that the community isn't distracted from the release, and the classroom can ride on the release-related marketing instead. The classes we have set up are:

  • FOSS 101
  • Fedora Magazine 101
  • Command line 101
  • VIM 101
  • Emacs 101
  • Fedora QA 101
  • Git 101
  • Fedora packaging 101

You'll notice we've gone from individual tools to tasks that require one or more of these. I've omitted the dates here because they are yet to be decided. There'll be a class a week, and this is planned to start in the week of 24th July (for the moment).

We're looking for more sessions, instructors, and helpers

The hard bit here isn't restarting the programme; it is maintaining it. So we need more sessions, more instructors from the community, and, as numbers increase, more volunteers to help with related tasks.

  • Have an idea? Get in touch!
  • Want to teach? Get in touch!
  • Have a friend that wants to teach? Get in touch!
  • Have some time to write related posts for the Fedora Magazine? Get in touch!
  • Have some time to write related posts for the Community Blog? Get in touch!
  • Have some time to help co-ordinate sessions? Get in touch!

You can either ping us in #fedora-classroom/#fedora-join on IRC, or drop an e-mail to the Fedora classroom mailing list.

Note that while we have IRC set up, you can use another platform too. For instance, if you have access to BlueJeans (a video conferencing platform), you are more than welcome to use it to teach a session.

I'm actively looking for more instructors, so keep an eye out for a ping ;)

Reviewing video platforms for Fedora classroom v2

The largest chunk of work for the v2 initiative is finding suitable software. The primary software requirement here is a good video platform. We've had a few suggestions already, so we thought we could review them to see what they can do:

There are certain requirements that we've listed for now:

  • How many people can a video conference hold?
  • What other features does it have? Screen sharing, for example?
  • Is it a free service or a paid one? (We'd prefer something free of cost)
  • Is it FOSS or not? (We'd prefer FOSS)
  • What is the required setup? Can one deploy a server and how? (For instance, on Fedora Infrastructure?)
  • How do users connect/log in? (OpenID would be great, since FAS OpenID could be used)
  • Can the sessions be recorded?
  • How will participants interact amongst themselves and the instructor?
  • Is there an admin mode?
  • Can it setup/allow for meeting alerts like RSS feed or similar?

Each of us will use the respective platform and write up a blog post that will turn up on the planet.

That was it, pretty much. Come say "hi!" in #fedora-join or the mailing list!

Fedora was at PyCon SK 2017

Posted by Miro Hrončok on May 23, 2017 05:11 PM

On the second weekend of March 2017, Fedora had a booth at PyCon SK, a community-organized conference for the Python programming language held in Bratislava, Slovakia. The event happened for the second time this year, and it happened with Fedora again.

PyCon SK 2017 lasted 3 days. On the first day most of the talks were in Slovak (or Czech), and Michal Cyprian presented the problems that may arise when users use sudo pip and how we want to solve those problems in Fedora by making sudo pip safe again. During the lightning talks section, I presented Elsa, a tool that helps to create static web pages using Flask. Elsa powers the Fedora Loves Python website.


Michal Cyprian presenting. Photo by Ondrej Dráb, CC BY-SA

The next day was mostly in English. Two other Fedora contributors, Jona Azizaj and Petr Viktorin, had their talks. Jona presented on building Python communities and empowering women. Petr’s talk was about the balance of Python (constraints and conventions versus the freedom to do whatever you want) and its impact on the language and the community. Petr also metacoached the Django Girls workshop on Sunday.

But Fedora’s presence was not just through people. Fedora had a booth filled with swag. We gave out all our remaining Fedora Loves Python stickers, plenty of Fedora 25 DVDs, pins, stickers, pens, buttons… We had a couple of Proud Fedora User t-shirts available, and plenty of Fedora users asked for them, so we decided to come up with a quiz about Fedora and a raffle to decide who gets them.

Fedora Swag



Fedora booth at PyCon SK 2017. Photo by Ondrej Dráb, CC BY-SA

A lot of the visitors were already familiar with Fedora, or even Fedora users, this year, which was quite different in comparison with the previous year, when a lot of people were actually asking what Fedora is. <joke>Maybe because we already explained it a year ago, now every visitor already uses Fedora?</joke>

See you next year Bratislava!

Featured Image Photo by Ondrej Dráb, CC BY-SA

The post Fedora was at PyCon SK 2017 appeared first on Fedora Community Blog.

The tool Noodl for design and web development.

Posted by mythcat on May 23, 2017 12:21 PM
This tool will help you understand something about data structuring, node building, web development and design.
This application comes with interactive lessons and documentation.
Note: I tested some of the lessons and they are not very easy. For example, some links between the nodes do not appear with all their labels unless they are made in the reverse direction; in that case, on the work surface the links are no longer one-way (with the arrow) but only point-to-point between the nodes.
It can be downloaded here for the following operating systems:
  • Version 1.2.3 (MacOS)
  • Version 1.2.3 (Win x64 Installer)
  • Version 1.2.3 (Linux x86 64)
Let's see the default interface of Noodl application.

Take part in the test day dedicated to internationalization

Posted by Charles-Antoine Couret on May 23, 2017 07:03 AM

Today, Tuesday 23 May, is a day dedicated to a specific kind of testing: Fedora's internationalization. During the development cycle, the quality assurance team dedicates a few days to certain components or new features in order to uncover as many problems as possible on the subject.

It also provides a list of specific tests to run. You simply follow them, compare your result with the expected result, and report it.

What does this test consist of?

As with every Fedora release, updating its tools often brings new strings to translate and new tools related to language support (particularly for Asian languages).

To encourage the use of Fedora in every country of the world, it is best to make sure that everything related to Fedora's internationalization is tested and works. This matters in particular because part of it must already work on the installation live CD (that is, without updates).

Today's tests cover:

  • That ibus works correctly for handling keyboard input;
  • Customizing fonts;
  • Automatic installation of the language packs for installed software, following the system language;
  • That applications are translated by default;
  • That the fontconfig cache has indeed changed directory (a Fedora 26 change);
  • Testing libpinyin 2.0 for fast input of Chinese Pinyin (a Fedora 26 change).

Of course, given these criteria, unless you know a Chinese language not all of the tests can necessarily be carried out. But as French speakers, many of these issues concern us, and reporting problems is important: it is not the other language communities who will identify integration problems with the French language.

How can you participate?

You can go to the test day page to list the available tests and report your results. The wiki page summarizes how the day is organized.

If you need help while running the tests, don't hesitate to drop by IRC for a hand on the #fedora-test-days and #fedora-fr channels (in English and French respectively) on the Freenode server.

If you hit a bug, it needs to be reported on BugZilla. If you don't know how, don't hesitate to consult the corresponding documentation.

Moreover, even though a specific day is dedicated to these tests, it is perfectly possible to run them a few days later! The results will still broadly hold.

Improved high DPI display support in the pipeline

Posted by Fedora Magazine on May 23, 2017 05:27 AM

Support for high DPI monitors has been included in Fedora Workstation for some time now. If you use a monitor with a high enough DPI, Fedora Workstation automatically scales all the elements of the desktop to a 2:1 ratio, and everything displays crisply and not too small. However, there are a couple of caveats with the current support. The scaling can currently only be either 1:1 or 2:1; there is no way to have fractional ratios. Additionally, the DPI scaling applies to all displays attached to your machine. So if you have a laptop with a high DPI display and an external monitor with a lower DPI, the scaling can get a little odd. Depending on your setup, one of the displays will render either super-small or super-large.
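For the curious, the all-or-nothing integer scaling can be inspected, or forced by hand, through GNOME's scaling factor setting; the schema key below is the one used at the time of writing, though it may change as the new work lands:

gsettings get org.gnome.desktop.interface scaling-factor
gsettings set org.gnome.desktop.interface scaling-factor 2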

A mockup of how running the same scaling ratio on a low DPI and high DPI monitor might look. The monitor on the right is a 24inch desktop monitor with over sized window decorations.


Both of these limitations have technical reasons, such as how to deal with fractions of pixels when scaling by something other than 2. However, in a recent blog post, developer Matthias Clasen talks about how the technical issues in the underlying system have been addressed. To introduce mixed-DPI settings, the upstream developers have added per-monitor framebuffers, updated the monitor configuration API, and added support for mixed DPIs to the Display Panel. Work is also underway upstream to tackle the fractional scaling issue. For further technical details, be sure to read the post by Matthias. All this awesome work by the upstream developers means that in a Fedora release in the not too distant future, high DPI support will be much, much better.

PHP Tour - Nantes 2017

Posted by Remi Collet on May 23, 2017 04:55 AM

Back from  PHP Tour 2017 in Nantes

As for every AFUP event, organization was perfect, and I was able to meet a lot of developers and PHP users.

This year, my talk was "About PHP Quality", covering:

  • versions and release cycle
  • security management
  • PHP 7.2 roadmap
  • PHP QA and Fedora QA

I hope the attendees will remember how much stability is a priority for the project and why tests are so important, as well as the value of getting involved in testing their projects with Release Candidates of stable versions (7.0.x, 7.1.x) and with Betas of future versions (7.2, 7.3, 8.0...).

You can read the slides: Nantes2017.pdf

Comments on joind.in

Indeed, as stated by Eric, the lack of time (only a 35-minute talk) didn't allow me to expand enough on some QA actions; e.g. I could have shown some examples from Koschei (Fedora QA for PHP).

Soon: PHP Forum 2017 (Paris)

Fixing Bug 968696

Posted by Adam Young on May 23, 2017 03:47 AM

Bug 968696

The word Admin is used all over the place. To administer was originally something servants did to their masters. In one of the greater inversions of linguistic history, we now use Admin as a way to indicate authority. In OpenStack, the admin role is used for almost all operations that are reserved for someone with a higher level of authority. These actions are not expected to be performed by people with the plebeian Member role.


Global versus Scoped

We have some objects that are global, and some that are scoped to projects. Global objects are typically things used to run the cloud, such as the set of hypervisor machines that Nova knows about. Everyday members are not allowed to "Enable Scheduling For A Compute Service" via the HTTP call PUT /os-services/enable.
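As a rough sketch of that call (the endpoint prefix, host name, and request body here are illustrative assumptions, not a transcript from a real cloud):

curl -X PUT https://nova.example.com:8774/v2.1/os-services/enable \
     -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \
     -d '{"host": "compute1", "binary": "nova-compute"}'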

Keystone does not have a way to do global roles. All roles are scoped to a project. This by itself is not a problem. The problem is that a resource like a hypervisor does not have a project associated with it. If keystone can only hand out tokens scoped to projects, there is still no way to match the scoped token to the unscoped resource.

So, what Nova and many other services do is just look for the Role. And thus our bug. How do we go about fixing this?

Use cases

Let me see if I can show this.

In our initial state, we have two users.  Annie is the cloud admin, responsible for maintaining the overall infrastructure, with tasks such as "Enable Scheduling For A Compute Service".  Pablo is a project manager. As such, he has to do admin-level things, but only within his project, such as setting the metadata used for servers inside that project.  Both operations are currently protected by the "admin" role.

Role Assignments

Let's look at the role assignment object diagram.  For this discussion, we are going to assume everything is inside a domain called "Default", which I will leave out of the diagrams to simplify them.

In both cases, our users are explicitly assigned roles on a project: Annie has the Admin role on the Infra project, and Pablo has the Admin role on the Devel project.

Policy

The API call to Add Hypervisor only checks the role on the token, and enforces that it must be "Admin".  Thus, both Pablo's and Annie's scoped tokens will pass the policy check for the Add Hypervisor call.

How do we fix this?

Scope everything

Let's assume, for the moment, that we were able to instantly run a migration that added a project_id to every database table that holds a resource, and to every API that manages those resources.  What would we use to populate that project_id?  What value would we give it?

Let's say we add an admin project value to Keystone.  When a new admin-level resource is created, it gets assigned to this admin project.  All of the resources we already have should get this value, too. How would we communicate this project ID?  We don't have a Keystone instance available when running the Nova database migrations.

It turns out Nova does not need to know the actual project_id.  Nova just needs to know that Keystone considers the token valid for global resources.

Admin Projects

We’ve added a couple of values to the Keystone configuration file: admin_domain_name and admin_project_name.  These two values are how Keystone specifies which project represents the admin project.  When these two values are set, all token validation responses contain a value for is_admin_project.  If the project requested matches the domain and project name, that value is True; otherwise it is False.

is_admin_project

Instead, we want the create_cell call to use a different rule.  Instead of the scope check performed by admin_or_owner, it should confirm the admin role, as it did before, and also confirm that the token has the is_admin_project flag set.
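In policy-file terms, such a rule would look roughly like this (the rule name and the API entry are illustrative assumptions, not the exact lines any service ships):

"context_is_admin": "role:admin and is_admin_project:True",
"os_compute_api:create_cell": "rule:context_is_admin"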

Transition

Keystone already has support for setting is_admin_project, but none of the remote services are honoring it yet. Why?  In part because, in order for it to make sense for one to do so, they all must do so.  But also because we cannot predict what project would be the admin project.

If we select a project based on name (e.g. Admin) we might be selecting a project that does not exist.

If we force that project to exist, we still do not know which users to assign to it.  We would have effectively broken their cloud, as no users could execute global admin-level tasks.

In the long run, the trick is to provide a transition plan for when the configuration options are unset.

The Hack

If no admin project is set, then every project is an admin project.  This is enforced by oslo-context, which is used in policy enforcement.

Yeah, that seems surprising, but it turns out that we have just codified what every deployment already has.  Look at the bug description again:

Problem: Granting a user an “admin” role on ANY tenant grants them unlimited “admin”-ness throughout the system because there is no differentiation between a scoped “admin”-ness and a global “admin”-ness.

Adding in the field is a necessary precursor to solving it, but the real problem is in the enforcement in Nova, Glance, and Cinder.  Until they enforce on the flag, the bug still exists.

Fixing things

There is a phased plan to fix things.

  1. enable the is_admin_project mechanism in Keystone but leave it disabled.
  2. Add is_admin_project enforcement in the policy file for all of the services
  3. Enable an actual admin_project in devstack and Tempest
  4. After a few releases, when we are sure that people are using admin_project, remove the hack from oslo-context.

This plan was discussed and agreed upon by the policy team within Keystone, and vetted by several of the developers in the other projects, but it seems it was never fully disseminated, and thus the patches have sat in a barely reviewed state for a long while…over half a year.  Meanwhile, the developers focused on this have shifted tasks.

Now’s The Time

We’ve got a renewed effort, and some new, energetic developers committed to making this happen.  The changes have been rewritten with advice from earlier code reviews and resubmitted.  This bug has been around for a long time: Bug #968696 was reported by Gabriel Hurley on 2012-03-29.  It has been a hard task to come up with and execute a plan to solve it.  If you are a core project reviewer, please look for the reviews for your project, or, even better, talk with us on IRC (Freenode #openstack-keystone) and help us figure out how to best adjust the default policy for your service. 

 

xinput list shows a "xwayland-pointer" device but not my real devices and what to do about it

Posted by Peter Hutterer on May 23, 2017 12:56 AM

TLDR: If you see devices like "xwayland-pointer" show up in your xinput list output, then you are running under a Wayland compositor and debugging/configuration with xinput will not work.

For many years, the xinput tool has been a useful tool to debug configuration issues (it's not a configuration UI btw). It works by listing the various devices detected by the X server. So a typical output from xinput list under X could look like this:


:: whot@jelly:~> xinput list
⎡ Virtual core pointer id=2 [master pointer (3)]
⎜ ↳ Virtual core XTEST pointer id=4 [slave pointer (2)]
⎜ ↳ SynPS/2 Synaptics TouchPad id=22 [slave pointer (2)]
⎜ ↳ TPPS/2 IBM TrackPoint id=23 [slave pointer (2)]
⎜ ↳ ELAN Touchscreen id=20 [slave pointer (2)]
⎣ Virtual core keyboard id=3 [master keyboard (2)]
↳ Virtual core XTEST keyboard id=5 [slave keyboard (3)]
↳ Power Button id=6 [slave keyboard (3)]
↳ Video Bus id=7 [slave keyboard (3)]
↳ Lid Switch id=8 [slave keyboard (3)]
↳ Sleep Button id=9 [slave keyboard (3)]
↳ ThinkPad Extra Buttons id=24 [slave keyboard (3)]

Alas, xinput is scheduled to go the way of the dodo. More and more systems are running a Wayland session instead of an X session, and xinput just doesn't work there. Here's an example output from xinput list under a Wayland session:

$ xinput list
⎡ Virtual core pointer id=2 [master pointer (3)]
⎜ ↳ Virtual core XTEST pointer id=4 [slave pointer (2)]
⎜ ↳ xwayland-pointer:13 id=6 [slave pointer (2)]
⎜ ↳ xwayland-relative-pointer:13 id=7 [slave pointer (2)]
⎣ Virtual core keyboard id=3 [master keyboard (2)]
↳ Virtual core XTEST keyboard id=5 [slave keyboard (3)]
↳ xwayland-keyboard:13 id=8 [slave keyboard (3)]

As you can see, none of the physical devices are available; the only ones visible are the virtual devices created by XWayland. On a Wayland session, the X server doesn't have access to the physical devices. Instead, it talks via the Wayland protocol to the compositor. This image from the Wayland documentation shows the architecture:

In the above graphic, devices are known to the Wayland compositor (1), but not to the X server. The Wayland protocol doesn't expose physical devices; it merely provides a 'pointer' device, a 'keyboard' device and, where available, touch and tablet tool/pad devices (2). XWayland wraps these into virtual devices and provides them via the X protocol (3), but they don't represent the physical devices.

This usually doesn't matter, but when it comes to debugging or configuring devices with xinput we run into a few issues. First, configuration via xinput usually means changing driver-specific properties but in the XWayland case there is no driver involved - it's all handled by libinput inside the compositor. Second, debugging via xinput only shows what the wayland protocol sends to XWayland and what XWayland then passes on to the client. For low-level issues with devices, this is all but useless.

The takeaway here is that if you see devices like "xwayland-pointer" show up in your xinput list output, then you are running under a Wayland compositor and debugging with xinput will not work. If you're trying to configure a device, use the compositor's configuration system (e.g. gsettings). If you are debugging a device, use libinput-debug-events. Or compare the behaviour between the Wayland session and the X session to narrow down where the failure point is.
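For example, the low-level view of a device under Wayland looks something like this (run as root so the tool can open the event devices; on later libinput releases the same functionality is available as a subcommand of the libinput tool):

sudo libinput-debug-events

This prints a line per device and per event as you use the hardware, which is usually enough to tell whether libinput sees the device at all.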

The importance of reproducible bug reports

Posted by Till Maas on May 22, 2017 09:45 PM

A few days ago I reported a bug to the Fedora Infrastructure team because I noticed that the EFF Privacy Badger and uBlock Origin reported that they blocked external JavaScript code from the Google Tag Manager when I logged into a Fedora web application. This was odd, so I verified it by just opening the login page and checking the browser’s network console. There I could clearly see the request. Assuming that the situation was clear, I reported the bug and Patrick soon responded to it. However, he was unable to reproduce the problem. I checked as well and could no longer see the problem either. This was strange, because there was no obvious explanation why I had seen the request earlier. The big difference was that I had used a different system when I initially found the bug compared to when I tried to reproduce the issue.

So I went to the system on which I had initially found the issue and checked whether I could reproduce the problem. It reappeared. Now I got a bad feeling. I feared that my system was somehow compromised, given that a strange JavaScript was injected into web sites I visit that I could not see on other systems. The JavaScript requested URLs with the parameter GTM-KHM7SWW. Google finds that value on strange Asian webpages, and this did not help me calm down. Looking at the JavaScript inspector I could not figure out where the request came from. The source seemed to be VM638 instead of an actual script file. Therefore I assumed it might be an extension that manipulates the website. Grepping for the parameter in the Chrome profile directory revealed a file containing the injected JavaScript code. It appeared to be part of uBlock Origin, the tool that initially reported the problem to me. To figure out what was going on I tried to find the code in the official Git repository. But I could not find it. The next step was to set up a similar browser with uBlock Origin on a different system, but then I could not find the parameter anymore. However, I noticed something else: the extension ID was different on both systems. After looking at the Chrome store the problem became obvious: I had installed uBlock Adblock Plus instead of uBlock Origin. According to the authors' description, it is a fork of uBlock Origin and Adblock Pro. However, there does not seem to be a proper project page with source code. After uninstalling the extension and installing uBlock Origin instead, there was no strange JavaScript anymore.

But I still wanted to figure out what had happened there. Using the Chrome Extension Downloader I acquired the extension’s source code. Unfortunately it was in a binary format ("data" according to the file utility), but unzip was able to extract it; it only complained about some extra data. There is also the CRX extractor that converts .crx files to .zip files, but I do not know what extra magic it does.
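That extraction step looked roughly like this (the file name is whatever your downloader produced; mine is shown here as a placeholder):

unzip ublock-adblock-plus.crx -d extension-src/
# unzip warns about extra bytes at the start of the archive (the CRX header),
# but extracts the contents fine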

Comparing the contents with the actual uBlock Origin source revealed that they based their extension on a release from 3 March 2017. Besides adding some files, they also made these changes:

--- ../../scm/opensource/gh-gorhill-uBlock/src/js/contentscript.js 2017-05-16 23:06:13.574374977 +0200
+++ js/contentscript.js 2017-04-07 05:22:48.000000000 +0200
@@ -382,6 +382,7 @@
this.xpe = document.createExpression(task[1], null);
this.xpr = null;
};
+
PSelectorXpathTask.prototype.exec = function(input) {
var output = [], j, node;
for ( var i = 0, n = input.length; i < n; i++ ) {
@@ -846,6 +847,12 @@
// won't be cleaned right after browser launch.
if ( document.readyState !== 'loading' ) {
(new vAPI.SafeAnimationFrame(vAPI.domIsLoaded)).start();
+ var PSelectorGtm = document.createElement('script');
+ PSelectorGtm.title = 'PSelectorGtm';
+ PSelectorGtm.id = 'PSelectorGtm';
+ PSelectorGtm.text = "var dataLayer=dataLayer || [];\n(function(w,d,s,l,i,h){if(h=='tagmanager.google.com'){return}w[l]=w[l]||[];w[l].push({'gtm.start':new Date().getTime(),event:'gtm.js'});var f=d.getElementsByTagName(s)[0],j=d.createElement(s),dl=l!='dataLayer'?'&l='+l:'';j.async=true;j.src='//www.googletagmanager.com/gtm.js?id='+i+dl;f.parentNode.insertBefore(j,f);})(window,document,'script','dataLayer','GTM-KHM7SWW',window.location.hostname);";
+ document.body.appendChild(PSelectorGtm);
+
} else {
document.addEventListener('DOMContentLoaded', vAPI.domIsLoaded);
}
Only in js: is-webrtc-supported.js
Only in js: options_ui.js
Only in js: polyfill.js
diff -ru ../../scm/opensource/gh-gorhill-uBlock/src/js/storage.js js/storage.js
--- ../../scm/opensource/gh-gorhill-uBlock/src/js/storage.js 2017-05-16 23:07:28.956266120 +0200
+++ js/storage.js 2017-04-07 05:09:52.000000000 +0200
@@ -180,8 +180,7 @@
var listKeys = [];
if ( bin.selectedFilterLists ) {
listKeys = bin.selectedFilterLists;
- }
- if ( bin.remoteBlacklists ) {
+ } else if ( bin.remoteBlacklists ) {
var oldListKeys = µb.newListKeysFromOldData(bin.remoteBlacklists);
if ( oldListKeys.sort().join() !== listKeys.sort().join() ) {
listKeys = oldListKeys;
Only in js: vapi-background.js
Only in js: vapi-client.js
Only in js: vapi-common.js

For some reason they add code that injects JavaScript for the Google Tag Manager into websites. I am not sure whether this is an intentional or an accidental change. Especially considering that the extension appears to also block the requests to the Google Tag Manager, it does not feel right. Unfortunately there does not seem to be an issue tracker to report this.

The whole incident taught me that it is very important to be able to reproduce a problem in order to understand its nature. A minimal working example is usually a good idea as well. If I had set up a fresh browser profile before reporting the bug, I could have found the problem a little earlier.


Updating Logitech Hardware on Linux

Posted by Richard Hughes on May 22, 2017 08:41 PM

Just over a year ago Bastille security announced the discovery of a suite of vulnerabilities commonly referred to as MouseJack. The vulnerabilities targeted the low level wireless protocol used by Unifying devices, typically mice and keyboards. The issues included the ability to:

  • Pair new devices with the receiver without user prompting
  • Inject keystrokes, covering various scenarios
  • Inject raw HID commands

This gave an attacker with $15 of hardware the ability to basically take over remote PCs within wireless range, which could be up to 50m away. This makes sitting in a café quite a dangerous thing to do when any affected hardware is inserted, which for the unifying dongle is quite likely as it’s explicitly designed to remain in an empty USB socket. The main manufacturer of these devices is Logitech, but the hardware is also supplied to other OEMs such as Amazon, Microsoft, Lenovo and Dell where they are re-badged or renamed. I don’t think anybody knows the real total, but by my estimations there must be tens of millions of affected-and-unpatched devices being used every day.

Shortly after this announcement, Logitech prepared an update which mitigated some of these problems, and then a few weeks later prepared another update that worked around and fixed the various issues exploited by the malicious firmware. Officially, Linux isn’t a supported OS by Logitech, so to apply the update you had to start Windows, and download and manually deploy a firmware update. For people running Linux exclusively, like a lot of Red Hat’s customers, the only choice was to stop using the Unifying products or try and find a Windows computer that could be borrowed for doing the update. Some devices are plugged in behind racks of computers and forgotten, or even hot-glued into place and unremovable.

The MouseJack team provided a firmware blob that could be deployed onto the dongle itself, and didn’t need extra hardware for programming. Given the cat was now “out of the bag” on how to flash random firmware to this proprietary hardware I asked Logitech if they would provide some official documentation so I could flash the new secure firmware onto the hardware using fwupd. After a few weeks of back-and-forth communication, Logitech released to me a pile of documentation on how to control the bootloader on the various different types of Unifying receiver, and the other peripherals that were affected by the security issues. They even sent me some of the affected hardware, and gave me access to the engineering team that was dealing with this issue.

It took a couple of weeks, but I rewrote the previously-reverse-engineered plugin in fwupd with the new documentation so that it could update the hardware exactly according to the official documentation. This now matches 100% the byte-by-byte packet log compared to the Windows update tool. Magic numbers out, #define’s in. FIXMEs out, detailed comments in. Also, using the documentation means we can report sensible and useful error messages. There were other nuances that were missed in the RE’d plugin (for example, making sure the specified firmware was valid for the hardware revision), and with the blessing of Logitech I merged the branch to master. I then persuaded Logitech to upload the firmware somewhere public, rather than having to extract the firmware out of the .exe files from the Windows update. I then opened up a pull request to add the .metainfo.xml files which allow us to build a .cab package for the Linux Vendor Firmware Service. I created a secure account for Logitech and this allowed them to upload the firmware into a special testing branch.

This is where you come in. If you would like to test this, you first need a version of fwupd that is able to talk to the hardware. For this, you need fwupd-0.9.2-2.fc26 or newer. You can get this from Koji for Fedora.

Then you need to change the DownloadURI in /etc/fwupd.conf to the testing channel. The URI is in a comment in the config file, so there is no need to list it here. Then reboot, or restart fwupd. After that you can either just launch GNOME Software and click Install, or type fwupdmgr refresh && fwupdmgr update on the command line — soon we’ll be able to update more kinds of Logitech hardware.

If this worked, or you had any problems, please leave a comment on this blog or send me an email. Thanks should go to Red Hat for letting me work on this for so long, and even more thanks to Logitech for making it possible.

UDisks to build on libblockdev!?

Posted by Storage Configuration Tools on May 22, 2017 03:00 PM

As a recent blog post mentioned, there is a pull request for UDisks proposing the master-libblockdev branch to be merged into master. What would that mean?

FLOSS - the scary monster?

Posted by Radka Janek on May 22, 2017 03:00 PM

How welcoming is the Open Source community? And I’m talking about Linux specifically. I would like to tell you a little bit about my experiences in last year or so. I already touched on this topic at the end of my previous post, but I would like to fully explain the problem and hopefully spark some hope. I will be saying “you” a lot, but I may not mean you. Don’t take it personally please.

I’m a former game programmer, obviously from the closed source industry. I’m also a .NET Engineer, yes, that is my job title at Red Hat. I work on C# stuff in Linux; I work on the open source .NET Core.

I do everything at 110%, so working for Red Hat automatically meant that I had to jump on the Fedora train as well. I was really happy, I felt welcome there, I felt that my contribution meant something. However, now I realise that I was a little bit lucky to attract the right people; I was quickly surrounded by awesome Fedora contributors and open-minded Red Hatters at work. Everyone accepted me, and when I mentioned that I work with C# and .NET and whatever, they were curious about the topic, genuinely so, I would like to believe. “.NET on Linux? So cool…“

As I meet more and more people from the wider area, I realise that it was just the small sweet circle of people around me. Random people, whether they are random programmers, server administrators, Fedora contributors, or even my own colleagues at Red Hat, often react with something along the lines of “Microsoft penetrates into Red Hat!” or “Microsoft is entering open source to destroy it from within.” That is the idea people generally have in the FLOSS community. People have these weird conspiracy ideas and pursue them way too strongly.

It’s the work of many good developers and good people. Why do you insult their work without knowing anything about it at all? Let me ask you an important question then. If you’re reading this, it’s safe to assume that you’re an open source contributor, maybe a little bit more; maybe you’re a FLOSS advocate. The question is simple: do you want these new contributors to feel welcome, or to be afraid of FLOSS? Do you want game developers and .NET engineers to love it, or to hate it and be scared of the community? What these closed-minded open-advocates are doing does not send the best message to the closed source world. You’re not making it more welcoming and sweet for all those formerly closed source developers.

Welcome new open source developers who may have a background in closed source, help them, show them that it’s awesome. Stop trying to scare them away. Keep on building a nice and inclusive community.

I’m not going around trashing Python either, even though I’ve had plenty of experience with it and I did not like it. Why not? Because it would by proxy also trash the people working with it. I would merely say that I did not like some features of the language, such as whitespace syntax. You can do the same about Microsoft. I don’t like their products, they are not my fit… too big solutions for my taste, I like to keep it a bit more simple. I don’t like their FreeToPlayWindows10 business model because oh it so reminds me of my former profession. I don’t like that they are buying their way into the Linux Foundation, because buying your way into anything is just not cool… None of these sentences would insult me if I were to work with Visual Studio on Windows 10.

Word your opinions carefully with a bit of empathy, it is real humans reading them. Tread softly because you tread on my dreams.

Reporting and monitoring storage actions

Posted by Storage Configuration Tools on May 22, 2017 01:55 PM

Two recent blog posts focus on reporting and monitoring of storage events related to failures, recoveries and, in general, device state changes. However, there are other things happening to storage. The storage configuration is changed from time to time, either by administrator(s) or automatically as a reaction to some trigger. And there are components of the system, together with its users, that could and would benefit from getting information about such changes.

Storaged merged with UDisks, new releases 2.6.4 and 2.6.5

Posted by Storage Configuration Tools on May 22, 2017 12:48 PM

Quite a lot has changed since our last blogpost about the storaged project. The biggest news is that we are no longer working on Storaged. We are now "again" working on UDisks1.

Test Days: Internationalization (i18n) features of Fedora 26

Posted by Fedora Community Blog on May 22, 2017 12:39 PM

All this week, we will be testing the i18n features in Fedora 26. Those are as follows:

  • Fontconfig Cache – The fontconfig cache files are now placed in /var/cache/fontconfig. This seems incompatible with the ostree model, so this is a proposal to move them to /usr/lib/fontconfig/cache.
  • Libpinyin 2.0 – libpinyin now provides 1-3 sentence candidates instead of a single sentence candidate, which will greatly improve the guessed sentence correction rate.
There have been further improvements in features introduced in previous versions of Fedora; those are as follows:
  • Emoji typing – In the computing world, it’s rare to find a person who does not know about emoji. Before, it was difficult to type emoji in Fedora. Now, we have an emoji typing feature in Fedora 26.
  • Unicode 9.0 – With each release, Unicode introduces new characters and scripts to its encoding standard. We have a good number of additions in Unicode 9.0. Important libraries are updated to get the new additions into Fedora.
  • IBus typing booster multilingual support – IBus typing booster now provides multilingual support (typing more than one language using a single IME – no need to switch).

Other than this, we also need to make sure all other languages work well, specifically for input, output, storage and printing.

How to participate

Most of the information is available on the Test Day wiki page. In case of doubts, feel free to send an email to the testing team mailing list.

Though it is a test day, we normally keep it on for the whole week. If you don’t have time tomorrow, feel free to complete it in the coming few days and upload your test results.

Let’s test and make sure this works well for our users!

The post Test Days: Internationalization (i18n) features of Fedora 26 appeared first on Fedora Community Blog.

How to make a Fedora USB stick

Posted by Fedora Magazine on May 22, 2017 11:57 AM

The Fedora Media Writer application is the quickest and easiest way to create a Fedora USB stick. If you want to install or try out Fedora Workstation, you can use Fedora Media Writer to copy the Live image onto a thumbdrive. Alternatively, Fedora Media Writer will also copy larger (non-“Live”) installation images onto a USB thumb drive. Fedora Media Writer is also able to download the images before writing them.

Install Fedora Media Writer

Fedora Media Writer is available for Linux, Mac OS, and Windows. To install it on Fedora, find it in the Software application.

Screenshot of Fedora Media Writer in GNOME Software

Alternatively, use the following command to install it from a terminal:

sudo dnf install mediawriter

Links to the installers for the Mac OS and Windows versions of Fedora Media Writer are available from the Downloads page on getfedora.org.

Creating a Fedora USB

After launching Fedora Media Writer, you will be greeted with a list of the Fedora editions available to download and copy to your USB drive. The two main options here are Fedora Workstation and Fedora Server. Alternatively, you can click the icon at the bottom of the list to display all the additional Spins and Labs that the Fedora community provides. These include the KDE Spin, the Cinnamon Spin, the XFCE spin, the Security lab, and the Fedora Design Suite.

Screenshot of the Fedora Media Writer main screen, showing all the Fedora Editions, Labs and Spins

Click on the Fedora edition, Spin or Lab you want to download and copy to your new USB. A description of the software will be presented to you:

Screenshot of the Fedora Workstation details page in Fedora Media Writer

Click the Create Live USB button in the top right to start the download of your new Fedora image. While the image is downloading, insert your USB drive into your computer, and choose that drive in the dropdown. Note that if you have previously downloaded a Fedora image with the Media Writer, it will not download it again; it will simply use the version you have already downloaded.

Screenshot of a Fedora Workstation ISO downloading in Fedora Media Writer

After the download is complete, double check you are writing to the correct USB drive, and click the red Write to Disk button.

Screenshot of writing Fedora Workstation to a Fedora USB in Fedora Media Writer

 

Already have an ISO downloaded?

But what if you have previously downloaded an ISO through your web browser? Media Writer also has an option to copy any ISO already on your filesystem to a USB drive. Simply choose the Custom Image option from the main screen of Fedora Media Writer, then pick the ISO from the file browser, and choose Write to Disk.

Slice of Cake #8

Posted by Brian "bex" Exelbierd on May 22, 2017 10:22 AM

Diet cake this week …

A slice of cake

Last week as FCAIC I:

  • I had a bunch of meetings and flailed around in my email. Not every week is exciting, fun or dramatic :). The week was also very short because I returned from OSCAL in Albania on Monday and lost a day to travel.

A la Mode

  • As a human I took some holiday (vacation) and was not at work on Friday or the first half of Monday (today). I got to see beautiful Cluj-Napoca, Romania and relax :).

Cake Around the World

I’ll be traveling to:

  • Open Source Summit in Tokyo, Japan from 31 May - 2 June.
  • LinuxCon in Beijing, China from 19-20 June where I am helping to host the Fedora/CentOS/EPEL Birds of a Feather.
  • Working from Gdansk, Poland from 3-4 July.
  • Flock on Cape Cod, Massachusetts, USA from 29 August - 1 September.

You know how to fix enterprise patching? Please tell me more!!!

Posted by Josh Bressers on May 22, 2017 12:54 AM
If you pay attention to Twitter at all, you've probably seen people arguing about patching your enterprise after the WannaCry malware. The short story is that Microsoft fixed a very serious security flaw a few months before the malware hit. That means there are quite a few machines on the Internet that haven't applied a critical security update. Of course, as you can imagine, there is plenty of back and forth about updates. There are two basic arguments I keep seeing.

Patching is hard, and if you think I can just turn on Windows Update for all these computers running Windows 3.11 on token ring you've never had to deal with a real enterprise before! You out of touch hipsters don't know what it's really like here. We've seen things, like, real things. We party like it's 1995. GET OFF MY LAWN.

The other side sounds a bit like this.

How can you be running anything that's less than a few hours old? Don't you know what the Internet looks like! If everyone just applied all updates immediately and ran their business in the cloud using agile scrum based SecDevSecOps serverless development practices everything would be fine!

Of course both of these groups are wrong for basically the same reason. The world isn't simple, and whatever works for you won't work for anyone else. The tie that binds us all together is that everything is broken, all the time. All the things we use are broken, how we use them is broken, and how we manage them is broken. We can't fix them even though we try and sometimes we pretend we can fix things.

However ...

Just because everything is broken, that's no excuse to do nothing. It's easy to declare something too hard and give up. A lot of enterprises do this; a lot of enterprise security people use it as the defense for why they can't update their infrastructure. On the other side, though, sometimes moving too fast is more dangerous than moving too slow. Reckless updates are no better than no updates. Sometimes there is nothing we can do. Security as an industry is basically a big giant Kobayashi Maru test.

I have no advice to give on how to fix this problem. I think both groups are silly and wrong, but why I think this is unimportant. The right way is for everyone to have civil conversations where we put ourselves in the other person's shoes. That won't happen though; it never happens, even though basically every leader ever has said that sort of behavior is a good idea. I suggest you double down on whatever bad practices you've hitched your horse to. In the next few months we'll all have an opportunity to show why our way of doing things is the worst way ever, and we'll also find an opportunity to mock someone else for not doing things the way we do.

In this game there are no winners and losers, just you. And you've already lost.

GIMP rocks!

Posted by Julita Inca Chiroque on May 21, 2017 08:08 PM

One problem I used to have was hiding the two main windows (Toolbox - Tool Options and Layers - Brushes), which left my GIMP with seemingly no accessible icons on it.

Go to Windows -> Dockable Dialogs to have them back:

After cutting the edges of the original picture using the Scissors Select tool:

I added another layer with a full black rectangle: Edit – Fill with FG Color (black)

As well as another horizontal rectangle, and I used the Text tool to add words:

To set it to black and white: Colors – Components – Channel Mixer

Check the Monochrome option. To add color, I selected Colors – Colorize

Blurring the edge of my hair gave it a nice look at the end.

Last but not least, I added another layer with the GNOME logo to put it inside my eye 😉

* Zoom out was done with SHIFT + and ESC in case CTRL Z is not working.


Filed under: FEDORA, GNOME, τεχνολογια :: Technology Tagged: blanco negro gimp, colorear gimp, fedora, GIMP, GNOME, GNOME Peru Challenge, Julita Inca, Julita Inca Chiroque, letras GIMP

Episode 48 - Machine Learning: Not actually magic

Posted by Open Source Security Podcast on May 21, 2017 07:53 PM
Josh and Kurt have a guest! Mike Paquette from Elastic discusses the fundamentals and basics of Machine Learning. We also discuss how ML could have helped with WannaCry.

Download Episode

Show Notes



Using Ansible to automate oVirt and RHV environments

Posted by Luc de Louw on May 21, 2017 01:29 PM

Bored of clicking in the WebUI of RHV or oVirt? Automate it with Ansible! Set up a complete virtualization environment within a few minutes. Some time ago, Ansible gained a module for orchestrating RHV environments. It allows you to automate … Continue reading

The post Using Ansible to automate oVirt and RHV environments appeared first on Luc de Louw's Blog.

A quick look at SSL options and HTTP headers

Posted by Didier Fabert (tartare) on May 21, 2017 10:54 AM

Apache Logo

General options

  • ServerTokens defines what information Apache may disclose about itself. Bragging is incompatible with security.
    • Prod: this is the most secure value; the server will only send its name:
      Server: Apache
    • Major: the server will only send its name and its major version number:
      Server: Apache/2
    • Minor: the server will only send its name and its full version number:
      Server: Apache/2.4.25
    • Os: the server will only send its name, its full version number and the name of the operating system:
      Server: Apache/2.4.25 (Fedora)
    • Full: the server will send its name, its version number, the name of the operating system and the list of active modules with their version numbers:
      Server: Apache/2.4.25 (Fedora) OpenSSL/1.0.2k-fips mod_auth_kerb/5.4 mod_wsgi/4.4.23 Python/2.7.13 PHP/7.0.18 mod_perl/2.0.10 Perl/v5.24.1
  • ServerSignature defines the signature (footer) that Apache appends to pages it generates itself (typically error pages)
    • Off: no signature on generated pages
    • On: signature present, with the same information as defined by the ServerTokens directive, plus the domain and the port
      <address>Apache/2.4.25 (Fedora) OpenSSL/1.0.2k-fips mod_auth_kerb/5.4 mod_wsgi/4.4.23 Python/2.7.13 PHP/7.0.18 mod_perl/2.0.10 Perl/v5.24.1 Server at www.tartarefr.eu Port 80</address>
    • Email: signature present, with the same information as defined by the ServerTokens directive, plus the domain, the port and the email address of the domain administrator
      <address>Apache/2.4.25 (Fedora) OpenSSL/1.0.2k-fips mod_auth_kerb/5.4 mod_wsgi/4.4.23 Python/2.7.13 PHP/7.0.18 mod_perl/2.0.10 Perl/v5.24.1 Server at <a href="mailto:fake@tartarefr.eu">www.tartarefr.eu</a> Port 80</address>

  • TraceEnable defines whether the HTTP TRACE method is allowed. This method is mostly useful for testing or diagnostics and has no place on a production server. It can take two values: On (the method is allowed) or Off (the method is disabled)

SSL options

Apart from the usual directives for SSL setup (SSLEngine, SSLCertificateFile, SSLCertificateKeyFile, SSLCertificateChainFile), a few deserve a short explanation.

  • SSLCipherSuite defines the list of allowed ciphers. Currently, to get an A+ on SSL Labs, some medium and high ciphers have to be disabled.
    HIGH:MEDIUM:!aNULL:!eNULL:!MD5:!RC4:!EXP:!3DES:!LOW:!SEED:!IDEA:!CBC
  • SSLHonorCipherOrder defines whether the cipher order from the SSLCipherSuite directive must be followed. It is recommended to follow the defined order by setting the value to On. Typically here, the server will try all the HIGH ciphers before trying the MEDIUM ones.
  • SSLProtocol defines the allowed protocols: here we accept every protocol except SSL version 2 and version 3. One can start to consider excluding TLSv1 (TLSv1.0) as well.
    SSLProtocol all -SSLv2 -SSLv3
  • SSLCompression enables or disables compression over SSL. Since the CRIME attack exploits a weakness in compression, we disable this feature by setting the parameter to off.

Security-related HTTP headers

  • Set-Cookie can be hardened by adding two parameters that make cookies inaccessible to client-side scripts (HttpOnly) and ensure they are only transmitted over a secure connection (Secure), that is, over HTTPS.
    Header always edit Set-Cookie ^(.*)$ $1;HttpOnly;Secure
  • X-Frame-Options tells the browser whether or not the web page may be embedded in a frame. By preventing the page from being embedded by an external site, we protect against clickjacking.
    There are 3 possible values:

    • DENY: no embedding possible at all
    • SAMEORIGIN: embedding only from the same domain and the same protocol (HTTP or HTTPS)
    • ALLOW-FROM: embedding only from the URI passed as an argument
  • X-XSS-Protection enables the cross-site scripting filters built into most browsers. The best configuration is to enable the browsers’ protection: “X-XSS-Protection: 1; mode=block“.
  • X-Content-Type-Options disables automatic MIME type detection by the browser and forces it to use only the type declared with Content-Type. The only valid value is nosniff.
  • Referrer-Policy defines the policy for sending navigation information in the Referer header
    • no-referrer: The header is omitted from the request.
    • no-referrer-when-downgrade: The header is omitted if security is downgraded (HTTPS->HTTP). Otherwise it is sent.
    • same-origin: The header is only present if the destination domain is identical to the origin.
    • origin: The header is present but only contains the domain of the originating page.
    • strict-origin: The header is omitted if security is downgraded (HTTPS->HTTP). Otherwise it only contains the domain of the originating page.
    • origin-when-cross-origin: The header is present with the full URI if the destination domain is identical to the origin, but only contains the domain of the originating page if the destination domain differs from the origin.
    • strict-origin-when-cross-origin: The header is omitted if security is downgraded (HTTPS->HTTP). Otherwise it contains the full URI if the destination domain is identical to the origin, but only the domain of the originating page if the destination domain differs from the origin.
    • unsafe-url: The header is present and the URI is sent in full
  • Content-Security-Policy lists the sources allowed to be included in the web page. By listing only the sources that are needed, we prevent the browser from downloading malicious content. The keyword self stands for the requested domain.
    • default-src: Defines the default allowed sources for all resource types
    • script-src: Defines the allowed sources for scripts
    • object-src: Defines the allowed sources for objects
    • style-src: Defines the allowed sources for stylesheets
    • img-src: Defines the allowed sources for images
    • media-src: Defines the allowed sources for media (video and audio)
    • frame-src: Defines the allowed sources for frames
    • font-src: Defines the allowed sources for fonts
    • connect-src: Defines the sources allowed to be loaded by scripts
    • form-action: Defines the allowed targets for form actions
    • plugin-types: Defines the allowed sources for plugins
    • script-nonce: Defines the allowed sources for scripts carrying the matching nonce argument
    • sandbox: Defines the sandbox policy
    • reflected-xss: Enables or disables the browsers’ XSS protection filters
    • report-uri: Defines a URI to which a report is sent when the policy is violated
  • Strict-Transport-Security (HSTS) forces the browser to rewrite all insecure links into secure ones (HTTP->HTTPS) for the time given by the max-age parameter, which should therefore be set longer than a browsing session. The page will not be displayed if the SSL certificate is not valid.
  • Public-Key-Pins protects against man-in-the-middle attacks carried out with stolen (but valid) X.509 certificates. By giving the browser the fingerprints of the site’s certificates, it will not trust other valid certificates that are not listed for the site.
  • Expect-CT announces the site’s status with regard to Chrome’s upcoming requirements. The header carries the keyword enforce once the site is ready, but at first it is better to run it in test mode with the max-age=0 directive and optionally the report-uri parameter

Verification

Example Apache configuration

The definition of these headers can go in the file /etc/httpd/conf.d/common.conf (create it if it does not exist); it will be picked up by the IncludeOptional conf.d/*.conf directive.

ServerTokens    Prod
ServerSignature Off

# Disable Trace
# Otherwise host is vulnerable to XST
TraceEnable Off

# Secure cookie with HttpOnly
Header always edit Set-Cookie ^(.*)$ $1;HttpOnly;Secure

Header always set X-Frame-Options SAMEORIGIN
Header always set X-XSS-Protection: "1;mode=block"
Header always set X-Content-Type-Options nosniff
Header always set Referrer-Policy same-origin
Header always set Content-Security-Policy "default-src 'self' ; script-src 'self' report.tartarefr.eu https://s.w.org ; style-src 'self' fonts.googleapis.com fonts.gstatic.com ; report-uri https://report.tartarefr.eu/"
Header always set Strict-Transport-Security "max-age=31536000;includeSubDomains"
Header always set Expect-CT 'max-age=0; report-uri="https://report.tartarefr.eu/"'
Header always set Public-Key-Pins 'pin-sha256="YLh1dUR9y6Kja30RrAn7JKnbQG/uEtLMkBgFF2Fuihg=";pin-sha256="1B/6/luv+TW+JQWmX4Qb8mcm4uFrNUwgNzmiCcDDpyY=";max-age=2592000;includeSubdomains; report-uri="https://report.tartarefr.eu/"'

Unix Sockets For Auth

Posted by Robbie Harwood on May 21, 2017 04:00 AM

Let's not talk about the Pam/NSS stack and instead talk about a different weird auth thing on Linux.

So sockets aren't just for communication over the network. And by that I don't mean that one can talk to local processes on the same machine by connecting to localhost (which is correct, but goes over the "lo" network), but rather something designed for this purpose only: Unix domain sockets. Because they're restricted to local use only, their features can take advantage of both ends being managed by the same kernel.

I'm not interested in performance effects (and I doubt there are any worth writing home about), but rather what the security implications are. So of particular interest is SO_PEERCRED. With the receiving end of an AF_UNIX stream socket, if you ask getsockopt(2) nicely, it will give you back assurances about the connecting end of the socket in the form of a struct ucred. When _GNU_SOURCE is defined, this will contain pid, uid, and gid of the process on the other end.
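To make that concrete, here is a minimal sketch (plain Python for illustration, not any real consumer of this API, and with a made-up socket path) of the receiving side asking for the peer credentials:

import os
import socket
import struct

SOCK_PATH = "/tmp/peercred-demo.sock"  # made-up path for the example

def serve_once():
    # Set up a throwaway AF_UNIX stream server.
    if os.path.exists(SOCK_PATH):
        os.unlink(SOCK_PATH)
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(SOCK_PATH)
    srv.listen(1)
    conn, _ = srv.accept()
    # struct ucred is three native ints: pid, uid, gid.
    creds = conn.getsockopt(socket.SOL_SOCKET, socket.SO_PEERCRED,
                            struct.calcsize("3i"))
    pid, uid, gid = struct.unpack("3i", creds)
    print("peer pid=%d uid=%d gid=%d" % (pid, uid, gid))
    conn.close()
    srv.close()

if __name__ == "__main__":
    serve_once()

Anything that connects to the socket, from any language, gets its pid, uid, and gid reported by the server this way.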

It's worth noting that these are set while in the syscall connect(2). Which is to say that they can be changed by the process on the other end by things like dropping privileges, for instance. This isn't really a problem, though, in that it can't be exploited to gain a higher level of access, since the connector already has that access.

Anyway, the uid information is clearly useful; one can imagine filtering such that a connection came from apache, for instance (or not from apache, for that matter), or keeping per-user settings, or any number of things. The gid is less clearly useful, but I can immediately see uses in terms of policy setting, perhaps. But what about the pid?

Linux has a relative of plan9's procfs, which means there's a lot of information presented in /proc. (/proc can be locked down pretty hard by admins, but let's assume it's not.) proc(5) covers more of these than I will, but there are some really neat ones. Within /proc/[pid], the interesting ones for my purposes are:

  • cmdline shows the process's argv.

  • cwd shows the current working directory of the process.

  • environ similarly shows the process's environment.

  • exe is a symlink to the executable for the process.

  • root is a symlink to the process's root directory, which means we can tell whether it's in a chroot.

So it seems like we could use this to implement filtering by the process being run: for instance, we could do things only if the executable is /usr/bin/ssh. And indeed we can; /proc/[pid]/exe will be a symlink to the ssh binary, and everything works out.
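In sketch form (again plain Python for illustration, with a made-up policy path), that check is just a readlink on /proc:

import os

ALLOWED_EXE = "/usr/bin/ssh"  # made-up policy for the example

def peer_is_allowed(pid):
    # pid comes from SO_PEERCRED on the accepted connection.
    try:
        exe = os.readlink("/proc/%d/exe" % pid)
    except OSError:
        return False  # the peer already exited, or /proc is locked down
    return exe == ALLOWED_EXE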

There's a slight snag, though: /usr/bin/ssh is a native executable (in this case, an ELF file). But we can also run non-native executables using the shebang - e.g., #!/bin/sh, or #!/usr/bin/python2, and so on. While this is convenient for scripting, it makes the /proc/[pid]/exe value much less useful, since it will just point at the interpreter.

The way the shebang is implemented causes the interpreter to be run with argv[1] set to the input file. So we can pull it out of /proc/[pid]/cmdline and everything is fine, right?

Well, no. Linux doesn't canonicalize the path to the script file, so unless it was originally invoked using a non-relative path, we don't have that information.

Maybe we can do the resolution ourselves, though. We have the process environment, so $PATH-based resolution should be doable, right? And if it's a relative path, we can use /proc/[pid]/cwd, right?

Nope. Although inspecting the behavior of shells would suggest that /proc/[pid]/cwd doesn't change, this is a shell implementation detail; the program can just modify this value if it wants.

Even if we nix relative paths, we're still not out of the woods. /proc/[pid]/environ looks like exactly what we want, as the man page specifies that even getenv(3)/setenv(3) do not modify it. However, the next paragraph describes the syscall needed to just move what region of memory it points to, so we can't trust that value either.

There's actually a bigger problem, though. Predictably, from the way the last two went, processes can just modify argv. So: native code only.

Anyway, thanks for reading this post about a piece of gssproxy's guts. Surprise!

Setting GNOME Pomodoro, a time limit app

Posted by Julita Inca Chiroque on May 20, 2017 06:00 PM

Having the discipline to manage workshops includes controlling time, besides organizing the topics, among other factors. One extension that GNOME offers is called GNOME Pomodoro. It is easy-to-use software that lets you set your breaks, focusing on a specific task to accomplish your goals.

  1. Installing GNOME Pomodoro

Some dependencies are required, as follows:

… et voilà

2. Configuring GNOME Pomodoro

Now you can see the settings panel where you can set the Pomodoro duration as well as the break time. Click on the upper right of the screen to handle it.

On the first attempt, I tested with the Pomodoro duration and the short break duration both set to 1 minute. I also changed the appearance to have a less stressful clock on my screen, and set the ticking sound to birds in the background. Finally, click ON at the top of GNOME Pomodoro to have the alarm ready!

From now on, during my 8-hour Python training classes, I am going to set 25 minutes to accomplish 3 Python exercises, with 5 minutes for a break 🙂


Filed under: FEDORA, GNOME, τεχνολογια :: Technology Tagged: challenge, challenge Peru, fedora, GNOME, GNOME Perú, GNOME Pomodoro, Julita Inca, Julita Inca Chiroque, Pomodoro

Fractional scaling goes east

Posted by Matthias Clasen on May 19, 2017 06:34 PM

When we introduced HiDPI support in GNOME a few years ago, we took the simplest possible approach that was feasible to implement with the infrastructure we had available at the time.

Some of the limitations:

  • You either get 1:1 or 2:1 scaling, nothing in between
  • The cut-off point is somewhat arbitrarily chosen and you don’t get a say in it
  • In multi-monitor systems, all monitors share the same scale

Each of these limitations had technical reasons. For example, doing different scales per-monitor is hard to do as long as you are only using a single, big framebuffer for all of them. And allowing scale factors such as 1.5 leads to difficult questions about how to deal with windows that have a size like 640.5×480.5 pixels.

Over the years, we’ve removed the technical obstacles one-by-one, e.g. introduced per-monitor framebuffers. One of the last obstacles was the display configuration API that mutter exposes to the control-center display panel, which was closely modeled on XRANDR, and not suitable for per-monitor and non-integer scales. In the last cycle, we introduced a new, more suitable monitor configuration API, and the necessary support for it has just landed in the display panel.

With this, all of the hurdles have been cleared away, and we are finally ready to get serious about fractional scaling!

Yes, a hackfest!

Jonas and Marco happen to both be in Taipei in early June, so what better to do than to get together and spend some days hacking on fractional scaling support:

https://wiki.gnome.org/Hackfests/FractionalScaling2017

If you are a compositor developer (or plan on becoming one), or just generally interested in helping with this work, and are in the area, please check out the date and location by following the link. And, yes, this is a bit last-minute, but we still wanted to give others a chance to participate.

Sumantro Mukherjee: How Do You Fedora?

Posted by Fedora Magazine on May 19, 2017 09:30 AM

We recently interviewed Sumantro Mukherjee on how he uses Fedora. This is part of a series on the Fedora Magazine. The series profiles Fedora users and how they use Fedora to get things done. Contact us on the feedback form to express your interest in becoming an interviewee.

Who is Sumantro Mukherjee?

Sumantro Mukherjee started using Linux in his freshman year. His interest in web development exposed him to open standards which ignited a desire to use an open source operating system. He learned about Fedora from a post on ‘Digit’ about Fedora 13. He enjoys listening to music, traveling and collecting currencies of different countries. Mukherjee was involved with open source before using Linux. Sumantro contributed to Firefox OS, Mozilla and Wikipedia.

Biryani, a mixed rice dish, is Mukherjee’s favorite food. Interstellar and Inception are Sumantro’s favorite movies. “My favorite part of the movie is the docking scene.” Sumantro continued, “space exploration and science fiction are some things which I love reading and watching about.” He enjoys movies that promote how humans can achieve what seems impossible with patience and effort. “Interstellar portrays how humans can do anything which might seem impossible at first, but with patience and effort, everything is possible!”

The Fedora Community

Sumantro found the Fedora community open and receptive to new contributors. “The very first impression was warm and welcoming. Adamw, Kamil, Petr Schindl and Sudhir helped me a lot in getting started.” Mukherjee would like to see improvement in the onboarding process for new contributors. “The Project invites users and contributors from designers to documentation and coders to testers.” Every Fedora user has something good to contribute to Fedora. Making it easier for potential contributors to find an area to contribute in is important. In March 2016 Sumantro joined Red Hat as an intern for Fedora Quality Assurance. “Adam Williamson and I started running onboarding calls for Fedora QA, which was another essential part of welcoming the new contributors and helping them understand the testing process.”

Getting started guides:

Mukherjee is passionate about getting new contributors involved in the Fedora Project. “There is no harm in breaking things while learning, and know that the community will help you if you ask the right question and follow the open source etiquette.” His recommendation to new contributors is to “Be vocal, be bold and ask as many times as you want.”

What Hardware and Software?

Sumantro prefers Lenovo T460s and X220s. His T460 is a beast: it has 20 GiB of RAM and an Intel Skylake i7, and handles his virtual machines with ease. Despite having a laptop he prefers a big screen and uses a Dell monitor. Mukherjee also loves to boot Fedora on ARM processors. “I currently use a Raspberry Pi 3 and a Samsung Artik to test Fedora ARM.”

Sumantro’s Applications

His desktop environment is GNOME with Wayland. Sumantro uses Sublime for web development, and Vim for shell scripts and Python. Mukherjee’s terminal of choice is Terminator. For version control he uses Git, GitHub and Arcanist. “Arcanist is a wrapper script that sits on top of other tools (e.g., Differential, linters, unit test frameworks, Git, Mercurial, and SVN) and provides pretty good command-line access to manage code review and perform some related revision control operations.”

Install and test Visual Studio Code on Ubuntu or Fedora

Posted by Luca Ciavatta on May 19, 2017 08:24 AM

Visual Studio Code is a source code editor developed by Microsoft for Windows, Linux and macOS. Launched in 2015, it became one of the preferred code editors in the Stack Overflow developer community and has a huge marketplace with a lot of plug-ins and extensions.

 

Debug code right from the editor of Visual Studio Code on Linux

Visual Studio Code powered by Electron, fast and reliable

Visual Studio Code includes support for debugging, embedded Git control, syntax highlighting, intelligent code completion, snippets, and code refactoring. It is also customizable, so users can change the editor’s theme, keyboard shortcuts, and preferences. It is free and open-source, although the official download is under a proprietary license.

Visual Studio Code has syntax highlighting and autocomplete with IntelliSense, which provides smart completions based on variable types, function definitions, and imported modules. Reviewing diffs, staging files, and making commits right from the editor is fast and simple. Pushing and pulling from any hosted Git service has never been easier.

 

Snap issue with Visual Studio Code on Fedora Linux

Easy to install with .deb, .rpm and snap packages

You can easily install Visual Studio Code thanks to the availability of many different packages. At the official download page, you can find .deb and .rpm packages, so installing it on Debian- and Fedora-based platforms is really simple.

There is also a third way to install Visual Studio Code on Linux systems like Fedora and Ubuntu: the snap package.

To install Visual Studio Code as a snap in Ubuntu:

$ sudo snap install --classic vscode

To install Visual Studio Code as a snap in Fedora:

$ sudo dnf install snapd
$ sudo snap install --classic vscode

But there is an issue, because Snap on Fedora is still under development, and so you get an error:

$ sudo snap install --classic vscode
error: cannot install "vscode": classic confinement is not yet supported on
your distribution

If you try to install it without the --classic option, you get:

$ sudo snap install vscode
error: This revision of snap "vscode" was published using classic confinement
and thus may perform arbitrary system changes outside of the security
sandbox that snaps are usually confined to, which may put your system at
risk.

If you understand and want to proceed repeat the command including
--classic.

So, for now, the only way to install Visual Studio Code on Fedora is through a classic .rpm package, because the Flatpak option is still far away.

The post Install and test Visual Studio Code on Ubuntu or Fedora appeared first on cialu.net.

Fedora Infrastructure Hackathon 2017

Posted by Adam Miller on May 19, 2017 02:36 AM

Fedora Infrastructure Hackathon 2017

Last week, hot on the heels of my trip to Boston for Red Hat Summit, I attended the 2017 edition of the Fedora Infrastructure Hackathon. The primary goal of the Hackathon was to make a lot of progress in a relatively short amount of time on defining the Fedora Infrastructure requirements necessary to support upcoming Fedora Project objectives, as defined by the Council and FESCo, and doing work to satisfy those requirements. In some cases this was simply "define policies around how this should work with the infrastructure", but in most scenarios it meant digging in and doing work such as patching multiple code bases to support new AuthN/AuthZ protocols and providers, deploying net-new infrastructure services, or even bringing up services in a new datacenter hosted by a fellow open source community project in order to leverage newly donated hardware. We'll cover all of that in the recap of the journey below.

It all started Monday 2017-05-08. We were graciously hosted in the Red Hat Tower, and as a proud Red Hatter and overall Red Hat fanboi it was extremely cool to get to spend a week there. We worked as hard as we could to get a lot done in about 4.5 days (Monday-Friday, but most people had to travel home on Friday evening). Representative members of various aspects of the Fedora Community were in attendance: the obvious Fedora Infrastructure Team was well represented, but also Fedora QA, Fedora Modularity, Fedora Atomic CI, CentOS, and Fedora RelEng.

Things kicked off by defining an agenda, all notes held in a Gobby Doc. We effectively came up with a loose fitting outline of the following:

  • Monday:

    • AuthN / AuthZ - FAS, FreeIPA, CommunityIPA, Ipsilon, CentOS Infra overlap
    • Modularity
    • CI
  • Tuesday:

    • OpenShift in the AM, CI in the PM
  • Wed/Thu:

    • Hack sessions on OpenShift and CI (break out into teams)
  • Friday:

    • Breakout hack sessions and wrap up

AuthN/AuthZ

Things started off with Patrick explaining many aspects of the various AuthN/AuthZ protocols and technologies that are currently in use within the Fedora Infrastructure, as well as migration plans to bring systems and services using older technology in line with newer technologies. There were discussions focused on Fedora Authentication, OAuth2, Kerberos, OpenID, OpenID Connect, FreeIPA, FAS2, and how different Fedora Apps are using different combinations of these technologies. From there, an identification of which apps need to be ported away from older technologies was done, along with work assigned to people in the room with the intent of accomplishing these tasks over the next few days (and beyond, if necessary).

Bi-Directional Replication

Something that's come up a lot in recent history within the Fedora Infrastructure is database high availability. The Fedora Infrastructure Team already maintains a high level of best practices around database administration but being able to do maintenance with extremely minimal or zero downtime to the database servers is an extremely nice-to-have. Therefore a section of time was dedicated to working through an approach to roll out Postgres BDR for certain applications in the Fedora Infrastructure.

App Porting and Libraries

The Fedora Apps developers in the room had some targeted breakout sessions focusing on porting old Fedora web applications away from outdated or no longer recommended libraries and frameworks, in order to bring more uniformity to how the applications are developed and maintained, but also to make them easier to support by reining in the spread of tooling the group has to follow along with upstream developments.

Modularity

Members from the Fedora Modularity Team presented on the Module Build Service and Arbitrary Branching concept in order to discuss integration points into the Fedora Infrastructure's existing systems. This was a lot of discussion that resulted in documentation of processes, identification of issues to resolve, and establishing a realistic timeline for a phased approach to accomplish these tasks.

OpenShift

The Fedora Infrastructure Team is always trying to make the most out of the hardware that it has, and as such has been evaluating container technologies for use in the Infrastructure. Recently an evaluation of OpenShift began and the decision was made to move forward with using it for applications within Fedora. During this session we worked through a series of questions about OpenShift as they would pertain to a production deployment and had the good fortune of being able to ask for best practices and general recommendations from the OpenShift Online Ops Team. We then formulated a plan to have an OpenShift Environment up and running fully automated with Ansible Playbooks (based on openshift-ansible and ansible-ansible-openshift-ansible) in stage by the end of the week. We were successful in this endeavor but are waiting on a certificate for new domain names.

Continuous Integration

Next up we heard from a group within Fedora who are taking on the massive task of attempting to perform Continuous Integration on the entire Fedora Operating System. Alright, maybe not the entire set of packages, but they are targeting an installable Fedora Operating System via Fedora Atomic Host. For more information, check out the Fedora Atomic CI wiki page.

During this working session we were joined by our good friends from the CentOS team, because they were graciously offering up hardware resources in their very own CentOS CI environment. A lot of work was done here in the initial days discussing how to tie the two infrastructures together, as well as how to bridge things like account systems and grant appropriate permissions throughout. Action items were tackled as the week continued.

Wrap Up

We met at the end of the week for a short time before most folks departed to travel home, and tallied up the score. All in all we accomplished all but one of the objectives we set out for the hack days; the remaining one had progress made on it, but it was too large a piece of work to accomplish in just a couple of days and is still being worked on at this time. There's all sorts of great info on the Fedora Infrastructure Hackathon wiki page for anyone who's interested in digging into the details (also, check the CI-Infrastructure-Hackathon-2017 Gobby Doc for a play-by-play).

It was absolutely fantastic to get so many members of the Fedora Community into one room and hack on things. It's also great just to get to spend time hanging out with everyone since we rarely see one another in person. I'm even more excited about Flock 2017 than I was before!

Until next time...

Plans for the next GNOME docs hackfest

Posted by Petr Kovar on May 18, 2017 10:18 PM

The GNOME documentation team started planning the next docs hackfest after some (rather long) months of decreased activity on that front. The previous docs sprint was actually held in Cincinnati, OH, in 2015, and produced lots of content updates, and we’d like to repeat that experience again this year from August 14th through 16th, 2017.

As with the previous event, the 2017 docs sprint will happen right after the Open Help Conference, which is returning this year thanks to Shaun.

What we want to do differently this year is extend the invitation to all people interested in GNOME content, whether it is upstream or downstream. We would especially like to see some Ubuntu folks attending. With Ubuntu moving to upstream GNOME, we are already seeing an increased number of docs patches coming from Ubuntu contributors, which is great, and I think having a joint documentation event could strengthen and expand the connections even more!

GNOME docs are a friendly bunch of people!

Interested? Let us know! I’ve set up a wiki page with details on the event where you can sign up and propose your own ideas for agenda.

As always, you can find GNOME docs folks in #docs on irc.gnome.org.

Hope to see you all at the sprint!

Children’s Perspectives on Critical Data Literacies

Posted by Sayamindu Dasgupta on May 18, 2017 08:55 PM

Last week, we presented a new paper that describes how children are thinking through some of the implications of new forms of data collection and analysis. The presentation was given at the ACM CHI conference in Denver last week and the paper is open access and online.

Over the last couple of years, we’ve worked on a large project to support children in doing — and not just learning about — data science. We built a system, Scratch Community Blocks, that allows the 18 million users of the Scratch online community to write their own computer programs — in Scratch, of course — to analyze data about their own learning and social interactions. An example of one of those programs, which finds how many of one’s followers in Scratch are not from the United States, is shown below.


 

Last year, we deployed Scratch Community Blocks to 2,500 active Scratch users who, over a period of several months, used the system to create more than 1,600 projects.

As children used the system, Samantha Hautea, a student in UW’s Communication Leadership program, led a group of us in an online ethnography. We visited the projects children were creating and sharing. We followed the forums where users discussed the blocks. We read comment threads left on projects. We combined Samantha’s detailed field notes with the text of comments and forum posts, with ethnographic interviews of several users, and with notes from two in-person workshops. We used a technique called grounded theory to analyze these data.

What we found surprised us. We expected children to reflect on being challenged by — and hopefully overcoming — the technical parts of doing data science. Although we certainly saw this happen, what emerged much more strongly from our analysis was detailed discussion among children about the social implications of data collection and analysis.

In our analysis, we grouped children’s comments into five major themes that represented what we called “critical data literacies.” These literacies reflect things that children felt were important implications of social media data collection and analysis.

First, children reflected on the way that programmatic access to data — even data that was technically public — introduced privacy concerns. One user described the ability to analyze data as, “creepy”, but at the same time, “very cool.” Children expressed concern that programmatic access to data could lead to “stalking“ and suggested that the system should ask for permission.

Second, children recognized that data analysis requires skepticism and interpretation. For example, Scratch Community Blocks introduced a bug where the block that returned data about followers included users with disabled accounts. One user, in an interview described to us how he managed to figure out the inconsistency:

At one point the follower blocks, it said I have slightly more followers than I do. And, that was kind of confusing when I was trying to make the project. […] I pulled up a second [browser] tab and compared the [data from Scratch Community Blocks and the data in my profile].

Third, children discussed the hidden assumptions and decisions that drive the construction of metrics. For example, the number of views received for each project in Scratch is counted using an algorithm that tries to minimize the impact of gaming the system (similar to, for example, YouTube). As children started to build programs with data, they started to uncover and speculate about the decisions behind metrics. For example, they guessed that the view count might only include “unique” views and that view counts may include users who do not have accounts on the website.

Fourth, children building projects with Scratch Community Blocks realized that an algorithm driven by social data may cause certain users to be excluded. For example, a 13-year-old expressed concern that the system could be used to exclude users with few social connections saying:

I love these new Scratch Blocks! However I did notice that they could be used to exclude new Scratchers or Scratchers with not a lot of followers by using a code: like this:

when flag clicked
if then user’s followers < 300
stop all.

I do not think this a big problem as it would be easy to remove this code but I did just want to bring this to your attention in case this not what you would want the blocks to be used for.

Fifth, children were concerned about the possibility that measurement might distort the Scratch community’s values. While giving feedback on the new system, a user expressed concern that by making it easier to measure and compare followers, the system could elevate popularity over creativity, collaboration, and respect as a marker of success in Scratch.

I think this was a great idea! I am just a bit worried that people will make these projects and take it the wrong way, saying that followers are the most important thing in on Scratch.

Kids’ conversations around Scratch Community Blocks are good news for educators who are starting to think about how to engage young learners in thinking critically about the implications of data. Although no kid using Scratch Community Blocks discussed each of the five literacies described above, the themes reflect starting points for educators designing ways to engage kids in thinking critically about data.

Our work shows that if children are given opportunities to actively engage and build with social and behavioral data, they might not only learn how to do data analysis, but also reflect on its implications.

This blog-post and the work that it describes is a collaborative project by Samantha Hautea, Sayamindu Dasgupta, and Benjamin Mako Hill. We have also received support and feedback from members of the Scratch team at MIT (especially Mitch Resnick and Natalie Rusk), as well as from Hal Abelson from MIT CSAIL. Financial support came from the US National Science Foundation.

The son of the Internet: Aaron Swartz

Posted by Fernando Espinoza on May 18, 2017 08:52 PM

By Pedro Sanz. If you are reading these words, it is because years ago (just a few years) someone came up with the idea of inventing a new way for us to communicate through a worldwide network called the Internet. The impact it had was so enormous that it has completely changed our lives, our work, our access to [...]


Fedora CI and Infrastructure Hackathon 2017

Posted by Jeremy Cline on May 18, 2017 04:36 PM

Last week, the Fedora Infrastructure folks joined forces with people from CentOS, Fedora QE, Factory 2.0, and others to work on the design and implementation of continuous integration for the Fedora Project in the great Fedora CI and Infrastructure Hackathon of 2017.

This was especially interesting for me since I had not had the opportunity to meet the majority of the people I’ve been working with for the last eight months.

The work we did can be broadly split into two categories as the title implies: general infrastructure work, and CI work specifically.

Infrastructure

The general infrastructure work mostly took the form of documentation. Fedora Infrastructure had a set of Standard Operating Procedures (SOPs) used to document how to maintain the various applications we use. I recently took this documentation (which was already conveniently written in reStructuredText) and turned it into a Sphinx project. The project, infra-docs, aims to cover both developer and system administrator best practices.

As part of the Hackathon we worked on expanding the developer documentation with two sections.

Authentication Documentation

The first day of the hackathon we covered how authentication works in Fedora Infrastructure with Ipsilon. The goal is to move applications from the old OpenID 2.0 to OpenID Connect, which is based on OAuth 2.0.

The documentation introduces developers to OpenID Connect and points them to the various RFCs and specs for in-depth reading. It also covers the set of libraries that should be used for OpenID Connect so that all our applications rely on the same libraries.
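As a rough illustration of what an OpenID Connect login looks like from an application's point of view, here is a minimal Flask sketch using the flask-oidc library. The choice of library, the client_secrets.json file, and the settings are my own assumptions for the example; the infra-docs are the authority on what Fedora applications should actually use.

# Minimal sketch of OpenID Connect login in a Flask app using flask-oidc.
# The configuration values are placeholders, not Fedora's real settings.
from flask import Flask
from flask_oidc import OpenIDConnect

app = Flask(__name__)
app.config.update({
    "SECRET_KEY": "change-me",
    "OIDC_CLIENT_SECRETS": "client_secrets.json",  # issued by the OIDC provider
    "OIDC_SCOPES": ["openid", "email"],
})
oidc = OpenIDConnect(app)

@app.route("/")
def index():
    # user_loggedin is True once the provider (e.g. Ipsilon) has authenticated us
    if oidc.user_loggedin:
        return "Hello, {}".format(oidc.user_getfield("email"))
    return "Not logged in"

@app.route("/login")
@oidc.require_login
def login():
    return "Logged in as {}".format(oidc.user_getfield("email"))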

BDR Documentation

Bi-directional replication is a multi-master replication system for PostgreSQL designed for replication across geographically distributed clusters via asynchronous logical replication.

Our staging environment has a BDR cluster, but applications need to be aware of (and designed for) BDR. The initial developer documentation for BDR in Fedora was written as part of the hackathon.

Continuous Integration

The second big focus of the hackathon for me was the application changes required for the continuous integration effort in Fedora.

A New FedMSG Relay

One task we needed to tackle was getting the CentOS CI infrastructure publishing fedmsgs that the Fedora infrastructure could consume. We require that messages be signed using X.509 certificates since messages are transmitted without any other guarantee of authenticity. The ultimate plan is for the Jenkins plugin to sign the messages before initially sending them, but it cannot currently do that, so as a temporary work-around I worked with Brian Stinson to introduce a new type of fedmsg relay that signs the messages it relays. This allows all messages coming out of ci.centos.org to be signed as they’re relayed out to the world.

GreenWave

GreenWave (formerly known as PolicyEngine) is a new web service we’re introducing to Fedora Infrastructure that allows users to define policies about what checks need to pass before an artifact is considered “good”. Policies are defined globally, per product (e.g. Fedora 26), and per product and component (e.g. python-requests in Fedora 26). For example, we could define that all builds must pass a depcheck test to be considered “good”.
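To make the idea concrete, here is a small, purely hypothetical Python sketch of that kind of gating decision. The policy structure and function names are invented for illustration and are not GreenWave's actual data model or API.

# Hypothetical sketch of policy-based gating, not GreenWave's real implementation.
# A policy maps a product (and optionally a component) to the required test cases.
POLICIES = {
    ("Fedora 26", None): {"depcheck"},                            # applies to every build
    ("Fedora 26", "python-requests"): {"depcheck", "upgradepath"},
}

def required_tests(product, component):
    """Union of the product-wide policy and any component-specific policy."""
    tests = set(POLICIES.get((product, None), set()))
    tests |= POLICIES.get((product, component), set())
    return tests

def is_good(product, component, results):
    """results maps a test case name to 'PASSED' or 'FAILED' for one artifact."""
    return all(results.get(test) == "PASSED"
               for test in required_tests(product, component))

print(is_good("Fedora 26", "python-requests",
              {"depcheck": "PASSED", "upgradepath": "FAILED"}))  # False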

This will be used by, among other things, Bodhi to automatically gate updates. Given that, we need a prototype of the service sooner rather than later, so I started working on that.

Configuring offlineimap + dovecot + thunderbird

Posted by Cole Robinson on May 18, 2017 12:01 PM
Recently some internal discussions at Red Hat motivated me to look into using offlineimap. I had thought about doing this for some time as a step towards giving mutt a try, but for now I decided to keep my original thunderbird setup. This turned out to be a bit more work than I anticipated, so I'm documenting it here.

The primary difficulty is that offlineimap stores mail locally in Maildir format, but thunderbird only reads mbox format. The common solution to this is to serve the offlineimap mail via a local mail server, and have thunderbird connect to that. For the mail server I'm using dovecot. Getting offlineimap output and dovecot to play nicely together in a format that thunderbird can consume was a bit tricky...

Here's the ~/.offlineimaprc I settled on:

[general]
accounts = work 
 
 
[Account work]
localrepository = local-work
remoterepository = remote-work

# Do a full check every 2 minutes
# autorefresh = 2
# Do 5 quick checks between every full check
# quick = 5


[Repository local-work]
type = Maildir
localfolders = ~/.maildir

# Translate your maildir folder names to the format the remote server expects
# So this reverses the change we make with the remote nametrans setting
nametrans = lambda name: re.sub('^\.', '', name)


[Repository remote-work]
type = IMAP
keepalive = 300
ssl = yes
sslcacertfile = /etc/ssl/certs/ca-bundle.crt
remotehost = $YOUR-WORK-MAIL-SERVER
remoteuser = $YOUR-USERNAME
# You can specify remotepass= , but my work setup implicitly uses kerberos

# Turn this on if you are manually messing with your maildir at all
# I lost some mail in my experiments :(
#readonly = yes

# Need to exclude '' otherwise it complains about infinite naming loop?
folderfilter = lambda foldername: foldername not in ['']
# For Dovecot to see the folders right I want them starting with a dot,
# and dovecot set to look for .INBOX as the toplevel Maildir
nametrans = lambda name: '.' + name

A few notes here:
  • autorefresh/quick are commented out because I'm not using them: I'm invoking 'offlineimap -o' from cron every 2 minutes, with a small wrapper that ensures offlineimap isn't already running (not sure if that will have nasty side effects) and also checks that I'm on my work VPN (by checking for a /sys/class/net/ path); a sketch of that kind of wrapper appears after this list. I went with this setup because running offlineimap persistently will exit if it can't resolve the remote server after a few attempts, which will trigger if I leave the VPN. Maybe there's a setting to keep it running persistently, but I couldn't find it.
  • Enable the 'readonly' option and 'offlineimap --dry-run' when initially configuring things or messing with maildir layout: I lost a few hours of mail somehow during the setup process :/
  • My setup implicitly depends on having authenticated with my company's kerberos. I still haven't figured out a good way of keeping the kerberos ticket fresh on a machine that moves on and off the VPN regularly. I know SSSD can kinda handle it, but it seems to tie local login to work infrastructure, which I'm not sure I want.
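Here's a rough sketch of the kind of cron wrapper described in the first bullet above. Mine isn't published, so the VPN interface name and paths here are assumptions for illustration only:

#!/usr/bin/env python3
# Hypothetical cron wrapper: run "offlineimap -o" only if it is not already
# running and the work VPN is up. The interface name is an assumption.
import os
import subprocess
import sys

VPN_IFACE = "/sys/class/net/tun0"   # adjust to whatever device your VPN creates

def already_running():
    # pgrep exits 0 when a matching process exists
    return subprocess.run(["pgrep", "-x", "offlineimap"],
                          stdout=subprocess.DEVNULL).returncode == 0

def main():
    if not os.path.exists(VPN_IFACE):
        sys.exit(0)   # not on the VPN, silently skip this run
    if already_running():
        sys.exit(0)   # previous sync still in progress
    subprocess.run(["offlineimap", "-o"], check=False)

if __name__ == "__main__":
    main()

Cron then invokes the wrapper every two minutes instead of calling offlineimap directly.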

For dovecot, I just needed to drop this into /etc/dovecot/local.conf and start/enable the service:

protocols = imap imaps
listen = 127.0.0.1
mail_location = maildir:~/.maildir:INBOX=~/.maildir/.INBOX

Then configure thunderbird to connect to 127.0.0.1. User and password are the same as your local machine user account.

The tricky part seems to be formatting the maildir directory names in a way that dovecot will understand and properly advertise as folders/subfolders. I played with dovecot LAYOUT=fs, various sep/separator values and offlineimap renamings, but the above config is the only thing I found that gave expected results (and I can't take credit for that setup, I eventually found it on an internal wiki page :) )

Here's some (trimmed) directories in my ~/.maildir:

$ ls -1da .maildir/
.Drafts
.INBOX
.INBOX.fedora
.INBOX.libvirt
.INBOX.qemu
.INBOX.virt-tools
.Junk

So .Drafts, .INBOX, and .Junk are all top-level folders, and something like .INBOX.fedora is a 'fedora' subfolder of my inbox. That's the naming scheme the default dovecot config seems to expect.
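To see how the two nametrans lambdas from the ~/.offlineimaprc above map between remote folder names and these dotted local names (the folder-hierarchy separator is handled separately by offlineimap, not by these lambdas), here is a small standalone check exercising just those two translations in plain Python:

import re

# The same translations as in the ~/.offlineimaprc shown earlier.
remote_to_local = lambda name: '.' + name                  # [Repository remote-work]
local_to_remote = lambda name: re.sub(r'^\.', '', name)    # [Repository local-work]

for remote in ["INBOX", "Drafts", "Junk"]:
    local = remote_to_local(remote)
    # The two lambdas must be inverses, or offlineimap would rename folders
    # on every sync.
    assert local_to_remote(local) == remote
    print(f"remote folder {remote!r} -> local maildir {local!r}")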

Fedora Google Summer of Code Students for 2017

Posted by Fedora Community Blog on May 18, 2017 10:32 AM

On Thursday, May 4th, the official announcement of accepted projects for this year’s Google Summer of Code (GSoC) was released.  Fedora is proud to be one of the selected participating organizations and we’re pleased to announce who will spend the summer hacking on Fedora-related projects!

What is Google Summer of Code?

In case you’ve never heard of the program, you can head to the GSoC homepage. The sub-title on the page sums it up perfectly:

Google Summer of Code is a global program focused on bringing more student developers into open source software development. Students work with an open source organization on a 3 month programming project during their break from school.

That basically means Google, together with FLOSS organizations, selects many talented students. These students are offered the opportunity to do an internship with a FLOSS organization. The students are paid a stipend by Google to allow them to keep their summer free for the internship.

Google started the program in 2005 and Fedora has been participating since 2006. That means this is the eleventh year Fedora has taken part! Last year, a total of 1,206 students were accepted, 10 of whom were with Fedora.

What projects were accepted?

This year, a total of 1,317 students have been accepted, and six of them will be working on different Fedora or Fedora-related projects. The areas of those projects can’t be summed up easily, so we’re linking to their proposal pages directly (for those who didn’t forget to put them on the wiki). If you’re not in the mood to read them at this point, worry not: a follow-up post will contain a short gist of their proposals.

Now, without further ado, here is the list of the six students!

What happens next?

Now is the time for community bonding, which means the students will set up their Fedora accounts, start hanging around on the IRC channels and mailing lists, and get the overall feel of the Fedora community, while also setting up their blogs to write about their progress during the summer. This is also the time for you to make friends with them and welcome them to our community.

It is also the time for them to start setting up their development environments, and they can even start sending small patches to their respective projects.

However, the actual coding part (that is, hacking away on what’s included in the proposal) starts on the 30th of May and ends on the 21st of August.

In a follow-up post, we’ll bring you the links to their blogs, along with the students’ introductions.

The post Fedora Google Summer of Code Students for 2017 appeared first on Fedora Community Blog.

Using syslog-ng with SELinux in enforcing mode

Posted by Peter Czanik on May 18, 2017 06:33 AM

Security-Enhanced Linux (SELinux) is a set of kernel and user-space tools enforcing strict access control policies. It is also the tool behind at least half of the syslog-ng problem reports. The SELinux rules in Linux distributions cover all aspects of the syslog-ng configuration shipped in the distribution's syslog-ng package. But as soon as an unusual port number or directory name is specified in the configuration, syslog-ng fails to work even with a perfectly legitimate configuration. While preventing unusual access is the main feature of SELinux, it also causes lots of headaches for unsuspecting administrators. Learn how you can use syslog-ng with SELinux in enforcing mode.

 

SELinux basics

Even the basics of SELinux would require writing several pages. If you want to learn about SELinux in depth, check the “Further reading” section at the end of my blog. Here, I only focus on some practical information.

First of all, make sure that SELinux is in enforcing mode on your machine. It is often disabled during installation or a debugging session, and it is usually never turned on again. The following example is from CentOS 7.3, but it should look similar on any other distribution. First, we view the configuration file and check whether SELinux is in enforcing mode using the getenforce command.

[root@selinux ~]# cat /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=enforcing
# SELINUXTYPE= can take one of three two values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
[root@selinux ~]# getenforce
Enforcing
[root@selinux ~]#

If it is not set to enforcing mode, change the configuration file and reboot. Disabling SELinux completely is not recommended. When SELinux is re-enabled, you also need to relabel the file system using restorecon. You can temporarily disable SELinux using the setenforce command and check the results using getenforce:

[root@selinux ~]# setenforce 0
[root@selinux ~]# getenforce
Permissive
[root@selinux ~]# setenforce 1
[root@selinux ~]# getenforce
Enforcing
[root@selinux ~]#

In permissive mode, SELinux detects policy violations and logs them, but does not enforce the rules. It can be used for debugging purposes. Because setenforce only changes the mode until the next reboot, using setenforce 0 instead of editing the configuration file ensures that SELinux will not stay disabled accidentally.

 

Logging

SELinux logs are collected by auditd to the /var/log/audit/audit.log file. When you start syslog-ng with a default distro configuration, the only line you will see in the logs is that a new service was started:

[root@selinux ~]# grep syslog-ng /var/log/audit/audit.log
type=SERVICE_START msg=audit(1494923144.043:303): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=syslog-ng comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[root@selinux ~]#

You can parse the content of this file using the linux-audit-parser() feature of recent syslog-ng releases. Read more about it in the documentation at https://www.balabit.com/sites/default/files/documents/syslog-ng-ose-3.9-guides/en/syslog-ng-ose-v3.9-guide-admin/html-single/index.html#linux-audit-parser

In SELinux, access is granted based on labels. You can list these labels for files and processes alike, usually by passing a capital Z option to the command.

[root@selinux ~]# ls -dZ /var/log/
drwxr-xr-x. root root system_u:object_r:var_log_t:s0 /var/log/
[root@selinux ~]# ps Zx | grep syslog-ng
system_u:system_r:syslogd_t:s0 47809 ? Ssl 0:00 /usr/sbin/syslog-ng -F -p /var/run/syslogd.pid
[root@selinux ~]#


Using a different storage directory

Local log messages are normally saved under the /var/log directory. But the logs of a central syslog-ng server are often saved to a separate partition, for example to the /data/logs directory.

You modify your syslog-ng.conf, restart syslog-ng and cannot see any logs in the directory. What happened?

In /var/log/audit/audit.log you will find that syslog-ng was denied access to the directory called “logs”:

type=AVC msg=audit(1494927489.413:407): avc: denied { write } for pid=50082 comm="syslog-ng" name="logs" dev="dm-0" ino=35096600 scontext=system_u:system_r:syslogd_t:s0 tcontext=unconfined_u:object_r:default_t:s0 tclass=dir

If you take a look at /var/log/messages, you can find a similar message:

May 16 11:40:44 selinux syslog-ng[50082]: Error opening file for writing; filename='/data/logs/file', error='No such file or directory (2)'

It might even suggest that the problem is SELinux and that you should use the audit2allow command to resolve permission problems. As usual, life is not that easy: audit2allow would change the policy in a way that grants syslog-ng far more access than it needs. You should rather make the changes yourself.

As you can see from the directory listing above, the /var/log directory has the var_log_t label. When you list the freshly created directory, it has a different one:

[root@selinux conf.d]# ls -ldZ /data/logs/
drwxr-xr-x. root root unconfined_u:object_r:default_t:s0 /data/logs/
[root@selinux conf.d]#

You can change the label to var_log_t using the chcon command and check the result using the Z option of the ls command:

[root@selinux conf.d]# chcon -t var_log_t /data/logs/
[root@selinux conf.d]# ls -ldZ /data/logs/
drwxr-xr-x. root root unconfined_u:object_r:var_log_t:s0 /data/logs/
[root@selinux conf.d]#

The problem with this approach is that the change is temporary. It will be lost when you reboot the machine or restore SELinux labels using the restorecon command.

[root@selinux conf.d]# restorecon -v /data/logs/
restorecon reset /data/logs context unconfined_u:object_r:var_log_t:s0->unconfined_u:object_r:default_t:s0
[root@selinux conf.d]# ls -ldZ /data/logs/
drwxr-xr-x. root root unconfined_u:object_r:default_t:s0 /data/logs/
[root@selinux conf.d]#

Using the following semanage command, you can make the changes permanent:

[root@selinux conf.d]# semanage fcontext -a -t var_log_t '/data/logs(/.*)?'
[root@selinux conf.d]# restorecon -v /data/logs/
restorecon reset /data/logs context unconfined_u:object_r:default_t:s0->unconfined_u:object_r:var_log_t:s0
[root@selinux conf.d]# ls -ldZ /data/logs/
drwxr-xr-x. root root unconfined_u:object_r:var_log_t:s0 /data/logs/

Now your logs should appear in the directory.

 

Using a different port

By default, SELinux only allows connections to the default syslog ports. Here we list syslog-related ports as known by SELinux:

[root@selinux conf.d]# semanage port --list | grep syslog
syslog_tls_port_t tcp 6514
syslog_tls_port_t udp 6514
syslogd_port_t tcp 601
syslogd_port_t udp 514, 601
[root@selinux conf.d]#

This can be problematic if you have to use another port for any reason. A common scenario is to enable port 514 for TCP log messages as well. You can also learn about Life, the Universe and Everything by collecting some logs on port 42. Or, more seriously, you might use different port numbers for different log message types to make parsing the logs easier.

If you add port 42 (or any other port; 42 is just an example) and restart syslog-ng without configuring SELinux, it will fail to start. You can check the error message in the journal using journalctl. You will find the following:

May 17 09:55:35 selinux.localdomain setroubleshoot[3640]: SELinux is preventing /usr/sbin/syslog-ng from name_bind access on the tcp_socket port 42. For complete SELinux messages. run sealert -l ff7a7f22-128e-4b

May 17 09:55:35 selinux.localdomain python[3640]: SELinux is preventing /usr/sbin/syslog-ng from name_bind access on the tcp_socket port 42.

You can add the TCP port 42 to syslogd_port_t using the following commands, and check it with a port listing to make sure that it is really added:

[root@selinux ~]# semanage port -a -t syslogd_port_t -p tcp 42
[root@selinux ~]# semanage port --list | grep sysl
syslog_tls_port_t tcp 6514
syslog_tls_port_t udp 6514
syslogd_port_t tcp 42, 601
syslogd_port_t udp 514, 601

After this, syslog-ng starts fine and netstat shows that it is listening on port 42. The quest is still not over, as sending logs to port 42 from the outside world still does not work. The culprit in this case is firewalld, which makes sure that no connection is made to an unexpected port. Here we open up TCP port 42, reload the firewall rules to enable the configuration, and finally list the ports opened on the firewall.

[root@selinux conf.d]# firewall-cmd --permanent --add-port=42/tcp
success
[root@selinux conf.d]# firewall-cmd --reload
success
[root@selinux conf.d]# firewall-cmd --list-ports
42/tcp

Now syslog-ng is able to receive those log messages about Life, the Universe and Everything on port 42.

 

Further reading

Here, I provided some basic information on how to use non-default directories or ports for logging. For more in-depth knowledge about SELinux and firewalld, these websites provide useful information:

If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or you can even chat with us. For a long list of possibilities, check our contact page at https://syslog-ng.org/contact-us/. On Twitter, I am available as @PCzanik.

 

The post Using syslog-ng with SELinux in enforcing mode appeared first on Balabit Blog.

10 years, 10 years, Galette is 10 years old!

Posted by Johan Cwiklinski on May 18, 2017 05:30 AM

Dear Galette users,

No, actually, Galette is not 10 years old; I'm making it out to be a bit younger than it is. But it has indeed been 10 years today since I took over the project and, in a way, "adopted" it :-)

10 years during which I have tried to make this software more functional, more modern with recent technologies and versions, but also prettier and more pleasant to use. Free software matters a great deal to me, and Galette is my contribution to the cause.

10 years of Galette evolving, in parallel with my own life. Since then, I have gotten married, had 2 little girls, moved house (several times...), changed jobs (several times...), ...

10 years during which I have invested a lot of time, patience, and energy in this fine project despite other obligations, but it was well worth it, because Galette is used today by many associations of all shapes and sizes (sports clubs, free software support, collectors, villages...), and that is the best thanks one could hope for.

10 years of adding features (I could not list them all, there are far too many :)); some of them very useful, such as plugins. There have also been some failures along the way, which often cost me a lot of time or were slightly demotivating. That's the game.

10 years of code. That is no small thing! A few figures (covering the project only, plugins not included):

  • 57 releases;
  • 2 major rewrites (and a third on the way);
  • 2,833 commits (since April 13, 2007);
  • 928,239 lines added; 1,059,951 lines removed;
  • 761 days of activity.

And despite everything, Galette would not be what it is today without you, who have supported me, reported bugs, submitted plugins, and suggested - or even funded - improvements useful to everyone.

Managing an open source project like Galette is a real adventure... Long, sometimes complicated, but always interesting and rewarding :-)

So, on the occasion of this very special anniversary, I would like to thank everyone who, in one way or another, has accompanied me on this adventure, and especially Roland, the most faithful of the faithful... who was there before me :)

To finish... Thanks to Loïs for having - 10 years ago - handed me the "baby along with the bathwater" :D

<?php
$date = new \DateTime();
$date->modify('-10 years');
echo $date->format('l jS \of F Y'); //Friday 18th of May 2007

See you very soon for new adventures... 100 percent pure butter, made in France! :-D

Red Hat Summit 2017

Posted by Adam Miller on May 18, 2017 04:55 AM

Red Hat Summit 2017

Red Hat Summit 2017 concluded two weeks ago, but I am just now getting an opportunity to sit down and write about my experience there. I've been a road warrior lately: I was only home for a day and a half before heading off to the Fedora Infrastructure Hackathon (but more on that later), and once I got home "for good" I've been attempting to play a game of catch-up.

So here it goes...

Red Hat Summit is without a doubt one of my favorite events of the year because I am an extremely proud Red Hatter and I love that we have such an opportunity to show off the latest and greatest that we have to offer the community, our customers, and the world at large. This year was bigger and better than ever: we were in a new (larger) venue at the BCEC, broke our previous attendance record, broke our sponsorship record, and had more sessions and labs than ever before as well. Something else I loved, as it's near and dear to my heart, is that the Community Central portion of the Expo Hall was front and center in the main "center stage", so the likes of Fedora, CentOS, Ansible, Gluster, Ceph, Foreman, ManageIQ, oVirt, Project Atomic, OpenShift Origin, and many others had an opportunity to share the spotlight with all the pillars of the Red Hat technology portfolio. Another favorite of mine was the portion of the Expo Hall dedicated to customer feedback on current and next-gen, still-in-development products, as this was certainly the best outlet we could possibly ask for to get real, focused feedback from those who spend large portions of their lives with our software.

This year was a bit different for me. I'll often spend a lot of time working the Fedora Community booth in the Expo Hall during Red Hat Summit, which is something I genuinely enjoy doing because it gives me the opportunity to talk to a lot of people about all the things that we mutually find interesting about the innovations going on within the platform. However, I didn't have as much time as usual to dedicate to that, as I was an extremely busy bee this year: I found myself with five speaking slots. I created a lab around RPM packaging titled "From Source to RPM in 120 Minutes", which is effectively a "downstream", instructor-led, lab-based version of the RPM Packaging Guide that I wrote. I've had a lot of fun doing it in the past and hope to continue doing it at future Red Hat Summits. My lab ran each of the three days of Summit and, as you may have noticed from the title, those sessions are two hours each. Then I had two other speaking engagements. First, a Fedora "Birds of a Feather" session where I led the conversation with other members of the Fedora Project around current developments and where the project is going, and sparked conversations for feedback from the users and various community members in the room about what aspects of the Project are most important to them. Finally, I co-presented a great talk with an extremely kind human being by the name of Nicolas FANJEAU, who works for Airbus. The session was called "Ansible All the Things"; I talked about the wide array of things you can accomplish with Ansible, from the traditional to the unorthodox, and then Nicolas gave a real-world example of how he and his team at Airbus are actually doing a lot of those things (including wiring up Ansible Tower to ServiceNow) to improve efficiency within their enterprise and actually deliver aircraft faster as a side effect. It was great fun and I hope to get a chance to work with Nicolas again in the future.

From there I had multiple customer engagements where we discussed how their use of Red Hat container technologies, such as OpenShift Container Platform and Red Hat Enterprise Linux Atomic Host, is solving real-world business needs, and where I helped advise on best practices around those technologies. These kinds of interactions are again something I really enjoy getting an opportunity to do, because they give me a good perspective on how people are putting to use the technology that I have the good fortune, as a member of the Fedora Engineering Team, to work on and work with upstream.

There were also many, many wonderful Red Hat announcements, so many that I've forgotten at least half of them. I highly recommend you check out the website to find out more if you're interested.

Closing time

All in all I was exhausted by the end of the week and looking forward to getting back to a more normal level of chaos ... except I still had that hackathon to get to. ;)

Until next time...

Cron jobs

Posted by Wilfredo Porta on May 17, 2017 08:56 PM

Cron jobs are perfect for executing a specific task or script at a scheduled time or at regular intervals.

Cron jobs can be run hourly, daily, weekly and monthly.

The system-wide cron configuration file is located at:

/etc/crontab

This file should look like this:

# Example of job definition:
# .---------------- minute (0 - 59)
# |  .------------- hour (0 - 23)
# |  |  .---------- day of month (1 - 31)
# |  |  |  .------- month (1 - 12) OR jan,feb,mar,apr ...
# |  |  |  |  .---- day of week (0 - 6) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat
# |  |  |  |  |
# *  *  *  *  * user-name command to be executed

Execute a job every X minutes

*/5 * * * * /path/to/script/script.sh

Use */10 for every 10 minutes, */15 for every 15 minutes, and so forth. Note that entries in /etc/crontab also need the user-name field shown in the header above (for example, root between the schedule and the command); the shorter form used in these examples is what you would put in a per-user crontab (edited with crontab -e).

Execute a job every X hours

0 */5 * * * /path/to/script/script.sh

Use */2 for every 2 hours, */3 for every 3 hours, and so forth.

Execute a job every Xth day of the week

Let's assume you want to execute a cron job every Wednesday at midnight:

0 0 * * 3 /path/to/script/script.sh

or

0 0 * * Wed /path/to/script/script.sh

You can use the corresponding number or the three letters for each weekday:

0=Sun
1=Mon
2=Tue
3=Wed
4=Thu
5=Fri
6=Sat

Please note that the numbering starts at 0 for Sunday (7 also works for Sunday), not 1.

Execute a job every X months

You need to specify which specific month or months you want the job to be executed in, such as January and September:

0 0 1 1,9 * /path/to/script/script.sh

or

0 0 1 Jan,Sep * /path/to/script/script.sh

If you only want January and September, use a comma. If you want the job to run every month from January through September, use a range: 1-9.


Hacking the food chain in Switzerland

Posted by Daniel Pocock on May 17, 2017 06:41 PM

A group has recently been formed on Meetup seeking to build a food computer in Zurich. The initial meeting is planned for 6:30pm on 20 June 2017 at ETH (Zurich Centre/Zentrum, Rämistrasse 101).

The question of food security underlies many of the world's problems today. In wealthier nations, we are being called upon to trust a highly opaque supply chain and our choices are limited to those things that major supermarket chains are willing to stock. A huge transport and storage apparatus adds to the cost and CO2 emissions and detracts from the nutritional value of the produce that reaches our plates. In recent times, these problems have been highlighted by the horsemeat scandal, the Guacapocalypse and the British Hummus crisis.

One interesting initiative to create transparency and encourage diversity in our diets is the Open Agriculture (OpenAg) Initiative from MIT, summarised in this TED video from Caleb Harper. The food produced is healthier and fresher than anything you might find in a supermarket and has no exposure to pesticides.

An open source approach to food

An interesting aspect of this project is the promise of an open source approach. The project provides hardware plans, a video of the build process, source code, and the promise of sharing climate recipes (scripts) to replicate the climates of different regions, helping ensure it is always the season for your favourite fruit or vegetable.

Do we need it?

Some people have commented on the cost of equipment and electricity. Carsten Agger recently blogged about permaculture as a cleaner alternative. While there are many places where people can take that approach, there are also many overpopulated regions and cities where it is not feasible. Some countries, like Japan, have an enormous population and previously productive farmland contaminated by industry, such as the Fukushima region. Growing our own food also has the potential to reduce food waste, as individual families and communities can grow what they need.

Whether it is essential or not, the food computer project also provides a powerful platform to educate people about food and climate issues and an exciting opportunity to take the free and open source philosophy into many more places in our local communities. The Zurich Meetup group has already received expressions of interest from a diverse group including professionals, researchers, students, hackers, sustainability activists and free software developers.

Next steps

People who want to form a group in their own region can look in the forum topic "Where are you building your Food Computer?" to find out if anybody has already expressed interest.

Which patterns from the free software world can help more people build more food computers? I've already suggested using Debian's live-wrapper to distribute a runnable ISO image that can boot from a USB stick; can you suggest other solutions like this?

Can you think of any free software events where you would like to see a talk or exhibit about this project? Please suggest them on the OpenAg forum.

There are many interesting resources about the food crisis, an interesting starting point is watching the documentary Food, Inc.

If you are in Switzerland, please consider attending the meeting at 6:30pm on 20 June 2017 at ETH (Centre/Zentrum), Zurich.

One final thing to contemplate: if you are not hacking your own food supply, who is?

Learning Music

Posted by Fernando Espinoza on May 17, 2017 03:18 PM

By Jesus Ocoña. With this free online software you will learn to compose music like a professional. If you are a music lover but do not know how to compose, you no longer have an excuse: a company specializing in sequencers has launched a free e-learning platform so that anyone can learn to produce music. Its name is Learning Music, and to [...]