Fedora People

Fedora not booting on MacBook Pro

Posted by Luca Ciavatta on December 16, 2016 08:00 AM

Dracut initqueue timeout and Dracut Emergency Shell

[Image: Fedora 25 Dracut emergency console]

If you, like me, are having trouble installing the new Fedora 25 from a USB key on an old MacBook Pro or on a tricky PC and you’re getting a dracut emergency console, take a breath, because it’s a really simple case to solve.

dracut-initqueue[564]: Warning: dracut-initqueue timeout - starting timeout scripts
dracut-initqueue[564]: Warning: /dev/disk/by-label/Fedora-WS-Live-Desktop-25-1-3 does not exist
Starting Dracut Emergency Shell...
Warning: /dev/disk/by-label/Fedora-WS-Live-Desktop-25-1-3 does not exist
Warning: /dev/mapper/live-rw does not exist
Started Setup Virtual Console.
Starting Dracut Emergency Shell...

Simply type exit to continue the booting process. By the time the emergency shell appears, the slow USB device has usually been detected, so leaving the shell lets dracut retry and the boot proceed normally.
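That is, at the prompt (illustrative; the exact prompt string may differ):

dracut:/# exit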

Using Scrapy to crawl websites

Posted by Mohan Prakash on December 09, 2016 05:18 PM
A wonderful tool called Scrapy came to my notice; it can be conveniently used to crawl websites. It has many good features, and I tested it successfully on Fedora 25. I am currently using it to do a bit of data mining and data analysis.
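For a first taste, here is a minimal sketch, assuming Scrapy is installed from PyPI (the target URL and the selectors are just illustrations):

$ pip install --user scrapy
$ scrapy shell 'https://getfedora.org'
>>> response.css('title::text').extract_first()
>>> response.css('a::attr(href)').extract()

The same selectors can then be moved into a Spider class and run as a full crawl.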

Talking to Docker daemon of Fedora Atomic Host

Posted by Trishna Guha on December 09, 2016 11:58 AM

This post describes how to use the Docker daemon of a Fedora Atomic host remotely. Since we will be connecting over the network, we are also going to secure the Docker daemon, which we will do with TLS.

TLS (Transport Layer Security) provides communication security over a computer network. We will create a client cert and a server cert to secure our Docker daemon. OpenSSL will be used to create the cert keys for establishing the TLS connection.

I am using a Fedora Atomic host as the remote machine and my workstation as the local host.

Thanks to Chris Houseknecht for writing an Ansible role which creates all the certs required automatically, so that there is no need to issue openssl commands manually. Here is the Ansible role repository: https://github.com/ansible/role-secure-docker-daemon. Clone it to your present working host.

$ mkdir secure-docker-daemon
$ cd secure-docker-daemon
$ git clone https://github.com/ansible/role-secure-docker-daemon.git
$ touch ansible.cfg inventory secure-docker-daemon.yml
$ ls 
ansible.cfg  inventory  role-secure-docker-daemon  secure-docker-daemon.yml

$ vim ansible.cfg
[defaults]
inventory=inventory
remote_user='USER_OF_ATOMIC_HOST'

$ vim inventory 
[serveratomic]
'IP_OF_ATOMIC_HOST' ansible_ssh_private_key_file='PRIVATE_KEY_FILE'

$ vim secure-docker-daemon.yml
---
- name: Secure Docker daemon for Atomic host
  hosts: serveratomic
  gather_facts: no
  become: yes
  roles:
    - role: role-secure-docker-daemon
      dds_host: 'IP_OF_ATOMIC_HOST'
      dds_server_cert_path: /etc/docker
      dds_restart_docker: no

Replace ‘USER_OF_ATOMIC_HOST’ with the user of your Atomic host, ‘IP_OF_ATOMIC_HOST’ with the IP of your Atomic host, and ‘PRIVATE_KEY_FILE’ with the SSH private key file on your workstation.

Now we will run the ansible playbook. This will create client and server certs on the Atomic host.

$ ansible-playbook secure-docker-daemon.yml

Now ssh to your Atomic host.

We will copy the client certs created on the Atomic host to the workstation. You will find the client cert files in the ~/.docker directory of the root user on the Atomic host. Now create a ~/.docker directory on your workstation for your regular user and copy the client certs there. You can use scp to copy the cert files from the Atomic host to the workstation, or do it manually 😉.
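For example, with scp (a sketch: the placeholders match the ones above; we assume the client files are named ca.pem, cert.pem, and key.pem, the names the Docker client expects, and that you have made them readable by your remote user, since the role creates them under root):

$ mkdir -p ~/.docker
$ scp USER_OF_ATOMIC_HOST@IP_OF_ATOMIC_HOST:.docker/{ca,cert,key}.pem ~/.docker/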

We are going to append some environment variables to the ~/.bashrc file of the regular user on the workstation.

$ vim ~/.bashrc
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=~/.docker/
export DOCKER_HOST=tcp://IP_OF_ATOMIC_HOST:2376

Docker’s port for TLS connections (the secured port) is 2376.

Now go to your Atomic host. We will add TLS options to the Docker daemon configuration there.

Add --tlsverify --tlscacert=/etc/docker/ca.pem --tlscert=/etc/docker/server-cert.pem --tlskey=/etc/docker/server-key.pem -H=0.0.0.0:2376 -H=unix:///var/run/docker.sock to the OPTIONS line in the /etc/sysconfig/docker file.

$ vi /etc/sysconfig/docker
OPTIONS='--selinux-enabled --log-driver=journald --tlsverify --tlscacert=/etc/docker/ca.pem --tlscert=/etc/docker/server-cert.pem --tlskey=/etc/docker/server-key.pem -H=0.0.0.0:2376 -H=unix:///var/run/docker.sock'

We will need to reload systemd and restart the docker daemon.

$ sudo systemctl daemon-reload
$ sudo systemctl restart docker.service

Reboot both your Atomic host and your workstation.

Now, if you run any docker command as a regular user on your workstation, it will talk to the Docker daemon of the Atomic host and execute the command there. You no longer need to ssh in and issue docker commands on your Atomic host 🙂.
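As a quick check from the workstation (a sketch; the busybox image is just an illustration):

$ docker version
$ docker run --rm busybox echo "Hello from the Atomic host"

The Server section of docker version should now report the daemon on the Atomic host.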

Here are some screenshots for demonstration:

[Screenshots: terminal sessions on the Atomic host and the workstation]


Fixing copy and paste with the mouse in Vim 8 on Fedora 25

Posted by Daniel Lara on December 09, 2016 10:22 AM
After updating Vim to version 8, I had problems copying and pasting with the mouse. Talking to a friend, Diego Neves, a Vim expert, I got a few tips.

After opening Vim, use the command:

<ESC> : set mouse=r

That fixes it for the current session. To make it permanent, I had to add it to my vimrc:

$ sudo vim /etc/vimrc

and added:

set mouse=r

The same was done in virc.

I hope this helps someone with the same problem.

It’s been an age…..

Posted by Paul Mellors [MooDoo] on December 09, 2016 09:08 AM

It’s been an age since I’ve posted a blog entry, and a longer age since I’ve posted on the Fedora Planet. How sad. I’ve been away because my laptop broke [this is now resolved] and the server I was using to host all my ssh/irc/irssi stuff was shut down for personal reasons; this hasn’t been resolved yet but hopefully will be soon.

All in all I’ve been away from my Fedora duties far too long, and it’s something I’ve been missing. So I’m going to try and get back to a more stable situation in 2017; it’s pointless trying anything now as it’ll all go to pot around Christmas, and from the 16th December I’ll be on my jollies and not available at all until around the 28th.

If there is anything you need from me while I’m away from the front line, then please feel free to contact me at prjmellors -@- gmail.com, I’m always about and quite partial to the odd chin wag.

Any way, until you see me again in Jan – Peace, Out.


C99 features in GCC on Fedora

Posted by Fedora Magazine on December 09, 2016 08:00 AM

The New C

C99 is the C standard ratified by the ANSI and ISO standardization groups in 1999. It introduces a significant number of changes to the C language, many of them the result of sibling competition between C and C++. The initial version of C is named after Kernighan and Ritchie: Classic C is K&R C with structure assignment, enumerations, and void. C89, or ANSI C, had some influences from C with Classes; for example, C89 adopted function prototypes in a form similar to what C with Classes provided. Compared to C89, C99 adds a variety of new features.

Following is a brief list of C99 features:

  • Boolean data types <stdbool.h>
  • Increased identifier size limits
  • C++ style/line comments
  • Inline functions
  • Restricted pointers
  • Mixed declarations and code
  • Variable length arrays
  • New long long type

Specific standards

You apply the C99 standard in gcc with the following command:

$ gcc -Wall -std=c99 filename.c

Let us consider the following program named bool.c:

/*bool.c */
#include <stdio.h>
#include <stdbool.h>
int main(int argc, char *argv[])
{
  bool b = true;
  if(b)
    printf("It is a true Boolean data type supported only in C99\n");
  else
   printf("Boolean data type is not supported in C99\n");
  return 0;
}

$ gcc -Wall -std=c99  bool.c
$ ./a.out
It is a true Boolean data type supported only in C99

Linking C99 programs with external libraries

The C standard library consists of a collection of headers and library routines used by C programs. The external library is usually stored with an extension .a and known as a static library. For instance, the C math library is typically stored in the file /usr/lib/libm.a on Linux. The prototype declarations for the functions in the math library are specified in the header file /usr/include/math.h.

/*hypotenuse.c*/
#include <stdio.h>
#include <math.h>
int main(int argc, char *argv[])
{
  float x,y,h;
  //C99 program to demonstrate the use of external math library function
  printf("\n Enter the values for x and y\n");
  scanf("%f %f", &x,&y);
  h=hypotf(x,y);
  printf(" The Hypotenuse of x and y is %f\n", h);
  return 0;
}

Consider the above program hypotenuse.c. The following commands are then executed:

$ gcc -Wall -std=c99 hypotenuse.c /usr/lib/libm.so -o hypt
$ ./hypt
Enter the values for x and y
6.0
8.0
The Hypotenuse of x and y is 10.000000

gcc provides the -l option so you can link against a library without spelling out its full path. The following command illustrates this option:

$ gcc -Wall -std=c99  hypotenuse.c -lm -o hypt
$ ./hypt
Enter the values for x and y
4.0 8.0
The Hypotenuse of x and y is 8.944272

Mixed declarations

ISO C99 allows declarations and code to be mixed within compound statements. The following program illustrates this feature:

/*mixednewdec.c*/
#include <stdio.h>
int main(int argc, char *argv[])
{
  for(int i=2; i>0; --i)
  {
    printf("%d", i);
    int j = i * 2;
    printf("\n %d \n", j);
  }
}

$ gcc -Wall -std=c99  mixednewdec.c
$ ./a.out
2
4
1
2

The identifier is visible from where it is declared to the end of the enclosing block.

Variable Length Arrays (VLA)

Variable Length Arrays are not dynamic arrays. Rather, they are created with a possibly different size each time their declaration is encountered. Only local arrays, that is, arrays within block scope, can be variable length arrays.

/*vla.c*/
#include <stdio.h>
int main(int argc, char *argv[])
{
  int j = 10;
  void func(int);
  func(j);
  return 0;
}

void func(int x)
{
  int arr[x];
  for(int i=1; i<=x; i++)
  {
    int j=2;
    arr[i-1] = j*i; /* valid indices run from 0 to x-1 */
    printf("%d\n",arr[i-1]);
  }
}

Previously, arrays had to have a fixed size. C99 removes this constraint. It also frees you from explicitly calling malloc() and free() just to get an array whose size is known only at run time. The output of vla.c is illustrated below.

$ gcc -Wall -std=c99 vla.c
$ ./a.out
2
4
6
8
10
12
14
16
18
20

New Long Long Type

long long is an integer type at least 64 bits wide, the biggest integer type in the C language standard. The long long type was specified to give 32-bit machines a way to handle 64-bit data when interacting with 64-bit machines. Consider the following C program longdt.c:

/*longdt.c*/
#include <stdio.h>
int main(int argc, char *argv[])
{
  long long num1 = 123456789101LL;
  long int num2 = 12345678;
  printf("Size of %lld is %zu bytes\n", num1, sizeof(num1));
  printf("Size of %ld is %zu bytes\n", num2, sizeof(num2));
  return 0;
}

$ gcc -Wall -std=c99 longdt.c
$ ./a.out
Size of 123456789101 is 8 bytes
Size of 12345678 is 4 bytes

Restricted Pointers

C99 lets you qualify pointer declarations with the restrict keyword. It is a promise that, within the pointer's scope, only the pointer itself (or values derived from it) will be used to access the object it points to. This feature addresses the shortfalls of aliasing and aids code optimization. For example, consider the signature of the strcat() function in string.h:

char *strcat (char * restrict dest, const char * restrict src);

The source string is appended to the end of the destination string. Here, the destination and source strings can be referenced only through the pointers dest and src. The compiler can then optimize the code generated for the function.

Inline Functions

Inline functions save the overhead of function calls. Consider the following program inlinecode.c to demonstrate the use of inline in C99.

/*myheader.h*/

#ifndef MYHEADER_H
#define MYHEADER_H

inline int min(int a, int b)
{
  return a < b ? a : b;
}

#endif

/*inlinecode.c*/

#include <stdio.h>
#include "myheader.h"
extern int min(int,int);
int main(int argc, char *argv[])
{
  int a = 10, b = 20;
  int min_value = min(a, b);
  printf(" The minimum of a and b is %d\n", min_value);
  return 0;
}

$ gcc -Wall -std=c99 inlinecode.c
$ ./a.out
The minimum of a and b is 10

Conclusion

C99 is a step ahead in the evolution of ANSI C. It incorporates elegant features like single-line comments, a Boolean data type, and larger integer types. C99 also supports code optimization through restricted pointers and inline functions. Programmers can now exploit these new features to write clearer, more efficient code.

PHP version 5.6.29 and 7.0.14

Posted by Remi Collet on December 09, 2016 06:52 AM

RPMs of PHP version 7.0.14 are available in the remi repository for Fedora 25 and in the remi-php70 repository for Fedora 22-24 and Enterprise Linux (RHEL, CentOS).

RPMs of PHP version 5.6.29 are available in the remi repository for Fedora 22-24 and in the remi-php56 repository for Enterprise Linux.

PHP version 5.5 has reached its end of life and is no longer maintained by the project.

These versions are also available as Software Collections.

These versions fix some security bugs, so updating is strongly recommended.

Version announcements:

Installation: use the Configuration Wizard and choose your version and installation mode.

Replacement of default PHP by version 7.0 installation (simplest):

yum-config-manager --enable remi-php70
yum update

Parallel installation of version 7.0 as Software Collection (x86_64 only):

yum install php70

Replacement of default PHP by version 5.6 installation (simplest):

yum-config-manager --enable remi-php56
yum update

Parallel installation of version 5.6 as Software Collection (x86_64 only):

yum install php56

Replacement of default PHP by version 5.5 installation (simplest):

yum-config-manager --enable remi-php55
yum update

Parallel installation of version 5.5 as Software Collection (x86_64 only):

yum --enablerepo=remi install php55
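
Whichever mode you choose, you can check afterwards which version is active (a sketch; for a Software Collection the collection name matches the package name, e.g. php70):

$ php --version
$ scl enable php70 'php --version'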

And soon in the official updates:

To be noted:

  • EL7 RPMs are built using RHEL-7.2
  • EL6 RPMs are built using RHEL-6.8
  • a lot of new extensions are also available; see the PECL extension RPM status page

For more information, read:

Base packages (php)

Software Collections (php56 / php70)

10 years of dgplug summer training

Posted by Kushal Das on December 09, 2016 06:03 AM

In 2017, the dgplug summer training will happen for the 10th time. Let me tell you honestly, I had no clue that we would get this far when I started it back in 2008. The community gathered together, and we somehow managed to continue. In case you do not know about this training: dgplug summer training is a three-month-long online IRC-based course where we help people become contributors to upstream projects. The sessions start around 6:30 PM IST and continue till 9 PM (or sometimes till very late at night). You can read the logs of the sessions here.

The beginning

I remember looking at how people were trying to apply for Google Summer of Code back in 2008, though many of them had no real experience with software development in the Open Source world. dgplug started in 2004, and I think our motto “Learn yourself, teach others” is still as relevant now as it was back in 2004. I was thinking about an online training where we could teach the basics of using Linux, the basics of programming on Linux, how to create and submit patches, and so on. I chose IRC as the place for the training because of bandwidth issues; looking at how we conducted regular Fedora meetings on IRC, I was sure it would be doable. Of course there were many self-doubts, too many open questions for which I never had any answer. When I shared this idea on different mailing lists and asked for volunteers, Shakthi Kannan (irc: mbuf) also jumped in to mentor in the training. This gave me some hope. We had around 10+ participants in the first year of the training. Mailing list guidelines and how to communicate online were important from the beginning. Python was my primary language back then, and we chose to use that in the training too. With help from Prasad J. Pandit (irc: pjp) we had sessions on programming in C, but we found out that debugging segfaults over IRC was just not possible: we could try it with 2 or 3 people, but not with a large group. As I mentioned earlier, contribution is the keyword for this training, so we had sessions other than programming from the very first year. Máirín Duffy took a session on Fedora Arts, Paul Frields (then Fedora Project Leader) took a session on documentation, Debarshi took the sessions on shell scripting, and a few of our sessions went on until 2 AM (having started at 6 PM).

Continuing & experimenting

Many participants from the 2008 session were regulars in the IRC channel. A few of them were already contributing in upstream projects. We decided to continue with the training. The number of participants grew from 10 to 170+ (in 2016). We did various kinds of experiments in between these years. Many of our ideas failed. Many ideas have gone through iterations until we found what works better. Below are few ideas we found difficult to implement in a training like this:

  • Teaching C programming over IRC is not an easy task.
  • We have to be regular with the sessions and start at the same hour every day. Changing session timings is bad.
  • Do not give a break longer than 5 days; participation will drop if there are no sessions for more than 5 days.
  • Doing video sessions is difficult (even in 2016).

Lessons learned (the good points)

Start from scratch

Every year we start from scratch. There are always many participants who do not know much about command-line tools, or how to reply inline to a mail on the mailing list. This is why we start with communication tools, basic shell commands, and using editors (we teach Vim). From there the sessions slowly become more intense.

IRC works well on low bandwidth

IRC is still the best medium for a training where people join in from all over the world. It works on a 2G connection too. Breaking the first entry barrier is difficult, but once we breach that, people generally stay on the channels.

Stay in touch

Stay in touch with the participants even after the course has finished. Most of the regular participants in the #dgplug channel are ex-participants of the summer training. Some were trainees, some were trainers. Talking about other things in life is important to increase participation on IRC.

Have documentation

We were slow on this point. Even though we had logs of each session from the first year, we never had one single place for the participants to learn from. Now we maintain Sphinx-based documentation. It is always nice to be able to point people to one link from where they can start.

Meet in real world

Every year we meet during PyCon India. This meeting is known as the staircase meeting, as we used to block most of the staircase at the venue in Bangalore. Meeting people face to face, talking to them in real life, and sharing food and thoughts helps break many hidden barriers. It is true that not all of us can meet in one place, but we try to meet at least once every year. In 2017 we will meet during PyCon Pune in February in Pune, and then once again at PyCon India in Delhi.

Give them hope, give them examples

Other than our regular sessions, we also have sessions by well-known upstream contributors to various projects. They talk about how they started their journey in the Open Source world, and sometimes they take sessions on their own field. We have had sessions on documentation, artwork, experiences of past participants, Creative Commons licenses, and leadership in various communities. This year Python creator Guido van Rossum also joined in and took a very interactive session.

Automate as much as possible

We now have a bot which helps the trainer during the sessions; this reduces chaos. As we all know from the nature of IRC, it is very common for 10 people to try to talk at the same time. Stopping that was a big win for us. We still do not upload the logs automatically, as we go through them once manually before uploading.

Have a group of volunteers whom the community can trust

Most of the volunteers of dgplug are ex-participants of this summer training. They stayed back on the channel after their training was over. They help people almost 24x7 (depending on their availability). These are people whom we all trust. They became leaders through their own effort; no one comes in and announces new leaders. Delegation of various work is important, and getting fresh blood into the volunteer pool is very important. We constantly look for new volunteers. But at the same time you will need some people who can come in at the time of any emergency (with authority).

Enforce some rules

Rules are important for any community, call it a Code of Conduct or something else. Without these maintained rules, any community can descend into chaos, so enforce them. We generally push people to write in full English words all the time (instead of SMS language); the participants pull the leg of anyone who types u or r in the IRC channel. We also try to push people toward inline replies rather than top-posting on the mailing list.

Solve real life problem

Do not stick to only examples from books while teaching something. Always try to solve problems that are close to the hearts of the participants. Those solutions will give them much more encouragement than anything else. It can be as small as finding photos on their computer, or building a full-scale application to manage one’s video collection. We try to use Python to solve many of these problems, as we have experienced Python programmers in the channel. Python 3 is our default language for anything in the sessions.

Outcome of the training

I think we are the highest concentration of upstream contributors in one community in India. We have had participants from Singapore and South Korea, to Afghanistan and Pakistan, to Nigeria (a big shoutout to Tosin), to many parts of the US and Canada. We have had school teachers, college students, lawyers, sysadmins, housewives, school students, and music students, participants from various backgrounds, over the last 10 years. We have had companies privately inquiring about ex-participants before hiring them. Many of the participants are now my colleagues at Red Hat. We have people in the channel who are willing to help anyone, and everyone learns from everyone else. People are part of many other communities, but #dgplug binds us all.

FUDCon APAC Phnom Penh 2016

Posted by Sirko Kemter on December 09, 2016 03:30 AM

FUDCon 2016 was, for me, first of all a lot of work, especially after the venue changed nearly at the last minute. Instead of ITC, BarCamp happened this year at Norton University, which turned out not to be a good choice. A new hotel had to be found, not an easy task, as there are not many yet on this side of the river.

What I learned during my stay in Southeast Asia is that things need time here. Last-minute changes bring you a lot more work, which costs time you could spend on other tasks. You need a lot of flexibility to handle that, a thing that sometimes drives a German mad. A special thanks goes to Yekleang Dy, who helped me organize all the small but important things like getting people from the airport, and the breakfast and lunch. I hope she also helps with future events we are planning in Cambodia.

Even if the rooms at Norton University are better, the exhibition area is not, compared to ITC, so there were far fewer exhibitors. Even though the talks play the more vital role at a BarCamp, the exhibition seems to be of interest too; I remember that last year the exhibition was not just bigger, it also had a stage with talks. Add to this the venue itself, which is located on the peninsula between the Tonle Sap and the Mekong; to get there you have to cross the Tonle bridge, which is horror, especially during rush hour. That, and the rain on Sunday, might have kept a lot of visitors away. But still, the auditorium at the opening ceremony was filled (not comparable with the 3,500 seats at ITC, but still 1,000 people). Keynotes were given by Alex Davis, Environment, Science and Technology Officer of the US Embassy in Cambodia; Dr. Seng Sopheap, President of the National Institute of Posts, Telecommunications and ICT; Chan Sokheang, Rector of Norton University; H.E. Chun Vat, Director General of the Ministry of Posts and Telecommunications; and our own Fedora Community Action and Impact Coordinator, Brian Exelbierd.

Brian’s keynote nailed it: there is no better way for an IT student to improve his CV than contributing to an Open Source project like Fedora, because everybody can see what he has done; his contributions are very visible to potential employers. Sadly we could not fill this large room again for the following talks; other events have had that problem too. During FOSSASIA I gave a talk in that room, and back then there were also just 10 people in it. But the rooms we had on the other floors were well filled, at least on Saturday.

Especially Mohan Prakash’s workshop about Android Development with Fedora and Jens Petersen’s talk about Functional Programming were well attended. In Jens’s talk alone there were more than 25 students from PNC (Passerelles numériques Cambodge), a school that gives young people from underprivileged families a chance to become IT professionals.

Besides that, there was time to talk about the situation in APAC and to develop ideas on how to improve Fedora’s presence in the region. I had good conversations with Gerard Braad and Tommy about what could help in China, and with Jens about Japan. We also had the chance to deepen our contacts with the organizers of other BarCamps in the region; organizers from Thailand, Indonesia, the Philippines, and Hong Kong were around. Finding contributors in Cambodia is still very hard, and awareness of Open Source is low, but presenting Fedora to a bigger audience in cooperation with a well-known and well-attended event gave us the chance to make Fedora better known in Cambodia, and to show that Fedora is leading development in the FOSS world and that Fedora’s contributors are experts in their areas. Being a sponsor of BarCamp ASEAN made Fedora very visible, and the big banners in the main hall did a good job too. And it is still working, as the pictures are going viral now.

Giving the BarCamp volunteers FUDCon T-shirts also made us a lot of new friends. So there will be a brighter future for Fedora in Cambodia.

Fedora 23 End of Life

Posted by Fedora Magazine on December 08, 2016 10:28 PM

With the recent release of Fedora 25, Fedora 23 will officially enter End Of Life (EOL) status on December 20th, 2016. After December 20th, all packages in the Fedora 23 repositories will no longer receive security, bugfix, or enhancement updates, and no new packages will be added to the Fedora 23 collection.

Upgrading to Fedora 24 or Fedora 25 before December 20th 2016 is highly recommended for all users still running Fedora 23.

Looking back at Fedora 23

Fedora 23 was released in early November 2015, and since then the Fedora community has published over 10,000 updates to the Fedora 23 repositories. Fedora 23 shipped with version 4.2 of the Linux kernel, and Fedora Workstation featured version 3.18 of GNOME. LibreOffice version 5 also made its first appearance in Fedora 23, which also introduced the brand-new Cinnamon spin.

[Screenshot: Fedora 23 Workstation]

About the Fedora Release Cycle

The Fedora Project provides updates for a particular release up until a month after the second subsequent version of Fedora is released. For example, updates for Fedora 24 will continue until one month after the release of Fedora 26, and Fedora 25 will continue to be supported up until one month after the release of Fedora 27.

The Fedora Project wiki contains more detailed information about the entire Fedora Release Life Cycle, from development to release, and the post-release support period.


Fedora Design Interns Update

Posted by Máirín Duffy on December 08, 2016 08:15 PM

[Image: Fedora Design Team logo]

I wanted to give you an update on the status of the Fedora Design team’s interns. We currently have two interns on our team:

[Image: Flock 2016 logo]

Mary Shakshober – (IRC: mshakshober) Mary started her internship full time this summer and amongst other things designed the beautiful, Polish folk art-inspired Flock 2016 logo. She’s currently working limited hours as the school year is back in swing at UNH, but she is still working on design team tickets, including new Fedora booth material designs and a template for Fedora’s logic model.

Suzanne Hillman – (IRC: shillman) Suzanne just started her Outreachy internship with us two days ago. She has been working on UX design research for a new Fedora Hubs feature – Regional Hubs. She’s already had some interviews with Fedora folks who’ve been involved in organizing regional Fedora events, and we’ll be using an affinity mapping exercise along with Matthew Miller to analyze the data she’s collected.

If you see Mary or Suzanne around, please say hi! 🙂

Holiday Break 2016.

Posted by Paul W. Frields on December 08, 2016 06:31 PM


It’s sad I don’t get more time to post here these days. Being a manager is a pretty busy job, although I have no complaints! It’s enjoyable, and fortunately I have one of the best teams imaginable to work with, the Fedora Engineering team.

Since we’re coming to the close of another calendar year, I wanted to take a moment to remind people about what the holidays mean to availability. I’m going to crib from an earlier post of mine:

My good friend John Poelstra is fond of saying, “It’s OK to disappoint people, but it’s not OK to surprise them.” That wisdom is a big reason why I like to set expectations over the holidays.

Working at Red Hat is a fast paced and demanding job. Working full time in Fedora is itself demanding on top of that. These demands can make downtime at the holiday important for our team. At Red Hat, there’s a general company shutdown between Christmas and the New Year. This lets the whole organization relax and step away from the keyboard without guilt or fear.

Of course, vital functions are always staffed. Red Hat’s customers will always find us there to support them. Similarly, our Fedora infrastructure team will monitor over the holidays to ensure our services are working nominally, and jump in to fix them if not.

Some people like to spend time over the holidays hacking on projects. Others prefer to spend the time with family and friends. I’ve encouraged our team to use the Fedora vacation calendar to mark their expected “down time.” I encourage other community members to use the calendar, too, especially if they carry some expectations or regular responsibilities around the project.

So all this to say, don’t be surprised if it’s harder to reach some people over the holidays. I’m personally taking several weeks around this holiday shutdown as time off, to relax with my family and recharge for what’s sure to be another busy year. Whatever your plans, I hope the holiday season is full of joy and peace for you and yours.

All systems go

Posted by Fedora Infrastructure Status on December 08, 2016 02:24 PM
Service 'COPR Build System' now has status: good: all systems operational

Remembering a friend: Matthew Williams

Posted by Fedora Community Blog on December 08, 2016 08:15 AM
[Photo: Matthew Williams (left) interviews Ryan Jarvinen (right)]

One of the things about working in open source software communities is that you are always moving forward. It’s hard not to get a sense of momentum and progress when it seems you are constantly striving to improve and build on the work you and others have done before.

But sometimes you have to pause to reflect, because sometimes there is loss.

Remembering Matthew Williams

It is with heavy hearts that the Fedora Project community learned yesterday of the passing of one of its prominent members, Matthew Williams, who lost his three-year battle with cancer Wednesday morning. Matthew, also known as “Lord Drachenblut,” was an Indiana native and a passionate member of the Fedora community.

Matthew’s passion for constantly improving the software and hardware with which he worked made him a tireless advocate for the Fedora Project, and his presence was felt at conferences across the nation: SCaLE, Ohio LinuxFest, and the former Indiana LinuxFest, an Indianapolis-based event that he helped found.

Matthew also devoted time to interviewing and archiving notable figures in the free and open source software communities to learn what drove people to work on their projects. He was driven to share what he knew, launching the Open FOSS training site in 2015 to help new Linux users get involved with any Linux distribution. While active in the Fedora community, Matthew was very involved with Ubuntu as well.

A great deal of what Matthew did for Fedora centered on getting more people involved and knowledgeable about the project. To that end, he was the owner of the Fedora G+ page, a responsibility he took very seriously. Under his management, the page has over 25,000 members and is one of the Fedora Project’s strongest outreach channels.

All of this work and achievement does not really portray what Matthew was like as a person: a kind and thoughtful soul with an unwavering dedication to the things in which he believed. For those who worked with and knew Lord Drachenblut, it is your personal thoughts we invite you to reflect upon today. For the rest, know that the Fedora Project and the open source software community at large are a little poorer today with the passing of our colleague.

The building will continue, but we will miss our friend Matthew.


libinput beginner project - disabling touchpads on lid close

Posted by Peter Hutterer on December 07, 2016 10:49 PM

Update: Dec 08 2016: someone's working on this project. Sorry about the late update, but feel free to pick other projects you want to work on.

Interested in hacking on some low-level stuff and implementing a feature that's useful to a lot of laptop owners out there? We have a feature on libinput's todo list but I'm just constantly losing my fight against the ever-growing todo list. So if you already know C and you're interested in playing around with some low-level bits of software this may be the project for you.

Specifically: within libinput, we want to disable certain devices based on a lid state. In the first instance this means that when the lid switch is toggled to closed, the touchpad and trackpoint are silently disabled so they no longer send events. [1] Since it's based on a switch state, this also means that we'll now have to listen to switch events and expose those devices to libinput users.

The things required to get all this working are:

  • Designing a switch interface plus the boilerplate code required (I've done most of this bit already)
  • Extending the current evdev backend to handle devices with EV_SW and exposing their events
  • Hooking up the switch devices to internal touchpads/trackpoints to disable them ad-hoc
  • Handle those devices where lid switch is broken in the hardware (more details on this when we get to this point)

You get to dabble with libinput and a bit of udev and the kernel. Possibly Xorg stuff, but that's unlikely at this point. This project is well suited for someone with a few spare weekends ahead. It's great for someone who hasn't worked with libinput before, but it's not a project to learn C, you better know that ahead of time. I'd provide the mentoring of course (I'm in UTC+10, so expect IRC/email). If you're interested let me know. Riches and fame may happen but are not guaranteed.

[1] A number of laptops have a hw issue where either device may send random events when the lid is closed

Installing an OpenShift Origin Cluster on Fedora 25 Atomic Host: Part 1

Posted by Dusty Mabe on December 07, 2016 10:31 PM

Cross posted with this Project Atomic Blog post

Introduction

OpenShift Origin is the upstream project that builds on top of the Kubernetes platform and feeds into the OpenShift Container Platform product that is available from Red Hat today. Origin is a great way to get started with Kubernetes, and what better place to run a container orchestration layer than on top of Fedora Atomic Host?

We recently released Fedora 25, along with the first biweekly release of Fedora 25 Atomic Host. This blog post will show you the basics of getting a production installation of Origin running on Fedora 25 Atomic Host using the OpenShift Ansible Installer, which will allow you to install a production-worthy OpenShift cluster. If you'd like to just try out OpenShift on a single node instead, you can set it up with the oc cluster up command, which we will detail in a later blog post.

This first post will cover just the installation. In a later blog post we'll take the system we just installed for a spin and make sure everything is working as expected.

Environment

We've tried to make this setup as generic as possible. In this case we will be targeting three generic servers that are running Fedora 25 Atomic Host. As is common with cloud environments these servers each have an "internal" private address that can't be accessed from the internet, and a public NATed address that can be accessed from the outside. Here is the identifying information for the three servers:

+-------------+----------------+--------------+
|     Role    |   Public IPv4  | Private IPv4 |
+=============+================+==============+
| master,etcd | 54.175.0.44    | 10.0.173.101 |
+-------------+----------------+--------------+
|    worker   | 52.91.115.81   | 10.0.156.20  |
+-------------+----------------+--------------+
|    worker   | 54.204.208.138 | 10.0.251.101 |
+-------------+----------------+--------------+

NOTE In a real production setup we would want multiple master nodes and multiple etcd nodes, closer to what is shown in the installation docs.

As you can see from the table we've marked one of the nodes as the master and the other two as what we're calling worker nodes. The master node will run the api server, scheduler, and controller manager. We'll also run etcd on it. Since we want to make sure we don't starve the node running etcd, we'll mark the master node as unschedulable so that application containers don't get scheduled to run on it.

The other two nodes, the worker nodes, will have the proxy and the kubelet running on them; this is where the containers (inside of pods) will get scheduled to run. We'll also tell the installer to run a registry and an HAProxy router on the two worker nodes so that we can perform builds as well as access our services from the outside world via HAProxy.

The Installer

OpenShift Origin uses Ansible to manage the installation of the different nodes in a cluster. The code for this is aggregated in the OpenShift Ansible Installer on GitHub. Additionally, to run the installer we'll need Ansible installed on our workstation or laptop.

NOTE At this time Ansible 2.2 or greater is REQUIRED.

We already have Ansible 2.2 installed so we can skip to cloning the repo:

$ git clone https://github.com/openshift/openshift-ansible.git &>/dev/null
$ cd openshift-ansible/
$ git checkout 734b9ae199bd585d24c5131f3403345fe88fe5e6
Previous HEAD position was 6d2a272... Merge pull request #2884 from sdodson/image-stream-sync
HEAD is now at 734b9ae... Merge pull request #2876 from dustymabe/dusty-fix-etcd-selinux

To make the results in this blog post reproducible, we are specifically checking out commit 734b9ae199bd585d24c5131f3403345fe88fe5e6, since the OpenShift Ansible project is fast-moving. These instructions will probably work on the latest master, but you may hit a bug, in which case you should open an issue.

Now that we have the installer we can create an inventory file called myinventory in the same directory as the git repo. This inventory file can be anywhere, but for this install we'll place it there.

Using the IP information from the table above we create the following inventory file:

# Create an OSEv3 group that contains the masters and nodes groups
[OSEv3:children]
masters
nodes
etcd

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
ansible_user=fedora
ansible_become=true
deployment_type=origin
containerized=true
openshift_release=v1.3.1
openshift_router_selector='router=true'
openshift_registry_selector='registry=true'
openshift_master_default_subdomain=54.204.208.138.xip.io

# enable htpasswd auth
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]
openshift_master_htpasswd_users={'admin': '$apr1$zgSjCrLt$1KSuj66CggeWSv.D.BXOA1', 'user': '$apr1$.gw8w9i1$ln9bfTRiD6OwuNTG5LvW50'}

# host group for masters
[masters]
54.175.0.44 openshift_public_hostname=54.175.0.44 openshift_hostname=10.0.173.101

# host group for etcd, should run on a node that is not schedulable
[etcd]
54.175.0.44

# host group for worker nodes, we list master node here so that
# openshift-sdn gets installed. We mark the master node as not
# schedulable.
[nodes]
54.175.0.44    openshift_hostname=10.0.173.101 openshift_schedulable=false
52.91.115.81   openshift_hostname=10.0.156.20  openshift_node_labels="{'router':'true','registry':'true'}"
54.204.208.138 openshift_hostname=10.0.251.101 openshift_node_labels="{'router':'true','registry':'true'}"

Well that is quite a bit to digest, isn't it? Don't worry, we'll break down this file in detail.

Details of the Inventory File

OK, so how did we create this inventory file? We started with the docs and copied one of the examples from there. The type of install we are doing is called a BYO (Bring Your Own) install, because we are bringing our own servers rather than having the installer contact a cloud provider to bring up the infrastructure for us. For reference there is also a much more detailed BYO inventory file you can study.

So let's break down our inventory file. First we have the OSEv3 group and list the hosts in the masters, nodes, and etcd groups as children of that group:

# Create an OSEv3 group that contains the masters and nodes groups
[OSEv3:children]
masters
nodes
etcd

Then we set a bunch of variables for that group:

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
ansible_user=fedora
ansible_become=true
deployment_type=origin
containerized=true
openshift_release=v1.3.1
openshift_router_selector='router=true'
openshift_registry_selector='registry=true'
openshift_master_default_subdomain=54.204.208.138.xip.io

Let's run through each of them:

  • ansible_user=fedora - fedora is the user that you use to connect to Fedora 25 Atomic Host.
  • ansible_become=true - We want the installer to sudo when running commands.
  • deployment_type=origin - Run OpenShift Origin.
  • containerized=true - Run Origin from containers.
  • openshift_release=v1.3.1 - The version of Origin to run.
  • openshift_router_selector='router=true' - Set it so that any nodes that have this label applied to them will run a router by default.
  • openshift_registry_selector='registry=true' - Set it so that any nodes that have this label applied to them will run a registry by default.
  • openshift_master_default_subdomain=54.204.208.138.xip.io - This setting is used to tell OpenShift what subdomain to apply to routes that are created when exposing services to the outside world.

Whew ... quite a bit to run through there! Most of them are relatively self-explanatory, but openshift_master_default_subdomain might need a little more explanation. Basically, its value needs to be a Wildcard DNS Record so that any domain can be prefixed onto the front of the record and still resolve to the same IP address. We have decided to use a free service called xip.io so that we don't have to set up wildcard DNS just for this example.

So for our example, a domain like app1.54.204.208.138.xip.io will resolve to IP address 54.204.208.138. A domain like app2.54.204.208.138.xip.io will also resolve to that same address. These requests will come in to node 54.204.208.138, which is one of our worker nodes where a router (HAProxy) is running. HAProxy will route the traffic based on the domain used (app1 vs app2, etc) to the appropriate service within OpenShift.
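
You can check the wildcard behavior with any DNS lookup tool (illustrative output, assuming the xip.io service is reachable):

$ host app1.54.204.208.138.xip.io
app1.54.204.208.138.xip.io has address 54.204.208.138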

OK, next up in our inventory file we have some auth settings:

# enable htpasswd auth
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]
openshift_master_htpasswd_users={'admin': '$apr1$zgSjCrLt$1KSuj66CggeWSv.D.BXOA1', 'user': '$apr1$.gw8w9i1$ln9bfTRiD6OwuNTG5LvW50'}

You can use a multitude of authentication providers with OpenShift. The above statements say that we want to use htpasswd for authentication and we want to create two users. The password for the admin user is OriginAdmin, while the password for the user user is OriginUser. We generated these passwords by running htpasswd on the command line like so:

$ htpasswd -bc /dev/stdout admin OriginAdmin
Adding password for admin user
admin:$apr1$zgSjCrLt$1KSuj66CggeWSv.D.BXOA1
$ htpasswd -bc /dev/stdout user OriginUser
Adding password for user user
user:$apr1$.gw8w9i1$ln9bfTRiD6OwuNTG5LvW50

OK, now on to the host groups. First up, our master nodes:

# host group for masters
[masters]
54.175.0.44 openshift_public_hostname=54.175.0.44 openshift_hostname=10.0.173.101

We have used 54.175.0.44 as the hostname and also set openshift_public_hostname to this same value so that certificates will use that hostname rather than a detected hostname. We're also setting the openshift_hostname=10.0.173.101 because there is a bug where the golang resolver can't resolve *.ec2.internal addresses. This is also documented as an issue against Origin. Once this bug is resolved, you won't have to set openshift_hostname.

Next up we have the etcd host group. We're simply re-using the master node for a single etcd node. In a production deployment, we'd have several:

# host group for etcd, should run on a node that is not schedulable
[etcd]
54.175.0.44

Finally, we have our worker nodes:

# host group for worker nodes, we list master node here so that
# openshift-sdn gets installed. We mark the master node as not
# schedulable.
[nodes]
54.175.0.44    openshift_hostname=10.0.173.101 openshift_schedulable=false
52.91.115.81   openshift_hostname=10.0.156.20  openshift_node_labels="{'router':'true','registry':'true'}"
54.204.208.138 openshift_hostname=10.0.251.101 openshift_node_labels="{'router':'true','registry':'true'}"

We include the master node in this group so that the openshift-sdn will get installed and run there. However, we do set the master node as openshift_schedulable=false because it is running etcd. The last two nodes are our worker nodes and we have also added the router=true and registry=true node labels to them so that the registry and the router will run on them.

Executing the Installer

Now that we have the installer code and the inventory file named myinventory in the same directory, let's see if we can ping our hosts and check their state:

$ ansible -i myinventory nodes -a '/usr/bin/rpm-ostree status'
54.175.0.44 | SUCCESS | rc=0 >>
State: idle
Deployments:
● fedora-atomic:fedora-atomic/25/x86_64/docker-host
       Version: 25.42 (2016-11-16 10:26:30)
        Commit: c91f4c671a6a1f6770a0f186398f256abf40b2a91562bb2880285df4f574cde4
        OSName: fedora-atomic

54.204.208.138 | SUCCESS | rc=0 >>
State: idle
Deployments:
● fedora-atomic:fedora-atomic/25/x86_64/docker-host
       Version: 25.42 (2016-11-16 10:26:30)
        Commit: c91f4c671a6a1f6770a0f186398f256abf40b2a91562bb2880285df4f574cde4
        OSName: fedora-atomic

52.91.115.81 | SUCCESS | rc=0 >>
State: idle
Deployments:
● fedora-atomic:fedora-atomic/25/x86_64/docker-host
       Version: 25.42 (2016-11-16 10:26:30)
        Commit: c91f4c671a6a1f6770a0f186398f256abf40b2a91562bb2880285df4f574cde4
        OSName: fedora-atomic

Looks like they are up and all at the same state. The next step is to unleash the installer. Before we do, we should note that Fedora has moved to python3 by default. While Atomic Host still has python2 installed for legacy package support, not all of the modules needed by the installer are available for python2 on Atomic Host. Thus, we'll forge ahead and use python3 as the interpreter for ansible by specifying -e 'ansible_python_interpreter=/usr/bin/python3' on the command line:

$ ansible-playbook -i myinventory playbooks/byo/config.yml -e 'ansible_python_interpreter=/usr/bin/python3'
Using /etc/ansible/ansible.cfg as config file
....
....
PLAY RECAP *********************************************************************
52.91.115.81               : ok=162  changed=49   unreachable=0    failed=0   
54.175.0.44                : ok=540  changed=150  unreachable=0    failed=0   
54.204.208.138             : ok=159  changed=49   unreachable=0    failed=0   
localhost                  : ok=15   changed=9    unreachable=0    failed=0

We snipped pretty much all of the output. You can download the log file in its entirety from here.

So now the installer has run, and our systems should be up and running. There is only one more thing we have to do before we can take this system for a spin.

We created two users, user and admin. Currently there is no way to have the installer associate one of these users with the cluster admin role in OpenShift (we opened a request for that), so we must run a command to associate the admin user we created with the cluster admin role. The command is oadm policy add-cluster-role-to-user cluster-admin admin.

We'll go ahead and run that command now on the master node via ansible:

$ ansible -i myinventory masters -a '/usr/local/bin/oadm policy add-cluster-role-to-user cluster-admin admin'
54.175.0.44 | SUCCESS | rc=0 >>

And now we are ready to log in as either the admin or user user with oc login https://54.175.0.44:8443 from the command line, or by visiting the web frontend at https://54.175.0.44:8443.
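For example, logging in as admin from the command line looks like this (a sketch; OriginAdmin is the password we generated with htpasswd earlier):

$ oc login https://54.175.0.44:8443 -u admin -p OriginAdmin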

NOTE To install the oc CLI tool follow these instructions.

To Be Continued

In this blog we brought up an OpenShift Origin cluster on three servers that were running Fedora 25 Atomic Host. We reviewed the inventory file in detail to explain exactly what options were used and why. In a future blog post we'll take the system for a spin, inspect some of the running system that was generated from the installer, and spin up an application that will run on and be hosted by the Origin cluster.

If you run into issues following these installation instructions, please report them in one of the following places:

Cheers!
Dusty

Thanks! I will definitely keep you in mind.

Posted by Suzanne Hillman (Outreachy) on December 07, 2016 09:20 PM

Thanks! I will definitely keep you in mind. Also, I’m totally going to get back to you when it’s time to get feedback and suggestions on possible design directions. ;)

Analysis is confusing

Posted by Suzanne Hillman (Outreachy) on December 07, 2016 09:18 PM

I’ve often found myself stumped when confronted with a bunch of data, whether quantitative or qualitative. I might know what I was hoping to find out (but not always!), but that doesn’t always translate easily into knowing what I need to do with the information I’ve collected.

This seems like a fairly common problem, and yet I’ve struggled with it for as long as I’ve done research. So, at least officially, since sometime in 2008 or so, when I took my first research methods class before applying to psychology graduate schools.

Statistics!

Quantitative data at least seems like it should be more tractable than qualitative data. The first one’s a bunch of numbers, right? That seems like it should be easier to understand than a bunch of fuzzy words and descriptions and ideas. That being so, I’ve taken numerous psychological statistics classes. Indeed, my mom has stories of me helping the adult students in her stats class when I was 8. Math makes sense to me. Figuring out what to _do_, what’s relevant, what’s useful? Much less clear. This is often true, I find: knowing how to use the information I have can be a daunting task.

One of the things I find fascinating about UX is that this is known to be confusing and hard and also a really important aspect of what we _do_. Rather than trying to look for statistical significance, however, we’re looking for ideas and guidance and places that are obviously painful and places that are working well. Statistical significance is somewhat… irrelevant to the questions we are trying to answer. Not ‘how much’ or ‘how fast’, but ‘what is happening’ and ‘why’.

Statistics always felt like it was supposed to be a thing that could be done on one’s own. Like I should just know what the best approach is. This is likely not helped by the fact that I have trouble verbalizing math; it’s not at all the same language in my head, and translating is difficult. Having trouble verbalizing math makes it difficult to discuss it, and to consult with others to figure out the right sorts of statistical methods to use beyond the basic stuff that pretty much always has to happen. It’s not even just about which methods to use, but how to correctly interpret things. Statistics is a lot more fuzzy than practitioners like to admit to, at least in the psychological sciences. It’s all ‘is this significant _enough_’?, ‘is there enough of an effect for this to actually matter?’, ‘Have I gone down a rabbit hole and wasted all this effort?’. This is _not_ helped by the fact that non-significant results rarely get published.

Affinity Mapping?

I’ve known of affinity mapping, and even tried to use sticky notes to figure out some of my data in the first UX project I did. Unfortunately, as I found out at the time, analysis of the data I get in UX research doesn’t really lend itself to being done alone. Much like statistics, I suspect. I’m not at all sure how UX consultants do their analyses, given this!

Thankfully, I now have a mentor and an internship! When I flailed at Mo earlier today during our meeting, she suggested that I obtain Gamestorming as a useful reference book, and that we should go ahead and do some affinity mapping on my data. I need a bit more data first, but this means I finally get some of the guidance I’ve desperately been looking for.

I’ve been reading Gamestorming today, taking frequent breaks so I have time to let things settle in before I continue reading. I’ve also been reading a Paper Prototyping book that I got at the suggestion of another helpful person in the Boston area UX community, Jen McGinn. Given that I sort of guessed at paper prototyping for the same project in which I tried to analyse my data using sticky notes, this book should be helpful.

I’m really looking forward to getting a chance to do affinity mapping on this project. I think it’ll make a huge difference for my confidence!

Major service disruption

Posted by Fedora Infrastructure Status on December 07, 2016 07:55 PM
Service 'COPR Build System' now has status: major: Service will not sign packages

New badge: LISA16 !

Posted by Fedora Badges on December 07, 2016 05:49 PM
LISA16: You visited the Fedora booth at LISA16!

Send/Recv in mgmt

Posted by James Just James on December 07, 2016 12:00 PM

I previously published “A revisionist history of configuration management“. I meant for that to be the intro to this article, but it ended up being long enough that it deserved a separate post. I will explain Send/Recv in this article, but first a few clarifications to the aforementioned article.

Clarifications

I mentioned that my “revisionist history” was inaccurate, but I failed to mention that it was also not exhaustive! Many things were left out either because they were proprietary, niche, not well-known, of obscure design or simply for brevity. My apologies if you were involved with Bcfg2, Bosh, Heat, Military specifications, SaltStack, SmartFrog, or something else entirely. I’d love it if someone else wrote an “exhaustive history”, but I don’t think that’s possible.

It’s also worth re-iterating that without the large variety of software and designs which came before me, I wouldn’t have learned or have been able to build anything of value. Thank you giants! Discussing the problems and designs of other tools makes it easier to contrast with, and explain, what I’m doing in mgmt.

Notifications

If you’re not familiar with the directed acyclic graph model for configuration management, you should start by reviewing that material first. It models a system of resources (workers) as the vertices in that DAG, and the edges as the dependencies. We’re going to add some additional mechanics to this model.

There is a concept in mgmt called notifications. Any time the state of a resource is successfully changed by the engine, a notification is emitted. These notifications are emitted along any graph edge (dependency) that has been asked to relay them. Edges with the Notify property will do so. These are usually called refresh notifications.

Any time a resource receives a refresh notification, it can apply a special action which is resource specific. The svc resource reloads the service, the password resource generates a new password, the timer resource resets the timer, and the noop resource prints a notification message. In general, refresh notifications should be avoided if possible, but there are a number of legitimate use cases.

In mgmt notifications are designed to be crash-proof, that is to say, undelivered notifications are re-queued when the engine restarts. While we don’t expect mgmt to crash, this is also useful when a graph is stopped by the user before it has completed.

You’ll see these notifications in action momentarily.

Send/Recv

I mentioned in the revisionist history that I felt that Chef opted for raw code as a solution to the lack of power in Puppet. Having resources in mgmt which are event-driven is one example of increasing their power. Send/Recv is another mechanism to make the resource primitive more powerful.

Simply put: Send/Recv is a mechanism where resources can transfer data along graph edges.

The status quo

Consider the following pattern (expressed as Puppet code):

# create a directory
file { '/var/foo/':
    ensure => directory,
}
# download a file into that directory
exec { 'wget http://example.com/data -O - > /var/foo/data':
    creates => '/var/foo/data',
    require => File['/var/foo/'],
}
# set some property of the file
file { '/var/foo/data':
    mode => 0644,
    require => Exec['wget http://example.com/data -O - > /var/foo/data'],
}

First a disclaimer. Puppet now actually supports an http url as a source. Nevertheless, this was a common pattern for many years and that solution only improves a narrow use case. Here are some of the past and current problems:

  • File can’t take output from an exec (or other) resource
  • File can’t pull from an unsupported protocol (sftp, tftp, imap, etc…)
  • File can get created with zero-length data
  • Exec won’t update if http endpoint changes the data
  • Requires knowledge of bash and shell glue
  • Potentially error-prone if a typo is made

There’s also a layering violation if you believe that network code (http downloading) shouldn’t be in a file resource. I think it adds unnecessary complexity to the file resource.

The solution

What the file resource actually needs is to be able to accept (Recv) data of the same type as any of its input arguments. We also need resources which can produce (Send) data that is useful to consumers. This occurs along a graph (dependency) edge, since the sending resource would need to produce it before the receiver could act!

This also opens up a range of possibilities for new resource kinds that are clever about sending or receiving data. An http resource could contain all the necessary network code, and replace our use of the exec { 'wget ...': } pattern.

Diagram

In this graph, a password resource generates a random string and stores it in a file; more clever linkages are planned.

Example

As a proof of concept for the idea, I implemented a Password resource. This is a prototype resource that generates a random string of characters. To use the output, it has to be linked via Send/Recv to something that can accept a string. The file resource is one such possibility. Here’s an excerpt of some example output from a simple graph:

03:06:13 password.go:295: Password[password1]: Generating new password...
03:06:13 password.go:312: Password[password1]: Writing password token...
03:06:13 sendrecv.go:184: SendRecv: Password[password1].Password -> File[file1].Content
03:06:13 file.go:651: contentCheckApply: Invalidating sha256sum of `Content`
03:06:13 file.go:579: File[file1]: contentCheckApply(true)
03:06:13 noop.go:115: Noop[noop1]: Received a notification!

What you can see is that initially, a random password is generated. Next Send/Recv transfers the generated Password to the file’s Content. The file resource invalidates the cached Content checksum (a performance feature of the file resource), and then stores that value in the file. (This would normally be a security problem, but this is for example purposes!) Lastly, the file sends out a refresh notification to a Noop resource for demonstration purposes. It responds by printing a log message to that effect.

Libmgmt

Ultimately, mgmt will have a DSL to express the different graphs of configuration. In the meantime, you can use Puppet code, or a raw YAML file. The latter is primarily meant for testing purposes until we have the language built.

Lastly, you can also embed mgmt and use it like a library! This lets you write raw golang code to build your resource graphs. I decided to write the above example that way! Have a look at the code! This can be used to embed mgmt into your existing software! There are a few more examples available here.

Resource internals

When a resource receives new values via Send/Recv, it’s likely that the resource will have work to do. As a result, the engine will automatically mark the resource state as dirty and then poke it from the sending node. When the receiver resource runs, it can look up the list of keys that have been sent. This is useful, for example, if it wants to perform a cache invalidation. In the resource, the code is quite simple:

if val, exists := obj.Recv["Content"]; exists && val.Changed {
    // the "Content" input has changed
}

Here is a good example of that mechanism in action.

Future work

This is only powerful if there are interesting resources to link together. Please contribute some ideas, and help build these resources! I’ve got a number of ideas already, but I’d love to hear yours first so that I don’t influence or stifle your creativity. Leave me a message in the comments below!

Happy Hacking,

James


Elections 2016: Nominate community members to Fedora leadership

Posted by Fedora Community Blog on December 07, 2016 09:13 AM
Fedora Elections - All interviews published

Fedora Elections are here!

With Fedora 25 out the door a couple of weeks ago, Fedora is once again moving ahead towards Fedora 26. As usual after a new release, the Fedora Elections are getting into gear. There are a fair number of seats up for election this release, across both the Fedora Engineering Steering Committee (FESCo) and the Fedora Council. The elections are one of the ways you can have an impact on the future of Fedora by nominating and voting. Nominate other community members (or self-nominate) to run for a seat in either of these leadership bodies to help lead Fedora. For this election cycle, nominations are due on December 12th, 2016, at 23:59:59 UTC. It is important to get nominations in quickly before the window closes. This article helps explain both leadership bodies and how to cast a nomination.

Fedora Engineering Steering Committee (FESCo)

The Fedora Engineering Steering Committee, shortened to FESCo, is the technical leadership group in the Fedora community. FESCo helps review change requests for new versions of Fedora and decide policies that carry across the development community. Currently, FESCo is reviewing change requests for Fedora 26, such as debugging info for static libraries and GHC 8.0. You can see a full list of current tasks for FESCo on their Pagure repo. You can see past meeting logs to see the type of tasks FESCo has worked on in the past.

Community members are encouraged to nominate active developers or self-nominate for a FESCo seat. There are five seats open for election this cycle. This is one of the most direct ways to help have an impact on the future of Fedora from a technical view. There are no prerequisites for casting a nomination.

Fedora Council

The Fedora Council is the top-level decision-making body in the Fedora community. The Council helps oversee the project and provides support for the growth and development of the community. Its primary role is to identify the short, medium, and long-term goals of the Fedora community and to organize and enable the project to best meet them. This is done in consultation with the entire Fedora community through transparent, public discussion. The Council also governs Fedora’s financial resources to set up an annual budget allocated to support Fedora initiatives, including Fedora Ambassadors, Fedora-managed events, and other activities which advance the project’s goals. The Council also decides on issues about use of the Fedora trademarks and settles disputes escalated from other committees or subgroups. It may also handle sensitive legal or personnel issues which need research and discussion to protect the interests of the Fedora Project or its sponsor(s).

Community members are encouraged to nominate involved members of the Fedora Project community (or self-nominate) for a Council seat. There is one seat open for election this cycle. Council members have a direct influence on the project and help represent the community at the top-level of leadership in the project. There are no prerequisites for casting a nomination.

How to nominate for elections

Know someone who you believe is a good fit for either FESCo or the Council? If so, reach out to them and get their personal approval before casting the nomination. Once you have their approval or if you are self-nominating, you can add the nomination to the 2016 Election page for either FESCo or the Council.

Towards the bottom of the wiki page, there is a section noted for nominations. To add a nomination, edit the wiki page and add the person’s name and FAS username. Complete this before December 12th, 2016 at 23:59:59 UTC to make sure the nomination is included in this election.

Don’t forget: Questionnaire

Maybe you’re not running for a seat, but do you have questions you want the candidates to answer? Don’t forget that there is a questionnaire form that all candidates use to write their interviews later on the Fedora Community Blog. Fedora community members are encouraged to add their own questions to the wiki page to have candidates answer. If you wish to add your own question, feel free to edit the questionnaire wiki page.

Good luck to all nominees this cycle!

The post Elections 2016: Nominate community members to Fedora leadership appeared first on Fedora Community Blog.

Functional Programming 101

Posted by farhaan on December 07, 2016 09:07 AM

“Amazing!” That was my initial reaction when I heard and read about functional programming. I am very new to the whole concept, so I might go a little off while writing about it, and I am open to criticism. This is basically my understanding of functional programming and why I got hooked on it.

Functional programming is a concept, just like object-oriented programming. A lot of people confuse these concepts and start relating them to a particular language; the thing that needs to be clear is that languages are tools to implement concepts. There is imperative programming, where you spell out for the machine exactly how to do things, step by step. For example:

  1. Assign x to y
  2. Open a file
  3. Read a file

When we specifically talk about FP, by contrast, it is a way of telling the machine what you want rather than how to get it. The nearest example that I can come up with is an SQL query, where you say something like:

SELECT  * FROM Something where bang=something and bing=something

Here we didn’t spell out how to do it; we only told the machine what we want. This is what I got as the gist of functional programming: we divide our task into various functional parts and then describe what has to happen to the data.

Some of the core concepts that I came across were pure functions and functions treated as first-class citizens (or first-class objects). Let’s narrow down what each term means.

A pure function is a function whose return value is determined only by the input given. The best examples of pure functions are math functions: for example, Math.sqrt(x) will return the same value for the same value of x, keeping in mind that x will never be altered. Let’s go on a tangent and see how this immutability of x is a good thing: it actually prevents data from getting corrupted. Okay! That is a lot to take in one go, so let’s understand this with a simple example borrowed from the talk I attended.

We will take the example of a simple library system. For every library system there should be a book store, and the book store here is an immutable data structure. Now what will happen if I want to add a new book to it? Since it is immutable I can’t modify it, correct? So a simple solution to this problem is that every time I add or remove a book, I actually deliver a new book store, and this new book store replaces the old one. That way I preserve the old data because, hey, we are creating a whole new store. This is probably the gist of the pros of functional programming.

book_store = ["Da Vinci's Code", "Angels and Demons", "The Lost Symbol"]

def add_book(book_store, book):
    # build a brand new store instead of mutating the old one
    new_book_store = list(book_store)
    new_book_store.append(book)
    return new_book_store

print add_book(book_store, "Inferno") # ["Da Vinci's Code", "Angels and Demons", "The Lost Symbol", "Inferno"]

print book_store # ["Da Vinci's Code", "Angels and Demons", "The Lost Symbol"]

In the above code you can actually see that a new book store is returned on addition of a new book. This is what a pure function looks like.

Functions as first-class citizens: I can relate a lot to this because of Python, where we say that everything is a first-class object. Basically, when we say functions are first-class citizens, we are implying that functions can be assigned to a variable, passed as a parameter, and returned from a function. This is way more powerful than it sounds; it brings a lot of modular behavior to the software you are writing and makes the project more organized and less tightly coupled. Which is a good thing in case you want to make quick changes, or even big feature-related changes.

def find_odd(num):
    return num if(num%2 != 0) else None

def find_even(num):
    return num if(num%2 == 0) else None

def filter_function(number_list, function_filter):
    return [num for num in number_list if(function_filter(num) != None)]

number_list = [1,2,3,4,5,6,7,8,9]
print filter_function(number_list, find_odd) # [1,3,5,7,9]
print filter_function(number_list, find_even) # [2,4,6,8]

In the above code you can see that a function is passed as an argument to another function.

I have not yet explored lambda calculus, which I am thinking of getting into. There is a lot more power and beauty in functional programming. I want to keep this post a quick read, so I might cover more code examples later, but I really want to demonstrate this code.

def fact(n, acc=1):
    return acc if ( n==1 ) else fact(n-1, n*acc)

Here acc defaults to 1. This is pure textbook, and really beautiful, code which calculates the factorial of n. When it comes to FP, it is said: “To iterate is human, to recurse is divine.” I will leave you to think more about it, and will try to keep writing about things I learn.
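For instance, tracing a small call shows how the accumulator carries the running product (a quick sketch using the fact function defined above):

print fact(5) # 120
# the calls unwind as:
# fact(5, 1) -> fact(4, 5) -> fact(3, 20) -> fact(2, 60) -> fact(1, 120) -> 120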

Happy Hacking!


News: A new image format for the Web.

Posted by mythcat on December 07, 2016 09:01 AM
The news comes from here.
WebP is a modern image format that provides superior lossless and lossy compression for images on the web. Using WebP, webmasters and web developers can create smaller, richer images that make the web faster. WebP lossless images are 26% smaller in size compared to PNGs. WebP lossy images are 25-34% smaller than comparable JPEG images at equivalent SSIM quality index. Lossless WebP supports transparency (also known as alpha channel) at a cost of just 22% additional bytes. For cases when lossy RGB compression is acceptable, lossy WebP also supports transparency, typically providing 3× smaller file sizes compared to PNG.
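If you want to try the format yourself, here is a minimal sketch using the Pillow imaging library (assuming a Pillow build with WebP support; the file names are placeholders):

# convert a PNG to WebP with Pillow
from PIL import Image

img = Image.open("photo.png")
img.save("photo.webp", "WEBP", quality=80)  # lossy; pass lossless=True instead for lossless WebP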

Endless Sky now available on Fedora

Posted by Fedora Magazine on December 07, 2016 08:00 AM

Endless Sky is a 2D space trading and combat game similar to Escape Velocity. The game sets you as a beginning pilot, just having made a down payment on your very first starship. You’re given a choice between a shuttle, a freighter or a fighter. Depending on what ship you choose, you will need to figure out how to earn money to outfit and eventually upgrade your ship. You can transport passengers, run cargo, mine asteroids or even hunt pirates. It’s an open-ended game that blends the top-down action of a 2D space shooter with the depth and replayability of a 4X.

endless-sky-promo-image

Installation

Endless Sky is available on Fedora 23, 24, and 25. To install on Fedora Workstation, open Software and search for it by name. Click on the entry for the game and the following view appears:

gnome-software-endless-sky

Click the Install button. Or, you can install it using dnf:

sudo dnf install endless-sky

You can now run Endless Sky from within GNOME Shell. Open Activities in the top-left corner, then click Show Applications in the favorites dash on the left.

Endless Sky as a desktop app in Fedora GNOME

Get Involved

There are many ways to contribute directly to the Endless Sky project. The project has an extensive wiki outlining how to contribute art, missions, ships, etc.

Be sure to keep the Players Manual open; there’s a wealth of information in the manual. Happy flying!

xinput is not a configuration UI

Posted by Peter Hutterer on December 07, 2016 02:58 AM

xinput is a tool to query and modify X input device properties (amongst other things). Every so often someone complains about its non-intuitive interface, but this is where users are mistaken: xinput is not a configuration UI. It is a DUI - a developer user interface [1] - intended to test things without having to write a custom (more user-friendly) tool for each new property. It is nothing but a tool to access what is effectively a key-value store. To use it you need to know not only the key name(s) but also the allowed formats, some of which are only documented in header files. It is intended to be run under user supervision, anything it does won't survive device hotplugging. Relying on xinput for configuration is the same as relying on 'echo' to toggle parameters in /sys for kernel configuration. It kinda possibly maybe works most of the time but it's not pretty. And it's not intended to be, so please don't complain to me about the arcane user interface.

[1] don't do it, things will be a bit confusing, you may not do the right thing, you can easily do damage, etc. A lot of similarities... ;)

Keystone Development Bootstrap with Service Catalog

Posted by Adam Young on December 07, 2016 12:01 AM

My last post showed how to get a working Keystone server. Or did it?

$ openstack service list
The service catalog is empty.

Turns out, to do most things with Keystone, you need a service catalog, and I didn’t have one defined. To fix it, rerun bootstrap with a few more options.

Rerun the bootstrap command with the additional parameters to create the identity service and the endpoints that implement it.

Note: I used 127.0.0.1 explicitly elsewhere, so I did that here, too, for consistency. You can use localhost if you prefer, or an explicit hostname, so long as it works for you.

keystone-manage bootstrap --bootstrap-password keystone  --bootstrap-service-name keystone --bootstrap-admin-url http://127.0.0.1:35357  --bootstrap-public-url http://127.0.0.1:5000  --bootstrap-internal-url http://127.0.0.1:5000  --bootstrap-region-id RegionOne

Restart Keystone and now:

$ openstack service list
You are not authorized to perform the requested action: identity:list_services (HTTP 403) (Request-ID: req-3dfd0b6e-c4c9-443b-b374-243acdeda75e)

Hmmm. Seems I need a role on a project: add in the following params:

 --bootstrap-project-name admin      --bootstrap-role-name admin

So now my whole command line looks like this:

keystone-manage bootstrap \
--bootstrap-password keystone \
--bootstrap-service-name keystone \
--bootstrap-admin-url http://127.0.0.1:35357 \
--bootstrap-public-url http://127.0.0.1:5000 \
--bootstrap-internal-url http://127.0.0.1:5000 \
--bootstrap-project-name admin \
--bootstrap-role-name admin \
--bootstrap-region-id RegionOne

Let’s try again:

$ openstack service list
You are not authorized to perform the requested action: identity:list_services (HTTP 403) (Request-ID: req-b225c12a-8769-4322-955f-fb921d0f6834)

What?

OK, let’s see what is in the token. Running:

openstack token issue --debug

Will get me a token like this (formatted for legibility):

{
  "token": {
    "is_domain": false,
    "methods": [
      "password"
    ],
    "roles": [
      {
        "id": "0073eb4ee8b044409448168f8ca7fe80",
        "name": "admin"
      }
    ],
    "expires_at": "2016-12-07T00:02:13.000000Z",
    "project": {
      "domain": {
        "id": "default",
        "name": "Default"
      },
      "id": "f84f16ef1f2f45cd80580329ab2c00b0",
      "name": "admin"
    },
    "catalog": [
      {
        "endpoints": [
          {
            "url": "http://127.0.0.1:5000",
            "interface": "internal",
            "region": "RegionOne",
            "region_id": "RegionOne",
            "id": "78b654d00f3845f8a73d23793a2485ed"
          },
          {
            "url": "http://127.0.0.1:35357",
            "interface": "admin",
            "region": "RegionOne",
            "region_id": "RegionOne",
            "id": "81956b9544da41a5873ecddd287fb13b"
          },
          {
            "url": "http://127.0.0.1:5000",
            "interface": "public",
            "region": "RegionOne",
            "region_id": "RegionOne",
            "id": "c3ed6ca53a8b4dcfadf9fb6835905b1e"
          }
        ],
        "type": "identity",
        "id": "b5d4af37070041db969b64bf3a57dcb3",
        "name": "keystone"
      }
    ],
    "user": {
      "domain": {
        "id": "default",
        "name": "Default"
      },
      "password_expires_at": null,
      "name": "admin",
      "id": "bc72530345094d0e9ba53a275d2df9e8"
    },
    "audit_ids": [
      "UQc953wpQvGHa3YokNeNgQ"
    ],
    "issued_at": "2016-12-06T23:02:13.000000Z"
  }
}

So the roles are set correctly. But…maybe the policy is not. There is currently no policy.json in /etc/keystone. And maybe my WSGI app is not finding it.

sudo cp /opt/stack/keystone/etc/policy.json /etc/keystone/

Restart the wsgi applications and …

$ openstack service list
+----------------------------------+----------+----------+
| ID                               | Name     | Type     |
+----------------------------------+----------+----------+
| b5d4af37070041db969b64bf3a57dcb3 | keystone | identity |
+----------------------------------+----------+----------+

Avoiding CVE-2016-8655 with systemd

Posted by Lennart Poettering on December 06, 2016 11:00 PM

Avoiding CVE-2016-8655 with systemd

Just a quick note: on recent versions of systemd it is relatively easy to block the vulnerability described in CVE-2016-8655 for individual services.

Since systemd release v211 there's an option RestrictAddressFamilies= for service unit files which takes away the right to create sockets of specific address families for processes of the service. In your unit file, add RestrictAddressFamilies=~AF_PACKET to the [Service] section to make AF_PACKET unavailable to it (i.e. a blacklist), which is sufficient to close the attack path. Safer of course is a whitelist of address families which you can define by dropping the ~ character from the assignment. Here's a trivial example:

…
[Service]
ExecStart=/usr/bin/mydaemon
RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX
…

This restricts access to socket families, so that the service may access only AF_INET, AF_INET6 or AF_UNIX sockets, which is usually the right, minimal set for most system daemons. (AF_INET is the low-level name for the IPv4 address family, AF_INET6 for the IPv6 address family, and AF_UNIX for local UNIX socket IPC).

Starting with systemd v232 we added RestrictAddressFamilies= to all of systemd's own unit files, always with the minimal set of socket address families appropriate.

With the upcoming v233 release we'll provide a second method for blocking this vulnerability. Using RestrictNamespaces= it is possible to limit which types of Linux namespaces a service may get access to. Use RestrictNamespaces=yes to prohibit access to any kind of namespace, or set RestrictNamespaces=net ipc (or similar) to restrict access to a specific set (in this case: network and IPC namespaces). Given that user namespaces have been a major source of security vulnerabilities in the past months it's probably a good idea to block namespaces on all services which don't need them (which is probably most of them).
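Here's what that might look like in a unit file, as a sketch (mydaemon as before; keep in mind RestrictNamespaces= only becomes available with v233):

…
[Service]
ExecStart=/usr/bin/mydaemon
RestrictNamespaces=yes
…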

Of course, ideally, distributions such as Fedora, as well as upstream developers would turn on the various sandboxing settings systemd provides like these ones by default, since they know best which kind of address families or namespaces a specific daemon needs.

Outreachy Starts Today!

Posted by Suzanne Hillman (Outreachy) on December 06, 2016 08:59 PM

This blog should now be showing up on various planets (Fedora, Outreachy, Fedora Design), which means I need to do an introduction.

Hi, I’m Suzanne. I’ve been working on getting myself into User Experience (UX) for about a year now, although I’ve been interested in smoothing the interface between people and technology for longer than that. Most recently, I spent a few years in a PhD program working with robots and investigating the kind of gestures that robots will most need to understand when conversing with people. When that turned out to not be a good program for me, I figured out that UX was the most interesting and relevant path forward.

My Outreachy project is the UX of Fedora Hubs, specifically that of regional hubs [1]. I’ve been working on it for a few months now with my mentor Mo Duffy. I’ve already done the competitive analysis and initial research planning, and started interviewing Fedora ambassadors and community members. Some interviews remain, and I’m still trying to schedule and get in touch with the remaining 4 people.

I’m hoping to get those last few interviews done in the next two weeks, to stay on track for the original plan provided in the application. We shall see, since I’m having trouble getting replies back to my emails.

I’ve looked for patterns in the existing interviews, and there are definitely some emerging. I’d definitely prefer more interviews before I do much with those results, though. Three interviews isn’t really very much.

This seems like a decent introduction for now. Until later!

[1] https://pagure.io/fedora-hubs/issue/47

QEMU Advent Calendar 2016

Posted by Kashyap Chamarthy on December 06, 2016 03:10 PM

The QEMU Advent Calendar website 2016 features a QEMU disk image each day from 01-DEC to 24-DEC. Each day a new package becomes available for download (in tar.xz format) which contains a file describing the image (readme.txt or similar) and a little ‘run’ shell script that starts QEMU with the recommended command-line parameters for the disk image.

“The disk images contain interesting operating systems and software that run under the QEMU emulator. Some of them are well-known or not-so-well-known operating systems, old and new, others are custom demos and neat algorithms.” [From the About section.]

This is brought to you by Thomas Huth (his initial announcement here) and yours truly.


Explore the last five days of images from the 2016 edition here! [Extract the download with, e.g. for Day 05: tar -xf day05.tar.xz]

PS: We still have a few open slots, so please don’t hesitate to contact us if you have any fun disk image(s) to contribute.


Episode 17 - Cyphercon Interview with Korgo

Posted by Open Source Security Podcast on December 06, 2016 02:07 PM
Josh and Kurt talk to Michael Goetzman about Cyphercon

Download Episode

Show Notes


Fedora 26 release dates and schedule

Posted by Fedora Community Blog on December 06, 2016 08:15 AM

With the recent release of Fedora 25, the Fedora 26 release schedule is falling into place. The current Fedora 26 schedule projects a release date of June 6th, 2017. Fedora 26 Alpha is slated for release on March 14th, 2017 and Beta is aiming for May 9th, 2017.

These dates may change as development on Fedora 26 progresses, so always check the schedule for the most current dates.

The post Fedora 26 release dates and schedule appeared first on Fedora Community Blog.

The way to Linux Desktop

Posted by Luca Ciavatta on December 06, 2016 08:00 AM

Linux on the desktop can change the world

You can downvote them, disagree with them, glorify or vilify them. About the only thing you cannot do is ignore them. Because they ship your bug fixes.

They invent. They imagine. They heal. They explore. They create. They inspire. They push the human race forward.

An Ode to Linux Desktop Users Everywhere

Keystone development server on Fedora 25

Posted by Adam Young on December 06, 2016 03:42 AM

While Unit tests are essential to development, I often want to check the complete flow of a feature against a running keystone server as well. I recently upgraded to Fedora 25, and had to reset my environment. Here is how I set up for development.

Update: turns out there is more.

The Keystone server is unusual in that it requires no other OpenStack services in order to run. Most other services require a Keystone server, but Keystone itself only requires MySQL. As such, it is not worth the effort (and Python hassle) of running devstack. You can run the Keystone server right out of the source directory in a virtual environment.

The code I need for Keystone has been committed for a while. To start clean, I rebase my local git repository to master and run tox -r to recreate the virtual environment.

I’m going to use that virtual environment along with the directions on the official Keystone development site.

First I need a Database.

 sudo dnf -y  install mariadb-server
 sudo systemctl enable mariadb.service
 sudo systemctl start mariadb.service

Check that the MySQL monitor works.

$ mysql
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 2
Server version: 10.1.19-MariaDB MariaDB Server

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

Now, configure the database according to the official setup docs:

I want to end up with MySQL using SQLAlchemy via the following configuration:

connection = mysql+pymysql://keystone:keystone@127.0.0.1/keystone

This is what works on F25. It is a little different from the older install guides. I am running as the non-root user `ayoung`.

mysql -u root
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone';

That is not sufficient to connect, as shown by this test:

mysql -h 127.0.0.1 keystone -u keystone --password=keystone

Ensure MySQL is listening on a network socket.

$ getent services mysql
mysql                 3306/tcp
$ telnet 127.0.0.1 3306
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
Y

Turns out what I needed was:

GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost'    IDENTIFIED BY 'keystone';

This is not a production grade solution, but should work for development.

Enable the virtual environment:

. .tox/py27/bin/activate

Update /etc/keystone.conf as per the above doc and try the db sync:

keystone-manage db_sync
...
keystone-manage db_version
109

You will need uwsgi to run as the webserver. Don’t try to use the system package. On F24, at least, the system one was out of date. Since this is a development setup, let’s match the upstream approach and use pip to install it in the venv.

pip install uwsgi

Now try to run the server:

 uwsgi --http 127.0.0.1:35357 --wsgi-file $(which keystone-wsgi-admin)

And test:

curl localhost:35357
{"versions": {"values": [{"status": "stable", "updated": "2016-10-06T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.7", "links": [{"href": "http://localhost:35357/v3/", "rel": "self"}]}, {"status": "deprecated", "updated": "2016-08-04T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v2.0+json"}], "id": "v2.0", "links": [{"href": "http://localhost:35357/v2.0/", "rel": "self"}, {"href": "http://docs.openstack.org/", "type": "text/html", "rel": "describedby"}]}]}}

Now I want to run the bootstrap code to seed the database with initial data:

 keystone-manage bootstrap --bootstrap-password keystone

Remember to run the public port server in a separate console window (but also in the venv)

. .tox/py27/bin/activate
uwsgi --http 127.0.0.1:5000 --wsgi-file $(which keystone-wsgi-public )

To run the sample data (again in another venv window)

 pip install python-openstackclient
ADMIN_PASSWORD=keystone tools/sample_data.sh

Here is my keystone.rc file for talking to this server. The OS_IDENTITY_API_VERSION bypasses discovery, which is probably not a long term solution.

unset `env | awk -F= '/OS_/ {print $1}' | xargs`

export OS_USERNAME=admin
export OS_PASSWORD=keystone
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_DOMAIN_ID=default
export OS_IDENTITY_API_VERSION=3
export OS_AUTH_URL=http://127.0.0.1:5000/v3

Make sure token issue works:

. ~/devel/openstack/keystone.rc 
openstack token issue
+------------+-----------------------------------------------------------------+
| Field      | Value                                                           |
+------------+-----------------------------------------------------------------+
| expires    | 2016-12-06T01:09:23+0000                                        |
| id         | gAAAAABYRgGzX_ZixdkZBmS-Ut9uGphBhfSw8rdnTBBar6waqfrghdQWi3PLgjI |
|            | ah6HL9pxGvdmGm8pHCCos7yo4D28LRmROrSRf8Yy1dEE9bMQGcCrFuG4QCe_m2E |
|            | SdqNoB3LMhfCPyCbm3705_Blo_h6f5Cst-fLZuUFyItKkgo4BYZUDpGxk       |
| project_id | f84f16ef1f2f45cd80580329ab2c00b0                                |
| user_id    | bc72530345094d0e9ba53a275d2df9e8                                |
+------------+-----------------------------------------------------------------+

New udev property: XKB_FIXED_LAYOUT for keyboards that must not change layouts

Posted by Peter Hutterer on December 06, 2016 02:44 AM

This post mostly affects developers of desktop environments/Wayland compositors. A systemd pull request was merged to add two new properties to some keyboards: XKB_FIXED_LAYOUT and XKB_FIXED_VARIANT. If set, the device must not be switched to a user-configured layout but rather the one set in the properties. This is required to make fake keyboard devices work correctly out-of-the-box. For example, Yubikeys emulate a keyboard and send the configured passwords as key codes matching a US keyboard layout. If a different layout is applied, then the password may get mangled by the client.

Since udev and libinput are sitting below the keyboard layout there isn't much we can do in this layer. This is a job for those parts that handle keyboard layouts and layout configurations, i.e. GNOME, KDE, etc. I've filed a bug for gnome here, please do so for your desktop environment.

If you have a device that falls into this category, please submit a systemd patch/file a bug and cc me on it (@whot).

The future of xinput, xmodmap, setxkbmap, xsetwacom and other tools under Wayland

Posted by Peter Hutterer on December 05, 2016 08:42 PM

This post applies to most tools that interface with the X server and change settings in the server, including xinput, xmodmap, setxkbmap, xkbcomp, xrandr, xsetwacom and other tools that start with x. The one word to sum up the future for these tools under Wayland is: "non-functional".

An X window manager is little more than an innocent bystander when it comes to anything input-related. Short of handling global shortcuts and intercepting some mouse button presses (to bring the clicked window to the front) there is very little a window manager can do. It's a separate process from the X server; it does not receive most input events, and it cannot affect what events are being generated. When it comes to input device configuration, any X client can tell the server to change it - that's why general debugging tools like xinput work.

A Wayland compositor is much more, it is a window manager and the display server merged into one process. This gives the compositor a lot more power and responsibility. It handles all input events as they come out of libinput and also manages device's configuration. Oh, and instead of the X protocol it speaks Wayland protocol.

The difference becomes more obvious when you consider what happens when you toggle a setting in the GNOME control center. In both Wayland and X, the control center toggles a gsettings key and waits for some other process to pick it up. In both cases, mutter gets notified about the change but what happens then is quite different. In GNOME(X), mutter tells the X server to change a device property, the server passes that on to the xf86-input-libinput driver and from there the setting is toggled in libinput. In GNOME(Wayland), mutter toggles the setting directly in libinput.

Since there is no X server in the stack, the various tools can't talk to it. So to get the tools to work they would have to talk to the compositor instead. But they only know how to speak X protocol, and no Wayland protocol extension exists for input device configuration. Such a Wayland protocol extension would most likely have to be a private one since the various compositors expose device configuration in different ways. Whether this extension will be written and added to compositors is uncertain, I'm not aware of any plans or even intentions to do so (it's a very messy problem). But either way, until it exists, the tools will merely shout into the void, without even an echo to keep them entertained. Non-functional is thus a good summary.

Try Fedora in the cloud for free with Dply

Posted by Fedora Magazine on December 05, 2016 04:23 PM

Fedora 25 is now available on Dply. Dply is a new experimental cloud provider which lets you run an instance for two hours at a time — for free, with no catch. That means that with a few clicks, you can try Fedora 25 from the comfort of your home, school, or coffeeshop.

You’ll need:

  • A GitHub account
  • An SSH public key uploaded to that GitHub account

Launching a Dply server

Once you have those things, click this to get started. It’ll open a new browser tab or window:

dply-fedora

You’ll be prompted to sign in to GitHub. After that, you’ll see the Dply screen for launching a new server:

02-new-server-form

Fill in the various fields. Don’t forget the “Server Name” section above the line at the top. Pick a conveniently close location, and the “2 Hours (Free)” plan. Select the SSH key from your GitHub account. (Mine is named “ubik.”)

You can leave the User-data section as is. Or you can customize if you’re familiar with cloud-init and its #cloud-config syntax.
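For example, here is a minimal #cloud-config sketch that installs and starts a web server on first boot (the package and service names are just an illustration):

#cloud-config
packages:
  - httpd
runcmd:
  - systemctl enable --now httpd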

03-filled-form

Once you’re ready, press the “CREATE SERVER” button, and your cloud instance is built:

04-building

Soon, you’ll see a screen like this. Make note of the IP address — it’s the number after “Fedora 25” in the dark black box. You’ll need that to connect.

05-running

Now bring up a terminal window, or your favorite SSH client, if you’re not running Fedora already. At the prompt, type:

ssh fedora@NN.NN.NN.NN

Replace the NNs with the numbers from above. Of course, use yours, not the ones that appear in this example. (You can copy and paste them from your web browser.)

You’ll be prompted to accept a key fingerprint. Unfortunately there’s currently no way to validate it. So you’ll have to accept the (unlikely) possibility that someone is spoofing this brand-new host at this exact second. Type yes and hit Enter, and you’re in!

06-shell

You have access to sudo without additional prompting. So you can do things like sudo dnf install httpd to set up a web server — or anything else you want.

Other details

This service is hosted on Digital Ocean. The operating system image is Fedora Cloud Base, a minimal take on Fedora plus cloud-init for configuration. To convert to the batteries-included Fedora Server Edition, run:

sudo dnf -y install convert-to-edition
sudo convert-to-edition -p -e server

Among other things, this command installs and configures a firewall. We’re hoping to have Fedora Atomic Host available in Digital Ocean and Dply soon.

Please note, we don’t endorse or have a sponsorship relationship with Dply. We just think this is cool, and a neat way to get your hands on Fedora Cloud with little effort. If this doesn’t work for you, you can instead launch the Fedora Cloud Base image in Amazon EC2 or download for other platforms. See this mailing list post for more information.

Install PHP 7.1 on CentOS, RHEL or Fedora

Posted by Remi Collet on December 05, 2016 10:20 AM

Here is a quick howto to upgrade the default PHP version provided on Fedora, RHEL or CentOS to the latest version, 7.1.


Repositories configuration:

On Fedora, the standard repositories are enough; on Enterprise Linux (RHEL, CentOS) the Extra Packages for Enterprise Linux (EPEL) repository must be configured, and on RHEL the optional channel must be enabled.

Fedora 25

wget http://rpms.remirepo.net/fedora/remi-release-25.rpm
dnf install remi-release-25.rpm

Fedora 24

wget http://rpms.remirepo.net/fedora/remi-release-24.rpm
dnf install remi-release-24.rpm

RHEL version 7.2 or 7.3

wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
wget http://rpms.remirepo.net/enterprise/remi-release-7.rpm
rpm -Uvh remi-release-7.rpm epel-release-latest-7.noarch.rpm
subscription-manager repos --enable=rhel-7-server-optional-rpms

RHEL version 6.8

wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-6.noarch.rpm
wget http://rpms.remirepo.net/enterprise/remi-release-6.rpm
rpm -Uvh remi-release-6.rpm epel-release-latest-6.noarch.rpm
rhn-channel --add --channel=rhel-$(uname -i)-server-optional-6

CentOS version 7.2 (or 7.3)

wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
wget http://rpms.remirepo.net/enterprise/remi-release-7.rpm
rpm -Uvh remi-release-7.rpm epel-release-latest-7.noarch.rpm

CentOS version 6.8

wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-6.noarch.rpm
wget http://rpms.remirepo.net/enterprise/remi-release-6.rpm
rpm -Uvh remi-release-6.rpm epel-release-latest-6.noarch.rpm


remi-php71 repository activation

Needed packages are in the remi-safe (enabled by default) and remi-php71 repositories; the latter is not enabled by default (administrator choice according to the desired PHP version).

RHEL or CentOS

yum install yum-utils
yum-config-manager --enable remi-php71

Fedora

dnf install dnf-plugins-core
dnf config-manager --set-enabled remi-php71


PHP upgrade

By choice, the packages have the same name as in the distribution, so a simple update is enough:

yum update

That's all :)

$ php -v
PHP 7.1.0 (cli) (built: Dec  1 2016 06:23:20) ( NTS )
Copyright (c) 1997-2016 The PHP Group
Zend Engine v3.1.0-dev, Copyright (c) 1998-2016 Zend Technologies
    with Zend OPcache v7.1.0, Copyright (c) 1999-2016, by Zend Technologies
    with Xdebug v2.5.0, Copyright (c) 2002-2016, by Derick Rethans


Known issues

The upgrade can fail (by design) when some installed extensions are not yet compatible with  PHP 7.

See the compatibility tracking list: PECL extensions RPM status

If these extensions are not mandatory, you can remove them before the upgrade; otherwise, you will have to be patient.

Warning: some extensions are still under development (memcache, redis...), but it seems useful to provide them to allow more people to upgrade, and to let users give feedback to the authors.


More information

If you prefer to install PHP 7 beside PHP 5, this can be achieved using the php71-prefixed packages; see the PHP 7.1 as Software Collection post.

You can also try the configuration wizard.

The packages available in the repository will be used as source for Fedora 26 (self contained change proposal already accepted).

By providing a full-featured PHP stack, with about 150 available extensions, 4 PHP versions, as base and SCL packages, for Fedora and Enterprise Linux, and with 100,000 downloads per day, the remi repository has become over the last 10 years a reference for PHP users on RPM-based distributions, maintained by an active contributor to the projects (Fedora, PHP, PECL...).

See also:

libinput now requires axis resolutions for graphics tablets

Posted by Peter Hutterer on December 05, 2016 01:52 AM

I pushed the patch to require resolution today, expect this to hit the general public with libinput 1.6. If your graphics tablet does not provide axis resolution we will need to add a hwdb entry. Please file a bug in systemd and CC me on it (@whot).

How do you know if your device has resolution? Run sudo evemu-describe against the device node and look for the ABS_X/ABS_Y entries:


# Event code 0 (ABS_X)
# Value 2550
# Min 0
# Max 3968
# Fuzz 0
# Flat 0
# Resolution 13
# Event code 1 (ABS_Y)
# Value 1323
# Min 0
# Max 2240
# Fuzz 0
# Flat 0
# Resolution 13
If the Resolution value is 0, you'll need a hwdb entry or your tablet will stop working in libinput 1.6. You can file the bug now and we can get it fixed, that way it'll be in place once 1.6 comes out.
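For the curious, these entries live in systemd's hwdb (60-evdev.hwdb). A rough sketch of what one might look like, with made-up bus/vendor/product IDs and values, assuming the EVDEV_ABS_* fields follow the min:max:resolution:fuzz:flat layout (check 60-evdev.hwdb itself for the exact match syntax):

# hypothetical tablet, placeholder IDs and values
evdev:input:b0003v056Ap0042*
 EVDEV_ABS_00=::13
 EVDEV_ABS_01=::13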

Please don't use pastebins in bugs

Posted by Peter Hutterer on December 05, 2016 01:51 AM

pastebins are useful for dumping large data sets whenever the medium of conversation doesn't make this easy or useful. IRC is one example, or audio/video conferencing. But pastebins only work when the other side looks at the pastebin before it expires, and the default expiry date for a pastebin may only be a few days.

This makes them effectively useless for bugs where it may take a while for the bug to be triaged and the assignee to respond. It may take even longer to figure out the source of the bug, and if there's a regression it can take months to figure it out. Once the content disappears we have to re-request the data from the reporter. And there is a vicious dependency too: usually, logs are more important for difficult bugs. Difficult bugs take longer to fix. Thus, with pastebins, the more difficult the bug, the more likely the logs become unavailable.

All useful bug tracking systems have an attachment facility. Use that instead, it's archived with the bug and if a year later we notice a regression, we still have access to the data.

If you got here because I pasted the link to this blog post, please do the following: download the pastebin content as raw text, then add it as attachment to the bug (don't paste it as comment). Once that's done, we can have a look at your bug again.

Problem: RPi with Fedora and python-picamera

Posted by Tonet Jallo on December 05, 2016 01:34 AM

pi-camera-in-hand

Hello again. A little while ago I got a Raspberry Pi, which I have named PIerina (I usually name all my devices), and I also bought the camera module for PIerina. Most things work fine with distros like Raspbian, but since I chose the blue pill (Fedora), things are not as comfortable because not everything works correctly. One of those things is the picamera Python library, which threw the following error:

Traceback (most recent call last):
File "./take_photo.py", line 4, in <module>
import picamera
File "/usr/lib/python2.7/site-packages/picamera/__init__.py", line 72, in <module>
from picamera.exc import (
File "/usr/lib/python2.7/site-packages/picamera/exc.py", line 41, in <module>
import picamera.mmal as mmal
File "/usr/lib/python2.7/site-packages/picamera/mmal.py", line 47, in <module>
_lib = ct.CDLL('libmmal.so')
File "/usr/lib/python2.7/ctypes/__init__.py", line 357, in __init__
self._handle = _dlopen(self._name, mode)
OSError: libmmal.so: cannot open shared object file: No such file or directory

Most of this message doesn't interest us, only the last line:

OSError: libmmal.so: cannot open shared object file: No such file or directory

This error occurs because the Raspberry Pi uses its own kernel, with its own libraries (for its modules) and its own binaries. One of those libraries is libmmal.so, which can be found in /opt/vc/lib/. BUT the system does not recognize these libraries by default (and libmmal.so is not the only one).

To fix this error we have to add that directory to the set of library directories the dynamic linker knows about, and that is done as follows:

# echo "/opt/vc/lib/" > /etc/ld.so.conf.d/rpi.conf

# ldconfig

The first command creates a new file containing the path of a directory holding libraries, and the second command tells the system that a new library directory has been registered.
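To double-check that the dynamic linker now finds the library, you can grep its cache (a quick check; the grep pattern is arbitrary):

# ldconfig -p | grep mmal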

Once this is done, the picamera library can be used without any problem.
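For instance, a minimal sketch of taking a photo (the output file name is a placeholder):

# take a still photo with the camera module
import time
import picamera

with picamera.PiCamera() as camera:
    camera.start_preview()
    time.sleep(2)  # let the sensor adjust exposure
    camera.capture('photo.jpg')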

I hope this helps, and…

Happy hacking…



Overridden let() causes segfault with RSpec

Posted by Alexander Todorov on December 04, 2016 08:48 PM

Last week Anton asked me to take a look at one of his RSpec test suites. He was able to consistently reproduce a segfault which looked like this:

/home/atodorov/.rbenv/versions/2.3.2/lib/ruby/gems/2.3.0/gems/rspec-core-3.5.4/lib/rspec/core/runner.rb:113: [BUG] vm_call_cfunc - cfp consistency error
ruby 2.3.2p217 (2016-11-15 revision 56796) [x86_64-linux]

-- Control frame information -----------------------------------------------
c:0013 p:---- s:0048 e:000047 CFUNC  :map
c:0012 p:0011 s:0045 e:000044 BLOCK  /home/atodorov/.rbenv/versions/2.3.2/lib/ruby/gems/2.3.0/gems/rspec-core-3.5.4/lib/rspec/core/runner.rb:113
c:0011 p:0035 s:0043 e:000042 METHOD /home/atodorov/.rbenv/versions/2.3.2/lib/ruby/gems/2.3.0/gems/rspec-core-3.5.4/lib/rspec/core/configuration.rb:1835
c:0010 p:0011 s:0040 e:000039 BLOCK  /home/atodorov/.rbenv/versions/2.3.2/lib/ruby/gems/2.3.0/gems/rspec-core-3.5.4/lib/rspec/core/runner.rb:112
c:0009 p:0018 s:0037 e:000036 METHOD /home/atodorov/.rbenv/versions/2.3.2/lib/ruby/gems/2.3.0/gems/rspec-core-3.5.4/lib/rspec/core/reporter.rb:77
c:0008 p:0022 s:0033 e:000032 METHOD /home/atodorov/.rbenv/versions/2.3.2/lib/ruby/gems/2.3.0/gems/rspec-core-3.5.4/lib/rspec/core/runner.rb:111
c:0007 p:0025 s:0028 e:000027 METHOD /home/atodorov/.rbenv/versions/2.3.2/lib/ruby/gems/2.3.0/gems/rspec-core-3.5.4/lib/rspec/core/runner.rb:87
c:0006 p:0085 s:0023 e:000022 METHOD /home/atodorov/.rbenv/versions/2.3.2/lib/ruby/gems/2.3.0/gems/rspec-core-3.5.4/lib/rspec/core/runner.rb:71
c:0005 p:0026 s:0016 e:000015 METHOD /home/atodorov/.rbenv/versions/2.3.2/lib/ruby/gems/2.3.0/gems/rspec-core-3.5.4/lib/rspec/core/runner.rb:45
c:0004 p:0025 s:0012 e:000011 TOP    /home/atodorov/.rbenv/versions/2.3.2/lib/ruby/gems/2.3.0/gems/rspec-core-3.5.4/exe/rspec:4 [FINISH]
c:0003 p:---- s:0010 e:000009 CFUNC  :load
c:0002 p:0136 s:0006 E:001e10 EVAL   /home/atodorov/.rbenv/versions/2.3.2/bin/rspec:22 [FINISH]
c:0001 p:0000 s:0002 E:0000a0 (none) [FINISH]

Googling for vm_call_cfunc - cfp consistency error yields Ruby #10460. Comments on the bug and particularly this one point towards the error:

> Ruby is trying to be nice about reporting the error; but in the end,
> your code is still broken if it overflows stack.

Somewhere in the test suite was a piece of code that was overflowing the stack. It was something along the lines of

describe '#active_client_for_user' do
  context 'matching an existing user' do
    it_behaves_like 'manager authentication' do
      include_examples 'active client for user with existing user'
    end
  end
end

Considering the examples in the bug I started looking for patterns where a variable was defined and later redefined, possibly circling back to the previous definition. Expanding the shared examples by hand transformed the code into

 1  describe '#active_client_for_user' do
 2    context 'matching an existing user' do
 3      let(:user) { create(:user, :manager) }
 4      let!(:api_user_authentication) { create(:user_authentication, user: user) }
 5      let(:user) { api_user_authentication.user }
 6
 7      context 'with an `active_assigned_client`' do
 8        ... skip ...
 9      end
10
11      ... skip ...
12    end
13  end

Line 5 overrode line 3. Line 4 was executed first and, because of lazy evaluation, the call path became 4-5-4-5-4-5… an infinite recursion that overflows the stack. NOTE: I think we need a warning about that in RuboCop, see RuboCop #3769. The fix however is a no-brainer:

-  let(:user) { create(:user, :manager) }
-  let!(:api_user_authentication) { create(:user_authentication, user: user) }
+  let(:manager) { create(:user, :manager) }
+  let!(:api_user_authentication) { create(:user_authentication, user: manager) }

Thanks for reading and happy testing.

Airports, Goats, Computers, and Users

Posted by Josh Bressers on December 04, 2016 05:29 PM
Last week I had the joy of traveling through airports right after the United States Thanksgiving holiday. Now I don't know how many of you have ever tried to travel the week after Thanksgiving, but it's kind of crazy: there are a lot of people, way more than usual, and a significant number of them have probably never been on an airplane, or if they travel by air they don't do it very often. The joke I like to tell people is that there are folks at the airport wondering why they can't bring their goat onto the airplane. I’m not going to use this post to discuss the merits of airport security (that’s a whole different conversation), it’s really about coexisting with existing security systems.


Now on this trip I didn't see any goats. I was hoping to see something I could classify as truly bizarre, so this was a disappointment to me. There were two dogs, but they were surprisingly well behaved. However, all the madness I witnessed got me thinking about security in an environment where a substantial number of the users are woefully unaware of the security all around them. The frequent travelers know how things work; they keep it moving smoothly, they’re aware of the security and make sure they stay out of trouble. It’s not about whether something makes you more or less secure, it’s about the goal of getting from the door to the plane as quickly and painlessly as possible. Many of the infrequent travelers aren’t worried about moving through the airport quickly, they’re worried about getting their stuff onto the plane. Some of this stuff shouldn’t be brought through an airport.


Now let’s think about how computer security works for most organizations. You’re not dealing with the frequent travelers, you’re dealing with the holiday horde trying to smuggle a jug of motor oil through security. It’s not that these people are bad or stupid, it’s really just that they don’t worry about how things work; they’re not going to be back in the airport until next Thanksgiving. In a lot of organizations the users aren’t trying to be stupid, they just don’t understand security in a lot of instances. Browsing Facebook on the work computer isn’t seen as a bad idea, it’s their version of smuggling contraband through airport security. They don’t see what it hurts, and they’re not worried about the general flow of things. If their computer gets ransomware it’s not really their problem. We’ve pushed security off to another group nobody really likes.


What does this all mean? I’m not looking to solve this problem, it’s well known that you can’t fix problems until you understand them. I just happened to notice this trend while making my way through the airport, looking for a goat. It’s not that users are stupid, they’re not as clueless as we think either, they’re just not invested in the process. It’s not something they want to care about, it’s something preventing them from doing what they want to. Can we get them invested in the airport process?


If I had to guess, we’re never going to fix users; we have to fix the tools and the environment.

Fedora and GNOME at the Engineering Week of UPIG

Posted by Julita Inca Chiroque on December 04, 2016 07:19 AM

I was invited today to present two free software projects, GNOME and Fedora, at UPIG (Universidad Peruana de Integración Global). The university celebrated its “Engineering Week” throughout the whole week. This was the advertisement they used to announce the workshop, which lasted two hours. Admission was free, with a certification fee of S/.25.

The lab computers at the university had Windows installed, and during the workshop we were only allowed to install VirtualBox with a local ISO. The computers did not even have permission to read DVDs. So I gave an introduction to projects these students had never heard of before.

I did what I usually do during my workshops, such as giving prizes to people who answer questions about what I have already explained, taking a photo of the entire group, and playing with Fedora and GNOME balloons. They got engaged and asked me for more links about Linux history, as well as about contributing to these projects.

Thank you so much for organizing this event and for helping us to spread the Linux word in universities in Lima, Peru. Thanks Lizbeth Lucar and Solanch Casas.

This is not going to be my last presentation in Lima to spread the GNOME and FEDORA word. I have already arranged a couple more presentations at universities such as UNTELS and UNMSM. Let’s expect more attendees next time, after final exams 😉


Filed under: FEDORA, GNOME Tagged: fedora, GNOME, Julita Inca, Julita Inca Chiroque, linux event, Perú, Semana de Ingenieria, UPIG

Fedora 25 Release Party in Managua

Posted by William Moreno Reyes on December 04, 2016 05:11 AM
Today part of the local Fedora community team in Nicaragua met on the occasion of the release of Fedora 25.



Unlike other release events we have done, which featured talks and were open to the public, in this meeting we looked inward at the local community and planned what activities to carry out in 2017.



The attendees at this meeting were:

What do we not want to do in 2017?

At the beginning of the conversation Neville raised something interesting: what do we not want to do? After getting that list out of the way, we could focus on the tasks we do want to carry out.

  • We do not want to keep repeating the same topics that have been covered in previous years.
  • We want to be more selective about participating in local events; there are simply events that contribute nothing to the Fedora community, but where we have always had a presence just to avoid looking bad to whoever invites us to participate.
  • We do not want collaborating with Fedora to stop being fun; at the end of the day it is a voluntary activity.

Plans for 2017

  1. Organize a couple of Fedora 25 and 26 release parties at universities; we consider that communicating the work the Fedora Project does to promote advancement and innovation in free software is one of the main tasks of an ambassador.
  2. Create material that is reusable; we plan to produce a series of videos covering basic topics related to Fedora and free software in general, and even use Moodle to create an online course so people can take advantage of the available material without having to be physically present.
  3. If we want to bring more collaborators to the Fedora Project we need to expand the user base, so we must continue to have a presence at local events and universities.
  4. Get closer to the local community in Bilwi; they have been quite active, and we know they are a community that mostly uses Ubuntu, but we believe that with our support they can adopt Fedora as their main operating system.
To fulfill these plans we basically want to reactivate the Fedora School, continue to have a presence in local activities, and organize a Fedora Day in Bilwi, in addition to the traditional Fedora release parties.

Video: Porting Fedora to 64-bit ARM systems

Posted by Frederico Lima on December 04, 2016 01:11 AM

Video: Porting Fedora to 64-bit ARM systems

https://www.youtube.com/watch?v=ja7rQMKRKoU

Video: Understanding the Fedora - Red Hat Relationship - Denise Dumas

Posted by Frederico Lima on December 04, 2016 12:50 AM

Video: Understanding the Fedora - Red Hat Relationship - Denise Dumas

https://www.youtube.com/watch?v=qJ3CozFrEvg

Video on the Fedora Project's 2016 Metrics

Posted by Frederico Lima on December 03, 2016 10:58 PM

A video from the Fedora Project's official YouTube channel about the Fedora Project's 2016 metrics.

https://www.youtube.com/watch?v=DklOE_X2dM8

Free Software Group on Telegram. #ultraGNU

Posted by Frederico Lima on December 03, 2016 10:51 PM

Sharing a Free Software group on Telegram. Serious stuff, from the roots, aligned with the Free Software philosophy.

https://telegram.me/ultraGNU

Join the Fedora Brasil Group on Telegram

Posted by Frederico Lima on December 03, 2016 10:47 PM

Join the Fedora Brasil group on Telegram, which already has more than 400 members

https://telegram.me/fedorabr