Fedora People

Deploy Windows 2016 AD and Fedora 25 IPA with a One-way Trust

Posted by Striker Leggette on May 30, 2017 05:38 AM

For the purpose of this post, the two machines I used for these instructions are VMs running atop a Fedora 25 hypervisor.

Make sure IPA and AD are on separate domains.  Otherwise, IPA clients will query the AD server directly when they look up the domain's LDAP SRV records in DNS.  (Example: ipa-server-1.linux.example.com and ad-server-1.example.com.)
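
To see which server a client will actually discover, you can query those SRV records directly; a quick illustrative check using the example domains above:

dig +short SRV _ldap._tcp.linux.example.com
dig +short SRV _ldap._tcp.example.com

The first query should point at the IPA server, the second at the AD domain controller.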

Deploying Windows 2016 AD

  1. On my first VM, I booted using a Trial ISO of Windows Server 2016.
  2. Begin the installation with Windows Server 2016 Standard Evaluation (Desktop Experience).
  3. After the machine boots from installation, configure the Hostname:
    1. Server Manager – Local Server.
    2. Click on the machine’s current hostname.
    3. Click Change and change the hostname to your preference.
      • Example: win16ad01
  4. Configure Active Directory and DNS:
    1. Server Manager – Dashboard.
    2. Add Roles and Features.
    3. For Installation Type, choose Role-based or feature-based installation.
    4. For Server Roles, click Active Directory Domain Services and DNS Server.
    5. Within Server Manager, go to AD DS and click on More.
    6. Click on Promote this server to a domain….
    7. In the next window, choose Add a new Forest.
      • Here, set the full DN of your Forest.
        • Example: win.terranforge.com

Deploying Fedora 25 IPA

  1. For the second VM, I booted using the HTTP link to Fedora 25 Server.
  2. During pre-installation:
    1. Choose Minimal at Software Selection.
    2. In Network & Host Name, set the full hostname of the machine.
      • Example: f25ipa01.linux.terranforge.com
    3. Make sure to give /var a large amount of space, as this is where the IPA Database and Logs will be stored.
  3. After installation and reaching a root prompt:
    1. Install the IPA packages and the RNG package:
      • dnf install ipa-server ipa-server-dns ipa-server-trust-ad rng-tools -y
      • The RNG daemon feeds extra entropy into the kernel pool for use during certificate database creation; without it, that step can take a very long time to complete. (A quick entropy check is shown after this list.)
    2. Open the correct ports that IPA will use:
      • firewall-cmd --add-port=80/tcp --add-port=443/tcp --add-port=389/tcp --add-port=636/tcp --add-port=88/tcp --add-port=464/tcp --add-port=135/tcp --add-port=138/tcp --add-port=139/tcp --add-port=445/tcp --add-port=1024-1300/tcp --add-port=88/udp --add-port=464/udp --add-port=123/udp --add-port=138/udp --add-port=139/udp --add-port=389/udp --add-port=445/udp --permanent
      • firewall-cmd --reload
    3. Start the RNG Daemon:
      • systemctl start rngd
    4. Configure the IPA instance:
      • ipa-server-install --setup-dns
        1. For Server host name, press Enter (Hostname was set during pre-install).
        2. For Domain name and Realm name, press Enter.
        3. Press Enter when prompted for DNS forwarders.
        4. For Enter an IP address for a DNS forwarder, enter the IP address of your Windows 2016 AD server.
        5. Type yes and press Enter to finalize the pre-configuration and begin installation.
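
Before starting the installer, you can confirm that rngd is keeping the kernel entropy pool filled (the check referenced in step 1 above); values in the thousands indicate a well-fed pool:

cat /proc/sys/kernel/random/entropy_avail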

Configure the One-way Trust

  1. From the Fedora root prompt, prepare IPA for the trust:
    • ipa-adtrust-install
    • All options should be default.
  2. Configure and Verify the trust:
    1. ipa trust-add --type=ad <DNS Domain for Windows AD> --admin Administrator --password
      • Example: ipa trust-add --type=ad win.terranforge.com --admin Administrator --password
    2. id administrator@win.terranforge.com

Get involved and ask questions

You can get in touch with the IPA community by joining the #freeipa and #sssd channels on Freenode and the freeipa-users and sssd-users mailing lists.


Welcome to Google Summer of Code

Posted by David Carlos on May 30, 2017 12:24 AM

This is the first post of this blog, and as the first post I would like to announce that I was accepted into Google Summer of Code (GSoC) this year. GSoC is a Google program that runs during the summer, with the objective of encouraging students all over the world to contribute to free and open-source software. The students choose an organization to contribute to, and submit a proposal for a new project, or for an existing project proposed by the organization. The organization I submitted a proposal to was Fedora, a Red Hat-sponsored Linux distribution developed by a great community of contributors from around the world. I have been using Linux for a long time, and now it's time to really start contributing to the community, helping to track the quality of the source code that goes into each package made available by Fedora.

As a software developer I have always been interested in static analysis, and the benefits that such a practice can bring to the software development cycle. Many tools have been developed for this purpose, but a system that permits the developer to easily integrate such analysis into their development cycle does not exist. A tool that tried to accomplish this goal was Debile, developed in the context of the Debian distribution. The main problem with Debile is that it is tightly coupled to the Debian infrastructure, which makes it impossible to run the analysis system on other sources of code. Our proposal for Google Summer of Code is to build an extensible system that, with a few steps from the developer, can continuously monitor and collect static analysis data from different sources of code. This idea was initially discussed on the devel mailing lists by the folks in the Static Analysis SIG on Fedora. All the collected data will be stored in a database and made available to developers who want to monitor the quality of some source code. This system is called kiskadee [1], and it can already monitor some Debian mirrors. Now our objective is to monitor the Fedora repos and integrate more static analyzers into kiskadee.

You can read my proposal here to understand our objective better, and follow the development process here.

I will make weekly posts, reporting the status of kiskadee development. Let's Code :)

[1] The great kiskadee is a bird that watches its prey (usually bugs) and catches it.

A quick Rawhide status report, episode 4, May 2017

Posted by Charles-Antoine Couret on May 29, 2017 10:00 PM

As promised, roughly every month I try to keep the community up to date on my adventures with Rawhide / the next stable release. From a purely technical point of view, at this stage I am not running Fedora Rawhide but the future F26.

Fedora 26 Beta is delayed

Yes, as is often the case, the next Fedora will once again be a little late. Normally the Beta should be released within a week, but yet another delay due to blocker bugs cannot be ruled out.

Since last time there have been three test days, covering:

I took part in all three and, apart from one common bug in DNF concerning langpacks, there was nothing to report; everything went well!

Notable changes

In terms of notable changes, I have not noticed much that is new since last time. With GNOME in its stabilization phase, that is normal.

The big news, from a more technical point of view, is perhaps the recent arrival of LLVM 4.0 and Clang 4.0, which the latest release of Mesa takes advantage of for graphics acceleration. This makes it possible to use the latest OpenGL versions across all recent drivers.

However, I have the impression that my machine has been running hotter since then, but that may be related to the rising temperature outside. To be investigated.

Problems encountered?

This time, there were more than usual.

The DNF bug found during the test days is that if you uninstall a system-wide translation (a langpack), the translation of the glibc package persists on the system. Nothing serious, but the feature is not yet fully polished.

There is a bug affecting Bodhi clients, where the captcha cache becomes invalid and the only way to regenerate it is to delete the ~/.fedora directory. This particularly affects the fedora-easy-karma utility, which I use daily.

An update in the Wayland stack made GNOME keyboard shortcuts stop working. All GNOME applications, and only those, were affected. This was particularly annoying, but it now seems resolved.

A fairly elusive problem: gnome-control-center would no longer launch, failing to find the libwbclient.so.0 library even though it was installed. The workaround is to reinstall the libwbclient package.

I also had a few GNOME crashes that seem to have disappeared before I could report them.

In short, work is progressing; there are a few issues, but overall things are going well. Let's hope the testers' feedback will help make the new Fedora very stable.

Stealing from customers

Posted by Josh Bressers on May 29, 2017 09:59 PM
I was having some security conversations last week and cybersecurity insurance came up as a topic. This isn't overly unusual as it's a pretty popular topic, but someone said something that really got me thinking.
What if the insurance covered the customers instead of the companies?
Now I understand that many cybersecurity insurance policies can cover some amount of customer damage and loss, but fundamentally the coverage is for the company that is attacked; customers who have data stolen will maybe get a year of free credit monitoring or some other token service. That's all well and good, but I couldn't help thinking about this problem from another angle. Let's think about insurance in the context of shoplifting. For this thought exercise we're going to use a real store in our example, which won't be exactly correct, but the point is to think about the problem, not get all the minor details correct.

If you're in a busy store shopping and someone steals your wallet, it's generally accepted that the store is not at fault for this theft. Most would put some effort into helping you, but at the end of the day you're probably out of luck if you expect the store to repay you for anything you lost. They almost certainly won't have insurance to cover the theft of customer property in their store.

Now let's also imagine there are things taken from the store, actual merchandise gets stolen. This is called shoplifting. It has a special name and many stores even have special groups to help minimize this damage. They also have insurance to cover some of these losses. Most businesses see some shoplifting as a part of doing business. They account for some volume of this theft when doing their planning and profit calculations.

In the real world, I suspect customers being robbed while in a store isn't very common. If there is a store that gains a reputation for customers having wallets stolen, nobody will shop there. If you visit a store in a rough part of town they might even have a security guard at the door to help keep the riffraff out. This is because no shop wants to be known as a dangerous place. You can't exist as a store with that sort of reputation. Customers need to feel safe.

In the virtual world, all that can be stolen is basically information. Sometimes that information can be equated to actual money, sometimes it's just details about a person. Some will have little to no value like a very well known email address. Sometimes it can have a huge value like a tax identifier that can be used to commit identity theft. It can be very very difficult to know when information is stolen, but also the value of that information taken can vary widely. We also seem to place very little value on our information. Many people will trade it away for a trinket online worth a fraction of the information they just supplied.

Now let's think about insurance. Just like loss prevention insurance, cybersecurity insurance isn't there to protect customers. It exists to help protect the company from the losses of an attack. If customer data is stolen the customers are not really covered; in many instances there's nothing a customer can do. It could be impossible to prove your information was stolen; even if it gets used somewhere else, can you prove it came from the business in question?

After spending some time on the question of what if insurance covered the customers, I realize how hard this problem is to deal with. While real-world customer theft isn't very common and is basically not covered, there's probably no hope for information. It's so hard to prove things beyond a reasonable doubt, and many of our laws require actual harm to happen before any action can be taken. Proving this harm is very, very difficult. We're almost certainly going to need new laws to deal with these situations.

Flock 2017 registration and submissions open

Posted by Fedora Magazine on May 29, 2017 02:24 PM

Planning is heavily underway for the annual Fedora contributors conference, Flock 2017. The conference is in Cape Cod, Massachusetts USA from August 29 – September 1, 2017. If you’re a contributor, or want to become one, here are some ways you can get involved.

Flock registration

First, registration is now open on the website. The registration this year includes a small fee to offset swag and setup costs per attendee. The fee for USA attendees is $25.

This fee has been scaled via the Big Mac Index to other countries and geographic areas. This means the fee in each country should roughly be the same level of spending, rather than the exact equivalent in local currency. That makes it easier for people in each area to register.

When you register you also have the option to cover other people’s fees. This means anyone can contribute to make it easier for someone else to attend.

Submissions: Talks and workshops

Second, the call for submissions for talks and workshops is also open. However, before submitting, take heed. This year’s conference is highly focused on getting things done! So instead of “state of the project” talks, submissions are encouraged to focus on building skills and participation. Here are some examples of better submission topics:

  • Setting up and using Fedora Atomic Host
  • Gathering user feedback on a Fedora web app
  • Writing package tests in dist-git using Ansible

You can submit your talk or workshop on the same Flock registration site until June 15, 2017.

Other resources

There is also a mailing list for communicating with other attendees, as well as a Freenode IRC channel. The website also lists several hints for transportation to the event venue.

We encourage you to get your registration and submission in as soon as you can — the conference will fill up quickly!

IBus 1.5.16 is released

Posted by Takao Fujiwara on May 29, 2017 08:21 AM

IBus 1.5.16 is now released and it’s available in Fedora 26:


# dnf update --enablerepo=updates-testing ibus

Also it’s available in Fedora Copr for Fedora 25.

This release enhances the emoji typing functionality of IBus Emojier:

(Video demo: https://www.youtube.com/watch?v=F5yViKrEf4M)
  • Control-f, Control-b, Control-n, Control-p, Control-h, Control-e, Control-a, Control-u for cursor operations
  • Focus on emoji annotation entry by default
  • The language setting of emoji annotations is moved into ibus-setup
  • Man page (ibus-emoji.7) is added
  • The ibus emoji command is available for desktops without an IBus panel, such as GNOME
  • Favorites category is updated by selecting an emoji
  • Change the modal dialog (popup window) to the modeless dialog
  • Insert multiple Unicode points with Shift-space key. E.g. 1f466 Shift-space 1f3fb
  • Provide an option to match emoji annotations with partial match
  • Hide emoji variants by default
  • Enable custom annotations via gsettings
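
On a desktop without an IBus panel, the emojier can then be launched straight from a terminal using the command mentioned above:

ibus emoji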

Emoji color rendering is not supported yet, but you can enable it with the patched cairo in Fedora Copr.

If you build ibus from source, you can get the Unicode emoji files from http://www.unicode.org/Public/emoji/4.0/, or Fedora provides the unicode-emoji package. Version 4.0 is recommended since 5.0 has not been released officially yet.
You can get the emoji annotation files from https://github.com/fujiwarat/cldr-emoji-annotation, or Fedora provides the cldr-emoji-annotation package.

Event Report – May 17, LGBTQA Awareness Day

Posted by Fedora Community Blog on May 29, 2017 06:55 AM

May 17 is recognized across the globe as the International Day Against Homophobia, Transphobia and Biphobia. The Fedora Diversity team organized an online event (video call) to acknowledge and celebrate the diversity in Fedora for the first time on May 17, 2017. The event was meant to raise awareness of the violence and discrimination experienced by LGBTQA communities worldwide, which in turn provides an opportunity to take action and engage in dialogue within the Fedora community.

40% of the world population is from LGBTQA communities, and they have to live under constant fear of crime. Research on the wiki reports that between 2008 and 2014, more than 1000 trans people were killed. Sexual and gender minorities face attacks and criticism, and their human rights are denied on a daily basis. One in six LGBTQA people faces criminal attacks, and incidents do not get reported. What broke my heart the most is that sometimes these people have to change their behavior in public to save themselves from the hate.

These facts make it very important to transparently and visibly show that Fedora is a diverse and inclusive community, so that this message can reach everyone, including people who have to live in a constant state of fear. Fedora does not support any kind of discrimination and welcomes everyone regardless of gender, culture, belief, sexual orientation and religion.

We had Adam Williamson (Fedora contributor) on the call, who helped us understand the topic in a deeper context. We also had Brian Exelbierd on the call, whose presence helped us learn more about where Fedora currently stands in terms of the Code of Conduct and policies, and about the scope for accommodating the improvements discussed. Dolores Portalatin (an artist, programmer, and social activist), Rhea (a Fedora contributor and a bisexual person) and Sumantro (a Fedora contributor) also shared their thoughts on the topic. Many other people joined and helped us make the event successful. I am very thankful to all of them, as they participated with time, energy, and activism on the event call.

As it was a short call of one hour, and the aim was to identify actions and provide an open forum to drill down into the issues, we chose only critical items for the agenda. The topics mainly involved:

  • Awareness of any existing problems
  • How to improve and become more inclusive
  • Understanding the challenges
  • Future initiatives or directions

It makes me feel great that the takeaway was quite impressive.

Here are the major highlights:

1. Behavior and Moderation – We all come from different cultures and carry different backgrounds with us. A small question, or even a compliment, may be correct in your opinion but may not be appropriate for others. Before giving comments, and also compliments, we always need to be aware that they should not offend someone. Asking someone’s gender publicly or making racist jokes is not appropriate. There are IRC user guidelines already present in the Fedora wiki, and also IRC operator guidelines, but they are not very well known. It will be good and helpful to spread awareness of their existence. Greater visibility of our processes and guidelines will surely help.

2. Code of Conduct Expansion – One of the major points of the discussion was the code of conduct in Fedora. Our present code of conduct is concise and to the point, and there is an opportunity to expand it. The Diversity team has done ground work to offer ideas for expanding our CoC, which is being considered by the Fedora Council now. In particular, we believe that explicitly mentioning diversity & inclusion and the LGBTQA community will both make Fedora’s efforts more visible and provide documented reassurance to members of the community that we understand.

3. Policies and Guidelines – The creation and existence of policies and guidelines specifically for LGBTQA individuals can demonstrate our free and open culture in the Fedora community and make it more transparent. It also helps contributors understand the action plan in case of a policy violation. People feel safer and at less risk when such policies and guidelines are crafted carefully.

4. Reporting Issues and Awareness – It was noted on the call that there is no documented method of reporting incidents. bex shared that one can open a private council ticket for reporting issues, which is good. But people feel more secure when a one-to-one communication channel is provided in such scenarios, both for privacy and to feel more confident while sharing sensitive data and information. Therefore bex is also going to suggest that specific people be identified when the council updates the CoC to include reporting information. While we build solutions in this area, we need to make sure that we spread awareness about them, so that people can use them when in need.

5. Tooling – Beyond the above points, there is a need for accommodations to help a range of contributors, for example by providing subtitles for videos, transcriptions for video calls, etc. For LGBTQA people, there may need to be additional conversations around room sharing options when traveling on Fedora travel budgets. People may not open up or feel comfortable sharing their gender identity publicly on a registration form either, so giving a contact person’s email ID may help in such cases.

Events like this make our perspective clearer. I am privileged and honored to be part of such a diverse community. I am sure the experience was inspiring for everyone who joined, and we crafted a good list of action items for us as the Diversity team.

Love Fedora, hate homophobia, transphobia and biphobia.

The post Event Report – May 17, LGBTQA Awareness Day appeared first on Fedora Community Blog.

Slice of Cake #9

Posted by Brian "bex" Exelbierd on May 29, 2017 02:52 AM

A slice of cake

Last week as FCAIC I:

  • Contributed toward the streamlining of getting content onto the Fedora Community Blog, along with many others in CommOps.
  • OMG the financials. They never end. An entire day of reporting done :).
  • Lots of little cleanups and meetings before travel, including getting this slice of cake posted. You probably won’t hear from me for a few weeks.
  • Attended my first Fedora Readiness Meeting for docs and earned the snazzy readiness badge.

A la Mode

  • I attended an OpenShift training to learn more about this PaaS platform. I also hope this will help me with an exam to extend my RHCE, which expires soon.

Cake Around the World

I’ll be traveling to:

  • Open Source Summit in Tokyo, Japan from 31 May - 2 June.
  • I’ll be on vacation/holiday the week of 5 June.
  • LinuxCon in Beijing, China from 19-20 June where I am helping to host the Fedora/CentOS/EPEL Birds of a Feather.
  • Working from Gdansk, Poland from 3-4 July.
  • Flock on Cape Cod, Massachusetts, USA from 29 August - 1 September.

Dear Lazyweb: No video for VLC on Fedora 26?

Posted by Jeroen van Meeuwen on May 28, 2017 11:35 AM
Dear Lazyweb, even though Fedora 26 is not yet released, I’ve upgraded — now, VideoLAN isn’t displaying video any longer. If I start it with --no-embedded-video I do get video, but it doesn’t seem to be able to run or switch to/from fullscreen. Resetting my preferences has not helped so far. I would appreciate to…

Updated Fedora Lives Available (4.10.16-200) Memorial Weekend Run

Posted by Corey ' Linuxmodder' Sheldon on May 28, 2017 10:13 AM

 

We in the Respins SIG are pleased to announce the latest series of updated live respins carrying the 4.10.16-200 kernel.  These respins use the livemedia-creator tool packaged in the default Fedora repo, following the guide here as well as using the scripts located here.

As always, they are available at http://tinyurl.com/live-respins2.

For those needing a non-shortened URL, it expands to https://dl.fedoraproject.org/pub/alt/live-respins/.

This round will be noticeably missing its usual GPG-clearsigned CHECKSUM|HASHSUM files hosted on https://community.ameridea.net due to a key cycling operation.  This post will be updated with the new KeyID|Fingerprint next week; the next run will be the first with that key in play.

 



A year of no more chasing - 2017

Posted by Sarup Banskota on May 28, 2017 04:03 AM

Starting in 2012, and through 2016, I had a fantastic 4 years - I managed to level up skills, personal relationships, professional relationships, income, travel exposure - all sorts of things.

However, 2017 has been a year of the chase, and I’ve found myself dissatisfied so far. 3 more days for half the year to end, and I don’t feel like I’ve accomplished much. Personal projects, finances, workplace, relationships, travel - they haven’t quite had the bang I’ve been seeing in the previous 4 years.

Better late than never, so I’ve decided to take charge and ensure I make the remaining 6 months worthwhile. Here are some of the things I’ve been chasing in 2017:

  • Social: I’ve been trying to make friends. Most of my interaction happens online on Facebook or Tinder, and I don’t quite meet a lot of people in person
  • Freelance jobs: I’m beginning to feel like I need to try more things and not just focus on my $dayjob. Working on the same problem for an extended period can feel demotivating, and I feel siloed from the rest of the tech world. However, I don’t hate anything as much as I do wading through job postings and similar crap and writing in
  • Personal projects: Probably just to look cool, I’ve started several personal projects with a clear focus on making money. I start excitedly, but as it turns out making money out of something is hard, and then I lose focus on the fun behind solving the problem itself
  • OSS projects: I’ve been messaging some of my OSS heroes telling them I’m going to free up time to contribute, and often they even find things for me, but I just never get to it
  • Design work: I’d like to do artsy and pretty things, but apart from the two days of sketching I did enthusiastically I couldn’t go much further
  • Sports and Fitness: I had a fantastic February maintaining my diet plan and even wrote an article about willpower inspired by that. Yet again, in March, there was a crazy outburst of work, and I couldn’t keep up

That’s a lot of things and it can be difficult to remember all of them, so I’ve decided that I’ll keep the following guidelines in mind for the rest of the year:

  • Avoid virtual talk. Either there’s time alone for hobbies, or there’s time spent with a real person talking or doing things. I’ve already let go of my Facebook account and my Tinder
  • No more wading through job postings for work. Over time I’ve realised that there is a basic amount of money I need for my lifestyle expenses (private condo, uber, nice food, monthly flights, sketching courses or similar), and quite frankly, that’s not a lot. Beyond this, a few thousand extra dollars a month isn’t going to buy me a private jet, so might as well spend the time doing something that I enjoy. I enjoy the company of smart makers because to me they’re cool, so I’m going to try and be more involved with OSS projects
  • Same with design work. At my dayjob, doing design work isn’t feasible right now. Outside, finding the kind of design work I want to do involves wading through crap. Therefore, once again, I’m going to spend time sketching, and on 99Designs without any expectations of winning
  • I’m going to contribute $10 to a fund for every day I don’t do any form of workout. I’ll use that as a scholarship for students later on. Very often, I pick the unhealthy option because it’s cheaper, so I’ll contribute $20 for days I skip > 1 meal or consume 2 unhealthy meals

So, to summarise, my guidelines for the rest of 2017:

  • Avoid virtual talk
  • Try and be more involved with OSS projects
  • Spend time sketching, and on 99Designs without any expectations of winning
  • Contribute $10 to a fund for every day I don’t do any form of workout
  • Contribute $20 for days I skip > 1 meal or consume 2 unhealthy meals

Learn Python & Selenium Automation in 8 weeks

Posted by Alexander Todorov on May 26, 2017 09:36 PM

A couple of months ago I conducted a practical, instructor-led training in Python and Selenium automation for manual testers. You can find the materials on GitHub.

The training consists of several basic modules and practical homework assignments. The modules explain

  1. The basic structure of a Python program and functions
  2. Commonly used data types
  3. If statements and (for) loops
  4. Classes and objects
  5. The Python unit testing framework and its assertions
  6. High-level introduction to Selenium with Python
  7. High-level introduction to the Page Objects design pattern
  8. Writing automated tests for real world scenarios without any help from the instructor.

Every module is intended to be taken in the course of 1 week and begins with links to preparatory materials and lots of reading. Then I help the students understand the basics and explain with more examples, often writing code as we go along. At the end there is the homework assignment for which I expect a solution presented by the end of the week so I can comment and code-review it.

All assignments which require the student to implement functionality, not tests, are paired with a test suite, which the student should use to validate their solution.

What worked well

Despite everything I've written below, I had 2 students (from a group of 8) who showed very good progress. One of them was the absolute star, taking active participation in every class and doing almost all homework assignments on time, pretty much without errors. I think she'd had some previous training or experience though. She was in the USA; training was done remotely via Google Hangouts.

The other student was in Sofia; training was done in person. He is not on the same level as the US student but is the best on the Bulgarian team. IMO he lacks a little bit of motivation. He "cheated" a bit on some tasks, providing non-standard, easier solutions, but completed most of his assignments. After the first Selenium session he started creating small scripts to extract results from football sites or to serve as helpers in his daily job. The interesting fact for me was that he created his programs as unittest.TestCase classes. I guess because this was the way he knew how to run them!?!

There were another few students who had some prior experience with programming but weren't very active in class, so I can't tell how their careers will progress. If they put some more effort into it I'm sure they can develop decent programming skills.

What didn't work well

Starting from the beginning, most students failed to read the preparatory materials. Some of the students read a little bit, others didn't read at all. At the times when they came prepared I had the feeling the sessions progressed more smoothly. I also had students joining late in the process, who for the most part didn't participate at all in the training. I'd like to avoid that in the future if possible.

Sometimes students complained about lack of example code, although Dive into Python includes tons of examples. I've resorted to sending them the example.py files which I produced during class.

The practical part of the training was mostly myself programming on a big TV screen in front of everyone else. Several times one of the students took my place. There wasn't much active participation on their part, and unfortunately they didn't want to bring personal laptops to the training (or maybe weren't allowed)! We did have a company-provided laptop though.

When practicing functions and arithmetic operations the students struggled with basic maths like breaking down a number into its digits or vice versa, working with Fibonacci sequences and the like. In some cases they cheated by converting to/from strings and then iterating over them. Also, some hard-coded the first few numbers of the Fibonacci sequence and returned them directly. Maybe an in-place explanation of the underlying maths would have been helpful, but honestly I was surprised by this. Somebody please explain or give me advice here!
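
For reference, the purely arithmetic approach I was hoping for looks like this (my own sketch, not part of the training materials): repeated divmod by 10 for the digits, and iterative addition for Fibonacci.

def digits(n):
    """Break a non-negative integer into its digits without strings."""
    result = []
    while True:
        n, digit = divmod(n, 10)   # peel off the last digit
        result.insert(0, digit)
        if n == 0:
            return result

def fibonacci(count):
    """Return the first `count` Fibonacci numbers, computed iteratively."""
    numbers = []
    a, b = 0, 1
    for _ in range(count):
        numbers.append(a)
        a, b = b, a + b
    return numbers

print(digits(1234))    # [1, 2, 3, 4]
print(fibonacci(8))    # [0, 1, 1, 2, 3, 5, 8, 13]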

I am completely missing examples of the datetime and timedelta classes, which turned out to be very handy in the practical Selenium tasks, and we had to go over them on the fly.
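
Something as small as this would have covered the basics we needed (a quick illustration, not code from the course):

from datetime import datetime, timedelta

now = datetime.now()
yesterday = now - timedelta(days=1)       # datetime arithmetic uses timedelta
next_week = now + timedelta(weeks=1)

print(yesterday.strftime("%Y-%m-%d"))     # format a datetime as a string
print((next_week - yesterday).days)       # subtracting datetimes gives a timedelta: 8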

The OOP assignments went mostly undone, not to mention one of them had bonus tasks which are easily solved using recursion. I think we could skip some of the OOP practice (not sure how safe that is) because I really need classes only for constructing the tests and we don't do anything fancy there.

The Page Object design pattern is also OOP-based, and I think that went somewhat well, given that we are only passing values around and performing some actions. I didn't put constraints on, nor provide guidance about, what the classes should look like and which methods go where. Maybe I should have made it easier.

Anyway, given that Page Objects is being replaced by the Screenplay pattern, I think we can safely stick to the all-in-one, function-based Selenium tests, and maybe utilize helper functions for repeated tasks (like login). Indeed this is what I was using last year with RSpec & Capybara!

What students didn't understand

Right until the end I had people who had trouble understanding function signatures, function instances and calling/executing a function. Also returning a value from a function vs. printing the (same) value on screen or assigning it to a global variable (e.g. FIB_NUMBERS).

In the same category falls using method parameters vs. using global variables (which happened to have the same value), using the parameters as arguments to another function inside the body of the current function, and using class attributes (e.g. self.name) to store and pass values around vs. local variables in methods vs. method parameters with the same names.

I think there was some confusion about lists, dictionaries and tuples but we did practice mostly with list structures so I don't have enough information.

I have the impression that object oriented programming (classes and instances; we didn't go into inheritance) is generally confusing to beginners with zero programming experience. The classical way to explain it is by using some abstractions like animal -> dog -> a particular dog breed -> a particular pet. OOP was explained to me in a similar way back in school, so these kinds of abstractions are very natural for me. I have no idea if my explanation sucks or if students are having a hard time wrapping their heads around the abstraction. I'd love to hear some feedback from other instructors on this one.

I think there is some misunderstanding between a class (a definition of behavior) and an instance/object of this class (something which exists in memory). This may also explain the difficulty remembering or figuring out what self points to and why we need to use it inside method bodies.

For unittest.TestCase we didn't do lots of practice, which is my fault. The homework assignments ask the students to go back to solutions of previous modules and implement more tests for them. Next time I should provide a module (possibly with non-obvious bugs) and request a comprehensive test suite for it.

Because of the missing practice there was some confusion/misunderstanding about the setUpClass/tearDownClass and the setUp/tearDown methods. Add to the mix that the former are @classmethods while the latter are not. "To be safe," students always defined both as class methods!

I have since corrected the training materials, but we didn't have good examples (nor practice) explaining the difference between setUpClass (executed once, aka before suite) and setUp (possibly executed multiple times, aka before each test method).
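
The missing example could have been as small as this (a minimal sketch using only the standard library), which makes the execution order visible when run:

import unittest

class ExampleTest(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        print("setUpClass")   # executed once, before any test in the class

    def setUp(self):
        print("setUp")        # executed before every test method

    def test_one(self):
        self.assertTrue(True)

    def test_two(self):
        self.assertEqual(1 + 1, 2)

if __name__ == "__main__":
    unittest.main()           # prints setUpClass once and setUp twice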

On the Selenium side I think it is mostly practice which students lack, not understanding. The entire Selenium framework (any web test framework for that matter) boils down to

  • Load a page
  • Find element(s)
  • Click or hover over an element (that one was tricky)
  • Get element's attribute value or text
  • Wait for the proper page to load (or worst case AJAX calls)

IMO finding the correct element on the page is on par with waiting (which also relies on locating elements), and together they took 80% of the time we spent working with Selenium.
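
In code, the entire workflow from that list fits in a dozen lines; this sketch assumes a hypothetical login page and uses the standard selenium Python bindings:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Firefox()
driver.get("https://example.com/login")                 # load a page

# wait for the page (or an AJAX update) to produce the element we need
field = WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.ID, "username"))  # find element
)
field.send_keys("tester")
driver.find_element(By.ID, "login-button").click()       # click element

print(driver.find_element(By.TAG_NAME, "h1").text)       # get element's text
driver.quit()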

Thanks for reading and don't forget to comment and give me your feedback!

Image source: https://www.udemy.com/selenium-webdriver-with-python/

Measuring things only makes sense if you know what you’re measuring, why, and what you intend to do…

Posted by Suzanne Hillman (Outreachy) on May 26, 2017 08:50 PM

Measuring things only makes sense if you know what you’re measuring, why, and what you intend to do with it. Otherwise, even if you _have_ numbers, they don’t actually have any meaning. So what’s the point?

The story of tunables

Posted by Siddhesh Poyarekar on May 26, 2017 03:33 PM
This is long overdue and I have finally got around to writing this. Apologies to everyone who asked me to write about it and I responded with "Oh yeah, right away!" If you are not interested in the story bits, start with So what are tunables anyway below.

The story of tunables began in 2013 when I was a relatively fresh glibc engineer in the Red Hat toolchain team. We wanted to add an environment variable to allow users to set the default stack sizes for thread stacks and Carlos took that idea to the next level with the question: How do we make this more extensible so that we have full control over the kind of tuning parameters we accept in glibc but at the same time, allow distributions to add their own tuning parameters without affecting upstream code? He asked this question in the 2013 Cauldron in Mountain View, where the famous glibc BoF happened in a tiny meeting room which overflowed into an adjacent room, which also filled up quickly, and then the BoF overran its 45 minute slot by roughly a couple of hours! Carlos joined the BoF over Hangout (I think it was called Google Talk then) because he couldn’t make it and we had a lengthy back and forth about the pros and cons of having such tuning parameters. In principle, everybody agreed that such a thing would be desirable from a maintenance perspective. However the approach for doing it was something nobody seemed to agree on.

Thus the idea of tunables was born 4 years ago, except that Carlos wrote the first wiki page and called it ‘tunnables’. He consistently spelled it tunnables and I tunables. I won in the end because I wrote the patches ;)

Jokes aside, we were happy about the reception of the idea and we went about documenting it at length. However given that we were a two man army manning the glibc bunkers in Red Hat and the fact that upstream was still reviving itself from the post-Uli era meant that we would never come back to it for a while.

Then 2015 happened and it came with a memorable Cauldron in Prague. It was memorable because by then I had come up with a first draft of an API for the tunables framework. It was also memorable because it was my last month at Red Hat, something I never imagined would ever happen. I was leaving my dream team and I wasn’t sure if I would ever be as happy again. Those uncertainties were unfounded as I know now, but that’s a story for another post.

The struggle to write code

The first draft I presented at Cauldron in 2015 was really just a naive attempt at storing and initializing public values accessed across libraries in glibc, and we had not even thought through everything we would end up fixing with tunables. It kinda worked, but it was never going to make the cut. A new employer meant that tunables would become a weekend project, and as a result it missed the release deadline. And another, and then another. Towards the close of every release I would whip out a patchset that would have holes poked into it, and then the change would be considered too risky to include.

Finally we set a deadline of 2.25 for tunables because by then quite a few devs had started maintaining their own list of tunables on top of my tree, frustratingly rebasing every time I completely changed my approach. We made it in the end, with Florian and I working through the year end holidays to get the whole patchset in before freeze.

So as of 2.25, tunables is firmly entrenched into glibc and as we speak, there are more tunables to come, especially to override IFUNC selections and to tune the processor capability mask.

So what are tunables anyway?

This is where you start if you want the technical description and are not interested in the story bits.

Tunables is an internal implementation detail in glibc. It is a framework for managing the ways in which we allow behaviour in glibc to be modified. As of now the only way to tune glibc is via environment variables, and the code to do that was strewn all over the place in the source tree. Tunables provide one place to declare a tunable parameter with all of the characteristics it should have, and the framework handles everything from there. The user of that tunable (e.g. malloc for MALLOC_MMAP_THRESHOLD_, or malloc.mmap.threshold in tunables parlance) then simply accesses the tunable from the list and does what it wants with it, without bothering about where it came from.

The framework is implemented in elf/dl-tunables.c and all of the supporting code is named as elf/dl-tunable*. As is evident, tunables is linked into the dynamic linker, where it is initialized very early. In static binaries, the initialization is done in libc-start.c, again early enough to influence almost everything in the program. The list is initialized just once and is modifiable only in the dynamic linker before it relocates itself.

The main list of tunables is maintained in elf/dl-tunables.list. Architectures may define their own tunables in sysdeps/…/dl-tunables.list. There is a README.tunables that lists out the gory details of using tunables within glibc to access its values and if necessary, update it.

This gives us a number of advantages, some of them being the following:

Single Initialization

All environment variables used by glibc would be read in by a single double-nested loop which initializes all tunables. Accesses are then just a GOT away, so no more getenv loops in glibc code. This is not achieved yet since all of the environment variables are not yet ported to tunables (Hint: here’s a nice project for you, you aspiring glibc developer!)

All tunables are listed in a single file

The file elf/dl-tunables.list has a full list of tunables along with its properties such as type, value range, default value and its behaviour with setuid binaries. This caused us to introspect on each environment variable we ported into tunables and we ended up fixing a few bugs as well.

Very Early Initialization

Yes, very early, earlier than you would imagine, earlier than IFUNCs! *gasp*

Tunables get initialized very early so that they can influence almost every behaviour in glibc. The unreleased 2.26 makes this even earlier (or rather, delays CPU features initialization enough) so that tunables can impact selection of routines using IFUNCs. This fixes an important inconsistency in glibc, where LD_HWCAP_MASK was read in dynamically linked binaries but not in static binaries because it was not read in early enough.

relro

The tunable list is read-only, so glibc reads from a list that cannot be tampered by malicious code that gets loaded after relocation.

What changes for me as a user?

The change in 2.25 is minimal enough that you won’t notice. In this release, only the malloc tuning environment variables have been ported to tunables and if you’ve been using those environment variables before, they will continue to work even now. In addition, you get to tune these parameters in a fancy way that doesn’t require the stupid trailing underscore, using the GLIBC_TUNABLES environment variable. The manual describes it extensively so I won’t go into details.
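
As an illustration, here are the old and new spellings side by side; glibc.malloc.mmap_threshold is the tunable name as I remember it, and ./myprog is a stand-in for your program, so double-check the names against the manual:

MALLOC_MMAP_THRESHOLD_=131072 ./myprog

GLIBC_TUNABLES=glibc.malloc.mmap_threshold=131072 ./myprog

Multiple tunables can be set at once by separating the name=value pairs with colons.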

The major change is about to happen now. Intel is starting to push a number of tunables to allow you to tune your library to your liking, changing things like the string routines that get selected for your program, cache parameters, etc. I believe PowerPC and S390 will see something similar too in the lock elision space, and aarch64 multiarch will be tunable as well. All of this will hopefully come in 2.26, or at the latest by 2.27.

One thing to note though is that for now tunables are not covered by any ABI or API guarantees. That is to say, if you like a tunable that is in 2.26, we may well remove the tunable in 2.27 if we find that it either does not make sense to have that tunable exposed or exposing that tunable is somehow detrimental to user programs.

The big difference will likely come in when distributions start adding their own tunables into the mix, since it will allow them to add customizations to the library without having to maintain huge ugly patchsets.

The Road Ahead

The big advantage of collecting all tuning parameters under a single framework is the ability to then add new ways to influence those tuning parameters. We have environment variables now, but we could add other methods to tune the library. Some ideas discussed are as follows:

  • Have a systemwide configuration file (e.g. /etc/sysctl.user.conf) that sets different defaults for some tunables and limits the degree to which specific tunables are altered. This allows systems administrators to have more fine grained control over the processes on their system
  • Have user-specific configuration files (e.g. $HOME/.sysctl.user.conf) that does something similar but at a user level
  • Have some tunables modified during execution via some shared memory mechanism

All of this is still evolving, so if you have an idea or would like to work on any of these ideas, feel free to get in touch with me and we can find a way to get you contributing to one of the most critical parts of the operating system!

Merging Kubernetes client configs at run time

Posted by Adam Young on May 26, 2017 03:20 PM

Last time I walked through the process of merging two sets of Kubernetes client configurations into one. For more ephemeral data, you might not want to munge it all into your main configuration. The KUBECONFIG environment variable lets you specify multiple configuration files and merge them into a single set of configuration data.

From

kubectl config --help

If $KUBECONFIG environment variable is set, then it is used [as] a list of paths (normal path delimiting rules for your system). These paths are merged. When a value is modified, it is modified in the file that defines the stanza. When a value is created, it is created in the first file that exists. If no files in the chain exist, then it creates the last file in the list.

 

So, let’s start with the file downloaded by the kubevirt build system yesterday.

 

[ayoung@ayoung541 vagrant]$ echo $PWD
/home/ayoung/go/src/kubevirt.io/kubevirt/cluster/vagrant
[ayoung@ayoung541 vagrant]$ export KUBECONFIG=$PWD/.kubeconfig
[ayoung@ayoung541 vagrant]$ kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* kubernetes-admin@kubernetes kubernetes kubernetes-admin 

Contrast this with what a get without the environment variable set, if I use the configuration in ~/.kube, which I synced over from my OpenShift cluster:

[ayoung@ayoung541 vagrant]$ unset KUBECONFIG
[ayoung@ayoung541 vagrant]$ kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
 default/munchlax:8443/ayoung munchlax:8443 ayoung/munchlax:8443 default
* default/munchlax:8443/system:admin munchlax:8443 system:admin/munchlax:8443 default
 kube-system/munchlax:8443/system:admin munchlax:8443 system:admin/munchlax:8443 kube-system

I want to create a new configuration for the vagrant-managed machines for Kubevirt.  It turns out that the API server specified there is actually a proxy, a short-term shim we put in place as we anxiously await the Amalgamated API Server of 1.7.  However, sometimes this proxy is broken or we just need to bypass it.  The only difference between this setup and the proxied setup is the server URL.

So…I create a new file, based on the .kubeconfig file, but munged slightly.  Here is the diff:

[ayoung@ayoung541 vagrant]$ diff -Nurd .kubeconfig .kubeconfig-core 
--- .kubeconfig 2017-05-24 19:49:24.643158731 -0400
+++ .kubeconfig-core 2017-05-26 11:10:49.359955538 -0400
@@ -3,13 +3,13 @@
 - cluster:
 certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRFM01EVXlOREl4TWpnek5sb1hEVEkzTURVeU1qSXhNamd6Tmxvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTmRmCnVINkE3Q1JVRVQ5VzhpSGgyam9EQUNxVGZXQ01ITFN3dzc5Q01DZXlyWFhZazVvR0lIbnZkeVB5aEVNZ2xYYysKczUwZVJZRDBkWTYrYUlnVmtVaElIVitSUHltVHE0WklrS3EzNnV5MXk0TFYzSDNTaGt2eVZBbitjL3htYldaZQp5bEZaZHhSMTFoVjRac0h4WXdzWTR4bmVoaWpkMnkwWUFaQnkwellkQm5xTmE4cFpDb3BNbStLdmtjVEJ1UERGCkp5ZWkzU0tJd3R1R0gxU3ByUCsxdi9OSGFCOTNXR0g0MFQxbm1HZTRGWWQ2SzErcWNNdndpdmY1dVQ4Nk10M2YKVWhEQWZNUlk3aW5maXVsVW1HeUNPWlNsbFhpWlRMWmpoOGZiUW1FdmZvOFJjMm1lOGtwTXJpMDdIWUQ4ZjZFNQpScjNhT05mcTkwd2s1VDM5YWxjQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFNTll1R3N1bGNpY3REQ0pFZ3R3K2ZQSUU3S04KQnRvV2RuZWZZdktya1l1WUVTRkk5alFXTFNmVGN1MnpibzNRWnYzZDg3WnkvNjYyK2R0SWloWFA5V3NJWVhHUApDUXNuTUMyZXY5djlmOU9WbVhZbEhuUUx0YXRiSDZVTFZPdWJZUXlFRlRSa21XV1dwcXpoR1pNWk1pbG8wRzhLCnBNd29Ia0dDWm5tUytyUVVEVWF6QlprcVdzRFNabW5jWUhtdFRtMEJ6RUJpa002SEFsNzAvT21rNGpHcmtHZEQKS2tMWU16UjJkZnlkSklCVGxKdGlGYjRhZ3R5amlFb3NDSGY0Z1oyY0xUMTRyOENud0QrOWxSbVk3dDNDRjIrdgpFOGxxb3RSYVI2TVRyWnZkUXUrOWtFYnNKWVZUN1NQR3pqeEpMZ1BmTGprK0g1YUJWQU9od0tvdTV0QT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
 server: https://192.168.200.2:6443
- name: kubernetes
+ name: core
 contexts:
 - context:
- cluster: kubernetes
+ cluster: core
 user: kubernetes-admin
- name: kubernetes-admin@kubernetes
-current-context: kubernetes-admin@kubernetes
+ name: kubernetes-admin@core
+current-context: kubernetes-admin@core
 kind: Config
 preferences: {}
 users:

Now I have a couple choices. I can just specify this second config file on the command line:

[ayoung@ayoung541 vagrant]$ kubectl --kubeconfig=$PWD/.kubeconfig-core config get-contexts
 CURRENT NAME CLUSTER AUTHINFO NAMESPACE
 kubernetes-admin@core core kubernetes-admin

Or I can munge the two together and provide a flag which states which context to use.

[ayoung@ayoung541 vagrant]$ export KUBECONFIG=$PWD/.kubeconfig:$PWD/.kubeconfig-core
[ayoung@ayoung541 vagrant]$ kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* kubernetes-admin@kubernetes kubernetes kubernetes-admin 
 kubernetes-admin@core core kubernetes-admin

Note that this gives a different current context (marked with the asterisk) than if I reverse the order of the files in the env var:

[ayoung@ayoung541 vagrant]$ export KUBECONFIG=$PWD/.kubeconfig-core:$PWD/.kubeconfig
[ayoung@ayoung541 vagrant]$ kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* kubernetes-admin@core core kubernetes-admin 
 kubernetes-admin@kubernetes kubernetes kubernetes-admin

Whichever file declares the default context first wins.

However, regardless of the order, I can explicitly set the context I want to use on the command line:

[ayoung@ayoung541 vagrant]$ kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
 kubernetes-admin@core core kubernetes-admin 
* kubernetes-admin@kubernetes kubernetes kubernetes-admin 
[ayoung@ayoung541 vagrant]$ kubectl --context=kubernetes-admin@core config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* kubernetes-admin@kubernetes kubernetes kubernetes-admin 
 kubernetes-admin@core core kubernetes-admin

Again, notice the line where the asterisk specifies which context is in use.

With only two files, it might be easier to just specify the --kubeconfig option, but as the number of configs you work with grows, you might find you want to share the user data between two of them, or have a bunch of scripts that work across them, and it is easier to track which context to use than to track which file contains which set of data.
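
One more convenience: rather than passing --context on every command, you can switch the merged default persistently with a standard kubectl subcommand (shown here with the context created above):

kubectl config use-context kubernetes-admin@core

Per the merge rules quoted at the top, the updated current-context should be written back to whichever file in the chain defines that stanza.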

Secure your webserver with improved Certbot

Posted by Fedora Magazine on May 26, 2017 08:00 AM

A year and a half ago the Let’s Encrypt project entered public beta. Just over a year ago, as the project left beta, the letsencrypt client was spun out of ISRG, which continues to maintain the Let’s Encrypt servers, into an EFF project and renamed certbot. The mission remained the same, however: to provide quick, simple access to free domain validated certificates, in order to encrypt the internet.

This week marked a significant point in the development of Certbot as the recommended Let’s Encrypt client, with the 0.14 release of the tool.

When the letsencrypt client was first released, it supported three ways of proving control of a domain: using the webroot of an existing HTTP server, a standalone mode where letsencrypt listens temporarily on port 80 to carry out the challenge, or a manual method where the admin puts the presented challenge in place before the ACME server proceeds to verify it. Now the certbot client is even more functional.
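
For instance, obtaining a certificate with the webroot method against a running server looks like this (the domain and document root are placeholders):

certbot certonly --webroot -w /var/www/html -d example.com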

Apache HTTPD plugin for Certbot

When the client was changed to be an EFF project, one of the first major features to appear was the Apache HTTPD plugin. This plugin lets the Certbot application automatically configure the webserver to use certificates for one or more VirtualHost installations.

NOTE: If you encounter an issue with SELinux in enforcing mode while using the plugin, use the setenforce 0 command to switch to permissive mode when running the certbot --apache command. Afterward, switch back to enforcing mode using setenforce 1. This issue will be resolved in a future update.

When you start the Apache httpd server with mod_ssl, the service automatically generates a self signed certificate.

Default mod_ssl self-signed certificate, not trusted by the browser.

Next, run this command:

certbot --apache

Certbot prompts for a few questions. You can also run it non-interactively and provide all the arguments in advance.

Questions at the terminal

After a few moments, the Apache server has a valid certificate in place.

Valid SSL certificate in place
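
If you would rather script the process than answer the prompts, a non-interactive run looks something like this (the email and domain are placeholders; the flags are standard certbot options):

certbot --apache --non-interactive --agree-tos -m webmaster@example.com -d example.com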

Nginx plugin for Certbot

From my testing, the nginx plugin requires the domain name to be present in the configuration, whereas the httpd plugin modifies the default SSL virtualhost.

The process is similar to the httpd plugin. Answer a few questions, if you do not provide arguments on the command line, and the instance is then protected with a valid SSL certificate.

Python 3 compatibility

The Certbot developers have put in a significant amount of work over the past several months to make Certbot fully compatible with Python 3. At the 0.12 release, the unit tests we carried out when building the RPMs passed. However, the developers were not yet happy to declare it ready, since they noticed some edge case failures in real world testing. As of the 0.14 release, developers have declared Certbot Python 3 compatible. This change brings it in line with the default, preferred Python version in Fedora.

To minimize possible issues, Rawhide and the upcoming Fedora 26 will be switched to the Python 3 build of certbot first, whilst Fedora 25 remains on the Python 2 build as the default.

Getting hooked on renewals

A recent update added a systemd timer to automate renewals of the certificates. The timer checks each day to see if any certificates need updating. To enable it, use this command:

systemctl enable --now certbot-renew.timer

The configuration in /etc/sysconfig/certbot can change the behavior of the renewals. It includes options for hooks that run before and after the renewal, and another hook that runs for each certificate processed. These are global behaviors. Optionally, you can configure hooks in the configuration files in /etc/letsencrypt/renewal on a per-certificate basis.
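
As a sketch of how those global hooks can be wired up — I am assuming the PRE_HOOK/POST_HOOK/RENEW_HOOK variable names from memory of the Fedora package, so verify them against the comments in the shipped file:

# /etc/sysconfig/certbot (variable names assumed; check the shipped file)
PRE_HOOK="--pre-hook 'systemctl stop httpd'"        # runs before any renewal attempt
POST_HOOK="--post-hook 'systemctl start httpd'"     # runs after all renewals finish
RENEW_HOOK="--renew-hook 'systemctl reload httpd'"  # runs once per renewed certificate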

Some form of automation is advised, whether the systemd timer or another method, to ensure that certificates are refreshed periodically and don’t expire by accident.

Testing SSL security

A test of SSL security with CentOS 7 and the Apache plugin provided a C rating. The nginx plugin resulted in a B rating.

Of course, the Red Hat defaults are chosen with compatibility in mind. If there's no need to support older clients, you can tighten up the list of permitted ciphers.

Using this configuration on Fedora 25 on my own blog gets an A+ rating:

SSLProtocol all -SSLv2 -SSLv3
SSLCipherSuite "EECDH+aRSA+AESGCM EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH EDH+aRSA !aNULL !eNULL !LOW !MEDIUM !SEED !3DES !CAMELLIA !MD5 !EXP !PSK !SRP !DSS !RC4"
SSLCertificateFile /etc/pki/tls/certs/www-hogarthuk.com-ssl-bundle.crt
SSLCertificateKeyFile /etc/pki/tls/private/www-hogarthuk.com-decrypted.key

<IfModule mod_headers.c>
      Header always set Strict-Transport-Security "max-age=15768000; includeSubDomains; preload"
</IfModule>

What’s next?

There are always bugs to fix and improvements to make. Apart from improvements to SELinux compatibility as mentioned above, there’s also a future to look forward to. DNS based validation will make it easier to take Certbot beyond web servers. Mail, jabber, load balancers and other services can then more easily use Let’s Encrypt certificates using the Certbot client.

PHP version 7.0.20RC1 and 7.1.6RC1

Posted by Remi Collet on May 26, 2017 05:49 AM

Release Candidate versions are available in the remi-test repository for Fedora and Enterprise Linux (RHEL / CentOS) to allow more people to test them. They are available as Software Collections for parallel installation, the perfect solution for such tests (x86_64 only), and also as base packages.

RPMs of PHP version 7.0.20RC1 are available as an SCL in the remi-test repository, and as base packages in the remi-test repository for Fedora 25 or in the remi-php70-test repository for Fedora 23-24 and Enterprise Linux.

RPMs of PHP version 7.1.6RC1 are available as an SCL in the remi-test repository, and as base packages in the remi-test repository for Fedora 26 or in the remi-php71-test repository for Fedora 23-25 and Enterprise Linux.

PHP version 5.6 is now in security-only mode, so no more RCs will be released for it.

Installation: read the Repository configuration and choose your version.

Parallel installation of version 7.0 as Software Collection:

yum --enablerepo=remi-test install php70

Parallel installation of version 7.1 as Software Collection:

yum --enablerepo=remi-test install php71
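Once installed, you can check the collection's PHP version without switching the system PHP (standard scl usage; php71 works the same way):

scl enable php70 'php --version'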

Update of system version 7.0:

yum --enablerepo=remi-php70,remi-php70-test update php\*

Update of system version 7.1:

yum --enablerepo=remi-php71,remi-php71-test update php\*

Notice: version 7.1.6RC1 is also available in Fedora rawhide (for QA).

An RC version is usually the same as the final version (no changes are accepted after the RC, except for security fixes).

Software Collections (php70, php71)

Base packages (php)

Competition-grade fire panda

Posted by Casper on May 25, 2017 08:37 PM

Among Fedora folks, we like to compare our customized Firefox configurations. The prize to win is recognition of a config as "Parano" (paranoid), the supreme title that guarantees browsing as smooth as in the '90s. Originally I hadn't planned to publish a description of my config; I wrote it as a backup so I could reproduce it on any machine placed in my hands (and since I have 3-4 machines, it gets plenty of use...). I think this config could be useful to you (at least some parts of it); it was designed exclusively to protect whoever uses it.

Search engines

  • fossencdi.org http://searx.cwuzdtzlubq5uual.onion/search?q= (auto-added)
  • searx.nulltime.net http://searx7hcqiogbrhk.onion/search?q=
  • 4ray.co https://searx.4ray.co/search?q= (auto-added)
  • gibberfish http://o2jdk5mdsijm2b7l.onion/search?q= (auto-added)
  • s3arch.eu http://eb6w5ctgodhchf3p.onion/search?q=
  • searx.gotrust.de http://nxhhwbbxc4khvvlw.onion/search?q=

Unset display search suggestions

General config

Startup

When Firefox starts: show your windows and tabs from last time

Tabs

Open new tabs instead of new windows

Flash plugin

~/.mozilla/plugins/libflashplayer.so

Advanced config

Network

  • Override automatic cache management
  • Limit cache size to 1024 MiB

Network settings

  • Manual proxy configuration
  • SOCKS host 127.0.0.1, port 9050
  • SOCKSv5
  • Exceptions for localhost, 127.0.0.1, 192.168.0.0/24
  • Use remote DNS when SOCKSv5 is enabled (the corresponding user.js preferences are sketched below)
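These settings correspond to the following user.js preferences (a minimal sketch; these are standard Firefox preference names, but verify them against your version):

// Manual proxy configuration pointing at the local Tor client
user_pref("network.proxy.type", 1);
user_pref("network.proxy.socks", "127.0.0.1");
user_pref("network.proxy.socks_port", 9050);
user_pref("network.proxy.socks_version", 5);
// Resolve DNS through the SOCKSv5 proxy
user_pref("network.proxy.socks_remote_dns", true);
// Proxy exceptions
user_pref("network.proxy.no_proxies_on", "localhost, 127.0.0.1, 192.168.0.0/24");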

User certificates

  • falcon if available
  • blackbird if falcon does not exist

Addons

  • Adblock Plus
  • Cookie Manager avancée
  • Cookies Manager+
  • Disconnect
  • Download Youtube Videos as MP4
  • humanstxt (gone)
  • NoScript
  • Toggle Proxy
  • User-Agent Switcher revived
  • Video DownloadHelper
  • Youtube and more - Easy Video Downloader
  • YouTube HTML5-Video

Notes

A Tor router configured in client mode (the default out-of-the-box mode) is required on the machine.

Glad to be a Mentor of Google Summer Code again!

Posted by Tong Hui on May 25, 2017 04:44 PM
This year I will be mentoring in the Fedora Project, helping Mandy Wang finish her GSoC program about “Migrate Plinth to Fedora Server”, which I proposed. So why did I propose this idea? Plinth is developed by FreedomBox, which is a Debian-based project. FreedomBox is aiming to build a 100% free software self-hosting web server to … Continue reading "Glad to be a Mentor of Google Summer Code again!"

Merging two Kubernetes client configurations

Posted by Adam Young on May 25, 2017 03:22 PM

I have two distinct Kubernetes clusters I work with on a daily basis. One is a local vagrant-based set of VMs built by the Kubevirt code base. The other is a “baremetal” install of OpenShift Origin on a pair of Fedora workstations in my office. I want to be able to switch back and forth between them.

When you run the kubectl command without specifying where the application should look for the configuration file, it defaults to looking in $HOME/.kube/config. This file maintains the configuration values for a handful of object types. Here is an abbreviated look at the one set up by Origin.

apiVersion: v1
clusters:
- cluster:
    api-version: v1
    certificate-authority-data: LS0...LQo=
    server: https://munchlax:8443
  name: munchlax:8443
contexts:
- context:
    cluster: munchlax:8443
    namespace: default
    user: system:admin/munchlax:8443
  name: default/munchlax:8443/system:admin
- context:
    cluster: munchlax:8443
    namespace: kube-system
    user: system:admin/munchlax:8443
  name: kube-system/munchlax:8443/system:admin
current-context: kube-system/munchlax:8443/system:admin
kind: Config
preferences: {}
users:
- name: system:admin/munchlax:8443
  user:
    client-certificate-data: LS0...tLS0K
    client-key-data: LS0...LS0tCg==

Note that I have elided the very long cryptographic entries for certificate-authority-data, client-certificate-data, and client-key-data.

First up is an array of clusters.  The minimal configuration for each provides a server name (the remote URL to use), some certificate authority data, and a name to be used for this configuration elsewhere in this file.

At the bottom of the file, we see a chunk of data for user identification.  Again, the user has a local name,

 system:admin/munchlax:8443

with the rest of the identifying information hidden away inside the client certificate.

These two entities are pulled together in a Context entry. In addition, a context entry has a namespace field. Again, we have an array, with each entry containing a name field. The Name of the context object is going to be used in the current-context field and this is where kubectl starts its own configuration.   Here is an object diagram.

The next time I run kubectl, it will read this file.

  1. Based on the value of CurrentContext, it will see it should use the kube-system/munchlax:8443/system:admin context.
  2. From that context, it will see it should use
    1. the system:admin/munchlax:8443 user,
    2. the kube-system namespace, and
    3. the URL https://munchlax:8443 from the munchlax:8443 server.

Below is a similar file from the kubevirt set up, found on my machine at the path ~/go/src/kubevirt.io/kubevirt/cluster/vagrant/.kubeconfig

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0...LS0tLQo=
    server: https://192.168.200.2:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0...LS0tLQo=
    client-key-data: LS0...LS0tCg==

Again, I’ve elided the long cryptographic data.  This file is organized the same way as the default one.  kubevirt uses it via a shell script that resolves to the following command line:

${KUBEVIRT_PATH}cluster/vagrant/.kubectl --kubeconfig=${KUBEVIRT_PATH}cluster/vagrant/.kubeconfig "$@"

which overrides the default configuration location.  What if I don’t want to use the shell script?  I’ve manually merged the two files into a single ~/.kube/config.  The resulting one has two users,

  • system:admin/munchlax:8443
  • kubernetes-admin

two clusters,

  • munchlax:8443
  • kubernetes

and three contexts.

  • default/munchlax:8443/system:admin
  • kube-system/munchlax:8443/system:admin
  • kubernetes-admin@kubernetes

With current-context: kubernetes-admin@kubernetes:

$ kubectl get pods
NAME READY STATUS RESTARTS AGE
haproxy-686891680-k4fxp 1/1 Running 0 15h
iscsi-demo-target-tgtd-2918391489-4wxv0 1/1 Running 0 15h
kubevirt-cockpit-demo-1842943600-3fcf9 1/1 Running 0 15h
libvirt-199kq 2/2 Running 0 15h
libvirt-zj6vw 2/2 Running 0 15h
spice-proxy-2868258710-l85g2 1/1 Running 0 15h
virt-api-3813486938-zpd8f 1/1 Running 0 15h
virt-controller-1975339297-2z6lc 1/1 Running 0 15h
virt-handler-2s2kh 1/1 Running 0 15h
virt-handler-9vvk1 1/1 Running 0 15h
virt-manifest-322477288-g46l9 2/2 Running 0 15h

but with current-context: kube-system/munchlax:8443/system:admin

$ kubectl get pods
NAME READY STATUS RESTARTS AGE
tiller-deploy-3580499742-03pbx 1/1 Running 2 8d
youthful-wolverine-testme-4205106390-82gwk 0/1 CrashLoopBackOff 30 2h

There is support in the kubectl executable for configuration:

[ayoung@ayoung541 helm-charts]$ kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
 kubernetes-admin@kubernetes kubernetes kubernetes-admin 
 default/munchlax:8443/system:admin munchlax:8443 system:admin/munchlax:8443 default
* kube-system/munchlax:8443/system:admin munchlax:8443 system:admin/munchlax:8443 kube-system
[ayoung@ayoung541 helm-charts]$ kubectl config current-context kubernetes-admin@kubernetes
kube-system/munchlax:8443/system:admin
[ayoung@ayoung541 helm-charts]$ kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
 default/munchlax:8443/system:admin munchlax:8443 system:admin/munchlax:8443 default
* kube-system/munchlax:8443/system:admin munchlax:8443 system:admin/munchlax:8443 kube-system
 kubernetes-admin@kubernetes kubernetes kubernetes-admin
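To switch the current context without editing the file by hand, the config subcommand also provides use-context:

kubectl config use-context kubernetes-admin@kubernetes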

The openshift login command can add additional configuration information.

$ oc login
Authentication required for https://munchlax:8443 (openshift)
Username: ayoung
Password: 
Login successful.

You have one project on this server: "default"

Using project "default".

This added the following information to my .kube/config

under contexts:

- context:
    cluster: munchlax:8443
    namespace: default
    user: ayoung/munchlax:8443
  name: default/munchlax:8443/ayoung

under users:

- name: ayoung/munchlax:8443
  user:
    token: 24i...o8_8

This time I elided the token.

It seems that it would be pretty easy to write a tool for merging two configuration files; a partial workaround with stock kubectl is sketched after the list.  The caveats I can see include:

  • don’t duplicate entries
  • ensure that two entries with the same name but different values trigger an error
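One way to approximate the merge with stock kubectl is the KUBECONFIG environment variable, which accepts a colon-separated list of files, combined with config view --flatten (both standard kubectl features):

KUBECONFIG=~/.kube/config:~/go/src/kubevirt.io/kubevirt/cluster/vagrant/.kubeconfig kubectl config view --flatten > /tmp/merged
mv /tmp/merged ~/.kube/config

Note that this merge resolves name collisions by file precedence rather than raising an error, so the second caveat above still requires manual care.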

Canaries in a coal mine (apropos nothing)

Posted by Stephen Smoogen on May 24, 2017 10:29 PM

[This post is brought to you by Matthew Inman. Reading http://theoatmeal.com/comics/believe made me realize I don't listen enough, and Veritasium's https://www.youtube.com/watch?v=UBVV8pch1dM made me realize why thinking is hard. I am writing this to remind myself when I forget and jump on some phrase.]

Various generations ago, part of my family were coal miners, and some of their lore was still passed down many years later. One piece was about the proverbial canary. A lot of people like to think that they are being a canary when they bring up a problem that they believe will cause great harm: singing louder because they have run out of air.

That isn't what a canary does. The birds in the mines go silent when the air runs out. They may have died or are on the verge of being dead. They got quieter and quieter, and what the miners listened for was the lack of noise from the birds, not more noise. Of course, it is very hard to hear the birds in the first place, because mines aren't quiet places. There is hammering, and shoveling, and footsteps echoing down long tubes. So you might think: bring more birds. That just added more distractions, and miners would get into fights because the damn birds never shut up. So the birds were few and far between, and people would have to check up on the birds every now and then to see if they were still kicking. Safer mines would have some old fellow stay near the bird, and if it died or passed out he would begin ringing a bell that could be heard down the hole.

So if analogies were 1:1, the time to worry is not when people are complaining a lot on a mailing list about some change. In fact, if everyone complains, you could interpret that as having too many birds and not enough miners, so go ahead. The time to worry would be when things have changed but no one complains. Then you probably really need to look at getting out of the mine (or, most likely, you will find it is too late).

However analogies are rarely 1:1 or even 1:20. People are not birds, and you should pay attention to when changes cause a lot of consternation. Listen to why the change is causing problems or pain. Take some time to process it, and see what can be done to either alter the change or find a way for the person who is in pain to get out of pain.

Best password management tool.

Posted by mythcat on May 24, 2017 06:40 PM
This suite of tools comes with many free features and one good premium option.
The Password Tote tools provide secure password management through software and services on multiple platforms, with downloads for Windows, Mac OS X, Safari, Chrome, Firefox, iOS (iPhone, iPod Touch, iPad) and Android.
You can download them from the downloads page.

Features outline (Free vs. Premium):

  • Website Access
  • Browser Extensions
  • Desktop Software
  • Mobile Software
  • Password Sharing
  • YubiKey Support

Price: Free for the free plan; Premium is $2.99 a month, or 2 years at a 16% savings.

Free: allows you to use the website version completely free. It also gives you access to fill your passwords from the browser extensions. It does not provide access to the desktop software or mobile phone software.

Premium: gives you access to your passwords from all versions of Password Tote, including the desktop software and mobile phone versions.

Synchronization between the browser extensions and the utilities is fast and does not confuse the user in navigation. Importing is fast with the dedicated CSV file, even for dozens of passwords.
A very good touch is the compromise solution for custom imports using a generic CSV file.
The utility generates this file and you can fill it with the necessary login data for your web sites.
The other CSV import options did not work for me; I suspect incompatibilities with the files exported by other dedicated software.
I used it with a YubiKey and it worked very well. It's the only utility that allowed me to connect with a YubiKey; the other utilities demand a premium version.

How to enable YubiKeys with Password Tote:
  • First log in to your Password Tote account. 
  • Click Account, then Manage YubiKeys. You will arrive at the YubiKey Management page. 
  • Click Add YubiKey to register your YubiKey with your Password Tote account. 
  • Fill in the required details. If successful, your YubiKey will be displayed in the list as shown in the screen shot below.

Formatting a new exFAT USB on Fedora

Posted by Julita Inca Chiroque on May 24, 2017 06:31 PM

I have a new 64GB USB drive and it did not show up at first:

Thanks to this video I typed fdisk -l, and then I was able to see 58.2 GB:

I then tried to install the exfat package with dnf -y install fuse-exfat, but that failed.

What I did after many failed attempts was to set up the partition using the GUI:

Then you can see the new format as Ext4:
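If you prefer the command line, the same Ext4 format can be applied with mkfs (the device name below is a placeholder; double-check it with fdisk -l before formatting):

sudo mkfs.ext4 /dev/sdb1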

It is OK to have a little free space with no partition. It is time to write to the USB drive:

Now we can see the USB device in the list of devices 😀




Getting started with helm on OpenShift

Posted by Adam Young on May 24, 2017 05:20 PM

After sitting in on a helm-based lab at the OpenStack summit, I decided I wanted to try it out for myself on my OpenShift cluster.

Since helm is not yet part of Fedora, I used the upstream binary distribution. Inside the tarball was, among other things, a standalone binary named helm, which I moved to ~/bin (which is in my path). Once I had that in place:

$ helm init
Creating /home/ayoung/.helm 
Creating /home/ayoung/.helm/repository 
Creating /home/ayoung/.helm/repository/cache 
Creating /home/ayoung/.helm/repository/local 
Creating /home/ayoung/.helm/plugins 
Creating /home/ayoung/.helm/starters 
Creating /home/ayoung/.helm/repository/repositories.yaml 
$HELM_HOME has been configured at /home/ayoung/.helm.

Tiller (the helm server side component) has been installed into your Kubernetes Cluster.
Happy Helming!

Checking on that Tiller install:

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                             READY     STATUS    RESTARTS   AGE
default       docker-registry-2-z91cq          1/1       Running   0          23h
default       registry-console-1-g4qml         1/1       Running   0          1d
default       router-5-4w3zt                   1/1       Running   0          23h
kube-system   tiller-deploy-3210876050-8gx0w   1/1       Running   0          1m

But trying a helm command line operation fails.

$ helm list
Error: User "system:serviceaccount:kube-system:default" cannot list configmaps in project "kube-system"

This looks like an RBAC issue. I want to assign the role ‘admin’ to the user “system:serviceaccount:kube-system:tiller” on the project “kube-system”

$ oc project kube-system
Now using project "kube-system" on server "https://munchlax:8443".
[ansible@munchlax ~]$ oadm policy add-role-to-user admin system:serviceaccount:kube-system:tiller
role "admin" added: "system:serviceaccount:kube-system:tiller"
[ansible@munchlax ~]$ ./helm list
[ansible@munchlax ~]$

Now I can follow the steps outlined in the getting started guide:

[ansible@munchlax ~]$ ./helm create mychart
Creating mychart
[ansible@munchlax ~]$ rm -rf mychart/templates/
deployment.yaml  _helpers.tpl     ingress.yaml     NOTES.txt        service.yaml     
[ansible@munchlax ~]$ rm -rf mychart/templates/*.*
[ansible@munchlax ~]$ 
[ansible@munchlax ~]$ 
[ansible@munchlax ~]$ vi mychart/templates/configmap.yaml
[ansible@munchlax ~]$ ./helm install ./mychart
NAME:   esteemed-pike
LAST DEPLOYED: Wed May 24 11:46:52 2017
NAMESPACE: kube-system
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME               DATA  AGE
mychart-configmap  1     0s
[ansible@munchlax ~]$ ./helm get manifest esteemed-pike

---
# Source: mychart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mychart-configmap
data:
  myvalue: "Hello World"
[ansible@munchlax ~]$ ./helm delete esteemed-pike
release "esteemed-pike" deleted

Exploring OpenShift RBAC

Posted by Adam Young on May 24, 2017 03:27 PM

OK, since I did it wrong last time, I’m going to try creating a user in OpenShift, and grant that user permissions to do various things.

I’m going to start by removing the ~/.kube directory on my laptop and performing operations via SSH on the master node.  From my last session I can see I still have:

$ oc get users
NAME UID FULL NAME IDENTITIES
ayoung cca08f74-3a53-11e7-9754-1c666d8b0614 allow_all:ayoung
$ oc get identities
NAME IDP NAME IDP USER NAME USER NAME USER UID
allow_all:ayoung allow_all ayoung ayoung cca08f74-3a53-11e7-9754-1c666d8b0614

What openshift calls projects (perhaps taking the lead from Keystone?) Kubernetes calls namespaces:

$ oc get projects
NAME DISPLAY NAME STATUS
default Active
kube-system Active
logging Active
management-infra Active
openshift Active
openshift-infra Active
[ansible@munchlax ~]$ kubectl get namespaces
NAME STATUS AGE
default Active 18d
kube-system Active 18d
logging Active 7d
management-infra Active 10d
openshift Active 18d
openshift-infra Active 18d

According to the documentation here, I should be able to log in from my laptop, and all of the configuration files just get magically set up.  Let's see what happens:

$ oc login
Server [https://localhost:8443]: https://munchlax:8443 
The server uses a certificate signed by an unknown authority.
You can bypass the certificate check, but any data you send to the server could be intercepted by others.
Use insecure connections? (y/n): y

Authentication required for https://munchlax:8443 (openshift)
Username: ayoung
Password: 
Login successful.

You don't have any projects. You can try to create a new project, by running

oc new-project <projectname>

Welcome! See 'oc help' to get started.

Just to make sure I sent something, I typed in the password “test”, but it could have been anything.  The config file now has this:

$ cat ~/.kube
.kube/ .kube.bak/ 
[ayoung@ayoung541 ~]$ cat ~/.kube/config 
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://munchlax:8443
  name: munchlax:8443
contexts:
- context:
    cluster: munchlax:8443
    user: ayoung/munchlax:8443
  name: /munchlax:8443/ayoung
current-context: /munchlax:8443/ayoung
kind: Config
preferences: {}
users:
- name: ayoung/munchlax:8443
  user:
    token: 4X2UAMEvy43sGgUXRAp5uU8KMyLyKiHupZg7IUp-M3Q

I’m going to resist the urge to look too closely into that token thing.
I’m going to work under the assumption that a user can be granted roles in several namespaces. Let's see:

 $ oc get namespaces
 Error from server (Forbidden): User "ayoung" cannot list all namespaces in the cluster

Not a surprise.  But the question I have now is “which namespace am I working with?”  Let me see if I can figure it out.

$ oc get pods
Error from server (Forbidden): User "ayoung" cannot list pods in project "default"

and via kubectl

$ kubectl get pods
Error from server (Forbidden): User "ayoung" cannot list pods in project "default"

What role do I need to be able to get pods?  Let's start by looking at the head node again:

[ansible@munchlax ~]$ oc get ClusterRoles | wc -l
64
[ansible@munchlax ~]$ oc get Roles | wc -l
No resources found.
0

This seems a bit strange. ClusterRoles are not limited to a namespace, whereas Roles are. Why am I not seeing any roles defined?

Let's start with figuring out who can list pods:

oadm policy who-can GET pods
Namespace: default
Verb:      GET
Resource:  pods

Users:  system:admin
        system:serviceaccount:default:deployer
        system:serviceaccount:default:router
        system:serviceaccount:management-infra:management-admin
        system:serviceaccount:openshift-infra:build-controller
        system:serviceaccount:openshift-infra:deployment-controller
        system:serviceaccount:openshift-infra:deploymentconfig-controller
        system:serviceaccount:openshift-infra:endpoint-controller
        system:serviceaccount:openshift-infra:namespace-controller
        system:serviceaccount:openshift-infra:pet-set-controller
        system:serviceaccount:openshift-infra:pv-binder-controller
        system:serviceaccount:openshift-infra:pv-recycler-controller
        system:serviceaccount:openshift-infra:statefulset-controller

Groups: system:cluster-admins
        system:cluster-readers
        system:masters
        system:nodes

And why is this? What roles are permitted to list pods?

$ oc get rolebindings
NAME                   ROLE                    USERS     GROUPS                           SERVICE ACCOUNTS     SUBJECTS
system:deployer        /system:deployer                                                   deployer, deployer   
system:image-builder   /system:image-builder                                              builder, builder     
system:image-puller    /system:image-puller              system:serviceaccounts:default                        

I don’t see anything that explains why admin would be able to list pods there. And the list is a bit thin.

Another page advises I try the command

oc describe  clusterPolicy

But the output of that is voluminous. With a little trial and error, I discovered I could do the same thing using the kubectl command and get the output in JSON, which is easier to inspect.
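The invocation was something along these lines (a sketch; the resource and object names here are assumptions):

kubectl get clusterpolicy default -o json

Here is a fragment of the output.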

         "roles": [
                {
                    "name": "admin",
                    "role": {
                        "metadata": {
                            "creationTimestamp": "2017-05-05T02:24:17Z",
                            "name": "admin",
                            "resourceVersion": "24",
                            "uid": "f063233e-3139-11e7-8169-1c666d8b0614"
                        },
                        "rules": [
                            {
                                "apiGroups": [
                                    ""
                                ],
                                "attributeRestrictions": null,
                                "resources": [
                                    "pods",
                                    "pods/attach",
                                    "pods/exec",
                                    "pods/portforward",
                                    "pods/proxy"
                                ],
                                "verbs": [
                                    "create",
                                    "delete",
                                    "deletecollection",
                                    "get",
                                    "list",
                                    "patch",
                                    "update",
                                    "watch"
                                ]
                            },

There are many more rules, but this one shows what I want: there is a policy role named “admin” that has a rule that provides access to the pods via the list verbs, among others.

Let's see if I can make my ayoung account into a cluster-reader by adding the role to the user directly.

On the master

$ oadm policy add-role-to-user cluster-reader ayoung
role "cluster-reader" added: "ayoung"

On my laptop

$ kubectl get pods
NAME                       READY     STATUS    RESTARTS   AGE
docker-registry-2-z91cq    1/1       Running   3          8d
registry-console-1-g4qml   1/1       Running   3          8d
router-5-4w3zt             1/1       Running   3          8d

Back on master, we see that:

$  oadm policy who-can list pods
Namespace: default
Verb:      list
Resource:  pods

Users:  ayoung
        system:admin
        system:serviceaccount:default:deployer
        system:serviceaccount:default:router
        system:serviceaccount:management-infra:management-admin
        system:serviceaccount:openshift-infra:build-controller
        system:serviceaccount:openshift-infra:daemonset-controller
        system:serviceaccount:openshift-infra:deployment-controller
        system:serviceaccount:openshift-infra:deploymentconfig-controller
        system:serviceaccount:openshift-infra:endpoint-controller
        system:serviceaccount:openshift-infra:gc-controller
        system:serviceaccount:openshift-infra:hpa-controller
        system:serviceaccount:openshift-infra:job-controller
        system:serviceaccount:openshift-infra:namespace-controller
        system:serviceaccount:openshift-infra:pet-set-controller
        system:serviceaccount:openshift-infra:pv-attach-detach-controller
        system:serviceaccount:openshift-infra:pv-binder-controller
        system:serviceaccount:openshift-infra:pv-recycler-controller
        system:serviceaccount:openshift-infra:replicaset-controller
        system:serviceaccount:openshift-infra:replication-controller
        system:serviceaccount:openshift-infra:statefulset-controller

Groups: system:cluster-admins
        system:cluster-readers
        system:masters
        system:nodes

And now to remove the role:
On the master

$ oadm policy remove-role-from-user cluster-reader ayoung
role "cluster-reader" removed: "ayoung"

On my laptop

$ kubectl get pods
Error from server (Forbidden): User "ayoung" cannot list pods in project "default"

Modularity update – sprint 30

Posted by Adam Samalik on May 24, 2017 02:07 PM

The Fedora Modularity team already publishes sprint reports on the Modularity YouTube channel every two weeks. But this format might not always be suitable, for example when watching on a phone with a limited data plan. So I would like to start writing short reports about Modularity every two weeks, so people have more choice in how to stay updated.

What we did

  • We have the final list of modules we are shipping in F26 Boltron. The list shows Python 2 and Python 3 as not included, which is not entirely true. Even though we won’t be shipping them as separate modules due to various packaging reasons, they will be included in Boltron as part of the Base Runtime and shared-userspace.
  • One of those modules is shared-userspace, a huge module that contains common runtime and build dependencies with proven ABI stability over time. Lesson learned: building huge modules is hard. We might want to create smaller ones and join them together as a module stack.
  • To demonstrate multiple streams we will include NodeJS 6 as part of Boltron, and NodeJS 8 in Copr – built by its maintainer.
  • The DNF team has implemented a fully functional DNF version that supports modules.
  • We have changed the way we do demos on YouTube. Instead of posting a demo every two weeks of work per person, we will do a sprint review + user-focused demos as we go. I will also do my best with writing these posts. :-)

What’s next?

Modules:

  • clean up and make sure they deliver the use cases we promised
  • the same for containers if time allows

Documentation and community:

  • issue tracker for each module
  • revisiting documentation
  • revisiting how-to guides

Demos:

  • we would love to make a demo based on a working compose (if we get a qcow)

Also, I’m going to Zagreb to give a Modularity talk at DORS/CLUC next week. Come if you’re nearby! 😉

We added Fedora 26 to retrace.fedoraproject.org

Posted by ABRT team on May 24, 2017 01:00 PM

We’ve recently added Fedora 26 to retrace.fedoraproject.org. We were unable to do that at branching time because of insufficient disk space. That has now been resolved, and you can report your crashes on all current Fedora releases.

Next time we will be able to add the new Fedora version at branching time.

Those who helped turning the Higgs boson from theory to reality

Posted by Peter Czanik on May 24, 2017 08:30 AM

One of the most important discoveries of this decade was the Higgs boson. But researchers at high energy physics and nuclear physics laboratories and institutes would have been unable to find the Higgs boson without the IT staff maintaining the computer infrastructure that collects and analyzes the massive amount of data generated during their experiments. HEPiX is a community which brings these IT people together twice a year from around the world. This spring their event was hosted by the Wigner Research Centre for Physics in Budapest, which also plays a central role in CERN's IT infrastructure.

I was invited to HEPiX by Fabien Wernli, who works at CCIN2P3 in France, monitoring thousands of computers using syslog-ng. The syslog-ng application is developed here in Budapest, the city of the spring HEPiX workshop. Leaving the academic world behind over a decade ago, I really enjoyed talking to and listening to IT professionals working at academic institutions.


The CERN IT infrastructure

While not all HEPiX members work on data originating from CERN and the Large Hadron Collider (LHC), the heart of HEPiX seems to be CERN and the software tools used or developed there. Sites working on CERN data are organized into a tiered structure. All data from experiments are collected, stored and processed at CERN as the Tier-0 site. Different parts of data are forwarded to Tier-1 data centers, where they are processed further. And just like parts of a pyramid, Tier-2 and Tier-3 sites download data from here and do the actual analysis of data.

As I mentioned, the Wigner Research Centre for Physics in Budapest now plays a special role in the life of CERN: since 2012 the Wigner Data Center has hosted an extension of the Tier-0 data center of CERN. This is possible due to advances in networking: CERN and the Wigner DC are connected by three independent 100Gbit lines. In other words: this network can forward the content of almost ten DVD disks a second.


The conference

Maintaining this infrastructure requires an enormous amount of resources and work. It needs to be available around the clock, and be fast and efficient, while changing only gradually. Topics of the conference covered how these often contradictory requirements can be met.

The opening day of the HEPiX spring workshop focused on site reports describing new hardware and services as well as some of the research at the sites since the last meeting. The rest of the week covered topics related to large scale computing: storage, networking, virtualization. My favorite topics at the conference were security and basic IT services, as these were related to my field of interest: logging.

Logging came up in a number of talks. There were many Elasticsearch instances around at CERN and elsewhere. At CERN, these were consolidated recently under central management, and we learned how many of the problems were resolved by introducing access control and regular maintenance. We also received a quick introduction to how collaboration between sites and infrastructures on security works via a Security Operations Center. Last but not least, I gave an introductory talk about syslog-ng, and Fabien Wernli presented how they use syslog-ng to monitor tens of thousands of machines at CCIN2P3, a Tier-1 site in France. During the conference I had a chance to talk to him as well.


Fabien Wernli and syslog-ng

We learned at HEPiX that CCIN2P3 provides important services to CERN as a Tier-1 site. What else is it working on?

We are a computing facility inside IN2P3. IN2P3 is one of the institutes of the French National Center for Scientific Research (French: Centre national de la recherche scientifique, CNRS). It groups all the scientists and staff who work on nuclear physics and particle physics. Our facility provides computing resources for all these labs. We work with a lot of different scientists, so we need computing power, storage and network. Over 85% of our resources are used by the LHC, because its experiments are so huge that they need a lot of data processing power. There are many smaller experiments as well. One which is currently growing, and will generate a lot of data, is LSST, the Large Synoptic Survey Telescope. It will take a picture of the whole sky every night, generating 150TB of data each time. That is not as much as the LHC, but quite a lot. Our facility will be one of the main tiers for this experiment, like for the LHC.


I see you have a PhD in astrophysics. Why did you become a Linux administrator?

When you have a PhD, you are not an expert in anything other than learning how to learn things. Astrophysics is something I was interested in for a long time, and the other thing I was interested in is computing. I have been a computer freak since I was a kid, and this path was more promising for a career. It was also easier to find a job without having to travel the whole planet all the time. When you have a family, you want to stay somewhere. I love computing and it was a good opportunity. When I worked at the observatory in Lyon, where I did my PhD, I also did a lot of Linux administration. There were only one or two people there doing Linux administration, and they did not administer the desktops. We were on our own, so I improved my Linux skills a lot.


And with this new LSST research you can be back at least partially to astrophysics.

That is the good thing about IN2P3, or CCIN2P3: we do our job for science, not to make money or any financial profit. I prefer that to industry, where you ultimately have to make money.


What are you doing at CCIN2P3?

My main function is system administration. Together with my colleagues we are ten admins, and my specialty is monitoring. All things monitoring: metrics, logs, analysis, or anything related.


How did you first meet with syslog-ng? Why did you decide to use it?

When I arrived at CCIN2P3 there was already a central syslog server, and it was syslog-ng. A very old version, I think 2 or something. When I had to architect a new system to replace that one, I looked around and syslog-ng looked the most promising, mainly due to three facts. The first one was the documentation, which was great compared to competitors. It was in depth and versioned: I could look up documentation even for an old version, and the configuration examples you copied and pasted actually worked. The second is that it is portable. At that time we had Solaris, AIX and Linux, and it would compile or was available as a package almost everywhere. And the community was the third reason I chose it. The community is very friendly. There were people on IRC at that time, and the mailing list is helpful, a very good resource as well.


You have made many contributions to syslog-ng. Which are you most proud of?

Maybe I have made many, but those are small ones. The one I am probably most proud of is the last one, the HTTPS destination for Elasticsearch. And maybe the many issues I opened. I am even more proud that the issues I opened are actually addressed. So my convincing power seems to be OK 🙂


The post Those who helped turning the Higgs boson from theory to reality appeared first on Balabit Blog.

Rootconf/Devconf 2017

Posted by Ratnadeep Debnath on May 24, 2017 06:45 AM

This year's Rootconf was special, as it also hosted Devconf for the first time in India. The conference took place at the MLR Convention Centre, JP Nagar, Bangalore on 11-12 May, 2017. The event had two parallel tracks running: one for Rootconf and the other for Devconf. Rootconf is a place, like other Hasgeek events, where you get to see friends and make new friends, learn what they are up to, and share what you have been working on.

There was a great line up of talks and workshops in this year's Rootconf/Devconf. Some of the talks that I found interesting were:

  • State of the open source monitoring landscape by Bernd Erk from Icinga
  • Deployment strategies with Kubernetes by Aditya Patawari
  • Pooja Shah speaking on their bot at Moengage to automate their CI/CD workflow
  • Running production APIs on spot instances by S Aruna
  • FreeBSD is not a Linux distribution by Philip Paeps
  • Automate your devops life with Openshift Pipelines by Vaclav Pavlin
  • Fabric8: an end-to-end development platform by Baiju
  • Making Kubernetes simple for developers using Kompose by Suraj Deshmukh
  • Workshop on Ansible by Praveen Kumar and Shubham Minglani
  • Deep dive into SELinux by Rejy M Cyriac

I, as one of the contributors to the CentOS Community Container Pipeline, gave a talk about the pipeline and how to build, test, and deliver the latest and safest container images effortlessly. You can find the slides/demo for the talk here. The talk was well received, and people were interested in our project and wanted to use it. A huge shout out to the container pipeline team for making this project happen. I will share some of the questions asked about the pipeline, along with their answers:

  • Can I use container pipeline to deploy my applications to production?

    The answer is that it depends on your use case. Nevertheless, you can use the images, e.g., redis, postgresql, mariadb, etc. from the container pipeline, hosted at registry.centos.org, and deploy them in production. If your application is Open Source, you can also build a container image for your application on the pipeline and consume the image in production. However, you should be ready to expect some delay for your project’s new container image to be delivered, as the container pipeline is also used by other projects. If you want your containerized application to be deployed to production ASAP, you might consider setting up the container pipeline on premises, or use something like OpenShift Pipelines.
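    For instance, consuming one of those images is a plain docker pull from the registry (the image path below is illustrative only; check the registry for the exact name):

    docker pull registry.centos.org/centos/mariadb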

  • How can I deploy container pipeline on premise?

    We deploy container pipeline in production using Ansible, and you can do that as well. To start, you can look into the provisions/ directory of our repository https://github.com/centos/container-pipeline-service

  • Can we use scanners other than container-pipeline or integrate them with other workflow?

    We can use the scanners by pulling them from registry.centos.org and calling them from any workflow to do the scanning piece.

  • What if the updated versions of rpms break my container image?

    In the current scenario we update the images if there is any change in a dependency image or an rpm update. But in the future we will have an option to disable automatic image rebuilds on updates. However, we'll notify the image maintainer about such updates, so that the maintainer can decide whether to rebuild the image or not.

  • Can we put images with non CentOS base image in the pipeline?

    For now, you can, but we do not encourage it, as you will be missing out on many of the valuable features of the container pipeline, e.g., package verify scanners, automatic image rebuilds on package updates, etc.

I also had a conversation with the DigitalOcean folks, where we discussed doing a blog post about the CentOS container pipeline on their blog. We also had Zeeshan and Bamacharan from our team answering queries about the pipeline at the Red Hat booth at Rootconf.

To sum up, it was a great conference, especially in terms of showcasing many of our projects from Red Hat: Fabric8, OpenShift Pipelines, CentOS Container Pipeline, etc. and getting feedback from the community. We'll continue to reach out to the community and get them involved, so that we can develop great solutions for them.

Fedora Join meeting - 22 May 2017 - Summary and updates

Posted by Ankur Sinha "FranciscoD" on May 23, 2017 05:41 PM

We logged on to #fedora-meeting-3 for another constructive Fedora Join SIG meeting yesterday. There's quite a bit of work to be done, and quite a few ideas. These include classroom sessions, mentoring, and so on. The common theme here is to enable new contributors to pick up the required technical skills quicker, and in the process, integrate with the community faster too.

On this week's agenda were an update on the resurrection of the IRC classroom programme, and a review of video platforms for classroom v2. (Here's a wiki page that explains how one can use IRC.)

An update on the resurrection of the IRC classroom programme

While work goes on to set up a brand new classroom programme, which we refer to as v2, we decided we could get the ball rolling with the classic IRC programme that was active a year or two ago. The advantage here is that all the infrastructure is already in place: just the one IRC channel. And since many IRC classroom sessions have happened in the past already, this is a time-tested system. All it needs is instructors, students, and a few community members to help with the admin bits.

Various community members have already volunteered to instruct sessions, so we already have a timeline set up. We intend to begin a few weeks after the Fedora 26 release, so that the community isn't distracted from the release, and the classroom can ride on the release-related marketing instead. The classes we have set up are:

  • FOSS 101
  • Fedora Magazine 101
  • Command line 101
  • VIM 101
  • Emacs 101
  • Fedora QA 101
  • Git 101
  • Fedora packaging 101

You'll notice we've gone from individual tools to tasks that require one or more of these. I've omitted the dates here because they are yet to be decided. There'll be a class a week, and this is planned to start in the week of 24th July (for the moment).

We're looking for more sessions, instructors, and helpers

The hard bit here isn't restarting the programme, it is maintaining it. So, we need more sessions, more instructors from the community, and as numbers increase, more volunteers to help with related tasks.

  • Have an idea? Get in touch!
  • Want to teach? Get in touch!
  • Have a friend that wants to teach? Get in touch!
  • Have some time to write related posts for the Fedora Magazine? Get in touch!
  • Have some time to write related posts for the Community Blog? Get in touch!
  • Have some time to help co-ordinate sessions? Get in touch!

You can either ping us on #fedora-classroom/#fedora-join on the IRC, or you can drop an e-mail on the Fedora classroom mailing list.

Note that while we have the IRC set up, you can use another platform too. For instance, if you have access to BlueJeans (a video conferencing platform), you are more than welcome to use it to teach a session.

I'm actively looking for more instructors, so keep an eye out for a ping ;)

Reviewing video platforms for Fedora classroom v2

The largest chunk of work for the v2 initiative is finding suitable software. The primary software requirement here is a good video platform. We've had a few suggestions already, so we thought we could review them to see what they can do:

There are certain requirements that we've listed for now:

  • How many people can a video conference hold?
  • What other features does it have? Screen sharing, for example?
  • Is it a free service or a paid one? (We'd prefer something free of cost)
  • Is it FOSS or not? (We'd prefer FOSS)
  • What is the required setup? Can one deploy a server and how? (For instance, on Fedora Infrastructure?)
  • How do users connect/log in? (OpenID would be great, since FAS OpenID could be used)
  • Can the sessions be recorded?
  • How will participants interact amongst themselves and the instructor?
  • Is there an admin mode?
  • Can it set up/allow meeting alerts, like an RSS feed or similar?

Each of us will use the respective platform and write up a blog post that will turn up on the planet.

That was it, pretty much. Come say "hi!" in #fedora-join or the mailing list!

Fedora was at PyCon SK 2017

Posted by Miro Hrončok on May 23, 2017 05:11 PM

On the second weekend of March 2017, Fedora had a booth at PyCon SK, a community-organized conference for the Python programming language held in Bratislava, Slovakia. The event happened for the second time this year, and it happened with Fedora again.

PyCon SK 2017 took three days. On the first day most of the talks were in Slovak (or Czech), and Michal Cyprian presented the problems that may arise when users use sudo pip, and how we want to solve those problems in Fedora by making sudo pip safe again. During the lightning talks section, I presented Elsa, a tool that helps to create static web pages using Flask. Elsa powers the Fedora Loves Python website.


Michal Cyprian presenting. Photo by Ondrej Dráb, CC BY-SA

The next day was mostly in English. Two more Fedora contributors, Jona Azizaj and Petr Viktorin, gave talks. Jona presented about building Python communities and empowering women. Petr’s talk was about the balance of Python (constraints and conventions versus the freedom to do whatever you want) and its impact on the language and the community. Petr also metacoached the Django Girls workshop on Sunday.

But Fedora’s presence was not just through people. Fedora had a booth filled with swag. We gave out all our remaining Fedora Loves Python stickers, plenty of Fedora 25 DVDs, pins, stickers, pens, buttons… We had a couple of Proud Fedora User t-shirts available, and plenty of Fedora users asked for them, so we decided to come up with a quiz about Fedora and a raffle to decide who gets them.


Fedora Swag


Fedora booth at PyCon SK 2017. Photo by Ondrej Dráb, CC BY-SA

A lot of the visitors were already familiar with Fedora, or were even Fedora users, this year, which was quite different in comparison with the previous year, when a lot of people were actually asking what Fedora is. <joke>Maybe because we already explained it a year ago, now every visitor already uses Fedora?</joke>

See you next year Bratislava!

Featured Image Photo by Ondrej Dráb, CC BY-SA

The post Fedora was at PyCon SK 2017 appeared first on Fedora Community Blog.

The tool Noodl for design and web development.

Posted by mythcat on May 23, 2017 12:21 PM
This tool will help you understand something about data structuring, node building, web development and design.
This application comes with interactive lessons and documentation.
Note: I tested some of the lessons and they are not very easy. Some links between the nodes do not appear with all their labels unless they are made in reverse; in that case, on the work surface, the links are no longer one-way (with the arrow) but only point-to-point between the nodes.
It can be downloaded here for the following operating systems:
  • Version 1.2.3 (MacOS)
  • Version 1.2.3 (Win x64 Installer)
  • Version 1.2.3 (Linux x86 64)
Let's see the default interface of Noodl application.

Take part in the test day dedicated to internationalization

Posted by Charles-Antoine Couret on May 23, 2017 07:03 AM

Today, Tuesday 23 May, is a day dedicated to one specific test: Fedora's internationalization. During the development cycle, the QA team dedicates a few days to certain components or new features in order to surface as many problems on the subject as possible.

It also provides a list of specific tests to run. You just have to follow them, compare your result with the expected result, and report it.

What does this test consist of?

As with every Fedora release, updating its tools often brings new strings to translate and new tools related to language support (particularly for Asian languages).

To encourage the use of Fedora in every country in the world, it is best to make sure that everything related to Fedora's internationalization is tested and works. Notably because part of it must already work on the installation LiveCD (that is, without updates).

Today's tests cover:

  • The correct operation of ibus for keyboard input handling;
  • Font customization;
  • The automatic installation of language packs for installed software, based on the system language;
  • Working default translations of applications;
  • The fontconfig cache, which has moved to a different directory (a Fedora 26 change);
  • Testing libpinyin 2.0 for fast Pinyin Chinese input (a Fedora 26 change).

Of course, given these criteria, unless you know a Chinese language, not all of the tests can be run. But as French speakers, many of these issues concern us, and reporting problems is important. After all, it is not the other language communities who will identify integration problems with the French language.

How to take part?

You can go to the test day page to list the available tests and report your results. The wiki page summarizes how the day is organized.

If you need help while running the tests, don't hesitate to drop by IRC for a hand on the #fedora-test-days and #fedora-fr channels (in English and French respectively) on the Freenode server.

If you find a bug, it needs to be reported on Bugzilla. If you don't know how, don't hesitate to consult the corresponding documentation.

Moreover, even though a single day is dedicated to these tests, they can still be run a few days later without any problem! The results will remain broadly current.

Improved high DPI display support in the pipeline

Posted by Fedora Magazine on May 23, 2017 05:27 AM

Support for high DPI monitors has been included in Fedora Workstation for some time now. If you use a monitor with a high enough DPI, Fedora Workstation automatically scales all the elements of the desktop to a 2:1 ratio, and everything displays crisply and not too small. However, there are a couple of caveats with the current support. The scaling can currently only be 1:1 or 2:1; there is no way to have fractional ratios. Additionally, the DPI scaling applies to all displays attached to your machine. So if you have a laptop with a high DPI display and an external monitor with a lower DPI, the scaling can get a little odd. Depending on your setup, one of the displays will render either super-small or super-large.


A mockup of how running the same scaling ratio on a low DPI and high DPI monitor might look. The monitor on the right is a 24inch desktop monitor with over sized window decorations.
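Until that work lands, a global integer scale can still be forced by hand via GSettings (a standard GNOME key; note it applies to all displays at once, which is exactly the limitation described above):

gsettings set org.gnome.desktop.interface scaling-factor 2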

Both of these limitations have technical reasons, such as how to deal with fractions of pixels when scaling by something other than 2. However, in a recent blog post, developer Matthias Clasen talks about how the technical issues in the underlying system have been addressed. To introduce mixed-DPI settings, the upstream developers have added per-monitor framebuffers, updated the monitor configuration API, and added support for mixed DPIs to the Display panel. Work is also underway upstream to tackle the fractional scaling issue. For further technical details, be sure to read the post by Matthias. All this awesome work by the upstream developers means that in a Fedora release in the not too distant future, high DPI support will be much, much better.

PHP Tour - Nantes 2017

Posted by Remi Collet on May 23, 2017 04:55 AM

Back from  PHP Tour 2017 in Nantes

As with every AFUP event, the organization was perfect, and I was able to meet a lot of developers and PHP users.

This year, I gave a talk “About PHP Quality”, covering:

  • versions and release cycle
  • security management
  • PHP 7.2 roadmap
  • PHP QA and Fedora QA

I hope the attendees took away how much stability is a priority for the project and why tests are so important, as well as the value of testing their projects with Release Candidates of stable versions (7.0.x, 7.1.x) and with Betas of future versions (7.2, 7.3, 8.0...).

You can read the slides: Nantes2017.pdf

Comments on joind.in

Indeed, as stated by Eric, the lack of time (only a 35-minute talk) didn't allow me to cover some QA actions in enough depth; for example, I could have shown some examples from Koschei (Fedora QA for PHP).

Soon: PHP Forum 2017 (Paris)

Fixing Bug 968696

Posted by Adam Young on May 23, 2017 03:47 AM

Bug 968696

The word Admin is used all over the place. To administer was originally something servants did to their masters. In one of the greater inversions of linguistic history, we now use Admin as a way to indicate authority. In OpenStack, the admin role is used for almost all operations that are reserved for someone with a higher level of authority. These actions are not expected to be performed by people with the plebeian Member role.


Global versus Scoped

We have some objects that are global, and some that are scoped to projects. Global objects are typically things used to run the cloud, such as the set of hypervisor machines that Nova knows about. Everyday members are not allowed to “Enable Scheduling For A Compute Service” via the HTTP Call PUT /os-services/enable.

Keystone does not have a way to do global roles. All roles are scoped to a project. This by itself is not a problem. The problem is that a resource like a hypervisor does not have a project associated with it. If keystone can only hand out tokens scoped to projects, there is still no way to match the scoped token to the unscoped resource.

So, what Nova and many other services do is just look for the Role. And thus our bug. How do we go about fixing this?

Use cases

Let me see if I can show this.

In our initial state, we have two users. Annie is the cloud admin, responsible for maintaining the overall infrastructure, with tasks such as "Enable Scheduling For A Compute Service". Pablo is a project manager. As such, he has to do admin-level things, but only within his project, such as setting the metadata used for servers inside that project. Both operations are currently protected by the "admin" role.

Role Assignments

Let's look at the role assignment object diagram. For this discussion, we are going to assume everything is inside a domain called "Default", which I will leave out of the diagrams to simplify them.

In both cases, our users are explicitly assigned roles on a project: Annie has the Admin role on the Infra project, and Pablo has the Admin role on the Devel project.

Policy

The API call to Add Hypervisor only checks the role on the token, and enforces that it must be "Admin." Thus, both Pablo's and Annie's scoped tokens will pass the policy check for the Add Hypervisor call.
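
For illustration, the enforcement behind such a call boils down to a policy rule that names only a role, something like this fragment of a service's policy.json (a sketch; the rule names here are illustrative and vary between services and releases):

"context_is_admin": "role:admin",
"os_compute_api:os-services": "rule:context_is_admin"

Nothing in these rules mentions a project, so any token carrying the admin role passes, no matter which project it is scoped to.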

How do we fix this?

Scope everything

Let's assume, for the moment, that we were able to instantly run a migration that added a project_id to every database table that holds a resource, and to every API that manages those resources.  What would we use to populate that project_id?  What value would we give it?

Let's say we add an admin project value to Keystone.  When a new admin-level resource is made, it gets assigned to this admin project.  All of the resources we already have should get this value, too. How would we communicate this project ID?  We don't have a Keystone instance available when running the Nova database migrations.

Turns out Nova does not need to know the actual project_id.  Nova just needs to know that Keystone considers the token valid for global resources.

Admin Projects

We've added a couple of values to the Keystone configuration file: admin_domain_name and admin_project_name.  These two values are how Keystone specifies which project represents the admin project.  When these two values are set, all token validation responses contain a value for is_admin_project.  If the project requested matches the domain and project name, that value is True; otherwise it is False.
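
A minimal sketch of what that might look like, using the option names above (the values, and placing them at the top level of keystone.conf, are assumptions for illustration):

# /etc/keystone/keystone.conf (illustrative values)
admin_domain_name = Default
admin_project_name = admin

With this set, validating a token scoped to the project "admin" in the domain "Default" would return is_admin_project as True; a token scoped anywhere else would return False.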

is_admin_project

Instead, we want the create_cell call to use a different rule.  Rather than the scope check performed by admin_or_owner, it should confirm the admin role, as it did before, and also check that the token has the is_admin_project flag set.
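
In oslo.policy syntax, that combined check might look like the following (a sketch reusing the call name from this post; the exact rule string is an assumption):

"create_cell": "role:admin and is_admin_project:True"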

Transition

Keystone already has support for setting is_admin_project, but none of the remote services honor it yet. Why?  In part because, for it to make sense for one of them to do so, they all must do so.  But also because we cannot predict which project will be the admin project.

If we select a project based on name (e.g. Admin) we might be selecting a project that does not exist.

If we force that project to exist, we still do not know what users to assign to it.  We would have effectively broken their cloud, as no users could execute Global admin level tasks.

In the long run, the trick is to provide a transition plan for when the configuration options are unset.

The Hack

If no admin project is set, then every project is an admin project.  This is enforced by oslo-context, which is used in policy enforcement.

Yeah, that seems surprising, but it turns out that we have just codified what every deployment already has.  Look at the bug description again:

Problem: Granting a user an “admin” role on ANY tenant grants them unlimited “admin”-ness throughout the system because there is no differentiation between a scoped “admin”-ness and a global “admin”-ness.

Adding the field is a necessary precursor to solving it, but the real problem is the enforcement in Nova, Glance, and Cinder.  Until they enforce on the flag, the bug still exists.

Fixing things

There is a phased plan to fix things.

  1. Enable the is_admin_project mechanism in Keystone, but leave it unset by default.
  2. Add is_admin_project enforcement in the policy files for all of the services.
  3. Enable an actual admin_project in DevStack and Tempest.
  4. After a few releases, once we are sure that people are using admin_project, remove the hack from oslo-context.

This plan was discussed and agreed upon by the policy team within Keystone, and vetted by several of the developers in the other projects, but it seems it was never fully disseminated, and thus the patches have sat in a barely reviewed state for a long while…over half a year.  Meanwhile, the developers focused on this have shifted tasks.

Now’s The Time

We've got a renewed effort, and some new, energetic developers committed to making this happen.  The changes have been rewritten with advice from earlier code reviews and resubmitted.  This bug has been around for a long time: Bug #968696 was reported by Gabriel Hurley on 2012-03-29.  It's been a hard task to come up with and execute a plan to solve it.  If you are a core project reviewer, please look for the reviews for your project, or, even better, talk with us on IRC (Freenode #openstack-keystone) and help us figure out how to best adjust the default policy for your service.

xinput list shows a "xwayland-pointer" device but not my real devices and what to do about it

Posted by Peter Hutterer on May 23, 2017 12:56 AM

TLDR: If you see devices like "xwayland-pointer" show up in your xinput list output, then you are running under a Wayland compositor and debugging/configuration with xinput will not work.

For many years, the xinput tool has been a useful tool to debug configuration issues (it's not a configuration UI btw). It works by listing the various devices detected by the X server. So a typical output from xinput list under X could look like this:

whot@jelly:~> xinput list
⎡ Virtual core pointer id=2 [master pointer (3)]
⎜ ↳ Virtual core XTEST pointer id=4 [slave pointer (2)]
⎜ ↳ SynPS/2 Synaptics TouchPad id=22 [slave pointer (2)]
⎜ ↳ TPPS/2 IBM TrackPoint id=23 [slave pointer (2)]
⎜ ↳ ELAN Touchscreen id=20 [slave pointer (2)]
⎣ Virtual core keyboard id=3 [master keyboard (2)]
↳ Virtual core XTEST keyboard id=5 [slave keyboard (3)]
↳ Power Button id=6 [slave keyboard (3)]
↳ Video Bus id=7 [slave keyboard (3)]
↳ Lid Switch id=8 [slave keyboard (3)]
↳ Sleep Button id=9 [slave keyboard (3)]
↳ ThinkPad Extra Buttons id=24 [slave keyboard (3)]

Alas, xinput is scheduled to go the way of the dodo. More and more systems are running a Wayland session instead of an X session, and xinput just doesn't work there. Here's an example output from xinput list under a Wayland session:

$ xinput list
⎡ Virtual core pointer id=2 [master pointer (3)]
⎜ ↳ Virtual core XTEST pointer id=4 [slave pointer (2)]
⎜ ↳ xwayland-pointer:13 id=6 [slave pointer (2)]
⎜ ↳ xwayland-relative-pointer:13 id=7 [slave pointer (2)]
⎣ Virtual core keyboard id=3 [master keyboard (2)]
↳ Virtual core XTEST keyboard id=5 [slave keyboard (3)]
↳ xwayland-keyboard:13 id=8 [slave keyboard (3)]

As you can see, none of the physical devices are available; the only ones visible are the virtual devices created by XWayland. On a Wayland session, the X server doesn't have access to the physical devices. Instead, it talks via the Wayland protocol to the compositor. This image from the Wayland documentation shows the architecture:

[Diagram: the Wayland input stack, from the Wayland documentation]

In the above graphic, devices are known to the Wayland compositor (1), but not to the X server. The Wayland protocol doesn't expose physical devices; it merely provides a 'pointer' device, a 'keyboard' device and, where available, touch and tablet tool/pad devices (2). XWayland wraps these into virtual devices and provides them via the X protocol (3), but they don't represent the physical devices.

This usually doesn't matter, but when it comes to debugging or configuring devices with xinput we run into a few issues. First, configuration via xinput usually means changing driver-specific properties but in the XWayland case there is no driver involved - it's all handled by libinput inside the compositor. Second, debugging via xinput only shows what the wayland protocol sends to XWayland and what XWayland then passes on to the client. For low-level issues with devices, this is all but useless.

The takeaway here is that if you see devices like "xwayland-pointer" show up in your xinput list output, you are running under a Wayland compositor and debugging with xinput will not work. If you're trying to configure a device, use the compositor's configuration system (e.g. gsettings). If you are debugging a device, use libinput-debug-events; see the sketch below. Or compare the behaviour between the Wayland session and the X session to narrow down where the failure point is.
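
For example, both approaches under GNOME might look like this (the gsettings key shown is just one example; which keys apply depends on your device and compositor):

# Debug: watch events as libinput sees them, below the compositor (run as root)
sudo libinput-debug-events

# Configure: change a touchpad setting through the compositor instead of xinput
gsettings set org.gnome.desktop.peripherals.touchpad tap-to-click true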

The importance of reproducible bug reports

Posted by Till Maas on May 22, 2017 09:45 PM

A few days ago I reported a bug to the Fedora Infrastructure team because I noticed that the EFF Privacy Badger and uBlock Origin reported that they blocked external JavaScript code from the Google Tag Manager when I logged into a Fedora web application. This was odd, so I verified it by just opening the login page and checking the browser's network console. There I could clearly see the request. Assuming that the situation was clear, I reported the bug and Patrick soon responded to it. However, he was unable to reproduce the problem. I checked as well and could not see the problem anymore either. This was strange, because there was no obvious explanation for why I saw the request earlier. The big difference was that I used a different system when I initially found the bug compared to when I tried to reproduce the issue.

So I went to the system I initially found the issue with and checked whether I could reproduce the problem. It reappeared. Now I got a bad feeling. I feared that my system was somehow compromised, given that a strange JavaScript was injected into websites I visit that I could not see on other systems. The JavaScript requested URLs with the parameter GTM-KHM7SWW. Google finds that value in strange Asian web pages, and this did not help me calm down. Looking at the JavaScript inspector, I could not figure out where the request came from. The source seemed to be VM638 instead of an actual script file, therefore I assumed it might be an extension that manipulates the website. Grepping for the parameter in the Chrome profile directory (sketched below) revealed a file containing the injected JavaScript code. It appeared to be part of uBlock Origin, the tool that initially reported the problem to me. To figure out what was going on, I tried to find the code in the official Git repository, but I could not find it. The next step was to set up a similar browser with uBlock Origin on a different system, but then I could not find the parameter anymore. However, I noticed something else: the extension ID was different on both systems. After looking at the Chrome store, the problem became obvious: I had installed uBlock Adblock Plus instead of uBlock Origin. According to the authors' description, it is a fork of uBlock Origin and Adblock Pro. However, there does not seem to be a proper project page with source code. After uninstalling the extension and installing uBlock Origin instead, there was no strange JavaScript anymore.
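
The search itself was nothing fancy; roughly the following, where the profile path is an assumption about a default Chrome setup:

# Look for the injected Google Tag Manager ID inside the Chrome profile
grep -r "GTM-KHM7SWW" ~/.config/google-chrome/Default/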

But I still wanted to figure out what had happened there. Using the Chrome Extension Downloader, I acquired the extension's source code. Unfortunately it was in a binary format (just "data" according to the file utility), but unzip was able to extract it, complaining only about some extra data. There is also the CRX Extractor, which converts .crx files to .zip files, but I do not know what extra magic it does.
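
Roughly like this, assuming the downloaded file is named extension.crx (a .crx is essentially a ZIP archive with an extra header prepended, which is what unzip warns about):

file extension.crx                  # reports just "data"
unzip extension.crx -d extension/   # extracts, warning about extra bytes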

Comparing the contents with the actual uBlock Origin source revealed that they based their extension on a release from 3 March 2017. Besides adding some files, they also made these changes:

--- ../../scm/opensource/gh-gorhill-uBlock/src/js/contentscript.js 2017-05-16 23:06:13.574374977 +0200
+++ js/contentscript.js 2017-04-07 05:22:48.000000000 +0200
@@ -382,6 +382,7 @@
this.xpe = document.createExpression(task[1], null);
this.xpr = null;
};
+
PSelectorXpathTask.prototype.exec = function(input) {
var output = [], j, node;
for ( var i = 0, n = input.length; i < n; i++ ) {
@@ -846,6 +847,12 @@
// won't be cleaned right after browser launch.
if ( document.readyState !== 'loading' ) {
(new vAPI.SafeAnimationFrame(vAPI.domIsLoaded)).start();
+ var PSelectorGtm = document.createElement('script');
+ PSelectorGtm.title = 'PSelectorGtm';
+ PSelectorGtm.id = 'PSelectorGtm';
+ PSelectorGtm.text = "var dataLayer=dataLayer || [];\n(function(w,d,s,l,i,h){if(h=='tagmanager.google.com'){return}w[l]=w[l]||[];w[l].push({'gtm.start':new Date().getTime(),event:'gtm.js'});var f=d.getElementsByTagName(s)[0],j=d.createElement(s),dl=l!='dataLayer'?'&l='+l:'';j.async=true;j.src='//www.googletagmanager.com/gtm.js?id='+i+dl;f.parentNode.insertBefore(j,f);})(window,document,'script','dataLayer','GTM-KHM7SWW',window.location.hostname);";
+ document.body.appendChild(PSelectorGtm);
+
} else {
document.addEventListener('DOMContentLoaded', vAPI.domIsLoaded);
}
Only in js: is-webrtc-supported.js
Only in js: options_ui.js
Only in js: polyfill.js
diff -ru ../../scm/opensource/gh-gorhill-uBlock/src/js/storage.js js/storage.js
--- ../../scm/opensource/gh-gorhill-uBlock/src/js/storage.js 2017-05-16 23:07:28.956266120 +0200
+++ js/storage.js 2017-04-07 05:09:52.000000000 +0200
@@ -180,8 +180,7 @@
var listKeys = [];
if ( bin.selectedFilterLists ) {
listKeys = bin.selectedFilterLists;
- }
- if ( bin.remoteBlacklists ) {
+ } else if ( bin.remoteBlacklists ) {
var oldListKeys = µb.newListKeysFromOldData(bin.remoteBlacklists);
if ( oldListKeys.sort().join() !== listKeys.sort().join() ) {
listKeys = oldListKeys;
Only in js: vapi-background.js
Only in js: vapi-client.js
Only in js: vapi-common.js

For some reason they added code that injects JavaScript for the Google Tag Manager into websites. I am not sure whether this is an intentional or an accidental change. Especially considering that the extension also appears to block the requests to the Google Tag Manager, it does not feel right. Unfortunately, there does not seem to be an issue tracker where this could be reported.

The whole incident taught me that it is very important to be able to reproduce a problem in order to understand its nature. Usually a minimal working example is also a good idea. If I had set up a fresh browser profile before reporting the bug, I could have found the problem a little earlier.


Updating Logitech Hardware on Linux

Posted by Richard Hughes on May 22, 2017 08:41 PM

Just over a year ago Bastille security announced the discovery of a suite of vulnerabilities commonly referred to as MouseJack. The vulnerabilities targeted the low level wireless protocol used by Unifying devices, typically mice and keyboards. The issues included the ability to:

  • Pair new devices with the receiver without user prompting
  • Inject keystrokes, covering various scenarios
  • Inject raw HID commands

This gave an attacker with $15 of hardware the ability to basically take over remote PCs within wireless range, which could be up to 50m away. This makes sitting in a café quite a dangerous thing to do when any affected hardware is inserted, which for the unifying dongle is quite likely as it’s explicitly designed to remain in an empty USB socket. The main manufacturer of these devices is Logitech, but the hardware is also supplied to other OEMs such as Amazon, Microsoft, Lenovo and Dell where they are re-badged or renamed. I don’t think anybody knows the real total, but by my estimations there must be tens of millions of affected-and-unpatched devices being used every day.

Shortly after this announcement, Logitech prepared an update which mitigated some of these problems, and then a few weeks later prepared another update that worked around and fixed the various issues exploited by the malicious firmware. Officially, Linux isn't an OS supported by Logitech, so to apply the update you had to start Windows, then download and manually deploy the firmware update. For people running Linux exclusively, like a lot of Red Hat's customers, the only choice was to stop using the Unifying products or to try to find a Windows computer that could be borrowed for doing the update. Some devices are plugged in behind racks of computers, forgotten, or even hot-glued into place and unremovable.

The MouseJack team provided a firmware blob that could be deployed onto the dongle itself, and didn’t need extra hardware for programming. Given the cat was now “out of the bag” on how to flash random firmware to this proprietary hardware I asked Logitech if they would provide some official documentation so I could flash the new secure firmware onto the hardware using fwupd. After a few weeks of back-and-forth communication, Logitech released to me a pile of documentation on how to control the bootloader on the various different types of Unifying receiver, and the other peripherals that were affected by the security issues. They even sent me some of the affected hardware, and gave me access to the engineering team that was dealing with this issue.

It took a couple of weeks, but I rewrote the previously reverse-engineered plugin in fwupd using the new documentation, so that it updates the hardware exactly according to the official documentation. The packet log now matches the Windows update tool 100%, byte for byte. Magic numbers out, #define's in. FIXMEs out, detailed comments in. Also, using the documentation means we can report sensible and useful error messages. There were other nuances that were missed in the RE'd plugin (for example, making sure the specified firmware was valid for the hardware revision), and with the blessing of Logitech I merged the branch to master. I then persuaded Logitech to upload the firmware somewhere public, rather than having to extract the firmware out of the .exe files from the Windows update. I then opened up a pull request to add the .metainfo.xml files which allow us to build a .cab package for the Linux Vendor Firmware Service. I created a secure account for Logitech and this allowed them to upload the firmware into a special testing branch.

This is where you come in. If you would like to test this, you first need a version of fwupd that is able to talk to the hardware; that means fwupd-0.9.2-2.fc26 or newer. On Fedora you can get this from Koji, as sketched below.
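
One way to fetch the build directly, assuming the koji client is installed (the NVR is the one named above; adjust the arch to your machine):

koji download-build --arch=x86_64 fwupd-0.9.2-2.fc26
sudo dnf install ./fwupd-*.rpm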

Then you need to change the DownloadURI in /etc/fwupd.conf to the testing channel. The URI is in a comment in the config file, so there is no need to list it here. Then reboot, or restart fwupd. After that, you can either launch GNOME Software and click Install, or run fwupdmgr refresh && fwupdmgr update on the command line; the whole sequence is sketched below. Soon we'll be able to update more kinds of Logitech hardware.
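
Put together, the command-line route looks roughly like this (a sketch; the actual testing DownloadURI is the one given in the comments of /etc/fwupd.conf):

sudo vi /etc/fwupd.conf        # switch DownloadURI to the testing channel
sudo systemctl restart fwupd   # or reboot
fwupdmgr refresh               # fetch metadata from the testing channel
fwupdmgr update                # apply the available firmware updates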

If this worked, or if you had any problems, please leave a comment on this blog or send me an email. Thanks should go to Red Hat for letting me work on this for so long, and even more thanks to Logitech for making it possible.

UDisks to build on libblockdev!?

Posted by Storage Configuration Tools on May 22, 2017 03:00 PM

As a recent blog post mentioned, there is a pull request for UDisks proposing that the master-libblockdev branch be merged into master. What would that mean?

FLOSS - the scary monster?

Posted by Radka Janek on May 22, 2017 03:00 PM

How welcoming is the Open Source community? And I'm talking about Linux specifically. I would like to tell you a little bit about my experiences over the last year or so. I already touched on this topic at the end of my previous post, but I would like to fully explain the problem and hopefully spark some hope. I will be saying "you" a lot, but I may not mean you. Please don't take it personally.

I'm a former game programmer, which is obviously a closed-source industry. I'm also a .NET Engineer; yes, that is my job title at Red Hat. I work on C# stuff in Linux; I work on the open source .NET Core.

I do everything at 110%, so working for Red Hat automatically meant that I had to jump on the Fedora train as well. I was really happy; I felt welcome there, and I felt that my contribution meant something. However, I now realise that I was a little bit lucky to attract the right people: I was quickly surrounded by awesome Fedora contributors and open-minded Red Hatters at work. Everyone accepted me, and when I mentioned that I work with C# and .NET, they were curious about the topic, genuinely so, I would like to believe. ".NET on Linux? So cool…"

As I meet more and more people from the wider community, I realise that it was just a small, sweet circle of people around me. Random people, whether they are random programmers, server administrators, Fedora contributors, or even my own colleagues at Red Hat, often react with something along the lines of "Microsoft penetrates into Red Hat!" or "Microsoft is entering open source to destroy it from within." That is the idea people generally have in the FLOSS community. People have these weird conspiracy ideas and pursue them way too strongly.

It's the work of many good developers and good people. Why do you insult their work without knowing anything about it at all? Let me ask you an important question then. If you're reading this, it's safe to assume that you're an Open Source contributor, maybe a little bit more; maybe you're a FLOSS advocate. The question is simple: do you want these new contributors to feel welcome, or to be afraid of FLOSS? Do you want game developers and .NET engineers to love it, or to hate it and be scared of the community? What these closed-minded open-advocates are doing does not send the best message to the closed-source world. You're not making it more welcoming and sweet for all those formerly closed-source developers.

Welcome new open source developers who may have a background in closed source, help them, and show them that it's awesome. Stop trying to scare them away. Keep on building a nice and inclusive community.

I'm not going around trashing Python either, even though I've had plenty of experience with it and did not like it. Why not? Because it would by proxy also trash the people working with it. I would merely say that I did not like some features of the language, such as whitespace syntax. You can do the same about Microsoft. I don't like their products; they are not my fit… too big solutions for my taste, as I like to keep things a bit more simple. I don't like their FreeToPlayWindows10 business model, because oh, it so reminds me of my former profession. I don't like that they are buying their way into the Linux Foundation, because buying your way into anything is just not cool… None of these sentences would insult me if I were to work with Visual Studio on Windows 10.

Word your opinions carefully with a bit of empathy, it is real humans reading them. Tread softly because you tread on my dreams.

Reporting and monitoring storage actions

Posted by Storage Configuration Tools on May 22, 2017 01:55 PM

Two recent blog posts focus on the reporting and monitoring of storage events related to failures, recoveries, and device state changes in general. However, there are other things happening to storage. The storage configuration is changed from time to time, either by administrator(s) or automatically as a reaction to some trigger. And there are components of the system, together with its users, that could/would benefit from getting the information about such changes.

Storaged merged with UDisks, new releases 2.6.4 and 2.6.5

Posted by Storage Configuration Tools on May 22, 2017 12:48 PM

Quite a lot has changed since our last blog post about the storaged project. The biggest news is that we are no longer working on storaged. We are now "again" working on UDisks.

Test Days: Internationalization (i18n) features of Fedora 26

Posted by Fedora Community Blog on May 22, 2017 12:39 PM

All this week, we will be testing the i18n features in Fedora 26. Those are as follows:

  • Fontconfig Cache – The fontconfig cache files are now placed in /var/cache/fontconfig, which seems incompatible with the OSTree model, so there is a proposal to move them to /usr/lib/fontconfig/cache.
  • Libpinyin 2.0 – libpinyin now provides 1-3 sentence candidates instead of a single sentence candidate, which greatly improves the rate at which the guessed sentence is correct.

There have been further improvements in features introduced in previous versions of Fedora; those are as follows:

  • Emoji typing – In the computing world, it's rare to find a person who doesn't know about emoji. Previously, it was difficult to type emoji in Fedora. Now, we have an emoji typing feature in Fedora 26.
  • Unicode 9.0 – With each release, Unicode introduces new characters and scripts to its encoding standard. We have a good number of additions in Unicode 9.0. Important libraries have been updated to get the new additions into Fedora.
  • IBus typing booster multilingual support – IBus typing booster now provides multilingual support (typing more than one language using a single IME, with no need to switch).

Other than this, we also need to make sure all other languages work well, specifically for input, output, storage, and printing.

How to participate

Most of the information is available on the Test Day wiki page. In case of doubts, feel free to send an email to the testing team mailing list.

Though it is a test day, we normally keep it running for the whole week. If you don't have time tomorrow, feel free to complete it in the coming few days and upload your test results.

Let’s test and make sure this works well for our users!

The post Test Days: Internationalization (i18n) features of Fedora 26 appeared first on Fedora Community Blog.

How to make a Fedora USB stick

Posted by Fedora Magazine on May 22, 2017 11:57 AM

The Fedora Media Writer application is the quickest and easiest way to create a Fedora USB stick. If you want to install or try out Fedora Workstation, you can use Fedora Media Writer to copy the Live image onto a thumbdrive. Alternatively, Fedora Media Writer will also copy larger (non-“Live”) installation images onto a USB thumb drive. Fedora Media Writer is also able to download the images before writing them.

Install Fedora Media Writer

Fedora Media Writer is available for Linux, Mac OS, and Windows. To install it on Fedora, find it in the Software application.

Screenshot of Fedora Media Writer in GNOME Software

Alternatively, use the following command to install it from a terminal:

sudo dnf install mediawriter

Links to the installers for the Mac OS and Windows versions of Fedora Media Writer are available from the Downloads page on getfedora.org.

Creating a Fedora USB

After launching Fedora Media Writer, you will be greeted with a list of the Fedora editions available to download and copy to your USB drive. The two main options here are Fedora Workstation and Fedora Server. Alternatively, you can click the icon at the bottom of the list to display all the additional Spins and Labs that the Fedora community provides, including the KDE Spin, the Cinnamon Spin, the Xfce Spin, the Security Lab, and the Fedora Design Suite.

Screenshot of the Fedora Media Writer main screen, showing all the Fedora Editions, Labs and Spins

Click on the Fedora edition, Spin or Lab you want to download and copy to your new USB. A description of the software will be presented to you:

Screenshot of the Fedora Workstation details page in Fedora Media Writer

Click the Create Live USB button in the top right to start the download of your new Fedora image. While the image is downloading, insert your USB drive into your computer, and choose that drive in the dropdown. Note that if you have previously downloaded a Fedora image with the Media Writer, it will not download it again; it will simply use the version you have already downloaded.

Screenshot of a Fedora Workstation ISO downloading in Fedora Media Writer

After the download is complete, double check you are writing to the correct USB drive, and click the red Write to Disk button.

Screenshot of writing Fedora Workstation to a Fedora USB in Fedora Media Writer

Already have an ISO downloaded?

But what if you have previously downloaded an ISO through your web browser? Media Writer also has an option to copy any ISO already on your filesystem to a USB. Simply choose the Custom Image option from the main screen of Fedora Media Writer, then pick the ISO from the file browser, and choose Write to Disk.

Slice of Cake #8

Posted by Brian "bex" Exelbierd on May 22, 2017 10:22 AM

Diet cake this week …

A slice of cake

Last week as FCAIC I:

  • Had a bunch of meetings and flailed around in my email. Not every week is exciting, fun, or dramatic :). The week was also very short because I returned from OSCAL in Albania on Monday and lost a day to travel.

A la Mode

  • As a human I took some holiday (vacation) and was not at work on Friday or the first half of Monday (today). I got to see beautiful Cluj-Napoca, Romania and relax :).

Cake Around the World

I’ll be traveling to:

  • Open Source Summit in Tokyo, Japan from 31 May - 2 June.
  • LinuxCon in Beijing, China from 19-20 June where I am helping to host the Fedora/CentOS/EPEL Birds of a Feather.
  • Working from Gdansk, Poland from 3-4 July.
  • Flock on Cape Cod, Massachusetts, USA from 29 August - 1 September.

You know how to fix enterprise patching? Please tell me more!!!

Posted by Josh Bressers on May 22, 2017 12:54 AM
If you pay attention to Twitter at all, you've probably seen people arguing about patching your enterprise after the WannaCry malware. The short story is that Microsoft fixed a very serious security flaw a few months before the malware hit. That means there are quite a few machines on the Internet that haven't applied a critical security update. Of course, as you can imagine, there is plenty of back and forth about updates. There are two basic arguments I keep seeing.

Patching is hard, and if you think I can just turn on Windows Update for all these computers running Windows 3.11 on Token Ring, you've never had to deal with a real enterprise before! You out-of-touch hipsters don't know what it's really like here. We've seen things, like, real things. We party like it's 1995. GET OFF MY LAWN.

The other side sounds a bit like this.

How can you be running anything that's less than a few hours old? Don't you know what the Internet looks like! If everyone just applied all updates immediately and ran their business in the cloud using agile scrum based SecDevSecOps serverless development practices everything would be fine!

Of course both of these groups are wrong for basically the same reason. The world isn't simple, and whatever works for you won't work for anyone else. The tie that binds us all together is that everything is broken, all the time. All the things we use are broken, how we use them is broken, and how we manage them is broken. We can't fix them even though we try and sometimes we pretend we can fix things.

However ...

Just because everything is broken, that's no excuse to do nothing. It's easy to declare something too hard and give up. A lot of enterprises do this; a lot of enterprise security people use this as the defense for why they can't update their infrastructure. On the other side, though, sometimes moving too fast is more dangerous than moving too slow. Reckless updates are no better than no updates. Sometimes there is nothing we can do. Security as an industry is basically a big giant Kobayashi Maru test.

I have no advice to give on how to fix this problem. I think both groups are silly and wrong, but why I think this is unimportant. The right way is for everyone to have civil conversations where we put ourselves in the other person's shoes. That won't happen, though; it never happens, even though basically every leader ever has said that sort of behavior is a good idea. I suggest you double down on whatever bad practices you've hitched your horse to. In the next few months we'll all have an opportunity to show why our way of doing things is the worst way ever, and we'll also find an opportunity to mock someone else for not doing things the way we do.

In this game there are no winners and losers, just you. And you've already lost.