Fedora People

From NFS to LizardFS

Posted by Jonathan Dieter on September 30, 2016 08:59 PM

If you’ve been following me for a while, you’ll know that we started our data servers out using NFS on ext4 mirrored over DRBD, hit some load problems, switched to btrfs, hit load problems again, tried a hacky workaround, ran into problems, dropped DRBD for glusterfs, had a major disaster, switched back to NFS on ext4 mirrored over DRBD, hit more load problems, and finally dropped DRBD for ZFS.

As of March 2016, our network looked something like this:

Old server layout

Our NFS over ZFS system worked great for three years, especially after we added SSD cache and log devices to our ZFS pools, but we were starting to overload our ZFS servers and I realized that we didn’t really have any way of scaling up.

This pushed me to investigate distributed filesystems yet again. As I mentioned here, distributed filesystems have been a holy grail for me, but I never found one that would work for us. Our problem is that our home directories (including config directories) are stored on our data servers, and there might be over one hundred users logged in simultaneously. Linux desktops tend to do a lot of small reads and writes to the config directories, and any latency bottlenecks tend to cascade. This leads to an unresponsive network, which then leads to students acting out the Old Testament practice of stoning the computer. GlusterFS was too slow (and almost lost all our data), CephFS still seems too experimental (especially for the features I want), and there didn’t seem to be any other reasonable alternatives… until I looked at LizardFS.

LizardFS (a completely open source fork of MooseFS) is a distributed filesystem that has one fascinating twist: all the metadata is stored in RAM. It gets written out to the hard drive regularly, but all of the metadata must fit in RAM. The main result is that metadata lookups are rocket-fast. Add to that the ability to direct different paths (say, perhaps, config directories) to different storage types (say, perhaps, SSDs), and you have a filesystem that is scalable and fast.
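
As a rough, hedged sketch of how that per-path direction works (the goal name and paths below are made-up examples, and I am assuming the standard LizardFS client utilities): a goal mapping to SSD-labelled chunkservers is defined on the master, and then assigned to a directory tree:

lizardfs setgoal -r ssd /mnt/lizardfs/home/user/.config
lizardfs getgoal /mnt/lizardfs/home/user/.config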

LizardFS does have its drawbacks. You can run hot backups of your metadata servers, but only one will ever be the active master at any one time. If it goes down, you have to manually switch one of the replicas into master mode. LizardFS also has a very complicated upgrade procedure: first the metadata replicas must be upgraded, then the master, and finally the clients. Lastly, there are some corner cases where replication is not as robust as I would like it to be, but they seem to be well understood and really only seem to affect very new blocks.

So, given the potential benefits and drawbacks, we decided to run some tests. The results were instant… and impressive. A single user’s login time on a server with no load… doubled. Instead of five seconds, it took ten for them to log in. Not good. But when a whole class logged in simultaneously, it took only 15 seconds for them to all log in, down from three to five minutes. We decided that a massive speed gain in the multiple user scenario was well worth the speed sacrifice in the single-user scenario.

Another bonus is that we’ve gone from two separate data servers with two completely different filesystems (only one of which ever had high load) to five data servers sharing the load while serving out one massive filesystem, giving us a system that now looks like this:

New server layout

So, six months on, LizardFS has served us well, and will hopefully continue to serve us for the next (few? many?) years. The main downside is that Fedora doesn’t have LizardFS in its repositories, but I’m thinking about cleaning up my spec and putting in a review request.

Updated to add graphics of old and new server layouts, info about Fedora packaging status, LizardFS bug links, and remove some grammatical errors


Weechat-Tmux

Posted by farhaan on September 30, 2016 06:16 PM

Recently I went to PyCon India (will blog about that too!), where Sayan and Vivek introduced me to weechat, a terminal-based IRC client. From the time I saw Sayan's weechat configuration, I was hooked.

The same night I started configuring my weechat. It's such a beautiful IRC client that I regretted not using it before. It just transforms your terminal into an IRC window.

On Fedora you need to run:

sudo dnf install weechat

Some of the configuration tweaks and plugins you need are:

  1. buffer
  2. notify-send

That's pretty much it, but it doesn't stop there: you can make the client a little more aesthetic. You can set up weechat by using their documentation.

The clean design kind of makes you feel happy, plus adding plugins is not at all a pain. In the weechat window you just type /script install buffer.pl and it installs in no time. There are various external plugins in case you want to use them, and writing a plugin is actually fun, though I have not tried that yet.

[Screenshot: my weechat setup]

I also used to use a bigger font, but now I find this size more soothing to the eyes. It is because of weechat that I got to explore this beautiful tool called tmux, because in a plain terminal screen weechat lags. What I mean by lag is that keystrokes somehow arrive after 5-6 seconds, which makes the user experience go bad. I pinged people in the #weechat channel on IRC with the query; the community is amazing and they helped me set it up and use it efficiently. They simply told me to use tmux or screen. With tmux my sessions are persistent and without any lag.

To install tmux on Fedora:

sudo dnf install tmux

tmux is a terminal multiplexer, which means it can extend one terminal screen into many. I got to learn a lot of tmux concepts like sessions, panes and windows. Once you know these things, tmux is really a fun ride. Of the blogs I went through for configuring and using tmux, the best I found was hamvoke; the whole series is pretty amazing. So basically my workflow goes like this: for every project I am working on, I have a tmux session named after it, which is created by the command:

tmux new-session -s <name_session>
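
To leave that session running and come back to it later, the standard tmux commands are enough (the session name is just whatever you created above):

tmux ls                        # list running sessions
tmux detach                    # or press the prefix (Ctrl+b) and then d
tmux attach -t <name_session>  # re-attach to a named session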

Switching between sessions, as shown above, is just a matter of detaching and attaching. I also have one session constantly running weechat. I thought I had explored everything in tmux, but that couldn't be it: I came to know that there is a powerline for tmux too. That makes it way more amazing, so this is how a typical tmux session with powerline looks:

[Screenshot: a tmux session with powerline]

I am kind of loving the new setup and enjoying it. I am also constantly using a tmux cheatsheet :P because it's good to look up what else you can do, and I also saw various screencasts on YouTube where tmux+vim makes things amazing.

Do let me know how you like my setup, or how you use yours.

Till then, Happy Hacking!🙂

 


Meeting users, lots of users

Posted by Jiri Eischmann on September 30, 2016 02:43 PM

Every year, I introduce Fedora to new students at Brno Technical University. There are approx. 500 of them, and a sizable number of them then install Fedora. We also organize a sort of installfest one week after the presentation where anyone who has had any difficulties with Fedora can come and ask for help. It's a great opportunity to observe what things new users struggle with the most, especially when you have such a high number of new users. What are my observations this year?

  • I always ask how many people have experience with Linux (I narrow it down to GNU/Linux distributions, excluding things like Android). A couple of years ago, only 25-30% of students raised their hands. This year, it was roughly 75%, which is a significant increase. It seems like high school students interested in IT are more familiar with Linux than ever before.
  • Linux users tend to have strong opinions about desktops (too thick or thin title bars, too light or dark a theme, no minimize button, etc.), but new users coming from Windows and macOS don't care that much. We give students Fedora Workstation with GNOME and receive almost no complaints about the desktop from them, and literally zero questions about how to switch to another desktop.
  • The most frequent question we receive is why they have multiple Fedora entries in GRUB. Like many other distributions, Fedora keeps the last three kernels and allows you to boot into them via entries in GRUB. When you install Fedora, there is just one entry, but with kernel updates you get a second and then a third, and new users are completely puzzled by that. One guy came and told us: "I've got two Fedora entries in the menu, I'm afraid I've installed the OS twice accidentally, can you help me remove the second instance?" Hiding the menu is not a solution because most students have dual boots with Windows, and switching between OSes is a common use case for them. But we should definitely compress the Fedora entries into one somehow.
  • The evergreen of hardware support problems is discrete graphics cards. They're still not natively supported by Linux, you can find them in most laptops these days, and students' laptops are no exception. So this is currently the most frequent hardware support problem we run into when installing Fedora. Someone brought a Dell Inspiron 15 7000 series where Fedora didn't boot at all (other distributions fail on this model, too).
  • Another common problem is Broadcom wireless cards. It's easy to solve if you know what to do and have a wired connection, but some laptops don't even have Ethernet sockets any more. With one laptop, we ended up connecting a phone to the WiFi and tethering the laptop to it via a microUSB-USB cable.
  • Installation of Fedora is simple: a couple of clicks, several minutes, and you're done. But only if everything goes ideally. Anaconda handles the typical "installing Fedora next to Windows" scenario well, but there was a student who had a relatively new Lenovo laptop with MBR and 4 primary partitions (MBR in 2016?!), which effectively prevents you from installing anything on the disk unless you want to lose a Windows recovery partition, because MBR can't handle more than 4 primary partitions. Someone else had a dual boot of Windows and Hackintosh, which is also in "not-so-easy" waters. It shows how difficult a life Linux installer developers have: you can cover the most common scenarios, but you can't cover all the possible combinations laptop vendors or users can later create on disks.
  • We've also come to the conclusion that it's OK to admit that Linux hardware support for a particular laptop model is not good enough and to offer the student an installation in a virtual machine on Windows. You can sometimes manage to get it working natively, but you know it's likely to fall apart with the next kernel update or whatever. In that case it's more responsible to recommend virtualization to the student.

What is if __name__ == ‘__main__’ ?

Posted by Trishna Guha on September 30, 2016 08:59 AM

 

A module is simply a Python file that has a .py extension. A module can contain variables, functions, and classes that can be reused.

In order to use a module, we need to import it using the import statement. Check the full list of built-in modules in Python here: https://docs.python.org/3.6/library.

The first time a module is loaded into a running Python script, it is initialized by executing the code in the module once. To know the various ways of importing modules, visit: https://docs.python.org/3.6/tutorial/modules.html
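
A tiny example of that initialization behaviour (the module name greet and the file names are made up for illustration):

# greet.py
print('initializing greet')

def hello(name):
    return 'Hello, ' + name

# use.py
import greet                  # prints 'initializing greet'
import greet                  # already loaded: prints nothing
print(greet.hello('Fedora'))  # prints 'Hello, Fedora'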

if __name__ == ‘__main__’:

We see if __name__ == ‘__main__’: quite often. Let’s see what this actually is.

__name__ is a global variable in Python that exists in all namespaces. It is an attribute of a module: basically the name of the module, as a str (string).

Show Me Code:

Create a file named 'mymath.py', type the following code, and save it. We have defined a simple mathematical square function here.

[Screenshot: mymath.py]
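
For reference, here is a minimal sketch of what mymath.py likely contains, based on the description above (the function name square is an assumption):

# mymath.py
def square(x):
    """Return the square of x."""
    return x * x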

Now create another file named 'result.py' in the same directory, type the following code, and save it.

[Screenshot: result.py]
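
Again, a hedged reconstruction: result.py simply imports the module and calls the function. With these two sketches, running result.py should print 25.

# result.py
import mymath

print(mymath.square(5))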

Now on the terminal, run the program with 'python3 result.py':
[Screenshot: terminal output of result.py]

Here we have defined a function in one module and used it in another file.

Now let’s look into if __name__ == ‘__main__’:

Open the 'mymath.py' file and edit it as shown in the following:

[Screenshot: mymath.py after the edit]
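
A hedged guess at the edited file, matching the behaviour described below: the module now prints __name__ unconditionally and runs a small demo only when executed directly.

# mymath.py (edited)
def square(x):
    return x * x

print('__name__ is:', __name__)

if __name__ == '__main__':
    print(square(5))

With this sketch, running result.py prints '__name__ is: mymath' before the result, while running mymath.py directly prints '__name__ is: __main__' followed by 25.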

Leave ‘result.py’ unchanged.

Now on your terminal run 'python3 result.py' again.

[Screenshot: output of result.py after the edit]

Here we have imported the module mymath. Inside the imported module, the variable __name__ is set to the name of that module, 'mymath'.

Now on the terminal run 'python3 mymath.py':

[Screenshot: output of running mymath.py directly]

We have run the file mymath.py as a program itself, and you can see that the variable __name__ is now set to the string "__main__".
The check if __name__ == "__main__" therefore means: if the file is being run as a standalone program, execute the following instructions; if it is merely imported, skip them.

If you do print(type(__name__)) in the program, you will see that __name__ is of str (string) type.

Happy Coding!


Heroes of Fedora (HoF) – F25 Alpha

Posted by Fedora Community Blog on September 30, 2016 08:15 AM
Heroes of Fedora is back! This time for F25 Alpha.

Hello, and welcome to the Heroes of Fedora: F25 Alpha edition! Heroes of Fedora is written so that the quality of any release can be quickly surveyed by viewing the stats regarding tests performed on that release, such as Bodhi updates, Bugzilla reports, and release validation testing. In this case, we’ll be looking at F25 Alpha, so let’s get right to it!

Updates Testing

Compared to the F25 Alpha release, F24 Alpha had 996 more testers; however, the F25 Alpha release had 4 more comments! This could be viewed as an improvement in testing efficiency, as it took 996 fewer testers to produce 4 more results!

Test period: Fedora 25 Alpha (2016-07-26 – 2016-08-30)
Testers: 328
Comments1: 2053

Name Updates commented
Dmitri Smirnov (cserpentis) 171
Emerson Santos (em3rson) 163
Heiko Adams (heikoada) 131
Reindl Harald (hreindl) 94
anonymous 68
Christian Dersch (lupinix) 67
Filipe Rosset (filiperosset) 61
Major Hayden (mhayden) 52
Frederico Henrique Gonçalves Lima (fredlima) 48
Nie Lili (lnie) 46
Pete Walter (pwalter) 43
David H. Gutteridge (dhgutteridge) 35
mildew 34
Adam Williamson (adamwill) 33
bojan 31
yuwata 30
mastaiza 29
samoht0 27
Parag Nemade (pnemade) 23
ngompa 23
Björn Esser (besser82) 21
Geoffrey Marr (coremodule) 21
Alexander Kurtakov (akurtakov) 19
Mukundan Ragavan (nonamedotc) 19
Vinu Moses (vinumoses) 18
Kamil Páral (kparal) 17
manik1596 16
Nemanja Milosevic (nmilosev) 15
karthikrajpr (karthikrajprkkr17) 15
Héctor H. Louzao P. (hhlp) 15
Chad Hirsch (charims) 14
fszymanski 14
William Moreno (williamjmorenor) 13
H V ANAGH (anagh) 13
Gerald B. Cox (gbcox) 13
Don Swaner (dswaner) 11
Jiří Popelka (jpopelka) 11
Corey W Sheldon (linuxmodder) 10
k3rn3l 10
raj550 10
Daniel Dimitrov (dandim) 9
Colin J Thomson (g6avk) 8
wdpypere 8
Igor Gnatenko (ignatenkobrain) 8
Lukas Brabec (lbrabec) 7
Peter Robinson (pbrobinson) 7
Sérgio Monteiro Basto (sergiomb) 7
bradw 7
Raphael Groner (raphgro) 6
dimitrisk 6
Devin Henderson (devhen) 6
keramidas 5
Lukas Slebodnik (lslebodn) 5
Alexander Kolesnikov (karter) 5
Itamar Reis Peixoto (itamarjp) 5
Sumantro Mukherjee (sumantrom) 5
Matthew Smith (zenzizenzicube) 5
Jon Ciesla (limb) 5
Peter T. (ageha) 5
Kevin Fenzi (kevin) 5
Till Hofmann (thofmann) 5
Nathan (nathan95) 5
Orion Poplawski (orion) 4
Randy Barlow (bowlofeggs) 4
Joachim Frieben (frieben) 4
Nicolas (naphan) 4
Viorel Tabara (viorel) 4
Vitaly Zaitsev (xvitaly) 4
suraia 4
ghishadow 4
Patrick Creech (pcreech17) 4
xake 4
Michal Vala (michalvala) 3
Ben Williams (jbwillia) 3
Austin Macdonald (asmacdo) 3
Mike FABIAN (mfabian) 3
Anssi Johansson (avij) 3
reaperzn 3
Sudhir Khanger (sudhirkhanger) 3
augenauf 3
Pat Riehecky (jcpunk) 3
Benjamin Xiao (urbenlegend) 3
Rex Dieter (rdieter) 3
Hans Müller (cairo) 3
jack smith (paviluf) 3
Daniel Lara Souza (danniel) 3
Ziqian SUN (zsun) 3
Simone Caronni (slaanesh) 3
mbasti 3
David Woodhouse (dwmw2) 3
dutchy 3
Martin Krizek (mkrizek) 3
Dominik Mierzejewski (rathann) 3
pdestefa 3
Rodrigo de Araujo (rodrigodearaujo) 3
Justin M. Forbes (jforbes) 3
David Jeremias Vásquez Sicay (davidva) 3
Martin Sehnoutka (msehnout) 3
Maurizio Manfredini (mmanfred) 3
…and also 229 other reporters who created less than 3 reports each, but 294 reports combined!

1 If a person provides multiple comments to a single update, it is considered as a single comment. Karma value is not taken into account.

Validation Testing

Below are the validation test stats for F25 Alpha. Compared to F24 Alpha, there were 2 fewer testers, 220 fewer reports, and 31 fewer unique referenced bugs.

Test period: Fedora 25 Alpha (2016-07-26 – 2016-08-30)
Testers: 21
Reports: 231
Unique referenced bugs: 6

 

Name Reports submitted Referenced bugs1
kparal 38
pwhalen 37
pschindl 35 1369786 1370136 1370222 (3)
sumantrom 20
lbrabec 19 1369934 (1)
lnie 17
satellit 15 1363915 (1)
frantisekz 12
me2 6 1373156 (1)
adamwill 5
prakashmishra1598 4
siddharthvipul1 4
kevin 4
jsedlak 4
satellt 3
mattdm 2
puffi 2
sgallagh 1
tenk 1
coremodule 1
a2batic 1

1 This is a list of bug reports referenced in test results. The bug itself may not be created by the same person.

Bug Reports

Here are the stats regarding Bugzilla bug reports. Compared to F24 Alpha, there were 72 fewer reporters and 212 fewer reports in F25 Alpha. This could be viewed as a sign that Fedora is getting less buggy over time!

Test period: Fedora 25 Alpha (2016-07-26 – 2016-08-30)
Reporters: 76
New reports: 215

Name Reports submitted1 Excess reports2 Accepted blockers3
Joachim Frieben 26 1 (3%) 0
lnie 16 2 (12%) 0
Chris Murphy 15 2 (13%) 0
Kamil Páral 10 3 (30%) 0
Yonatan 10 0 (0%) 0
Igor Gnatenko 9 0 (0%) 0
Heiko Adams 8 1 (12%) 0
Emerson Santos 7 1 (14%) 0
Adam Williamson 6 0 (0%) 3
gil cattaneo 6 0 (0%) 0
Mike FABIAN 6 0 (0%) 0
Mikhail 4 0 (0%) 0
Petr Schindler 4 0 (0%) 0
Alex 3 1 (33%) 0
Dan Horák 3 0 (0%) 0
Geoffrey Marr 3 0 (0%) 0
Nivag 3 0 (0%) 0
Zbigniew Jędrzejewski-Szmek 3 0 (0%) 0
Dominika Krejčí 2 0 (0%) 0
Hans de Goede 2 1 (50%) 0
Joel Salas 2 0 (0%) 0
Leslie Satenstein 2 0 (0%) 0
Lukas Brabec 2 0 (0%) 0
Michal Schmidt 2 0 (0%) 0
Mustafa Muhammad 2 0 (0%) 0
Paul Whalen 2 0 (0%) 0
Peter H. Jones 2 1 (50%) 0
Rafael Fonseca 2 0 (0%) 0
Ricardo Ramos 2 0 (0%) 0
satellitgo at gmail.com 2 0 (0%) 0
srakitnican 2 1 (50%) 0
Tomasz Kłoczko 2 0 (0%) 0
Vedran Miletić 2 0 (0%) 0
…and also 43 other reporters who created less than 2 reports each, but 43 reports combined!

1 The total number of new reports (including “excess reports”). Reopened reports or reports with a changed version are not included, because it was not technically easy to retrieve those. This is one of the reasons why you shouldn’t take the numbers too seriously, but just as interesting and fun data.
2 Excess reports are those that were closed as NOTABUG, WONTFIX, WORKSFORME, CANTFIX or INSUFFICIENT_DATA. Excess reports are not necessarily a bad thing, but they make for interesting statistics. Close manual inspection is required to separate valuable excess reports from those which are less valuable.
3 This only includes reports that were created by that particular user and accepted as blockers afterwards. The user might have proposed other people’s reports as blockers, but this is not reflected in this number.

Conclusion

The general trend seen since the Fedora 24 Alpha release has been that Fedora is growing as a community, yet getting less buggy as a whole. The above, depending on how it is interpreted, could back this notion up. Thanks to all who participated in testing F25 Alpha; Fedora wouldn't be what it is without your support. See you for Heroes of Fedora 25 – Beta!

The post Heroes of Fedora (HoF) – F25 Alpha appeared first on Fedora Community Blog.

PHP version 5.6.27RC1 and 7.0.12RC1

Posted by Remi Collet on September 30, 2016 05:04 AM

Release Candidate versions are available in the remi-test repository for Fedora and Enterprise Linux (RHEL / CentOS) to allow more people to test them. They are available as Software Collections, for parallel installation, a perfect solution for such tests. For x86_64 only.

RPMs of PHP version 5.6.27RC1 are available as an SCL and as base packages in the remi-test repository for Fedora 22 and Enterprise Linux 6.

RPMs of PHP version 7.0.12RC1 are available as an SCL in the remi-test repository, and as base packages in the remi-test repository for Fedora 25 or in the remi-php70-test repository for Fedora 22 and Enterprise Linux 6.

Installation: read the Repository configuration and choose your version.

Parallel installation of version 5.6 as Software Collection:

yum --enablerepo=remi-test install php56

Parallel installation of version 7.0 as Software Collection:

yum --enablerepo=remi-test install php70

Update of system version 5.6:

yum --enablerepo=remi-php56,remi-test update php\*

Update of system version 7.0:

yum --enablerepo=remi-php70,remi-php70-test update php\*

Notice: version 7.0.12RC1 is also available in Fedora rawhide.

RC versions are generally the same as the final version (no changes are accepted after an RC, except for security fixes).

Software Collections (php56, php70)

Base packages (php)

Hello Planet GNOME!

Posted by Jiri Eischmann on September 29, 2016 09:19 PM

My blog was recently added to Planet GNOME, so I’d like to introduce myself to the new audience:

My name is Jiří Eischmann and I work as an engineering manager responsible for apps at Red Hat. Besides apps such as Firefox, LibreOffice, Thunderbird, Chromium, and Fedora Media Writer, my team also contributes to many GNOME apps and components: Evolution, Nautilus, Music, Evince, Vinagre, Control Center,… I've been a GNOME Foundation member since 2008 and, for example, organized GUADEC 2013 in Brno. I'm also involved in the Fedora Project (Fedora ambassador for the Czech Republic, Fedora packager, Fedora Magazine writer,…).

This is the blog where I write about work and open source related topics in English. I also have a blog in Czech where I write about pretty much anything.


Episode 6 - Foundational Knowledge of Security

Posted by Open Source Security Podcast on September 29, 2016 07:03 PM
Kurt and Josh discuss interesting news stories

Download Episode


Show Notes



OVS 2.6 and The First Release of OVN

Posted by Russell Bryant on September 29, 2016 04:00 PM

In January of 2015, the Open vSwitch team announced that they planned to start a new project within OVS called OVN (Open Virtual Network).  The timing could not have been better for me as I was looking around for a new project.  I dove in with a goal of figuring out whether OVN could be a promising next generation of Open vSwitch integration for OpenStack and have been contributing to it ever since.

OVS 2.6.0 has now been released which includes the first non-experimental version of OVN.  As a community we have also built integration with OpenStack, Docker, and Kubernetes.

OVN is a system to support virtual network abstraction. OVN complements the existing capabilities of OVS to add native support for virtual network abstractions, such as virtual L2 and L3 overlays and security groups.

Some high level features of OVN include:

  • Provides virtual networking abstraction for OVS, implemented using L2 and L3 overlays, but can also manage connectivity to physical networks
  • Supports flexible ACLs (security policies) implemented using flows that use OVS connection tracking
  • Native support for distributed L3 routing using OVS flows, with support for both IPv4 and IPv6
  • ARP and IPv6 Neighbor Discovery suppression for known IP-MAC bindings
  • Native support for NAT and load balancing using OVS connection tracking
  • Native fully distributed support for DHCP
  • Works with any OVS datapath (such as the default Linux kernel datapath, DPDK, or Hyper-V) that supports all required features, namely Geneve tunnels and OVS connection tracking (see the datapath feature list in the FAQ for details)
  • Supports L3 gateways from logical to physical networks
  • Supports software-based L2 gateways
  • Supports TOR (Top of Rack) based L2 gateways that implement the hardware_vtep schema
  • Can provide networking for both VMs and containers running inside of those VMs, without a second layer of overlay networking

Support for large scale deployments is a key goal of OVN.  So far, we have seen physical deployments of several hundred nodes.  We’ve also done some larger scale testing by simulating deployments of thousands of nodes using the ovn-scale-test project.

OVN Architecture

Components

[Diagram: OVN architecture]

OVN is a distributed system.  There is a local SDN controller that runs on every host, called ovn-controller.  All of the controllers are coordinated through the southbound database.  There is also a centralized component, ovn-northd, that processes high level configuration placed in the northbound database. OVN’s architecture is discussed in detail in the ovn-architecture document.

OVN uses databases for its control plane. One benefit is that scaling databases is a well understood problem.  OVN currently makes use of ovsdb-server as its database.  The use of ovsdb-server is particularly convenient within OVN as it introduces no new dependencies since ovsdb-server is already in use everywhere OVS is used.  However, the project is also currently considering adding support for, or fully migrating to etcd v3, since v3 includes all of the features we wanted for our system.

We have also found that this database driven architecture is much more reliable than RPC based approaches taken in other systems we have worked with.  In OVN, each instance of ovn-controller is always working with a consistent snapshot of the database.  It maintains a connection to the database and gets a feed of relevant updates as they occur.  If connectivity is interrupted, ovn-controller will always catch back up to the latest consistent snapshot of the relevant database contents and process them.

Logical Flows

OVN introduces a new intermediary representation of the system’s configuration called logical flows.  A typical centralized model would take the desired high level configuration, calculate the required physical flows for the environment, and program the switches on each node with those physical flows.  OVN breaks this problem up into a couple of steps.  It first calculates logical flows, which are similar to physical OpenFlow flows in their expressiveness, but operate only on logical entities.  The logical flows for a given network are identical across the whole environment.  These logical flows are then distributed to the local controller on each node, ovn-controller, which converts logical flows to physical flows.  This means that some deployment-wide computation is done once and the node-specific computation is fully distributed and done local to the node it applies to.
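
As a small illustration (the switch and port names are made up; the commands are the standard utilities shipped with OVS 2.6), you create logical entities in the northbound database with ovn-nbctl and can then inspect the logical flows that ovn-northd generates in the southbound database with ovn-sbctl:

ovn-nbctl ls-add sw0
ovn-nbctl lsp-add sw0 sw0-port1
ovn-nbctl lsp-set-addresses sw0-port1 "00:00:00:00:00:01 10.0.0.51"
ovn-sbctl lflow-list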

Logical flows have also proven to be powerful when it comes to implementing features.  As we’ve built up support for new capabilities in the logical flow syntax, most features are now implemented at the logical flow layer, which is much easier to work with than physical flows.

Data Path

OVN implements features natively in OVS wherever possible.  One such example is the implementation of security policies using OVS+conntrack integration.  I wrote about this in more detail previously.  This approach has led to significant data path performance improvements as compared to previous approaches.  The other area where this makes a huge impact is how OVN implements distributed L3 routing.  Instead of combining OVS with several other layers of technology, we provide L3 routing purely with OVS flows.  In addition to the performance benefits, we also find this to be much simpler than the alternative approaches that other projects have taken to build routing on top of OVS.  Another benefit is that all of these features work with OVS+DPDK since we don’t rely on Linux kernel-specific features.

Integrations

OpenStack

Integration with OpenStack was developed in parallel with OVN itself.  The OpenStack networking-ovn project contains an ML2 driver for OpenStack Neutron that provides integration with OVN.  It differs from Neutron’s original OVS integration in some significant ways.  It no longer makes use of the Neutron Python agents as all equivalent functionality has been moved into OVN.  As a result, it no longer uses RabbitMQ.  Neutron’s use of RabbitMQ for RPC has been replaced by OVN’s database driven control plane.  The following diagram gives a visual representation of the architecture of Neutron using OVN.  Even more detail can be found in our documented reference architecture.

[Diagram: OpenStack Neutron with OVN]

There are a few different ways to test out OVN integration with OpenStack.  The most popular development environment for OpenStack is called DevStack.  We provide integration with DevStack, including some instructions on how to do simple testing with DevStack.

If you’re a Vagrant user, networking-ovn includes a vagrant setup for doing multi-node testing of OVN using DevStack.

The OpenStack TripleO deployment project includes support for OVN as of the OpenStack Newton release.

Finally, we also have manual installation instructions to help with integrating OVN into your own OpenStack environment.

Kubernetes

There is active development on a CNI plugin for OVN to be used with Kubernetes.  One of the key goals for OVN was to have containers in mind from the beginning, and not just VMs.  Some important features were added to OVN to help support this integration.  For example, ovn-kubernetes makes use of OVN’s load balancing support, which is built on native load balancing support in OVS.

The README in that repository contains an overview, as well as instructions on how to use it.  There is also support for running an ovn-kubernetes environment using vagrant.

Docker

There is OVN integration with Docker networking, as well.  This currently resides in the main OVS repo, though it could be split out into its own repository in the future, similar to ovn-kubernetes.

Getting Involved

We would love feedback on your experience trying out OVN.  Here are some ways to get involved and provide feedback:

  • OVS and OVN are discussed on the OVS discuss mailing list.
  • OVN development occurs on the OVS development mailing list.
  • OVS and OVN are discussed in #openvswitch on the Freenode IRC network.
  • Development of the OVN Kubernetes integration occurs on Github but can be discussed on either the Open vSwitch IRC channel or discuss mailing list.
  • Integration of OVN with OpenStack is discussed in #openstack-neutron-ovn on Freenode, as well as the OpenStack development mailing list.

Fedora Hubs: Getting started

Posted by Fedora Community Blog on September 29, 2016 08:15 AM

Image courtesy of The Awkward Yeti

Fedora Hubs provides a consistent contributor experience across all Fedora teams and will serve as an “intranet” page for the Fedora Project. There are many different projects in Fedora with different processes and workflows. Hubs will serve as a single place for contributors to learn about and contribute to them in a standardized format. Hubs will also be a social network for Fedora contributors. It is designed as one place to go to keep up with everything and everybody across the project in ways that aren’t currently possible.

  • Want to hack on Hubs? The latest source code is on the open source git-based forge Pagure.
  • Want to learn more about the history behind Hubs? Máirín wrote a few blog posts on the progress of hubs.

This article will help you set up a Fedora Hubs development environment on your local machine.

Tips for new contributors

New contributors who want to work on Hubs typically set it up locally as a first step, but they often face issues during the setup process. We decided to write this post to provide you with tips to help you hack your way into the project! We'll talk about getting through some common pitfalls, provide a glossary of Hubs-related terms, and give you a walk-through of what to expect once you've got Hubs set up locally – all to help you get on your way contributing to Hubs!

Common pitfalls setting up Hubs locally

Note: The Hubs team regularly updates our project README file. You’ll want to have a copy of this open to refer to while you set Hubs up locally.

Here’s a list of Hubs local setup pitfalls with some background information and hints.

Certificate validation error

After having installed all the dependencies and cloned the project, you need to configure the project to authenticate against Ipsilon, Fedora’s multi-protocol identity provider service.

$ oidc-register --debug https://iddev.fedorainfracloud.org/ http://localhost:5000

If you get an error regarding certificate verification, the following command will replace httplib2’s CA certificate file. Without it, you won’t be able to authenticate since you don’t have the HTTP setting.

$ cp /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ~/.virtualenvs/hubs/lib/python2.7/site-packages/httplib2/cacerts.txt

500 errors / database issues after a git pull

Sometimes the database schema changes as a result of development efforts. When your database is out of date and you pull down new code requiring a schema change, the application might fail when you try to run it. You can try repopulating the database.

$ python populate.py

This might fail for various reasons with a traceback. The easiest way forward is to just wipe out your old database and generate a new one with the new schema. This one-liner will do that as well as restart the Hubs app:

$ rm /var/tmp/hubs.db; rm /var/tmp/fedora-hubs-cache.db; python populate.py; python runserver.py -c config

Login won’t work! I’m stuck in Ipsilon!

Make sure you’re passing in the config file when you run Hubs from your machine. Run the server like this.

python runserver.py -c config

Glossary of Hubs terminology

Hub:
A hub can be associated with a page displaying the info that concerns the hub’s topic (whether that topic is a Fedora contributor (i.e. FAS user) or a team (i.e. FAS group) within Fedora). Hubs consist of a feed / stream of notifications (typically on the left / main side of the screen) as well as customizable widgets (typically on the right sidebar).
Widget:
The various-sized rectangular cards on a hub page are called widgets. One can find the various widgets under the widgets/ directory in the source code. Every widget has a respective HTML file under the widgets/templates directory.
User Hub page:
The hub page of a FAS (Fedora Account System) user. User hubs are configurable and list the widgets each user has configured and wishes to display on their page. While hacking on the project, you might want to navigate to localhost:5000/fas_username to view the widgets on a user hub page.
Group Hub page:
A hub page for one of the various Fedora teams such as the Infrastructure team, Design team, CommOps, Marketing, i18n, etc. While hacking on a widget relevant to groups, you need to navigate to localhost:5000/group_name to view the respective widget on the group hub page. Note that group hubs are associated with FAS groups.

What to expect after local setup

Now, let’s talk about what you can expect to see once you’ve got Hubs up and running on your system.

Group Hubs

Once you’ve successfully completed the setup of Hubs locally on your system, you might see something similar to this on your localhost.

Example of the Fedora Infrastructure group hub

This is a group hub—in this case, the Infrastructure team hub—listing a feed for the Infrastructure team as well as their configured widgets. The widgets are listed in the same order as they were added, with their index values, in the populate.py file under the Infrastructure team.

When hacking on a new widget, depending on the hub page you wish to display it on, you might want to add it either to the hubs/defaults.py file or to the populate.py file. You can read more about this under the "Stubbing out a new widget" section of the README file.

User Hubs

This is an example of a user hub.

Example of a Fedora Hubs user hub

Another example of a Fedora Hubs user hub

These are the hub pages for Ralph (FAS username: ralph) and Máirín (FAS username: duffy). They list the widgets configured for the respective user in the hubs/defaults.py file. Each user can configure their user hub (or profile) as they wish in order to appeal to visitors.

Getting help

IRC

This is a good place to get to know and interact with the rest of the Fedora Hubs team. It is a global communication tool, so asynchronous chat happens often. You can drop in and/or lurk in #fedora-hubs on irc.freenode.net. We have weekly meetings every Tuesday at 14:00 UTC.

Never used IRC before or a little bit intimidated? Check out this IRC Beginner’s Guide here on Fedora Magazine for more help getting started.

Mailing list

When you've got questions, want to have a discussion, get feedback, and/or catch up with what people are doing on the team, our mailing list is the place to be. A mailing list is a subscription-based tool; you have to subscribe to the list you want to post to. Fedora Hubs has a mailing list that you need to subscribe to here.

After you hit the Login button, you need to authenticate with your FAS account. To subscribe to the list, you can use any email address you prefer. Just drop us an introduction mail on the list and we’ll reply back.

Hope to see you soon! Happy hacking on Hubs!

The post Fedora Hubs: Getting started appeared first on Fedora Community Blog.

Switched to HTTPS

Posted by Remi Collet on September 29, 2016 08:06 AM

Perhaps you have already noticed it: I have switched all the sites to secure browsing using HTTPS.

So, new addresses are:

For the repository, notice that it would only make sense to switch to https by default if all the mirrors were using it. So, if you want to use the repository in secured mode, you have to select a https mirror and replace the mirrorlist with the wanted baseurl; please avoid the main site, which is often under high load.

Certificates are kindly provided by Let's Encrypt.

Node.js 6.x LTS coming to EPEL 7

Posted by Fedora Magazine on September 29, 2016 08:00 AM

What is Node.js?

Node.js® is a JavaScript runtime built on Chrome’s V8 JavaScript engine. It uses an event-driven, non-blocking I/O model that makes it lightweight and efficient. Its package ecosystem, npm, is the largest ecosystem of open source libraries in the world. You can read more about Node.js at the project website.

What is EPEL 7?

EPEL stands for Extra Packages for Enterprise Linux. EPEL is a Fedora Special Interest Group that maintains a high quality set of additional packages for Enterprise Linux systems. These systems include Red Hat Enterprise Linux (RHEL), CentOS, and Scientific Linux.

History of Node.js in EPEL 7

EPEL 7 was first released in 2014. Shortly after, the Node.js package in Fedora was cloned and produced for EPEL 7. At the time, the latest stable version was 0.10.x. EPEL included upstream bugfixes, security fixes and other backports regularly per update policies. For several years, EPEL provided a stable 0.10.x platform to run Node.js applications on Enterprise Linux systems.

However, all good things must come to an end. The Node.js upstream has declared that all support for the 0.10.x release series will conclude on October 1st, 2016. After that, the Node.js SIG in Fedora would become solely responsible for bugfixes and security patches for any release shipping the 0.10.x release stream. After discussion, the SIG decided to perform a significant, breaking change and update EPEL 7 to the latest stable 6.x stream.

Why a breaking change?

The reasons for this decision were threefold. First, a project as large and prevalent as Node.js would be impossible for the SIG to maintain on its own once upstream support ends. Users would suffer from slow updates, possibly with security implications.

Second, the EPEL SIG felt responsible for providing EPEL users with the latest platform for use with the newest available technology. The 6.x stream contains several years of advances and features since 0.10.x. In that time, Node.js has become a highly successful piece of infrastructure. It has support from hundreds of contributors and dozens of companies. Failing to provide these enhancements just because they include backwards-incompatible changes didn't feel like the spirit of the Fedora Project.

Lastly, upstream has declared the new 6.x release stream a long-term stable release. This means upstream supports the current release for approximately 30 months of total life. This support will end on April 1st, 2019. Assuming the upstream maintains its currently-planned LTS schedule, EPEL 7 will probably begin another transition around October 2018 onto the Node.js 8.x LTS branch.

How this affects you

If you aren’t using Node.js 0.10.x today, of course this change won’t affect you at all. If you are using it on EPEL 7 today, you’ll likely experience a disruption soon. This is because Node.js 6.x, like 4.x and 0.12.x, is known to be backwards-incompatible with many packages designed for use with 0.10.x. As a result, applications built atop 0.10.x will likely malfunction once the update to 6.x lands and you apply it on your machines.

However, there’s some good news, too. Many developers have updated packages in the NPM ecosystem to work with the latest versions of Node.js. That means you can resolve many issues simply by upgrading to newer versions of those NPM libraries. Additionally, upstream has provided useful pages describing the breaking changes in each of the major releases.

Do you have applications that depend on Node.js? If so, it’s highly recommended that you try 6.x as soon as possible. This will help avoid disruption when the upgrade appears in the stable EPEL repositories. You can install 6.x packages today using the epel-testing repository with this command:

yum update --enablerepo=epel-testing nodejs
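
Once the update lands, a quick sanity check of the runtime (standard Node.js/npm commands, assuming npm is installed alongside):

node --version
npm --version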

If you have questions or concerns about this upgrade, please direct them to the Node.js SIG mailing list.

Episode 5 - OpenSSL: The library we deserve

Posted by Open Source Security Podcast on September 29, 2016 12:39 AM
Kurt and Josh discuss the recent OpenSSL update(s)

Download Episode


Show Notes

PyCon 2016

Posted by Vivek Anand on September 28, 2016 10:40 PM

I come from a place where everyone worships competitive coding and thus C++, so the experience of attending my first PyCon was a much-awaited one for me.

This year's PyCon India happened in Delhi, and I, along with a couple of my friends, reached on 23rd September, the first day. We were a bit late, but it was all right because we didn't miss anything.

Day 1

We had workshops and devsprints on the first day. Farhaan and I were running a devsprint for the Pagure project. It was nice to see a couple of new contributors, whom I meet on IRC, asking for help and trying to make a contribution. It went on all day, but I did manage to roam around a little.

Sayan came to the spot where we were sitting for the devsprint with a camera in hand. I don't know if he was hired as an official photographer for PyCon or not, but if he wasn't, I am sure he still must have clicked more photographs than all the rest of the people combined.

I am not the sort of person who meets a lot of people. I generally feel awkward meeting new people, and it's not easy for me to get comfortable with anyone. With Farhaan and especially Sayan, this wasn't the case: they made me comfortable in our first encounter. In fact, the most shocking thing about PyCon was the simplicity of the people. I expected them to be nice, but they were better. 🙂

I then attended the PyCharm workshop. Sayan was sitting in the first row, and Farhaan and I joined him there. This workshop turned out to be really funny because we kept comparing how we did things in Vim to the corresponding method in PyCharm. PyCharm does make things a little easier, but moving your hand to the mouse/touchpad for every little thing is too much to ask. We came to know that Sayan uses the arrow keys instead of h/j/k/l in Vim (and he told me not to judge him for this :-p).

At the end of day one, my friend Saptak (saptaks), Sayan, Subhendu (subho or Ghost-Script) and I decided to visit Ambience Mall, which was nearby, for some food. We ended up at KFC, where we spent some time and got to know each other better. Sayan told us about his academic history and Hackerearth. We also talked about his experience of Flock this year. After this, Sayan told me that I was a completely different person than he had thought I would be :-p.

Day 2

It was a busy day for everyone. There were a lot of interesting talks. I managed to get a PyCon ticket for one of my friends, who had sat in the hotel room all day the day before. At the end of the previous day, I had joined the volunteers group, since most of the dgplug members were in it and I didn't want to miss anything. So I was basically moving around during the talks doing small things. I couldn't really concentrate on any particular talk except the one whose title read 'Helix and Salt: Case study in high volume and distributed python applications', given by a LinkedIn guy named Akhil Malik, and even then I didn't understand much. At the end of the talk, Saptak asked me if I wanted to have a cold drink, to which I replied: "I will need a harder drink than that." I didn't realize Kushal was sitting just a row behind me and could easily have overheard this conversation.

At the end of the second day, all the volunteers were supposed to have dinner together, and I was supposed to meet a close friend of mine who lives in Delhi. Thankfully, I managed to do both, but I missed the starters :/ . The four of us from the previous night were joined by Farhaan, and since both Farhaan and I were present, the talk naturally shifted to our GSoC experiences. Sayan talked to me about writing blogs more often, and he did have some valid points.


Day 3

It was the last day, and somehow I was feeling a little low because… I didn't want it to end. I wasn't interested in talks anymore, but I did attend the lightning talks and was roaming around the rest of the time. There was a dgplug "staircase" meeting, which I attended. Kushal was leading the talk, surrounded by about 30 people, most of whom hadn't started their FOSS journey. He talked mainly about how they should start with something small and they will get better. It is a really nice initiative, I personally feel.

I had met Kushal the night before at dinner, and he said he had something to give me and Farhaan. Just before the lightning talks, I was sitting in the second row when he came over and stood by my side. He told me to stay seated and gave me a dgplug sticker and a set of Fedora stickers. This was a nice moment for me. Earlier that day, he had mentioned me and Farhaan in his talk for our contributions to Pagure. In the small period of time that I have seen or been with Kushal, he has managed to earn a lot of respect from me.

At the end of this day, everybody had to leave, so the four people from the first day and one other friend of mine decided to visit the same mall once again, for food and for roaming around. The cab driver we booked turned out to be very patriotic and got offended by my comment that we didn't study at JNU (where the event was held) and were "foreigners". He kept talking about me being a foreigner the whole ride, no matter how many times I said I didn't mean it. Obviously, everyone enjoyed it.

At the mall, there were no juniors or seniors. There were just 5 (single) computer science guys: being mean, pulling each other's legs, and talking about stuff I shouldn't mention here. It turned out that Sayan and Subhendu got to know a few of my negative points as well. Sayan also managed to ask Saptak, Shubham (my other friend) and me to start contributing to Fedora Hubs.

Overall, it was a great experience meeting the people I talk to on IRC. I won't be able to mention everyone I met during the event, but it doesn't matter. The important thing is that I enjoyed it a lot and am now able to connect with them better.


GNU Tools Cauldron 2016, ARMv8 multi-arch edition

Posted by Siddhesh Poyarekar on September 28, 2016 06:49 PM

Worst planned trip ever.

That is what my England trip for the GNU Tools Cauldron was, but that only seemed to add to the pleasure of meeting friends again. I flew in to Heathrow and started on a long train journey to Halifax, with two train changes from Reading. I forgot my phone on the train, but the friendly station manager at Halifax helped track it down and got it back to me. That was the first of the many times I forgot stuff in a variety of places during this trip. Like when I discovered that I had forgotten to carry a jacket or an umbrella. Or shorts. Or full-length pants for that matter. Or when I purchased an umbrella from Sainsbury's but forgot to carry it out. I guess you get the drift.

All that mess aside, the conference itself was wonderful as usual. My main point of interest at the Cauldron this time was to try and make progress on discussions around multi-arch support for ARMv8. I have never talked about this on this blog in the past, so a brief introduction is in order.

What is multi-arch?

Processors evolve over time and introduce features that can be exploited by the C library to do work faster, like using the vector SIMD unit to do memory copies and manipulation faster. However, this is at odds with the goal of the C library to be able to run on all hardware, including systems that may not have a vector unit or may not have that specific type of vector unit (e.g. have SSE4 but not AVX512 on x86). To solve this problem, we exploit the concept of the PLT and dynamic linking.

I thought we were talking about multiarch, what’s a PLT now?

When a program calls a function in a library that it links to dynamically (i.e. only the reference of the library and the function are present in the binary, not the function implementation), it makes the call via an indirect reference (aka a trampoline) within the binary because it cannot know where the function entry point in another library resides in memory. The trampoline uses a table (called the Procedure Linkage Table, PLT for short) to then jump to the final location, which is the entry point of the function.

In the beginning, the entry point is set as a function in the dynamic linker (let's call it the resolver function), which then looks for the function name in the libraries that the program links to and then updates the table with the result. The dynamic linker resolver function can do more than just look for the exact function name in the libraries the program links to, and that is where the concept of Indirect Functions or IFUNCs comes into the picture.

Further down the rabbit hole - what’s an IFUNC?

When the resolver function finds the function symbol in a library, it looks at the type of the function before simply patching the PLT with its address. If it finds that the function is an IFUNC type (let's call it the IFUNC resolver), it knows that executing that function will give the actual address of the function it should patch into the PLT. This is a very powerful idea because it now allows us to have multiple implementations of the same function built into the library for different features and then have the IFUNC resolver study its execution environment and return the address of the most appropriate function. This is fundamentally how multiarch is implemented in glibc, where we have multiple implementations of functions like memcpy, each utilizing different features, like AVX, AVX2, SSE4 and so on. The IFUNC resolver for memcpy then queries the CPU to find the features it supports and then returns the address of the implementation best suited to the processor.

… and we’re back! Multi-arch for ARMv8

ARMv8 has been making good progress in terms of adoption and it is clear that ARM servers are going to form a significant portion of datacenters of the future. That said, major vendors of such servers with architecture licenses are trying to differentiate by innovating on the microarchitecture level. This means that a sequence of instructions may not necessarily have the same execution cost on all processors. This gives an opportunity for vendors to write optimal code sequences for key function implementations (string functions for example) for their processors and have them included in the C library. They can use the IFUNC mechanism to then identify their processors and then launch the routine best suited for their processor implementation.

This is all great, except that they can't identify their processors reliably with the current state of the kernel and glibc. The way to identify a vendor processor is to read the MIDR_EL1 and REVIDR_EL1 registers using the MRS instruction. As the register names suggest, they are readable only in exception level 1, i.e. by the kernel, which makes it impossible for glibc to read them directly, unlike on Intel processors where the CPUID instruction is executable in userspace and is sufficient to identify the processor and its features.

… and this is only the beginning of the problem. ARM processors have a very interesting (and hence painful) feature called big.LITTLE, which allows for different processor configurations on a single die. Even if we have a way to read the two registers, you could end up reading the MIDR_EL1 from one CPU and REVIDR_EL1 from another, so you need a way to ensure that both values are read from the same core.

This led to the initial proposal for kernel support to expose the information in a sysfs directory structure in addition to a trap into the kernel for the MRS instruction. This meant that for any IFUNC implementation to find out the vendor IDs of the cores on the system, it would have to traverse a whole directory structure, which is not the most optimal thing to do in an IFUNC, even if it happens only once in the lifetime of a process. As a result, we wanted to look for a better alternative.

VDSO FTW!

The number of system calls in a directory traversal would be staggering for, say, a 128-core processor, and things will undoubtedly get worse as we scale. Another way for the kernel to share this (mostly static) information with userspace is via a VDSO, with an opaque structure in userspace pages in the vdso and helper functions to traverse that structure. This, however (or FS traversal for that matter), exposed a deeper problem: the extent of things we can do in an IFUNC.

An IFUNC runs very early in a dynamically linked program and even earlier in a statically linked program. As a result, there is very little that it can do because most of the complex features are not even initialized at that point. What’s more, the things you can do in a dynamic program are different from the things you can do in a static program (pretty much nothing right now in the latter), so that’s an inconsistency that is hard to reconcile. This makes the IFUNC resolvers very limited in their power and applicability, at least in their current state.

What were we talking about again?

The brief introduction turned out to be not so brief after all, but I hope it was clear. All of this fine analysis was done by Szabolcs Nagy from ARM when we talked about multi-arch first and the conclusion was that we needed to fix and enhance IFUNC support first if we had any hope of doing micro-architecture detection for ARM. However, there is another way for now…

Tunables!

A (not so) famous person (me) once said that glibc tunables are the answer to all problems including world hunger and of course, the ARMv8 multi-arch problem. This was a long term idea I had shared at the Linaro Connect in Bangkok earlier this year, but it looks like it might become a reality sooner. What’s more, it seems like Intel is looking for something like that as well, so I am not alone in making this potentially insane suggestion.

The basic idea here would be to have environment variable(s) to do/override IFUNC selection via tunables until the multi-arch situation is resolved. Tunables initialization is much more lightweight and only really relies on what the kernel provides on the stack and in the auxiliary vector and what the CPU provides directly. It seems easier to delay IFUNC resolution at least until tunables are initialized and then look harder at how much further they can be delayed so that they can use other things like the VDSO and/or files.

So here is yet another idea that has culminated in a “just finish tunables already!” suggestion. The glibc community has agreed on the 2.25 release as the deadline to get this support in, so hopefully we will see some real code by then.

Use shortcuts for faster web browsing

Posted by Fedora Magazine on September 28, 2016 04:05 PM

The web browser has become the most important software on most users’ computers. You probably use your web browser for entertainment, work, learning, and other purposes. Therefore you can make your life easier by using it efficiently.

Here are some tips for keyboard shortcuts that will save you time and effort. These shortcuts were tested in the following browsers:

  • Mozilla Firefox
  • Google Chrome
  • Chromium
  • Midori
  • Epiphany

Tabs management

Tabs are the most basic element in modern browsers. You can use the following shortcuts:

  • Ctrl+T: Open a new tab
  • Ctrl+W: Close current tab
  • Ctrl+Shift+T: Re-open last closed tab
  • Ctrl+Tab: Move to following tab (doesn’t work in Epiphany)
  • Ctrl+PgDn (Page Down): Move to following tab
  • Ctrl+Shift+Tab: Move to previous tab (doesn’t work in Epiphany)
  • Ctrl+PgUp (Page Up): Move to previous tab
  • Ctrl+<Number>: Using a number between 1 and 8, opens the tab in that position, counting from left (doesn’t work in Epiphany). In Midori replace Ctrl with Alt.
  • Ctrl+9: Go to the last (right-most) tab (doesn’t work in Epiphany). In Midori replace Ctrl with Alt.
  • Ctrl+N: Open a new browser window
  • Ctrl+Q: Quit browser (works only in Firefox and Epiphany). Use this shortcut with caution, since it may close everything without notice

Browsing

From moving around a page to searching inside one, here are some available shortcuts:

  • Ctrl+L: Positions the cursor in the navigation bar to write a new URL
  • Ctrl+Enter: autocomplete www. and .com in a URL. Try typing google and then press Ctrl+Enter. (In Midori and Epiphany this shortcut launches a search on the content in the URL bar, using the default search engine.)
  • Ctrl+Shift+Enter performs different functions on each browser.
    • Firefox: Autocomplete www. and .org in a URL, similar to the shortcut above
    • Google Chrome and Chromium: Works like Ctrl+Enter
    • Midori: Launch a search on the content in the URL bar, using the default search engine in a new tab
    • Epiphany: Launch a search on the content in the URL bar, using the default search engine in a new window
  • Alt+Shift+Enter: autocomplete www. and .net in a URL, similar to shortcuts above (only in Firefox)
  • Ctrl+R or F5: Reload a page
  • ←↑→↓ and spacebar: Move across the page
  • Ctrl++: Zoom in
  • Ctrl+-: Zoom out
  • Ctrl+D: Bookmark current page
  • Ctrl+F: Search for text in the current page
  • Ctrl+P: Print the current page

Step by step

Obviously there are many shortcuts available. Don’t try to memorize all of them at once. The idea is to incorporate them slowly into your browsing sessions, until you start using them naturally.

This post is inspired by a blog entry by Elena Santos in ChicaGeek.


Image courtesy Peignault Laurent – originally posted to Unsplash as Untitled.

Run an example application on OpenShift

Posted by Gerard Braad on September 28, 2016 04:00 PM

In a previous article I wrote about how easy it is to stand up a test environment of OpenShift. In this article I will describe an example application, from the source code to the created image, and how it gets deployed. I explain the manual steps, and how OpenShift does it all automatically. You will notice that at no point do you have to write a Dockerfile.

Preparation

For this article it is not necessary to have a working test environment, however it does make things clearer. I suggest you use OpenShift Origin v1.3 on CentOS 7. Although my previous article showed how to get it up and running on Fedora 24, I experienced an issue with deployments not succeeding*. The steps in the deployment article can be performed by replacing dnf with yum.

Description of the example

The OpenShift project publishes several test applications on GitHub, one of which is a very simple Ruby application. Please have a look at: http://github.com/openshift/ruby-ex

You will see it consists of four files:

  • Gemfile
  • Gemfile.lock
  • config.ru
  • README.md

The application itself is only described in config.ru and the needed dependencies are in Gemfile.

Dependencies

To make the application work, we first need:

$ gem install bundler

This installs bundler, which can install the dependencies described in the Gemfile. That file contains:

source 'https://rubygems.org'
gem 'rack'
gem 'puma'

The first line, source, points to a gem repository, and each line starting with gem names a bundled package containing libraries for use in your project. To install all of the required gems (dependencies) from the specified sources:

$ bundle install

The file Gemfile.lock is a snapshot of the Gemfile and is used internally.

config.ru

The application is specified in the file called config.ru. If you open the file you will see it contains route mappings, lines starting with map, for three urls:

  • /health
  • /lobster
  • /

/health

This is commonly used to provide a simple health check for applications that are deployed automatically. It allows you to quickly test whether the application got deployed. In projects I worked on, we also did quick dependency checks, such as whether a configuration file exists or another needed endpoint is available. In this application it responds with HTTP status code 200 and returns 1 as the value.

/lobster

This is a test provided by rack. It shows an ASCII-art lobster. By adding a variable to the URL querystring ?flip=left the direction can be changed.

/

This is the mapping for the bare root path. It shows a greeting message explaining how to use the application with OpenShift to trigger automated builds.

Rackup

Rack is an interface for using Ruby and Ruby frameworks with webservers. It provides an application called rackup to start the application:

$ bundle exec rackup -p 8080 config.ru

Using this command the web server will bind to port 8080, according to the description in the config.ru file. To see what the mappings do, open the paths listed above in a browser or with curl.
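For example, once rackup is listening, a quick check of the three mappings from the same host could look like this (the /health mapping should return 1, as described above):

$ curl http://localhost:8080/health     # simple health check, returns 1
$ curl http://localhost:8080/lobster    # ASCII-art lobster from rack
$ curl http://localhost:8080/           # greeting page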

Use the example with OpenShift

Deploying an application on OpenShift from source is very simple; a single command can do it. First have a look at the command:

$ oc new-app openshift/ruby-20-centos7~https://github.com/[username]/ruby-ex

But before we run it, I will explain what this command does. Oversimplified, OpenShift does two things:

  1. Build
  2. Deploy

Note: If you want to perform the command, go ahead. Please fork the repository and change the [username] in this command.

Build: source to image

OpenShift runs container images which are in the Docker format, and it will run the image's CMD instruction. So, how does OpenShift know what to run? Convention. Most frameworks have a standard way of doing things, and as you noticed this is also the case with the Ruby example. The creation of the image happens with a tool called source-to-image (S2I).

Source-to-Image (S2I) is a toolkit and workflow for building reproducible Docker images from source code. It uses a base image, layers the application on top and configures the run command, which results in a container image ready for use.

$ s2i build https://github.com/[username]/ruby-ex openshift/ruby-20-centos7 ruby-ex

base image

The base image here is openshift/ruby-20-centos7. The source of this image can be found at the following GitHub repository: s2i-ruby-container

If you look at the Dockerfile source, you will see that Software Collections is used to install a specific Ruby version, in this case version 2.0. Software Collections solves one of the biggest complaints about using CentOS (or RHEL) as the basis of your delivery: it allows you to use multiple versions of software on the same system, without affecting system-wide installed packages.

The image also describes a label io.openshift.expose-services="8080:http", which indicates that the application on port 8080 will be exposed as HTTP traffic. This also means the container does not need root privileges, as the port assignment is above 1024. The application itself will be installed into the folder /opt/app-root/src.
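If you want to verify that label yourself, docker inspect can print it from the base image once it has been pulled; a sketch:

$ docker pull openshift/ruby-20-centos7
$ docker inspect --format '{{ index .Config.Labels "io.openshift.expose-services" }}' openshift/ruby-20-centos7
8080:http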

Running this container can be done with:

$ docker run -p 8080:8080 ruby-ex
[1] Puma starting in cluster mode...
[1] * Version 3.4.0 (ruby 2.0.0-p645), codename: Owl Bowl Brawl
[1] * Min threads: 0, max threads: 16
[1] * Environment: production
[1] * Process workers: 1
[1] * Phased restart available
[1] * Listening on tcp://0.0.0.0:8080
[1] Use Ctrl-C to stop
[1] - Worker 0 (pid: 32) booted, phase: 0

Opening the links as previously stated will yield the same results.

$ curl http://localhost:8080/health

The build process can be as simple as a copy for static content, or as involved as compiling Java or C/C++ code. For the purpose of this article I will not explain more about the S2I process, but this will certainly be explained in future articles.

New application

If we now look at the previous command again:

$ oc new-app openshift/ruby-20-centos7~https://github.com/[username]/ruby-ex

you can clearly see the structure. The first element, openshift/ruby-20-centos7, describes the S2I container image for Ruby as hosted on Docker Hub. The second part is the source code path pointing to a git repository.
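In other words, the general shape of the command is the following, where the placeholders are mine and not literal values:

$ oc new-app <builder-image>~<git-repository-url>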

Please try the command now... OpenShift will create containers for each of the stages used: build, deploy and the final running container. You can check the containers using the command:

$ oc get pod
NAME               READY     STATUS         RESTARTS   AGE
ruby-ex-1-build    0/1       Completed      0          1m

Build stage

If you create this new application, a new container named ruby-ex-1-build is started. What happened is that the source-to-image builder got pulled, which uses the base image and layers the source code on top.

To see what happened, you can view the logs of the build configuration:

$ oc logs bc/ruby-ex
Cloning "https://github.com/gbraad/ruby-ex" ...
        Commit: f63d076b602441ebd65fd0749c5c58ea4bafaf90 (Merge pull request #2 from mfojtik/add-puma)
        Author: Michal Fojtik <mi@mifo.sk>
        Date:   Thu Jun 30 10:47:53 2016 +0200
---> Installing application source ...
---> Building your Ruby application from source ...
---> Running 'bundle install --deployment' ...
Fetching gem metadata from https://rubygems.org/...............
Installing puma (3.4.0)
Installing rack (1.6.4)
Using bundler (1.3.5)
Cannot write a changed lockfile while frozen.
Your bundle is complete!
It was installed into ./bundle
---> Cleaning up unused ruby gems ...
Pushing image 172.30.108.129:5000/myproject/ruby-ex:latest ...
Pushed 0/10 layers, 10% complete
Pushed 1/10 layers, 34% complete
Pushed 2/10 layers, 49% complete
Pushed 3/10 layers, 50% complete
Pushed 4/10 layers, 50% complete
Pushed 5/10 layers, 50% complete
Pushed 6/10 layers, 61% complete
Pushed 7/10 layers, 71% complete
Pushed 8/10 layers, 88% complete
Pushed 9/10 layers, 99% complete
Pushed 10/10 layers, 100% complete
Push successful

The difference is that the resulting image will be placed in the myproject namespace, and pushed to the local repository.

Deployment stage

After the image has been composed, OpenShift will run the container image on the scheduled node. What happens here can be checked with:

$ oc get pod                                                                                                                                                          
NAME              READY     STATUS      RESTARTS   AGE
ruby-ex-1-an801   1/1       Running     0          26s
ruby-ex-1-build   0/1       Completed   0          1m

This means that the build succeeded, the image got deployed and it now runs in a container identified as ruby-ex-1-an801. Note: The container ruby-ex-1-deploy is not shown here, as only the logs are of importance.

The deployment configuration logs can be shown with:

$ oc logs dc/ruby-ex
[1] Puma starting in cluster mode...
[1] * Version 3.4.0 (ruby 2.0.0-p645), codename: Owl Bowl Brawl
[1] * Min threads: 0, max threads: 16
[1] * Environment: production
[1] * Process workers: 2
[1] * Phased restart available
[1] * Listening on tcp://0.0.0.0:8080
[1] Use Ctrl-C to stop
[1] - Worker 0 (pid: 32) booted, phase: 0
[1] - Worker 1 (pid: 35) booted, phase: 0

Events

To see the flow of execution, you can have a look at:

$ oc get events

This can be helpful if an error occurred.

Verify

Now that the application has been deployed on OpenShift, we need to look up the IP address that has been assigned. For this we use:

$ oc get svc
NAME      CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
ruby-ex   172.30.91.160   <none>        8080/TCP   21h

Now we can open the application as http://172.30.91.160:8080/
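As a final check, the same mappings we tested locally should answer on the service IP as well (use whatever IP oc get svc reported on your cluster):

$ curl http://172.30.91.160:8080/health
$ curl http://172.30.91.160:8080/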

Conclusion

OpenShift allows you to run prebuilt images or applications based on source. The Source-to-image tooling makes it possible to create reproducible images for deployment of applications based on source. This tool itself is very helpful and is certainly something I will be using, even outside the use of OpenShift. There is no need to create or modify a Dockerfile, which means that the developer can focus on the development process.

If you want to know more about the automated builds, please have a look at the README of the Ruby example. In future articles more detailed descriptions about these topics will certainly be given. I hope this has been helpful. Please consider leaving feedback or tweet this article.

Giving to your favourite content creators

Posted by Gerard Braad on September 28, 2016 04:00 PM

It is hard to remember the time when I really sat down in front of the TV, waited for a programme to show up, and actually enjoyed it. Mostly it was accompanied by plenty of interruptions. It might have been Star Trek on the public channels that I enjoyed the most. But even Discovery Channel had many interruptions; some were funny, others just slightly annoying, especially when you have the TV on as background sound. With the advent of YouTube, other user-generated content websites, and even streaming websites, it is almost unimaginable that there used to be a time when you could not just watch your favorites.

Some might not know, but I help(ed) a lot with music remixing and artwork. Too bad I do not have the time now, because it is so much fun. Content creation is probably the best thing people can do, as it gives a lot of satisfaction. I consider the teaching I do a form of content creation. I have produced a lot of documents, notes, etc., and people enjoy the way I explain stuff. Even writing code is a creative process:

foreach step in I.take():
  step.move(DIRECTION_TOMORROW)

But remixing music, or posting artwork online, does not pay the bills. Unless of course you are Tiesto, etc. And asking for a donation has not been my thing, although I have received some. I am very thankful for this!

And this is why I give to the content creators who create good stuff. I love the card game Magic: the Gathering, and especially some of the YouTube channels about it. Patreon is a great idea... and if people create content you read often, consider giving back. Even 1 USD is already a good way to show you care. It can buy someone a cup of coffee... and the more people who do, the easier it becomes for someone to take time off to create the content you love!

One of the projects I really want to work on again is Gauth. It has been neglected, but I can't reserve enough time to do a rewrite... I ask you to consider becoming a Patron of my work; hopefully that will let me give Gauth a new future, allow me to create more content on my blog... and maybe even more in the future.

Support my work on Patreon

And thank you to all who have donated in the past!

Test Day: Internationalization (i18n) features of Fedora 25

Posted by Fedora Community Blog on September 28, 2016 08:15 AM

Internationalization Test Day

Test Day: Internationalization (i18n) features of Fedora 25

We do have a badge for participating!

We have new, interesting i18n features (changes) introduced in Fedora 25. Those are as follows:

  • Emoji typing – In the computing world, it’s rare to find a person who does not know about emoji. Before, it was difficult to type emoji in Fedora. Now, we have an emoji typing feature in Fedora 25.
  • Unicode 9.0 – With each release, Unicode introduces new characters and scripts to its encoding standard. We have a good number of additions in Unicode 9.0. Important libraries are updated to get the new additions into Fedora.
  • IBus typing booster multilingual support – IBus typing booster has provided multilingual support (typing more than one language using a single IME, with no need to switch) since Fedora 24, but the UI setup was not ready. Fedora 25 has this ready.

Other than this, we also need to make sure all other languages work well, specifically for input, output, storage and printing.

How to participate

Most of the information is available on the Test Day wiki page. In case of doubts, feel free to send an email to the testing team mailing list.

Though it is a test day, we normally keep it on for the whole week. If you don’t have time tomorrow, feel free to complete it in the coming few days and upload your test results.

Let’s test and make sure this works well for our users!

The post Test Day: Internationalization (i18n) features of Fedora 25 appeared first on Fedora Community Blog.

Take part in the Fedora 25 internationalization Test Day!

Posted by Charles-Antoine Couret on September 28, 2016 06:00 AM

Today, Wednesday 28 September, is a day dedicated to one specific kind of testing: the internationalization of Fedora 25. During the development cycle, the quality assurance team dedicates a few days to certain components in order to surface as many problems as possible on the subject.

It also provides a list of specific tests to perform. You just have to follow them, compare your result with the expected result, and report it.

What is internationalization?

It consists of making sure that Fedora and its applications work correctly in every language. This covers translation and the handling of language packs, but also input methods. Non-European languages in particular benefit from input assistance for writing in their languages with a keyboard that has few keys compared to the number of available characters.

Typically, today's tests cover:

  • Unicode 9.0 compatibility (new in Fedora 25);
  • Language packs for applications;
  • IBus (whose emoji handling and simultaneous multilingual assistance were added in Fedora 25);
  • Applications must be translated (preferably GNOME, LibreOffice and Firefox);
  • Browsers must display websites in the user's language by default;
  • A few others...

How to take part?

You can go to the test page to list the available tests and report your results. The wiki page summarizes how the day is organized.

If you find a bug, you need to report it on Bugzilla. If you do not know how, do not hesitate to consult the corresponding documentation.

Moreover, even though a single day is dedicated to these tests, it is perfectly possible to run them a few days later! The results will remain broadly relevant.

Report for Software Freedom Day 2016 – China Academy Science

Posted by Zamir SUN on September 28, 2016 03:06 AM
So it's already SFD time this year. How time flies. This year I was asked by the company to present SFD at the Chinese Academy of Sciences, so unfortunately it was not appropriate for me to deliver a Fedora talk there. I brought some DVDs and stickers, as well as a roll-up poster. However there are people [...]

Bodhi 2.2.3 released to fix large updates

Posted by Bodhi on September 27, 2016 10:08 PM

This release fixes #951, which prevented updates with large numbers of packages from being viewable in web browsers.

Fedora 25 i18n Test Day tomorrow (2016-09-28)!

Posted by Adam Williamson on September 27, 2016 09:08 PM

Hi folks! It’s Fedora Test Day time once again! Tomorrow, 2016-09-28, will be the Fedora 25 i18n Test Day. i18n is internationalization; i18n and its buddy localization (l10n) (together known as ‘g11n’, for ‘globalization’) cover all the work needed to make Fedora usable in languages other than U.S. English and countries other than the U.S.

i18n specifically covers things like the special ‘input methods’ used to input languages which need something beyond a simple 1:1 key-to-character system, and the fonts and font engine capabilities needed to render non-Latin characters.

The test day page has all the information you need to get testing, and you can enter your results on the result page. Please, if you’re familiar with using anything but U.S. English to work with your computer, come along and help us test! As always, the event will be in #fedora-test-day on Freenode IRC. If you don’t know how to use IRC, you can read these instructions, or just use WebIRC.

New badge: Modularity WG Member !

Posted by Fedora Badges on September 27, 2016 07:30 PM
Modularity WG Member: You're a member of the Modularity Working Group!

New badge: F25 i18n Test Day Participant !

Posted by Fedora Badges on September 27, 2016 07:07 PM
F25 i18n Test Day Participant: You helped test i18n features in Fedora 25! Thanks!

New badge: F24 i18n Test Day Participant !

Posted by Fedora Badges on September 27, 2016 07:06 PM
F24 i18n Test Day Participant: You helped test i18n features in Fedora 24! Thanks!

Setup Docker storage to use LVM thin pool

Posted by Gerard Braad on September 27, 2016 04:00 PM

If you install Docker on a new Fedora or CentOS system, it is very likely that you use devicemapper. Especially in the case of Fedora cloud images, no special configuration is done to the image, while Atomic images come pre-configured with a dedicated pool. Using devicemapper with loopback can lead to unpredictable behaviour, and while OverlayFS is a nice replacement, you will not be able to use SELinux with it at the moment. In this short article I will show how to set up a pool for storing the Docker images.

Preparation

For this article I will be using a Fedora 24 installation on an OpenStack cloud provider*. It is a standard cloud image, which means the root filesystem is configured as ext4. I will attach a storage volume to the instance. Just like when using Virtual Machine Manager, the disk will be identified as /dev/vdb.

First you need to stop the Docker process and remove the existing location. This means you will lose the images, but if they are important, you can export or push them somewhere else first.

$ systemctl stop docker
$ rm -rf /var/lib/docker

After this we will create a basic LVM setup which will use the whole storage volume.

$ pvcreate /dev/vdb
$ vgcreate docker_vol /dev/vdb

We now have a volume group named docker_vol.

Setup Docker storage

Fedora comes with a tool that makes it easy to setup the storage for Docker, called docker-storage-setup. The configuration is done with a file, and in our case it needs to contain the identification of the volume group to use:

$ vi /etc/sysconfig/docker-storage-setup 
VG="docker_vol"

Now you can run:

$ docker-storage-setup

This will create the filesystem and configure Docker to use the storage pool. After it finishes successfully, the pool is configured to use the XFS filesystem.
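Before starting Docker again, you can verify what the tool did; a quick sketch of two checks (the pool name in the comment is what my system produced, yours may differ slightly):

$ lvs docker_vol                      # should list a thin pool LV, e.g. docker-pool
$ cat /etc/sysconfig/docker-storage   # storage options that Docker will pick up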

Verify

To verify these changes, we will start Docker and run a basic image.

$ systemctl start docker
$ docker info
Storage Driver: devicemapper
 Pool Name: docker_vol-docker--pool
 Pool Blocksize: 524.3 kB
 Base Device Size: 10.74 GB
 Backing Filesystem: xfs
 Data file: 
 Metadata file: 
 Data Space Used: 37.75 MB
 Data Space Total: 21.45 GB
 Data Space Available: 21.41 GB
 Metadata Space Used: 53.25 kB
 Metadata Space Total: 54.53 MB
 Metadata Space Available: 54.47 MB
 Udev Sync Supported: true
 Deferred Removal Enabled: true
 Deferred Deletion Enabled: true
 Deferred Deleted Device Count: 0
 Library Version: 1.02.122 (2016-04-09)
$ docker pull busybox
$ docker run -it --rm busybox
/ # 

Conclusion

Configuration of storage has been really simplified and shouldn't be a reason not to do this. However, too many people are still not aware of the issues with devicemapper and loopback. Having to assign additional storage to use Docker can also be a reason why people do not consider doing this, but even OverlayFS is not perfect. Using overlay with SELinux will be possible in the future, and hopefully these steps will soon no longer be needed. In any case, the steps involved are simple and, if you use Docker on Fedora, needed!

More information

Well they can try!

Posted by Paul Mellors [MooDoo] on September 27, 2016 07:20 AM

It's yet another day, and yes, another day where I relentlessly receive email spam. Nine out of ten times my Outlook account catches this spam, but do you really know what you're looking at? Today I received this email. It looks valid, but if I didn't know better I might even have clicked the link to validate my account.

untitled

No, I'm not daft and I didn't click the link. Why? Well, I hovered over it and noticed that it was taking me here

** WARNING DO NOT VISIT THIS LINK ** [in fact i’ve changed some of the chars]

“http***zug*bar*com/chin/gallery/zug/thumbs/1*.html”

You can clearly see this isn't an Apple email; it's just someone trying a phishing scam. If you receive an email like this, and it contains links for you to click, hover over them and see if they point to the actual company's site. If not, and your spam filter hasn't picked up on it, then delete the sucker.

 


radv: status update or is Talos Principle rendering yet?

Posted by Dave Airlie on September 27, 2016 04:33 AM
The answer is YES!!

I fixed the last bug with instance rendering and Talos renders great on radv now.

Also, with the semi-interesting branch vkQuake renders as well. There are some upstream bugs that need fixing in spirv/nir that I'm awaiting an upstream resolution on, but I've included some preliminary fixes in semi-interesting for now; they'll go away when the upstream fixes are decided on.

Here's a screenshot:

Change the boot order in grub2 - Fedora 24

Posted by Emerson Santos on September 27, 2016 03:41 AM

At the company where I work there is a "communal" notebook; several managers use it.

As you would expect, this notebook runs Windows 10, but since I am now responsible for the IT department, I installed Fedora 24 on it so I can use it as well. Obviously I respected the other users and left it in dual boot.

To make sure the less experienced users do not boot Fedora by mistake (until I "convert" them all), I changed the boot order in GRUB, and this is the 3-step tip I am leaving here.

  • Step 1 = Identify Windows in grub:

$sudo cat /boot/grub2/grub.cfg | grep Windows

In my case, the result was: menuentry ‘Windows 10 (loader) (on /dev/sda1)’ …

We are only interested in what is between the first single quotes, after "menuentry".

  • Step 2 = Change the preferred (default) grub entry:

$sudo grub2-set-default 'Windows 10 (loader) (on /dev/sda1)'

  • Step 3 = Update grub:

$sudo grub2-mkconfig -o /boot/grub2/grub.cfg
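To confirm that the default entry was recorded, you can list the GRUB environment block. This is a quick check that assumes GRUB_DEFAULT=saved in /etc/default/grub, which is the Fedora default; it should print something like:

$sudo grub2-editenv list
saved_entry=Windows 10 (loader) (on /dev/sda1)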

Done. Now every time the notebook boots, Windows will be selected.

Fedora 25 Alpha and processing.

Posted by mythcat on September 26, 2016 11:32 PM
You can find out more about Processing on the Processing website.
Processing is a flexible software sketchbook and a language for learning how to code within the context of the visual arts. Since 2001, Processing has promoted software literacy within the visual arts and visual literacy within technology. There are tens of thousands of students, artists, designers, researchers, and hobbyists who use Processing for learning and prototyping. 
It is simple to use. You can use it with Java, and also with Python and Android modes.
It comes with many examples and tutorials.
Today I tested it with Fedora 25 Alpha.
I downloaded the 64-bit tgz file and extracted the archive into my home directory.
I used the binary file to run it and installed some modes from the Tools menu, Modes tab.
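For reference, those steps boil down to something like this in a terminal; the exact archive and folder names depend on the Processing version you downloaded:

$ cd ~
$ tar -xzf processing-*-linux64.tgz
$ cd processing-*/
$ ./processing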
I ran one simple sketch to see if it runs without errors.
It works well; see the result:

Deploy an OpenShift test cluster

Posted by Gerard Braad on September 26, 2016 04:00 PM

In my previous article I described how I used an Ansible playbook and a few roles to stand up a Kubernetes test environment. In that article I mentioned that some more work was required to deploy a production-ready environment. Luckily, it is now very easy to stand up a production- and enterprise-ready container platform for hosting applications, called OpenShift. Through the years OpenShift has undergone a lot of changes, and the latest version, Origin v1.3, is very different from the original one. OpenShift sets up a complete Kubernetes environment and, with a set of tools, it can take care of the whole application lifecycle, from source to deployment. In this article I will give an introduction to setting up a test environment for a developer.

Setup machine

I will set up the environment on a standard Fedora 24 installation. You can use a cloud image, as all the needed packages will be specified. After installing the machine, log in as a standard user that can do password-less sudo.

$ ssh fedora@89.42.141.96
$ sudo su -
#

Install docker and client

From here all the commands will be run as root, unless otherwise specified.

$ dnf install -y docker curl

This will install the basic packages we need to set up the test cluster. Now open the following page in a browser: https://github.com/openshift/origin/releases/tag/v1.3.0. This shows the current deliverables for the OpenShift Origin v1.3 release. You need to download the file named something like openshift-origin-client-tools-v1.3.0-[...]-linux-64bit.tar.gz

$ curl -sSL https://github.com/openshift/origin/releases/download/v1.3.0/openshift-origin-client-tools-v1.3.0-3ab7af3d097b57f933eccef684a714f2368804e7-linux-64bit.tar.gz -o oc-client.tar.gz
$ tar -zxvf oc-client.tar.gz
$ mkdir -p /opt/openshift/client
$ cp ./openshift-origin-client-tools-v1.3.0-3ab7af3d097b57f933eccef684a714f2368804e7-linux-64bit/oc /opt/openshift/client/oc

Note: I do not install the binary in /usr/bin or /usr/sbin to prevent a conflict with a packaged version, but also because this makes it easier for me to work on a different version of the application. E.g. the current packaged version is v1.2 and does not provide the command we will be using in the next step.

Configure docker

To allow OpenShift to pull and locally cache images, it will deploy a local Docker registry. But before Docker can use it, we need to specify an insecure registry in the configuration. For this you need to add --insecure-registry 172.30.0.0/16 to /etc/sysconfig/docker.

$ vi /etc/sysconfig/docker
OPTIONS='--selinux-enabled --log-driver=journald --insecure-registry 172.30.0.0/16'

After this, we will allow the standard user to communicate with the Docker daemon over the Docker socket. This is not a necessary step, and it does not make the system more secure. It does make things easier, since you do not have to switch users or use sudo all the time.

$ groupadd docker
$ usermod -a -G docker fedora
$ chgrp docker /var/run/docker.sock

After this you can start Docker and move on to the actual installation of OpenShift.

$ systemctl enable docker
$ systemctl start docker

Note: we will be running this environment with devicemapper as Storage Driver. This is not an ideal situation. If you do further tests, consider changing the storage with docker-storage-setup to use a dedicated volume.

Running OpenShift

Since version 1.3 of OpenShift, the client provides a cluster up command which stands up a very simple all-in-one cluster, with a configured registry, router, image streams, and default templates.

As the fedora user, you can check if you can access docker

$ docker ps
CONTAINER ID    IMAGE   COMMAND     CREATED     STATUS      PORTS   NAMES

No containers should be returned. This means you can communicate with the Docker daemon. Now you are ready to start the test cluster.

cluster up

$ export PATH=$PATH:/opt/openshift/client/
$ ./oc cluster up
-- Checking OpenShift client ... OK
-- Checking Docker client ... OK
-- Checking Docker version ... OK
-- Checking for existing OpenShift container ... OK
-- Checking for openshift/origin:v1.3.0 image ... OK
-- Checking Docker daemon configuration ... OK
-- Checking for available ports ... OK
-- Checking type of volume mount ... 
   Using nsenter mounter for OpenShift volumes
-- Creating host directories ... OK
-- Finding server IP ... 
   Using 10.5.0.27 as the server IP
-- Starting OpenShift container ... 
   Creating initial OpenShift configuration
   Starting OpenShift using container 'origin'
   Waiting for API server to start listening
   OpenShift server started
-- Installing registry ... OK
-- Installing router ... OK
-- Importing image streams ... OK
-- Importing templates ... OK
-- Login to server ... OK
-- Creating initial project "myproject" ... OK
-- Server Information ... 
   OpenShift server started.
   The server is accessible via web console at:
       https://10.5.0.27:8443

   You are logged in as:
       User:     developer
       Password: developer

   To login as administrator:
       oc login -u system:admin

And that was it! Now you are running an OpenShift environment. You can check this as follows:

$ docker ps
CONTAINER ID        IMAGE                                     COMMAND                  CREATED             STATUS              PORTS               NAMES
cebba70022a6        openshift/origin-haproxy-router:v1.3.0    "/usr/bin/openshift-r"   16 seconds ago      Up 15 seconds                           k8s_router.9426645a_router-1-o3454_default_ba2e0814-8483-11e6-924a-fa163e29da46_e9a13a8d
32aa5e84a04d        openshift/origin-docker-registry:v1.3.0   "/bin/sh -c 'DOCKER_R"   17 seconds ago      Up 15 seconds                           k8s_registry.f0a205a4_docker-registry-1-v57os_default_b9fc0130-8483-11e6-924a-fa163e29da46_24863324
03ee38d125cb        openshift/origin-pod:v1.3.0               "/pod"                   18 seconds ago      Up 16 seconds                           k8s_POD.4a82dc9f_router-1-o3454_default_ba2e0814-8483-11e6-924a-fa163e29da46_ea6d1d08
44d6f8d2d9d6        openshift/origin-pod:v1.3.0               "/pod"                   18 seconds ago      Up 16 seconds                           k8s_POD.9fa2fe82_docker-registry-1-v57os_default_b9fc0130-8483-11e6-924a-fa163e29da46_76754271
60e7cc5f4e5d        openshift/origin-deployer:v1.3.0          "/usr/bin/openshift-d"   21 seconds ago      Up 19 seconds                           k8s_deployment.59c7ba3f_router-1-deploy_default_b3660c7b-8483-11e6-924a-fa163e29da46_8e02f47a
f1fe993ddcac        openshift/origin-pod:v1.3.0               "/pod"                   22 seconds ago      Up 20 seconds                           k8s_POD.4a82dc9f_router-1-deploy_default_b3660c7b-8483-11e6-924a-fa163e29da46_9a38fe5e
72068a244ac8        openshift/origin:v1.3.0                   "/usr/bin/openshift s"   49 seconds ago      Up 48 seconds                           origin

Client connection

After running the command oc cluster up you will be automatically logged in. For this it writes the login configuration in ~/.kube/. If you want to change it, you can log in using:

$ oc login

The standard user provided is developer.
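If you later need to log in again, for example from another shell, a sketch using the credentials and server address printed by cluster up would be:

$ oc login -u developer -p developer https://10.5.0.27:8443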

Verify

Now we need to verify if we can deploy a simple application. However, without changes, OpenShift will not run containers with a root-user process. For example an nginx container would fail with a permission denied error.

Instead, we will for now run a simple Hello container:

$ oc run hello-openshift --image=docker.io/openshift/hello-openshift:latest --port=8080 --expose
service "hello-openshift" created
deploymentconfig "hello-openshift" created

This would create the container and schedule it. You can check the progress with:

$ oc get pod
NAME                        READY     STATUS    RESTARTS   AGE
hello-openshift-1-xi7f0     1/1       Running   0          9m

You will also see a -deploy container. This is not needed for our verification.

To check the application, we need to get the IP address that has been assigned to the Pod. You can do this as follows:

$ oc get pod hello-openshift-1-xi7f0 -o yaml | grep podIP
  podIP: 172.17.0.7

All you have to do now is open the endpoint:

$ curl 172.17.0.7:8080
Hello OpenShift!

And that is it, you have a working OpenShift test cluster.

Teardown

If you are done with this, simply run:

$ oc cluster down

and all the containers used in the deployment will be torn down.

Conclusion

Using OpenShift's cluster up command you can easily set up an environment for developers to run and test their applications. The version of OpenShift provided with Fedora 24 does not offer this command, as the packaged version is v1.2. However, this change is in Rawhide and is therefore expected to be released as part of Fedora 25.

In future articles I will detail more about how to create applications for the OpenShift container platform, and how to check and maintain the life cycle of the deployed application. For now, take a look at the other source of information below.

More information

GLPI version 9.1

Posted by Remi Collet on September 26, 2016 02:12 PM

GLPI (Free IT and asset management software) version 9.1 is available. RPMs are available in the remi repository for Fedora ≥ 22 and Enterprise Linux ≥ 5.

As not all plugin projects have released a stable version yet, version 0.90 stays available in the remi repository.

Available in the repository:

  • glpi-9.1-2
  • glpi-data-injection-2.4.2-1
  • glpi-ocsinventoryng-1.2.3-1
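Assuming the remi repository is already configured on your system, installation should then be a single command along these lines (replace dnf with yum on Enterprise Linux):

dnf --enablerepo=remi install glpi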

Warning: for security reasons, the installation wizard is only allowed from the server where GLPI is installed. See the configuration file (/etc/httpd/conf.d/glpi.conf) to temporarily allow more clients.

You are welcome to try this version, in a dedicated test environment, give your feedback and post your questions and bugs on:

 

Epiphany Icon Refresh

Posted by Michael Catanzaro on September 26, 2016 02:00 PM

We have a nice new app icon for Epiphany 3.24, thanks to Jakub Steiner (Update: and also Lapo Calamandrei):

Our new icon. Ignore the version numbers, it’s for 3.24.

Wow pretty!

The old icon was not actually specific to Epiphany, but was taken from the system, so it could be totally different depending on your icon theme. Here’s the icon currently used in GNOME, for comparison:

The old icon, for comparison.

You can view the new icon in its full 512×512 glory by navigating to about:web:

It’s big (click for full size).

(The old GNOME icon was a mere 256×256.)

Thanks Jakub!

Who left all this fire everywhere?

Posted by Josh Bressers on September 26, 2016 02:00 PM
If you're paying attention, you saw the news about Yahoo's breach. Five hundred million accounts. That's a whole lot of data if you think about it.  But here's the thing. If you're a security person, are you surprised by this? If you are, you've not been paying attention.

It's pretty well accepted that there are two types of large infrastructures. Those who know they've been hacked, and those who don't yet know they've been hacked. Any group as large as Yahoo probably has more attackers inside their infrastructure than anyone really wants to think about. This is certainly true of every single large infrastructure and cloud provider and consumer out there. Think about that for a little bit. If you're part of a large infrastructure, you have threat actors inside your network right now, probably more than you think.

There are two really important things to think about.

Firstly, if you have any sort of important data, and it's not well protected, odds are very high that it has already left your network. Remember that not every hack gets leaked in public; sometimes you'll never find out. On that note, if anyone has data on what percentage of compromises get leaked, I'd love to know.

The most important thing is around how we need to build infrastructure with a security mindset. This is a place where public cloud actually has an advantage. If you have a deployment in a public cloud, you're naturally going to be less trusting of the machines than you would be if they were in racks you can see. Neither is really any safer; it's just that you trust one less, which results in a more secure infrastructure. Gone are the days where having a nice firewall is all the security you need.

Now every architect should assume whatever they're doing has bad actors on the network and in the machines. If you keep this in mind, it really changes how you do things. Storing lots of sensitive data in the same place isn't wise. Break things apart when you can. Make sure data is encrypted as much as possible. Plan for failure, have you done an exercise where you assume the worst then decide what you do next? This is the new reality we have to exist in. It'll take time to catch up of course, but there's not really a choice. This is one of those change or die situations. Nobody can afford to ignore the problems around leaking sensitive data for much longer. The times, they are a changin.

Leave your comments on Twitter: @joshbressers

New Firefox 49 features in Fedora

Posted by Fedora Magazine on September 26, 2016 08:00 AM

The latest release 49 of Firefox comes with some interesting new features. Here’s what they mean for Fedora users and how to enable them beyond default setup.

Make a safe playground

When you’re testing Firefox, you should create a new fresh profile. If something goes wrong, you won’t lose data. The extra profile also allows you to run additional instances at the same time, each with a different configuration.

Open a terminal and create a new Firefox profile:

$ firefox --ProfileManager

Then run your profile:

$ firefox -P profile_name --no-remote

The --no-remote parameter launches an independent instance, instead of connecting to a running one.

Now for the fun part! Type about:config in the location bar to bring up hidden configuration options. The remaining tips in this article require you to edit these configuration keys. All changes usually require you to restart the browser.

Graphics acceleration

Firefox integrates the Skia graphics library as seen in Google Chrome. Unlike Cairo, the former default, Skia promises faster and parallel graphics rendering on Linux.

Skia is not yet enabled completely, but only for canvas HTML5 elements. For a full Skia experience, which may provide anything from ultra-speed to a crash on startup, set gfx.content.azure.backends to skia.

Electrolysis

Electrolysis not only dissolves water but is also meant to speed up Firefox. When Electrolysis is enabled, all web content runs in a separate process under the plugin-container, emancipated from the main browser.

Firefox 49 is a bit picky, and not every piece of content will work this way. To check content status, open the about:support page and look at the Multiprocess Windows row. If some content is not working with Electrolysis, you can try other options to tune the function. A good start is to disable incompatible extensions and set browser.tabs.remote.autostart to true.
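If you prefer to persist these settings in your test profile instead of clicking through about:config, one option is to append them to a user.js file in the profile directory; the profile path below is a placeholder you have to adjust:

$ PROFILE_DIR=~/.mozilla/firefox/xxxxxxxx.profile_name   # replace with your test profile folder
$ cat >> "$PROFILE_DIR/user.js" << 'EOF'
user_pref("browser.tabs.remote.autostart", true);
user_pref("gfx.content.azure.backends", "skia");
EOF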

For more instructions, including how to force-enable Electrolysis, refer to the Mozilla Wiki.

Dark times are back

At least for your browser, they are. If you like dark themes on the desktop and want the same for the web, toggle widget.allow-gtk-dark-theme to true. Firefox will use a default dark theme for both the user interface and web content.

Advanced Multimedia on the Linux Command Line

Posted by ! Ⓐⓥⓘ Ⓐⓛⓚⓐⓛⓐⓨ ¡ on September 25, 2016 10:40 PM

There was a time when Apple macOS was the best platform to handle multimedia (audio, image, video). This might still be true in the GUI space. But Linux presents a much wider range of possibilities when you go to the command line, especially if you want to:

  • Process hundreds or thousands of files at once
  • Same as above, organized in many folders while keeping the folder structure
  • Same as above but with much fine grained options, including lossless processing that most GUI tools won’t give you

The Open Source community has produced state-of-the-art command line tools such as ffmpeg, exiftool and others, which I use every day to do non-trivial things, along with advanced shell scripting. Sure, you can get these tools installed on Mac or Windows, and you can even use almost all of these recipes on those platforms, but Linux is the native platform for these tools, and it is easier to get the environment ready.

These are my personal notes and I encourage you to understand each step of the recipes and adapt to your workflows. It is organized in Audio, Video and Image+Photo sections.

I use Fedora Linux and I mention the Fedora package names to be installed. You can easily find the same packages for Ubuntu, Debian, Gentoo etc, and use these same recipes.


Audio


Show information (tags, bitrate etc) about a multimedia file

ffprobe file.mp3
ffprobe file.m4v
ffprobe file.mkv

Lossless conversion of all FLAC files into more compatible, but still Open Source, ALAC

ls *flac | while read f; do
	ffmpeg -i "$f" -acodec alac -vn "${f[@]/%flac/m4a}" < /dev/null;
done

Convert all FLAC files into 192kbps MP3

ls *flac | while read f; do
   ffmpeg -i "$f" -qscale:a 2 -vn "${f[@]/%flac/mp3}" < /dev/null;
done

Same as above, but under a complex directory structure (this variant outputs ALAC)

# Create identical directory structure under a new "alac" folder
find . -type d | while read d; do
   mkdir -p "alac/$d"
done

find . -name "*flac" | sort | while read f; do
   ffmpeg -i "$f" -acodec alac -vn "alac/${f[@]/%flac/m4a}" < /dev/null;
done

Convert APE+CUE, FLAC+CUE, WAV+CUE album-on-a-file into a one file per track ALAC or MP3

If some of your friends have the horrible tendency to commit this crime and rip CDs as a single file for the entire CD, there is an automated way to fix it. APE is the most difficult case and this is what I’ll show. FLAC and WAV are shortcuts of this method.

  1. Make a lossless conversion of the APE file into something more manageable, as WAV:
    ffmpeg -i audio-cd.ape audio-cd.wav
  2. Now the magic: use the metadata in the CUE file to split the single file into separate tracks, renaming them accordingly. You’ll need the shnsplit command, available in the shntool package on Fedora (to install: yum install shntool):
    shnsplit -t "%n • %p ♫ %t" audio-cd.wav < audio-cd.cue
  3. Now you have a series of nicely named WAV files, one per CD track. Lets convert them into lossless ALAC using one of the above recipes:
    ls *wav | while read f; do
       ffmpeg -i "$f" -acodec alac -vn "${f[@]/%wav/m4a}" < /dev/null;
    done

    This will get you lossless ALAC files converted from the intermediary WAV files. You can also convert them into FLAC or MP3 using one of the other recipes above.

Now the files are ready for your tagger.

Video


Add chapters and soft subtitles from SRT file to M4V/MP4 movie

This is a lossless and fast process, chapters and subtitles are added as tags and streams to the file; audio and video streams are not reencoded.

  1. Make sure your SRT file is UTF-8 encoded:
    bash$ file subtitles_file.srt
    subtitles_file.srt: ISO-8859 text, with CRLF line terminators
    

    It is not UTF-8 encoded, it is some ISO-8859 variant, which I need to know to correctly convert it. My example uses a Brazilian Portuguese subtitle file, which I know is ISO-8859-15 (latin1) encoded because most latin scripts use this encoding.

  2. Lets convert it to UTF-8:
    bash$ iconv -f latin1 -t utf8 subtitles_file.srt > subtitles_file_utf8.srt
    bash$ file subtitles_file_utf8.srt
    subtitles_file_utf8.srt: UTF-8 Unicode text, with CRLF line terminators
    
  3. Check chapters file:
    bash$ cat chapters.txt
    CHAPTER01=00:00:00.000
    CHAPTER01NAME=Chapter 1
    CHAPTER02=00:04:31.605
    CHAPTER02NAME=Chapter 2
    CHAPTER03=00:12:52.063
    CHAPTER03NAME=Chapter 3
    …
    
  4. Now we are ready to add them all to the movie along with setting the movie name and embedding a cover image to ensure the movie looks nice on your media player list of content. Note that this process will write the movie file in place, will not create another file, so make a backup of your movie while you are learning:
    MP4Box -ipod \
           -itags 'track=The Movie Name:cover=cover.jpg' \
           -add 'subtitles_file_utf8.srt:lang=por' \
           -chap 'chapters.txt:lang=eng' \
           movie.mp4
    

The MP4Box command is part of GPac.
OpenSubtitles.org has a large collection of subtitles in many languages and you can search its database with the IMDB ID of the movie. And ChapterDB has the same for chapters files.


Decrypt and rip a DVD the lossless way

  1. Make sure you have the RPMFusion and the Negativo17 repos configured
  2. Install libdvdcss and vobcopy
    dnf -y install libdvdcss vobcopy
  3. Mount the DVD and rip it, has to be done as root
    mount /dev/sr0 /mnt/dvd;
    cd /target/folder;
    vobcopy -m /mnt/dvd .

You’ll get a directory tree with decrypted VOB and BUP files. You can generate an ISO file from them or, much more practically, use HandBrake to convert the DVD titles into MP4/M4V (more compatible with a wide range of devices) or MKV/WEBM files.
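If you do want the ISO route, genisoimage can build a video DVD image from that tree; a sketch that assumes the rip produced a proper VIDEO_TS layout (which vobcopy -m mirrors) and uses placeholder names:

# MY_DVD_TITLE is the folder vobcopy created; adjust the name and volume label as needed
genisoimage -dvd-video -V MY_DVD -o my_dvd.iso /target/folder/MY_DVD_TITLE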


Convert 240fps video into 30fps slow motion, the lossless way

Modern iPhones can record video at 240 or 120fps, so when you watch it at 30fps it looks like slow motion. But regular players will play it at 240 or 120fps, hiding the slo-mo effect.
We’ll need to handle audio and video in different ways. The video FPS change from 240 to 30 is lossless; the audio stretching is lossy.

# make sure you have the right packages installed
dnf install mkvtoolnix sox gpac faac
#!/bin/bash

# Script by Avi Alkalay
# Freely distributable

f="$1"
ofps=30
noext=${f%.*}
ext=${f##*.}

# Get original video frame rate
ifps=`ffprobe -v error -select_streams v:0 -show_entries stream=r_frame_rate -of default=noprint_wrappers=1:nokey=1 "$f" < /dev/null  | sed -e 's|/1||'`
echo

# exit if not high frame rate
[[ "$ifps" -ne 120 ]] && [[ "$ifps" -ne 240 ]] && exit

fpsRate=$((ifps/ofps))
fpsRateInv=`awk "BEGIN {print $ofps/$ifps}"`

# loss less video conversion into 30fps through repackaging into MKV
mkvmerge -d 0 -A -S -T \
	--default-duration 0:${ofps}fps \
	"$f" -o "v$noext.mkv"

# loss less repack from MKV to MP4
ffmpeg -loglevel quiet -i "v$noext.mkv" -vcodec copy "v$noext.mp4"
echo

# extract subtitles, if original movie has it
ffmpeg -loglevel quiet -i "$f" "s$noext.srt"
echo

# resync subtitles using similar method with mkvmerge
mkvmerge --sync "0:0,${fpsRate}" "s$noext.srt" -o "s$noext.mkv"

# get simple synced SRT file
rm "s$noext.srt"
ffmpeg -i "s$noext.mkv" "s$noext.srt"

# remove undesired formating from subtitles
sed -i -e 's|<font size="8"><font face="Helvetica">\(.*\)</font></font>|\1|' "s$noext.srt"

# extract audio to WAV format
ffmpeg -loglevel quiet -i "$f" "$noext.wav"

# make audio longer based on ratio of input and output framerates
sox "$noext.wav" "a$noext.wav" speed $fpsRateInv

# lossy stretched audio conversion back into AAC (M4A) 64kbps (because we know the original audio was mono 64kbps)
faac -q 200 -w -s --artist a "a$noext.wav"

# repack stretched audio and video into original file while removing the original audio and video tracks
cp "$f" "${noext}-slow.${ext}"
MP4Box -ipod -rem 1 -rem 2 -rem 3 -add "v$noext.mp4" -add "a$noext.m4a" -add "s$noext.srt" "${noext}-slow.${ext}"

# remove temporary files 
rm -f "$noext.wav" "a$noext.wav" "v$noext.mkv" "v$noext.mp4" "a$noext.m4a" "s$noext.srt" "s$noext.mkv"

1 Photo + 1 Song = 1 Movie

If the audio is already AAC-encoded, create an MP4/M4V file:

ffmpeg -loop 1 -framerate 0.2 -i photo.jpg -i song.m4a -shortest -c:v libx264 -tune stillimage -vf scale=960:-1 -c:a copy movie.m4v

The above method will create a very efficient 0.2 frames per second (-framerate 0.2) H.264 video from the photo while simply adding the audio losslessly. Such very-low-frames-per-second video may present sync problems with subtitles on some players. In this case simply remove the -framerate 0.2 parameter to get a regular 25fps video with the cost of a bigger file size.
The -vf scale=960:-1 parameter tells FFMPEG to resize the image to 960px width and calculate the proportional height. Remove it in case you want a video with the same resolution of the photo. A 12 megapixels photo file (around 4032×3024) will get you a near 4K video.
If the audio is MP3, create an MKV file:

ffmpeg -loop 1 -framerate 0.2 -i photo.jpg -i song.mp3 -shortest -c:v libx264 -tune stillimage -vf scale=960:-1 -c:a copy movie.mkv

If audio is not AAC/M4A but you still want an M4V file, convert audio to AAC 192kbps:

ffmpeg -loop 1 -framerate 0.2 -i photo.jpg -i song.mp3 -shortest -c:v libx264 -tune stillimage -vf scale=960:-1 -c:a aac -strict experimental -b:a 192k movie.mkv

See more about FFMPEG photo resizing.


Image and Photo


Move images with no EXIF header to another folder

mkdir noexif;
exiftool -filename -T -if '(not $datetimeoriginal or ($datetimeoriginal eq "0000:00:00 00:00:00"))' *jpg | xargs -i mv "{}" noexif/

Set EXIF photo create time based on file create time

Warning: use this only if image files have correct creation time on filesystem and if they don’t have an EXIF header.

exiftool -overwrite_original '-DateTimeOriginal< ${FileModifyDate}' *CR2 *JPG *jpg

Rotate photos based on EXIF’s Orientation flag, plus make them progressive. Lossless

jhead -autorot -cmd "jpegtran -progressive '&i' > '&o'" -ft *jpg

Rename photos to a more meaningful filename

This process will rename the silly, sequential, confusing and meaningless photo file names that come from your camera into a readable, sortable and useful format. Example:

IMG_1234.JPG → 2015.07.24-17.21.33 • Max playing with water【iPhone 6s✚】.jpg

Note that the new file name has the date and time it was taken, what’s in the photo, and the camera model that was used.

  1. First keep the original filename, as it came from the camera, in the OriginalFileName tag:
    exiftool -overwrite_original '-OriginalFileName<${filename}' *CR2 *JPG *jpg
  2. Now rename:
    exiftool '-filename<${DateTimeOriginal} 【${Model}】%.c.%e' -d %Y.%m.%d-%H.%M.%S *CR2 *JPG *jpg
  3. Remove the ‘0’ index if not necessary:
    \ls *JPG *jpg | while read f; do
        nf=`echo "$f" | sed -e 's/0.JPG/.jpg/i'`;
        t=`echo "$f" | sed -e 's/0.JPG/1.jpg/i'`;
        [[ ! -f "$t" ]] && mv "$f" "$nf";
    done
  4. Optional: make lower case extensions:
    \ls *JPG | while read f; do
        nf=`echo "$f" | sed -e 's/JPG/jpg/'`;
        mv "$f" "$nf";
    done
  5. Optional: simplify camera name, for example turn “Canon PowerShot G1 X” into “Canon G1X” and make lower case extension at the same time:
    ls *JPG *jpg | while read f; do
        nf=`echo "$f" | sed -e 's/Canon PowerShot G1 X/Canon G1X/;
          s/iPhone 6s Plus/iPhone 6s✚/;
          s/Canon PowerShot SD990 IS/Canon SD990 IS/;
          s/JPG/jpg/;'`;
        mv "$f" "$nf";
    done

You’ll get file names like 2015.07.24-17.21.33 【Canon 5D Mark II】.jpg. If you took more than one photo in the same second, exiftool will automatically add an index before the extension.


Even more semantic photo file names based on Subject tag

\ls *【*】* | while read f; do
	s=`exiftool -T -Subject "$f"`;
	nf=`echo "$f" | sed -e "s/ 【/ • $s 【/; s/\:/∶/g;"`;
	mv "$f" "$nf";
done
</section> <section id="image.fullrename">

Full rename: a consolidation of some of the previous commands

exiftool '-filename<${DateTimeOriginal} • ${Subject} 【${Model}】%.c.%e' -d %Y.%m.%d-%H.%M.%S *CR2 *JPG *jpg
</section> <section id="image.creator">

Set photo “Creator” tag based on camera model

  1. First list all cameras that contributed photos to current directory:
    exiftool -T -Model *jpg | sort -u

The output is the list of camera models found in these photos:

    Canon EOS REBEL T5i
    DSC-H100
    iPhone 4
    iPhone 4S
    iPhone 5
    iPhone 6
    iPhone 6s Plus
  2. Now set creator on photo files based on what you know about camera owners:
    CRE="John Doe";    exiftool -overwrite_original -creator="$CRE" -by-line="$CRE" -Artist="$CRE" -if '$Model=~/DSC-H100/'            *.jpg
    CRE="Jane Black";  exiftool -overwrite_original -creator="$CRE" -by-line="$CRE" -Artist="$CRE" -if '$Model=~/Canon EOS REBEL T5i/' *.jpg
    CRE="Mary Doe";    exiftool -overwrite_original -creator="$CRE" -by-line="$CRE" -Artist="$CRE" -if '$Model=~/iPhone 5/'            *.jpg
    CRE="Peter Black"; exiftool -overwrite_original -creator="$CRE" -by-line="$CRE" -Artist="$CRE" -if '$Model=~/iPhone 4S/'           *.jpg
    CRE="Avi Alkalay"; exiftool -overwrite_original -creator="$CRE" -by-line="$CRE" -Artist="$CRE" -if '$Model=~/iPhone 6s Plus/'      *.jpg
</section> <section id="image.faces">

Recursively search people in photos

If you geometrically mark people’s faces and their names in your photos using tools such as Picasa, you can easily search for the photos that contain “Suzan” or “Marcelo” this way:

exiftool -fast -r -T -Directory -FileName -RegionName -if '$RegionName=~/Suzan|Marcelo/' .

-Directory, -FileName and -RegionName specify the things you want to see in the output. You can remove -RegionName for a cleaner output.
The -r is to search recursively. This is pretty powerful.

</section> <section id="image.timezone">

Make photos timezone-aware

Your camera will tag your photos only with the local time, in the CreateDate or DateTimeOriginal tags. There is another set of tags, GPSDateStamp and GPSTimeStamp, that must contain the UTC time the photos were taken, but your camera won’t help you here. Fortunately, you can derive these values if you know the timezone the photos were taken in. Here are two examples, one for photos taken in timezone -02:00 (Brazil daylight saving time) and one for timezone +09:00 (Japan):

exiftool -overwrite_original '-gpsdatestamp<${CreateDate}-02:00' '-gpstimestamp<${CreateDate}-02:00' *.jpg
exiftool -overwrite_original '-gpsdatestamp<${CreateDate}+09:00' '-gpstimestamp<${CreateDate}+09:00' Japan_Photos_folder

Use exiftool to check results on a modified photo:

exiftool -s -G -time:all -gps:all 2013.10.12-23.45.36-139.jpg
[EXIF]          CreateDate                      : 2013:10:12 23:45:36
[Composite]     GPSDateTime                     : 2013:10:13 01:45:36Z
[EXIF]          GPSDateStamp                    : 2013:10:13
[EXIF]          GPSTimeStamp                    : 01:45:36

This shows that the local time when the photo was taken was 2013:10:12 23:45:36. Using exiftool to set the timezone to -02:00 really means finding the correct UTC time, which can be seen in GPSDateTime as 2013:10:13 01:45:36Z. The difference between these two tags gives us the timezone, so we can read the photo time as 2013:10:12 23:45:36-02:00.

</section> <section id="image.movesgeotag">

Geotag photos based on time and Moves mobile app records

Moves is an amazing app for your smartphone that simply records for yourself (not social and not shared) everywhere you go and all places visited, 24h a day.

  1. Make sure all photos’ CreateDate or DateTimeOriginal tags are correct and precise; you achieve this simply by setting the camera clock correctly before taking the pictures.
  2. Login and export your Moves history.
  3. Geotag the photos, telling ExifTool the timezone they were taken in, -08:00 (Las Vegas) in this example:
    exiftool -overwrite_original -api GeoMaxExtSecs=86400 -geotag ../moves_export/gpx/yearly/storyline/storyline_2015.gpx '-geotime<${CreateDate}-08:00' Folder_with_photos_from_trip_to_Las_Vegas

Some important notes:

  • It is important to put the entire ‘-geotime’ parameter inside single quotes ('), as I did in the example.
  • The ‘-geotime’ parameter is needed even if image files are timezone-aware (as per previous tutorial).
  • The ‘-api GeoMaxExtSecs=86400’ parameter should not be used unless the photo was taken more than 90 minutes away from any movement detected by the GPS.
</section> <section id="image.grid">

Concatenate all images together in one big image

  • In 1 column and 8 lines:
    montage -mode concatenate -tile 1x8 *jpg COMPOSED.JPG
  • In 8 columns and 1 line:
    montage -mode concatenate -tile 8x1 *jpg COMPOSED.JPG
  • In a 4×2 matrix:
    montage -mode concatenate -tile 4x2 *jpg COMPOSED.JPG

The montage command is part of the ImageMagick package.
</section>

HackLab Almería retrospective: 2012 to 2015 and a bit more

Posted by Ismael Olea on September 25, 2016 10:00 PM

This weekend I had the privilege of being invited by GDG Spain, and in particular by ALMO, to present the experience of the HackLab Almería's activity at the Spanish GDG Summit 2016:

Although I arrived feeling very unsure, because I am very critical of what I consider my own failures, once I learned about the ups and downs of the local GDG groups I realized that we are not doing so badly and that we have experiences that are very interesting to others.

Along the way it helped me reconsider part of the work we have done and to document our things more clearly for our own people: I think it is a good idea for all of us to give it a review.

There may be some errors and omissions. All opinions are strictly personal and not everyone has to share them. I am not as interested in debating the statements as in correcting errors or inconsistencies. Keep in mind that this is not a complete record of activities, because that would be huuuge, just a schematic retrospective.

It is written in mind-map format using Freemind 1.0.1. The format may seem cumbersome, but the constraints of time and of presenting the information did not allow me anything better. Sorry for the inconvenience. You can download the file that includes the map here: 201609-GDG-Experiencia_HackLabAl.zip

PS: this same post has been published on the HackLab Almería forum.

Meeting Adrian Mârza

Posted by Jean-Baptiste Holcroft on September 25, 2016 10:00 PM

Continuing my goal of getting to know the translator community, I had the opportunity to meet the coordinator of Fedora's Romanian translation during a trip to Bucharest.

Adrian Marza comes from Iași (pronounced roughly "yah-sh") and is particularly involved in the Mozilla community as a volunteer, for which he travels on various occasions, notably to Paris, Berlin, Ljubljana, etc. You can browse his blog, which touches on this among other topics often related to the free software world.

His blog is written in English, but my conviction is this: each of us should make the effort to write in our own language, of which we are all ambassadors, and a small article written in Romanian from time to time would benefit his language. French-speaking friends, you should do the same!

I presented him my translation project, which approaches language in a transversal way, through tools that are structured not by project but by platform, running tests and shedding light on the evolution across an entire Linux distribution: my proposal for a global approach.

In the course of many exchanges about languages and their evolution, we touched on a few tools, notably the Transvision platform, which I would like to see cover the whole of Fedora 25. Remember to use the top menu; it is a collection of tools, not a single service! He himself did not know about this menu, in a tool nevertheless built by the (French) Mozilla community. Their translation platform is also worth exploring, as it is used by both LibreOffice and Mozilla: Pootle.

Apparently, I am not the only one with this kind of global-view idea, because in addition to the tools cited in my proposal, he introduced me to a tool that consolidates translations across free software: Amagama.

The work I did to take stock of the state of AppData translation also interested him; perhaps we will launch an instance for the Romanian language one day? But is the way free software is translated really suited to this kind of challenge?

In any case, he very much liked discovering Pology, the syntax, spelling and grammar quality tool, especially when he saw that these checks are already run frequently by Robert Antoni Buj Gelonch.

I greatly appreciated this meeting, which allowed me to share our common challenges and what deeply motivates us: contributing, at our modest scale, to a better society.

Fedora Ambassadors: Measuring Success

Posted by Charles Profitt on September 25, 2016 09:00 PM

Open Source Advocate

I have been a Linux dabbler since 1994, when I first tried Suse Linux. I became a full-time Linux user when I converted my laptop to Linux in October of 2006. Like many Linux users I sampled many different distributions while choosing the one that best fit my personality. Eventually I settled on Ubuntu with the release of Ubuntu 7.10 (Gutsy Gibbon). Despite choosing Ubuntu, I always saw myself as a Linux and open source advocate first and an Ubuntu advocate second. I respected and valued that Linux and open source allowed people the freedom to make personal choices.

I helped organize the Ubuntu New York Local Team in their drive to become an approved team starting in November of 2008. In January of 2009 my application to become an Ubuntu Member was approved. Between November of 2008 and October of 2012 I helped organize and attended 93 Ubuntu, Linux or FOSS events. This included the first FOSSCON that was held at RIT in June of 2010.

In addition to local events I was involved in the global Ubuntu community as a member of the Ubuntu Beginners Team, Ubuntu LoCo Council and Ubuntu Community Council. I was also fortunate to be sponsored to attend three Ubuntu Developer Summits (UDS). It was during my time serving on the Ubuntu Community Council that I yearned to have more time to get back to what I felt my core mission was: advocacy. I knew that when my term on the Ubuntu Community Council ended in November of 2015 I could refocus my efforts.

Fedora Ambassador

I became a Fedora Ambassador on March 30th of 2009, but prior to December of 2015 I was more focused on Ubuntu-related activities than Fedora. In late October of 2015 I reached out to a long-time friend and FOSS Rock Star, Remy DeCausmaker. Remy helped me find a few places I could contribute in the Fedora Project. Through these efforts I met Justin Flory, who has an amazing passion for open source and Fedora. Almost a year later, I am very active as a contributor to the Fedora Project as an author and Ambassador. I have published 23 articles on Fedora Magazine, including 19 How Do You Fedora interviews. Thanks to Justin inviting me along, I also attended BrickHack and HackMIT as a Fedora Ambassador. HackMIT involved two six-hour drives, which allowed for a great amount of time to discuss and reflect on being a Fedora Ambassador. One of the topics in the discussion was how to measure the success of an event.

Measuring Success

Over my many years of being an open source advocate I have learned that measuring success can take many different forms. When organizing events for the New York LoCo we measured success by how many people attended the event. When I went to technical conferences, success was measured by the number of CDs distributed. As a speaker I measured success by the number of people who attended the presentation. With Fedora Magazine I look at the number of views and comments for each article.

On the long ride home from HackMIT 2016, Justin and I discussed how to measure the success of our efforts. The Fedora Project has a badge for attending HackMIT 2016, and ten people have earned the badge. When you remove Justin and me, that means 8 out of 1000 participants earned the Fedora HackMIT 2016 badge. What does this mean? I took a closer look at the badge and learned that six of the eight registered their FAS account during the event. Two already had FAS accounts. The numbers lead to several questions:

  • Will the six people who created an account to earn the badge become Fedora Contributors?
  • Will any of the people who did not earn the badge contribute to Fedora?
  • Is the badge a good measure of a successful outreach event?

The first two are good questions. It is difficult to track the first question and impossible to track the second one. The third question is the one that concerns me the most. I think badges are a good way to measure an inreach event, but a poor measure of an outreach effort. I would like to see a better way to measure the success of an event.

Fedora Ambassadors: Mission Statement

The mission of a Fedora Ambassador is clearly stated on the wiki page.

"Ambassadors are the representatives of Fedora. Ambassadors ensure the public understand Fedora's principles and the work that Fedora is doing. Additionally Ambassadors are responsible for helping to grow the contributor base, and to act as a liaison between other FLOSS projects and the Fedora community."

The Fedora Badge granted to attendees does not measure any of these items. I know that I personally handed out 200 fliers about the badge. In doing so I spoke to roughly 80% of the participants and had several good conversations about the Four Foundations. I showed excitement when people were using FOSS in their projects. I answered questions about the best lightweight web server. I answered questions about why I chose Fedora. I expressed excitement when I found an entire team using Ubuntu Linux. All of those interactions embody the spirit of the mission. On the long drive home I posed a few questions as we discussed HackMIT:

  • Was the overall awareness of Fedora increased?
  • Was the overall awareness of Linux increased?
  • Was the overall awareness of FOSS increased?
  • Are the participants more likely to check Fedora out in the future?
  • Are the participants more likely to open source their work?

To answer these questions would require a survey. The survey would have to be relatively short, and not require a FAS account or require the person to identify themselves. This would make it more likely that participants complete the survey. Beyond evaluating a single event, the results for event categories could be combined and compared: take all the answers for hackathon events and compare them to all the answers for maker faire events. With this data it might be possible to know what types of events provide the best opportunity for Ambassadors to make an impact. This would help the Fedora community determine how to best spend limited funds and volunteer hours.

6 months a task warrior

Posted by Kevin Fenzi on September 25, 2016 06:53 PM

A while back I added a task to my taskwarrior database: evaluate how things were going at 6 months of use. Today is that day. 🙂

A quick recap: about 6 months ago I switched from my ad-hoc list in a vim session and emails in a mailbox to using task warrior for tracking my various tasks ( http://taskwarrior.org/ )

Some stats:

Category Data
Pending 17
Waiting 20
Recurring 12
Completed 1094
Deleted 18
Total 1161
Annotations 1059
Unique tags 45
Projects 10
Blocked tasks 0
Blocking tasks 0
Data size 2.0 MiB
Undo transactions 3713
Sync backlog transactions 0
Tasks tagged 39.8%
Oldest task 2016-03-24-13:45
Newest task 2016-09-25-10:03
Task used for 6mo
Task added every 3h
Task completed every 4h
Task deleted every 10d
Average time pending 4d
Average desc length 32 characters

Overall I have gotten a pretty good amount of use from task. I do find it a bit sad that I have only been completing a task every 4 hours while adding one every 3 hours; at that rate things aren’t going to be great after a while. I have been using annotations a lot, which I think is a good thing. Looking back at old tasks I can get a better idea of what I did to solve a task, or more context around it (I always try to add links to Bugzilla or Trac or Pagure if there’s a ticket or bug involved).
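
For anyone curious what that workflow looks like, a minimal session goes roughly like this (the project name, description, task id and ticket URL are made up for illustration):

task add project:infra "Fix failing rawhide compose"    # add a new pending task
task 42 annotate https://pagure.io/releng/issue/1234    # attach the related ticket as an annotation
task 42 done                                            # mark it completed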

I’d say I am happier for using task and will continue using it. It’s very nice to be able to see what all is pending and easily add things when people ask you for things and you are otherwise busy. I’d recommend it to anyone looking for a nice way to track tasks.

Clickable Pungi logs

Posted by Lubomír Sedlář on September 25, 2016 05:42 PM

When debugging problems with composes, the logs left behind by all stages of the compose run are tremendously helpful. However, they are rather difficult to read due to the sheer volume. Being exposed to them quite intensively for close to a year helps, but it still is a nasty chore.

The most accessible way to look at the logs is via a web browser on kojipkgs. It's just httpd displaying the raw log files on the disk.

It took me too long to figure out that this could be made much more pleasant than copy-pasting stuff from the wall of text.

How about a user script that would run in Greasemonkey and allow clicking through to different log files or even Koji tasks?

[Screenshot: Is this not better?]

Turns out it's not that difficult.

Did you know that when Firefox displays a text/plain file, it internally creates an HTML document with all the content in one <pre> tag?

The whole script essentially just runs a search and replace operation on the whole page. We can have a bunch of functions that take the whole content as text and return it slightly modified.

The first step makes URLs clickable.

function link_urls(str) {
  let pat = /https?:\/\/(www\.)?[-a-zA-Z0-9@:%._\+~#=]{2,256}\.[a-z]{2,6}\b([-a-zA-Z0-9@:%_\+.~#?&//=]*)/g;
  return str.replace(pat, '<a href="$&">$&</a>');
}

I didn't write the crazy regular expression myself; I got it from Stack Overflow.

The next step makes paths to other files in the same compose clickable.

function link_local_files(url, pathname, mount, str) {
  let pat = new RegExp(mount + pathname + '(/[^ ,"\n]+)', 'g');
  return str.replace(pat, function (path, file) {
    return '<a href="' + url + file + '">' + path + '</a>';
  });
}

The last thing left is not particularly general: linking Koji task identifiers.

function link_tasks(taskinfo, str) {
  // link bare task IDs (8 or more digits), and IDs following known markers,
  // to their Koji taskinfo pages
  return str.replace(/\d{8,}/m, '<a href="' + taskinfo + '$&">$&</a>')
            .replace(/(Runroot task failed|'task_id'): (\d{8,})/g,
                     '$1: <a href="' + taskinfo + '$2">$2</a>');
}

Tying all these steps together and passing in the extra arguments is rather trivial but not very generic.

window.onload = function () {
  let origin = window.location.origin;
  let pathname = window.location.pathname.split('/', 4).join('/');
  let url = origin + pathname;
  let taskinfo = 'https://koji.fedoraproject.org/koji/taskinfo?taskID=';
  let mount = '/mnt/koji';

  var content = document.getElementsByTagName('pre')[0];
  var text = content.innerHTML;
  content.innerHTML = link_local_files(
    url, pathname, mount,
    link_tasks(taskinfo, link_urls(text))
  );
}

If you find this useful, feel free to grab the whole script with a header.

Deploying Kubernetes using Ansible

Posted by Gerard Braad on September 25, 2016 04:00 PM

Recently I did a lot of tests with Atomic, such as creating custom images for Ceph, and ways to provide an immutable infrastructure. However, Atomic is meant to be a host platform for a container platform using Kubernetes. Their Getting Started guide describes how to set up a basic environment to host containerized applications. This is a manual approach, though, and with the help of Vincent* I created a way to deploy this Kubernetes environment on Atomic and on a standard CentOS installation using Ansible. In this article I will describe the components and provide instructions on how to deploy this basic environment yourself.

Requirements

To deploy the Kubernetes environment you will need either Atomic Host or a standard CentOS installation. If you are using this for testing purposes, a cloud image on an OpenStack provider will do. The described Ansible scripts work properly on both Atomic Host and CentOS cloud images. Although it has not been tested on Fedora, it should be possible to make this work with minimal changes. If you do, please contribute these changes back.

You will need at least 2 (virtual) machines. One will be configured as the Kubernetes master and the remaining node(s) can be configured as minions or deployment nodes. I have used at least 4 nodes: a general controller node to perform the deployment from (also configured to install the Kubernetes client), a master node and at least two deployment nodes. Take note that this deployment does not handle docker-storage-setup or High Availability.

Setup for deployment

Almost all the deployments I perform are initiated from a short-lived controller node. This is a machine that allows incoming and outgoing traffic, and mostly gets configured with an external Floating IP. This host can be seen as a jumphost. I will configure it with a dedicated set of SSH keys for communication with the other machines. You do not have to do this, and if you have limited resources, this host can be the same as the Kubernetes master node.
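
Setting up such a dedicated key is nothing more than the usual two commands (the key path and user are illustrative; adjust them to your environment and repeat the copy for every node):

ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa_k8s -C "k8s-deploy"    # dedicated deployment key
ssh-copy-id -i ~/.ssh/id_rsa_k8s.pub centos@10.5.0.11                   # repeat for each target node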

On this machine do the following:

$ yum install -y ansible git
$ git clone https://gitlab.com/gbraad/ansible-playbook-kubernetes.git
$ cd ansible-playbook-kubernetes
$ ansible-galaxy install -r roles.txt

This will install the Ansible playbook and the required roles:

gbraad.docker
gbraad.docker-registry
gbraad.kubernetes-master
gbraad.kubernetes-node
gbraad.kubernetes-client

Each of the roles takes care of the installation and/or configuration of the environment. Do take note that the Docker and Kubernetes roles do not install packages on Atomic hosts, as these already come with the needed software.

Configure the deployment

If you look at the file deploy-kubernetes.yml you will see three plays, each targeting a different group of hosts. As mentioned before, they will each take care of the installation when needed.

- name: Deploy kubernetes Master
  hosts: k8s-master
  remote_user: centos
  become: true
  roles:
  - gbraad.docker
  - gbraad.kubernetes-master

- name: Deploy kubernetes Nodes
  hosts: k8s-nodes
  remote_user: centos
  become: true
  roles:
  - gbraad.docker
  - gbraad.kubernetes-node

- name: Install kubernetes client
  hosts: k8s-client
  remote_user: centos
  become: true
  roles:
  - gbraad.kubernetes-client

Take note of the remote_user setting. On an Atomic host this is a passwordless sudo user, which can also log in over SSH using key-based authentication. If you use CentOS, please configure such a user and allow passwordless sudo by adding an entry with echo "username ALL=(root) NOPASSWD:ALL" > /etc/sudoers.d/username.

Change the inventory

Ansible targets a playbook against a single node or a group of nodes that you specify in an inventory file. This file is named hosts in this playbook repository. When you open it you will see the same set of names as specified above in the hosts entries of the deploy-kubernetes.yml playbook. For our purposes you will always have to deploy a master and a node. If you do not specify the master node, the installation will fail, as some of the deployment variables are used for the configuration of the nodes.

$ vi hosts
[k8s-master]
atomic-01 ansible_ssh_host=10.5.0.11

[k8s-nodes]
atomic-02 ansible_ssh_host=10.5.0.14
atomic-03 ansible_ssh_host=10.5.0.13
atomic-04 ansible_ssh_host=10.5.0.12

Group variables

At the moment the roles are not very configurable, as they mainly target a simple test environment. Inside the folder group_vars you will find the configuration for the Kubernetes nodes. These are as follows:

skydns_enable: true
dns_server: 10.254.0.10
dns_domain: kubernetes.local

Perform the deployment

After changing the variables in the hosts inventory file and the group variables, you are actually all set to perform the deployment. We will start with the following.

$ ansible-playbook -i hosts deploy-docker-registry.yml

This first step will install Docker on the master node and pull the Docker Registry container. This is needed to provide a local cache of container images that you have pulled.

After this we can install the Kubernetes environment with:

$ ansible-playbook -i hosts deploy-kubernetes.yml

Below I will describe what each part of the playbook does and some information about the functionality.

Playbook and role description

Below I will give a short description of what each part of the playbook and role does.

Role gbraad.docker

Each node in the playbook will be targeted with the gbraad.docker role. This role determines whether the node is an Atomic Host or not. This check is performed in the tasks/main.yml file. If the node is not an Atomic Host, it will include install.yml to perform additional installation tasks. At the moment this is a simple package installation for docker. After this step, the role sets the docker service to started and enabled.

Source

Role: gbraad.kubernetes-master

As part of the playbook, first we will configure the master node. For this, the role gbraad.kubernetes-master is used. Just like in the previous role, the file tasks/main.yml performs a simple check to determine whether an Atomic Host is used or not. If not, some packages will be installed:

  • kubernetes-master
  • flannel
  • etcd

Source

Configure Kubernetes

After this step Kubernetes will be configured on this host. Tasks are described in the file tasks/configure_k8s.yml, source.

Role

Configure Kubernetes: etcd

This role will create a single etcd server on the master. For simplicity, all IP addresses will be allowed by using 0.0.0.0 as the listen address. The port used for etcd clients is 2379, but since 4001 is also widely used we configure that one as well; a sketch of the resulting configuration follows the task list below.

Tasks:

  • Configure etcd LISTEN CLIENTS
  • Configure etcd ADVERTISE CLIENTS
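
The role itself is not reproduced here, but on a CentOS master the resulting /etc/etcd/etcd.conf ends up looking roughly like this (a sketch with illustrative values, using the master address from the inventory above):

# listen on all interfaces, on the standard and the legacy client port
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
# advertise the master's own address to clients
ETCD_ADVERTISE_CLIENT_URLS="http://10.5.0.11:2379,http://10.5.0.11:4001"
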
Configure Kubernetes: common services

After this, the role sets entries in the file /etc/kubernetes/config. This is a general configuration file used by all the services. It sets the local IP address, identified by Ansible as default_ipv4_address, as the etcd server and the Kubernetes master node; a sketch of the resulting file follows the task list below.

Tasks:

  • Configure k8s common services
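
A sketch of what /etc/kubernetes/config ends up containing (illustrative values; depending on the packaging, the etcd setting may live in /etc/kubernetes/apiserver instead):

KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
# every service talks to the master; the role fills this in from default_ipv4_address
KUBE_MASTER="--master=http://10.5.0.11:8080"
# location of the etcd server configured above
KUBE_ETCD_SERVERS="--etcd-servers=http://10.5.0.11:2379"
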
Configure Kubernetes: apiserver

For the apiserver the file /etc/kubernetes/apiserver is used. We will configure it to listen on all IP addresses. An admission control plug-in is a piece of code that intercepts requests to the Kubernetes API server prior to persistence of the object. For this role we removed ServiceAccount, which is normally used to limit the creation requests of Pods based on the service account.
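
The corresponding /etc/kubernetes/apiserver would look roughly like this (a sketch; the admission control list is the usual default minus ServiceAccount, and the service network range is illustrative):

KUBE_API_ADDRESS="--address=0.0.0.0"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"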

Restart services

After this the services of the master node are started and enabled (a rough manual equivalent is sketched after the list). These are:

  • etcd
  • kube-apiserver
  • kube-controller-manager
  • kube-scheduler
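
A rough manual equivalent of what the role does for these services:

for svc in etcd kube-apiserver kube-controller-manager kube-scheduler; do
  sudo systemctl enable "$svc"     # start at boot
  sudo systemctl restart "$svc"    # (re)start now
done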

Configure flannel

For this environment we will use flannel to provide an overlay network. An overlay network exists on top of another network to provide a virtual path between the nodes that use it. The steps for this are specified in tasks/configure_flannel.yml source.

The configuration of flannel is controlled by etcd. We will copy a file to the master node containing:

{
  "Network": "172.16.0.0/12",
  "SubnetLen": 24,
  "Backend": {
    "Type": "vxlan"
  }
}

This file is located in the role as files/flanneld-conf.json. Using curl we post this file to etcd on the master.
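
Done by hand, that curl call would look something like this (the etcd key is an assumption: upstream flannel defaults to /coreos.com/network/config, while Atomic-based setups often use /atomic.io/network/config; it has to match what the nodes are configured to read):

curl -L http://10.5.0.11:2379/v2/keys/atomic.io/network/config \
     -XPUT --data-urlencode value@flanneld-conf.json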

Tasks:

  • Copy json with network config
  • Configure subnet

After these steps the Kubernetes master is configured.

Role: gbraad.kubernetes-node

For the configuration of a Kubernetes node we use the role gbraad.kubernetes-node. Just like in the previous roles, we determine in tasks/main.yml whether to install packages or not. For all the nodes we will configure:

  • SELinux
  • flannel
  • kubernetes specific settings

Source, Role

Configure SELinux

Because I use persistent storage over NFS with Docker, I configure SELinux to set the boolean virt_use_nfs.
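
Done manually, that is a single command:

sudo setsebool -P virt_use_nfs 1    # persistently allow containers to use NFS-backed storage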

Configure flannel

In the tasks:

  • Overlay | configure etcd url
  • Overlay | configure etcd config key
  • Flannel systemd | create service.d
  • Flannel systemd | deploy unit file

We configure all the nodes to use the etcd instance on k8s-master as the endpoint for flannel configuration. These tasks configure the networking to use the flanneld-provided bridge IP and MTU settings. With the last two tasks we configure systemd to start the service.
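
On a CentOS node the resulting /etc/sysconfig/flanneld ends up looking roughly like this (a sketch; variable names differ between flannel versions, with newer packages using FLANNEL_ETCD_ENDPOINTS and FLANNEL_ETCD_PREFIX instead):

# etcd instance running on the k8s-master node
FLANNEL_ETCD="http://10.5.0.11:2379"
# key under which the network configuration was stored
FLANNEL_ETCD_KEY="/atomic.io/network"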

After this change we will restart the flanneld service.

Configure Kubernetes

In the file tasks/configure_k8s.yml we do final configuration of the Kubernetes node to point it to the master node.

In the tasks:

  • k8s client configuration
  • k8s client configuration | KUBELET_HOSTNAME
  • k8s client configuration | KUBELET_ARGS

we configure how the node is identified.

In the tasks:

  • k8s client configuration | KUBELET_API_SERVER
  • Configure k8s master on client | KUBE_MASTER

we set the location of the API server and the master node. Both of these will point to the IP address of the k8s-master.
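
For a node such as atomic-02 from the inventory, the relevant settings end up looking roughly like this (a sketch with illustrative values):

# /etc/kubernetes/kubelet
KUBELET_HOSTNAME="--hostname-override=10.5.0.14"
KUBELET_API_SERVER="--api-servers=http://10.5.0.11:8080"
KUBELET_ARGS=""

# /etc/kubernetes/config on the node
KUBE_MASTER="--master=http://10.5.0.11:8080"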

After this change we will restart the kubelet and kube-proxy service.

After deployment

If all these tasks succeeded, you should have a working Kubernetes deployment. In the next steps we will perform some simple commands to verify if the environment works.

Deploy client and verify environment

As you have noticed from the hosts inventory file, there is a possibility to specify a client host:

[k8s-client]
controller ansible_ssh_host=10.5.0.3

You do not have to deploy a kubernetes client using this playbook. It will install a single package, but you could also just use the statically compiled Go binary that is provided from the Kubernetes releases:

$ wget http://storage.googleapis.com/kubernetes-release/release/v1.3.4/bin/linux/amd64/kubectl
$ chmod +x kubectl

Verify nodes

Once all the nodes have been deployed using the playbook, you can verify communication. From a client node you can run the following command:

$ kubectl --server=10.5.0.11:8080 get node
NAME             LABELS    STATUS
10.5.0.12        <none>    Ready
10.5.0.13        <none>    Ready
10.5.0.14        <none>    Ready

Schedule workload

If all looks good, you can schedule a workload on the environment. The following commands will create a simple nginx instance.

$ vi kube-nginx.yml
apiVersion: v1
kind: Pod
metadata:
  name: www
spec:
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
          hostPort: 8080

This file describes a Pod with a container called nginx using the 'nginx' image. To schedule it, do:

$ kubectl --server=10.5.0.11:8080 create -f kube-nginx.yml
pods/www

Using the following command you can check the status:

$ kubectl --server=10.5.0.11:8080 get pods
POD       IP            CONTAINER(S)   IMAGE(S)   HOST                            LABELS    STATUS    CREATED      MESSAGE
www       172.16.59.2                             10.5.0.12/10.5.0.12             <none>    Running   52 seconds
                        nginx          nginx                                                Running   24 seconds

If you now open a browser pointing to the node on which Kubernetes scheduled the Pod (http://10.5.0.12:8080/), you will see the nginx welcome page.
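
The same check can be done from a shell on any machine that can reach the node:

curl -I http://10.5.0.12:8080/    # should return an HTTP 200 response served by nginx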

Conclusion

Setting up Kubernetes can be a daunting task, as there are many different components involved. Kelsey Hightower has a very good guide, called Kubernetes The Hard Way, that will teach you how to deploy Kubernetes manually on different cloud providers, such as AWS and Google Compute Engine. After gaining some experience, it is advisable to look at automation to deploy an environment, such as the Ansible scripts that can be found in the contrib repository of Kubernetes. This allows you to stand up an environment with High Availability. The scripts described in this article should only serve as an introduction to gain an understanding of what is needed for a Kubernetes environment.

If you want a production-ready environment, please have a look at OpenShift. OpenShift deals with setting up a cluster of Kubernetes nodes, scheduling workloads and, most importantly, how to set up images for deployment. This is done using what is called 'source-to-image'. This and OpenShift itself will be the topic of future articles.

More information

A GNU Start

Posted by Jeremy Cline on September 25, 2016 02:30 PM

I am thrilled to say that last week I became the newest member of the Fedora Engineering team. I will be working on the applications that help the Fedora community create a fantastic Linux distribution. I’m excited to be joining the team and I look forward to working with everyone!

Previously, I worked on the Pulp project, which is a content management system used in Red Hat Satellite 6. I learned a great deal while working with some excellent engineers on this project.

Bodhi 2.2.2 released

Posted by Bodhi on September 24, 2016 05:35 PM

This is another in a series of bug fix releases for Bodhi this week. In this release, we've fixed
the following issues:

  • Disallow comment text to be set to the NULL value in the database #949.
  • Fix autopush on updates that predate the 2.2.0 release #950.
  • Don't wait on mashes when there aren't any 68de510c.

Fedora Join meetings to begin this week

Posted by Ankur Sinha "FranciscoD" on September 24, 2016 10:06 AM

We've had the Fedora Join SIG around for a bit now, but we haven't been very active. Recently we've seen an increase in community members willing to participate in the SIG, and in combination with the work that CommOps is doing to improve the "joining experience" for newbies, we thought that it's a good time to gain some traction.

What is the purpose of the SIG?

(I'm quoting this off the SIG wiki page)

The Fedora Join SIG aims to set up and maintain channels that let prospective contributors engage with the community. The idea here is to enable people looking to join the Fedora community to converse with existing members, make friends, find mentors, and get a feeling of what and how the community does in general, with a view to reduce the learning gradient that joining a new community entails - and make it more enjoyable!

Different teams already have different, mostly well documented, join SOPs (standard operating procedures). The infrastructure team is a great example of this. However, we often meet people who are unsure of how their skills fit into the community. We want to provide these people a channel where they can speak to existing members of the community, learn about what they do and use this information to find the right team to get started with. We help new members form relationships with members; we point them to the right resources - wiki or otherwise; and we expect that this should greatly improve the joining experience.

So, our goals are to:

  • set up a communication channel between the existing contributors and prospective contributors. Speaking to current team members is always encouraging. We could even set up a system to send "easyfix" tasks to this mailing list giving folks a chance to work on them and learn in the process.
  • guide/aid prospective contributors to turn into solid contributors. Rather than just pointing them to join.fp.o, talk to them, see what issues they face, help them decide where they want to get started.
  • to form better mentor-mentee relationships. Here, I mean "mentor" in the real sense of the word.
  • give prospective contributors a communication channel to converse amongst themselves. This is very important. Take the Google Summer of Code mailing list for instance. It is set up specifically so that the candidates can talk to each other. Since they're all in the same boat, they feel more comfortable discussing certain issues amongst themselves. They'll also be aware of what different people are up to which will give them a better idea of what they can do. It would be great if they could discuss and share the cool stuff they've begun to do. It would surely be encouraging.

Basically, look for potential, not polish. We can help them gain the polish that established contributors have.

How do we plan to help?

To begin with, we've got the infrastructure in place. We've set up a mailing list and an IRC channel on Freenode (#fedora-join). We have a group and home repository for tickets and such on Pagure, and a FAS group too. (We used to be on trac, but we closed that down in favour of Pagure.)

CommOps is currently working on improving the join process and we've started to help them with that. There's a discussion thread on the CommOps mailing list about our web space and how it can be improved for newbies, for example. (I keep saying this to everyone I meet - I dislike how http://join.fedoraproject.org redirects to an ugly data dump on a wiki page that most of us will find tedious and overwhelming to go over without any prior knowledge of Fedora!)

How can you help?

Ah, now we're talking. ;)

We need more contributors that enjoy helping out newbies: not to spoon-feed them, but rather to teach them the Open Source way. It's a philosophy that one learns over time, but the learning process can be greatly accelerated by hanging out and speaking to others that already follow it.

We need people that understand the Free software philosophy to pass it on and educate new members: taking up tasks and closing tickets is all good, but I'd rather newbies first understood what Free software was all about, and how we, the Fedora community do our part. There are thousands of Linux distributions in existence, why should one contribute to Fedora and not another?

So, we're not discussing tooling at the moment. We already have quite a few tools, and we'll improve on what we have to begin with. What we need is more people engaging with newbies whether by just hanging out in the right places, or by helping us come up with fun ways to get more people involved.

Come to the meeting!

The IRC meeting is on Monday in #fedora-meeting-3 - please check the Fedocal entry for your local timezone. We're going to begin with introductions and then go over tickets that we've had open for a bit. Once we get that out of the way, we'll begin planning for the future, and just.. talk.

See you there!

Fedora 25: Webkitgtk4 update knocks out Evolution, Epiphany and others

Posted by Fedora-Blog.de on September 24, 2016 08:26 AM

As part of the update to GNOME 3.22 for Fedora 25, the webkitgtk4 packages are also being updated to version 2.14.0-1, which causes Evolution to no longer display mails and Epiphany to no longer render web pages. Potentially, however, all applications that use webkitgtk4 are affected.

Anyone who has already installed the update and is affected by the problem can, as a workaround, downgrade the webkitgtk4 packages with

su -c'dnf downgrade webkitgtk4\*'

However, with future updates care must be taken that webkitgtk4 is not updated to the broken version again. With dnf this can be achieved using the additional parameter "-x", which tells dnf to ignore updates for the given package. In the current case, the update command for dnf would look like this:

su -c'dnf update -x webkitgtk4\*'

We’re looking for a GNOME developer

Posted by Jiri Eischmann on September 23, 2016 11:57 AM

We in the Red Hat desktop team are looking for a junior software developer who will work on GNOME. Particularly in printing and document viewing areas of the project.

The location of the position is Brno, Czech Republic, where you'd join a truly international team of desktop developers. It's a junior position, so candidates fresh out of university, or even still studying, are welcome. We require solid English communication skills and experience with C (and ideally C++, too). A huge plus is experience with GNOME development and participation in the community.

Interested? You can directly apply for the position at jobs.redhat.com or if you have any question, you can write me: eischmann [] redhat [] com.


Blender nightly in Flatpak

Posted by Mathieu Bridon (bochecha) on September 23, 2016 07:02 AM

Over the past week I started on an experiment: building the git master branch of Blender in Flatpak.

And I decided to go crazy on it, and also build all its dependencies from their respective git master branch.

I've just pushed the result to my flatpak repo, and it seems to work in my limited testing.

As a result, you can now try out the bleeding edge of Blender development safely with Flatpak, and here's how.

First, install the Freedesktop Flatpak runtime:

$ wget https://sdk.gnome.org/keys/gnome-sdk.gpg
$ flatpak remote-add --user --gpg-import=./gnome-sdk.gpg gnome https://sdk.gnome.org/repo/
$ flatpak install --user gnome org.freedesktop.Platform 1.4

Next, install the Blender app from the master branch of my repo:

$ flatpak remote-add --user --no-gpg-verify bochecha https://www.daitauha.fr/static/flatpak/repo-apps/
$ flatpak install --user bochecha org.blender.app master

That's it!

I want to be clear that I will not build this every day (or every night) as a real "nightly". I just don't have the computing resources to do that, and every build is a big hit on my laptop. (Did I mention this includes building Boost from git master? 😅)

However I'll try to rebuild it from time to time, to pick up updates.

Also, I want to note that this is an experiment in pushing the bleeding edge for Blender to the maximum with Flatpak. If upstream Blender eventually provided nightly builds as Flatpak (for which I'd be happy to help them), they probably would compromise on which dependencies to build from stable releases, and which ones to build from their git master branches.

For example, they probably wouldn't use Python from master like I do. Right now that means this build uses the future 3.7 release of Python, even though 3.6 hasn't been released yet. ☻

Another bad idea in this build is Boost from master, which takes ages just to fetch its myriad of git submodules, let alone build it.

But for an experiment in craziness, it works surprisingly well.

Try it out, and let me know how it goes!