Fedora People

Apply the STIG to even more operating systems with ansible-hardening

Posted by Major Hayden on July 21, 2017 05:38 PM

Tons of improvements made their way into the ansible-hardening role in preparation for the OpenStack Pike release next month. The role has a new name, new documentation, and extra tests.

The role uses the Security Technical Implementation Guide (STIG) produced by the Defense Information Systems Agency (DISA) and applies the guidelines to Linux hosts using Ansible. Every control is configurable via simple Ansible variables and each control is thoroughly documented.

These controls are now applied to an even wider variety of Linux distributions:

  • CentOS 7
  • Debian 8 Jessie (new for Pike)
  • Fedora 25 (new for Pike)
  • openSUSE Leap 42.2+ (new for Pike)
  • Red Hat Enterprise Linux 7
  • SUSE Linux Enterprise 12 (new for Pike)
  • Ubuntu 14.04 Trusty
  • Ubuntu 16.04 Xenial

Any patches to the ansible-hardening role are tested against all of these operating systems (except RHEL 7 and SUSE Linux Enterprise). Support for openSUSE testing landed this week.

Work is underway to put the finishing touches on the master branch before the Pike release and we need your help!

If you have any of these operating systems deployed, please test the role on your systems! This is pre-release software, so it’s best to apply it only to a fresh test server. Read the “Getting Started” documentation to deploy the role with ansible-galaxy or git.
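If you want a starting point, a minimal playbook might look like the following. This is an illustrative sketch only: the role path and name depend on how you installed it, and any STIG-related variables should be taken from the role’s own documentation.

```yaml
# Illustrative playbook: apply ansible-hardening to a test host.
# Install the role first with ansible-galaxy or a git clone
# (see the role's "Getting Started" documentation for the exact source).
- name: Apply STIG hardening
  hosts: test_servers
  become: yes
  roles:
    - ansible-hardening
```

Every control can then be tuned through Ansible variables, as described in the role documentation.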

Photo credit: Wikipedia

The post Apply the STIG to even more operating systems with ansible-hardening appeared first on major.io.

SECURITY FOR THE SECURITY GODS! SANDBOXING FOR THE SANDBOXING THRONE

Posted by Bastien Nocera on July 21, 2017 04:53 PM
@GodTributes took over my title, soz.

Dude, where's my maintainer?

Last year, probably as a distraction from doing anything else, or maybe because I was asked, I started reviewing bugs filed as a result of automated flaw discovery tools (from Coverity to UBSan via fuzzers) being run on gdk-pixbuf.

Apart from the security implications of a good number of those problems, there was also the annoyance of having a busted image file bring down your file manager, your desktop, or even an app that opened a file chooser, either because the image file was broken or because the image loader for that format didn't check the sanity of its memory allocations.

(I could have added links to Bugzilla entries for each one of the problems above, but that would just make it harder to read)

Two big things happened in gdk-pixbuf 2.36.1, which was used in GNOME 3.24:

  • the removal of GdkPixdata as a stand-alone image format loader. We really don't want to load GdkPixdata files from sources other than generated sources or embedded data structures, and removing that loader closed off those avenues. We still ended up fixing a fair number of naive assumptions in helper functions though.
  • the addition of a thumbnailer for gdk-pixbuf supported images. Images would not be special-cased any more in gnome-desktop's thumbnailing code, making the file manager, the file chooser and anything else navigating directories full of broken and huge images more reliable.
But that's just the start. gdk-pixbuf continues getting bug fixes, and we carry on checking for overflows, underflows and just flows, breaks and beats in general.

Programmatic Thumbellina portrait-maker

Picture, if you will, a website making you download garbage files from the Internet: the ROM dump of a NES cartridge that wasn't properly blown on, and digital comic books that you definitely, definitely paid for.

That's a nice summary of the security bugs foisted upon GNOME in the past year or so, even if, thankfully, we were ahead of the curve in fixing those issues (the GStreamer NSF decoder bug was removed in 2013; the comics backend in Evince was rewritten over a period of 2 years and committed in March 2017).

Still, 2 pieces of code were running on pretty much every file downloaded, on purpose or not, from the Internet: Tracker's indexers and the file manager's thumbnailers.

Tracker started protecting itself not long after the NSF vulnerability, even if recent versions of GStreamer weren't vulnerable, as we mentioned.

That left the thumbnailers. Some of those are first party, like the gdk-pixbuf one and those offered by core applications (Evince, Videos), written by GNOME developers (yours truly for both epub/mobi and Nintendo DS).

They're all good quality code I'd vouch for (having written or maintained quite a few of them), but they can rely on third-party libraries (say GStreamer, poppler, or libarchive), have naive or insufficiently defensive code (gdk-pixbuf loaders, GStreamer plugins) or, worst of all: THIRD-PARTY EXTENSIONS.

There are external plugins and extensions for image formats in gdk-pixbuf, for video and audio formats in GStreamer, and for thumbnailers pretty much anywhere. We can't control those, but the least we can do when they explode in a wet mess is make sure that the toilet door is closed.

Not even Nicolas Cage can handle this Alcatraz

For GNOME 3.26 (and today in git master), the thumbnailer stall will be doubly bolted by a Bubblewrap sandbox and a seccomp blacklist.

This closes a whole vector of attack for the GNOME Desktop, but doesn't mean we're completely out of the woods. We'll need to carry on maintaining and fixing security bugs in those libraries and tools we depend on, as GStreamer plugin bugs still affect Videos, gdk-pixbuf bugs still affect Photos and Eye Of Gnome, etc.

And there are limits to what those 2 changes can achieve. The sandboxing and syscall blacklisting prevent those thumbnailers from writing anything but an image file in PNG format in a specific directory. There's no network, and the filename of the original file is hidden and sanitised, but the thumbnailer could still create a crafted PNG file, and the sandbox doesn't work inside a sandbox! So no protection if the application running the thumbnailer is inside Flatpak.

In fine

GNOME 3.26 will have better security for thumbnailers, so you won't "need to delete GNOME Files".

But you'll probably want to be careful with desktops that forked our thumbnailing code, namely Cinnamon and MATE, which don't implement those security features.

The next step for the thumbnailers will be beefing up our protection against greedy thumbnailers (in terms of CPU and memory usage), and sharing the code better between thumbnailers.

Note for later, more images of cute animals.

Microsoft TechTalks in Prague - aka what else could go wrong?

Posted by Radka Janek on July 21, 2017 10:00 AM
In the morning.

This time around I was actually prepared, with everything in GitHub for interested people to check out after the talk, and I was wearing my new Red Hat loves .NET shirt. What else would I need?

I woke up just before 7am, after a really awful night - I couldn't sleep, had a nosebleed, and in the morning I was freezing cold. I spent an hour in the bathroom and then I woke Eric up. I packed up while he took his (much shorter) turn. We left the house at 9am with one bottle of water and all the computery stuff - my notebook, mouse, keyboard, and whatever else was in the backpack I take to work when I go by bike. And my purse with the usual contents (everything one could possibly need or wish for haha..)

We went to the Red Hat office first and stopped to buy some breakfast and lunch/dinner for later. We got some sweet-ish type of bread thing. By the time we got to work my legs were in a lot of pain already - I got hurt the last time we went out mountain biking; I probably cracked a bone *shrugs*

We got to the office and I finally picked up a package that had been waiting for me for a week or two, since I was a cripple and couldn't walk for a week or so >_< …and Eric also helped me carry my IT stuff up to my desk (I got a new monitor.) Then we nommed our breakfast in a meeting room and set off to the bus station. By tram. In 30°C. Half an hour of Eww…

Prague
Radka in her new Red Hat loves .NET T-Shirt.

Our new T-Shirts are awesome ;)   (Photo taken at home, next to an awesome painting!)

The trip itself was fine since the bus had AC, but it was 45 minutes late. Great. We needed to pick up my new T-Shirt that the “awesome” shop had failed to ship to me within the promised timeframe. I had a Red Hat loves .NET shirt made based on our design, because the ones that Red Hat is making wouldn't make it to me in time for this conference. When we got there, it turned out that they still hadn't even made it!! So we had another delay, another 15 minutes waiting for it. Oh, and I forgot to mention it was another 25 minutes by a really boiling city bus… and another 15 minutes in one to get to the venue. I think it was 32°C at that point, if not more.

We sat in a nearby restaurant to get 5 minutes of rest and something to drink to cool off, and I used their toilets to change into my new T-Shirt. It looks good, but if you take a good look, they screwed it up. The letters are kinda jagged and the label is not exactly straight either. =(

…nevermind the T-Shirt, I'll have a nice one when it gets here from the US. OH, it's 17:20 - the conference is starting, I totally lost track of time!

The talk
Radka talking in the Microsoft Prague.

Photo credit: wug.cz

We sat down in the first row and I prepared my stuff during the first talk - I was going second. The room was full. Not big, but full. It has a capacity of 150-ish people. Okay, I’m ready… ready to improvise anyway, eh?

So it's time. Oh, turns out the microphone is not ready. Nevermind, the room does have pretty good acoustics and I should be fine. People in the back did nod when I asked if they could hear me. (Edit a day later: I did damage my voice a little and it's all messy now.)

The talk itself did not go well either. I plugged in a cable that was sticking out, some sort of ethernet situation you know. I did not notice that it wasn't actually connected to any network; it must have been unplugged on the other end - wherever that was, heh. So halfway through my talk I found out that I wasn't connected, when I couldn't ssh into my Azure VM where I wanted to demonstrate a simple ASP.NET Core application deployment with an Apache proxy and systemd running it. I had to quickly connect to the wifi, which had some log-in system with a speaker password that wasn't hidden, so I had to unplug the HDMI cable because I couldn't shut the screen off in software (don't ask why it didn't work, I don't know.)

I finished the rest fine, except that after all that trouble I forgot to ask if anyone had any questions :<

The conference room in Microsoft Prague.

Photo credit: wug.cz

I really liked the performance talk from Adam Sitnik; I fully recommend looking it up on YouTube or something!

Afterparty?

After it was over we still had two hours before we had to catch the metro and bus home, so we joined Microsoft engineers in a pub. FINALLY! PROPER FOOD! …and a nice chat with them. Overall I was happy; I got to meet more awesome people =)

And so it is late night and I’m sitting in a bus, writing this post. I’ve been awake for 20 hours, and it’s at least two more til we get home.

I hope to see you guys again in Brno on the 1st of August. Hopefully with a few less issues x.x

Cockpit 146

Posted by Cockpit Project on July 21, 2017 10:00 AM

Cockpit with Software Updates improvements and GCE support

Changing Fedora kernel configuration options

Posted by Fedora Magazine on July 21, 2017 08:00 AM

Fedora aims to provide a kernel with as many configuration options enabled as possible. Sometimes users may want to change those options for testing or for a feature Fedora doesn’t support. This is a brief guide to how kernel configurations are generated and how to best make changes for a custom kernel.

Finding the configuration files

Fedora generates kernel configurations using a hierarchy of files. Kernel options common to all architectures and configurations are listed in individual files under baseconfig. Subdirectories under baseconfig can override those settings as needed for specific architectures. As an example:

$ find baseconfig -name CONFIG_SPI
baseconfig/x86/CONFIG_SPI
baseconfig/CONFIG_SPI
baseconfig/arm/CONFIG_SPI
$ cat baseconfig/CONFIG_SPI
# CONFIG_SPI is not set
$ cat baseconfig/x86/CONFIG_SPI
CONFIG_SPI=y
$ cat baseconfig/arm/CONFIG_SPI
CONFIG_SPI=y

As shown above, CONFIG_SPI is initially turned off for all architectures, but x86 and arm enable it.

The directory debugconfig contains options that are enabled in kernel debug builds. The file config_generation lists the order in which directories are combined and overridden to make the configs. After you change a setting in one of the individual files, you must run the build_configs.sh script to combine the individual files into the final configuration files, which are written out as kernel-$flavor.config.
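Conceptually, the generation step is a layered merge: later directories win. Here is a toy Python sketch of that idea (an illustration of the concept only, not the actual build_configs.sh logic):

```python
# Toy illustration: arch-specific directories override baseconfig,
# the way build_configs.sh layers the individual option files.
def merge_configs(layers):
    """Merge option dictionaries; later layers override earlier ones."""
    merged = {}
    for layer in layers:
        merged.update(layer)
    return merged

baseconfig = {"CONFIG_SPI": "# CONFIG_SPI is not set"}
x86 = {"CONFIG_SPI": "CONFIG_SPI=y"}

# For an x86 config, the x86 layer overrides the base setting:
print(merge_configs([baseconfig, x86])["CONFIG_SPI"])  # CONFIG_SPI=y
```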

When rebuilding a custom kernel, the easiest way to change kernel configuration options is to put them in kernel-local. This file is merged automatically when building the kernel for all configuration options. You can set options to be disabled (# CONFIG_FOO is not set), enabled (CONFIG_FOO=y), or modular (CONFIG_FOO=m) in kernel-local.
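For example, a kernel-local that disables one option, builds another in, and makes a third modular would look like this (the option names are placeholders):

```
# CONFIG_FOO is not set
CONFIG_BAR=y
CONFIG_BAZ=m
```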

Catching and fixing errors in your configuration files

The Fedora kernel build process does some basic checks on configuration files to help catch errors. By default, the Fedora kernel requires that all kernel options are explicitly set. One common error happens when enabling one kernel option exposes another option that needs to be set. This produces errors related to .newoptions, as in this example:

+ Arch=x86_64
+ grep -E '^CONFIG_'
+ make ARCH=x86_64 listnewconfig
+ '[' -s .newoptions ']'
+ cat .newoptions
CONFIG_R8188EU
+ exit 1
error: Bad exit status from /var/tmp/rpm-tmp.6BXufs (%prep)

RPM build errors:
 Bad exit status from /var/tmp/rpm-tmp.6BXufs (%prep)

To fix this error, explicitly set the options (CONFIG_R8188EU in this case) in kernel-local as well.

Another common mistake is setting an option incorrectly. The kernel Kconfig dependency checker silently changes configuration options that are not what it expects. This commonly happens when one option selects another option, or has a dependency that isn't satisfied. Fedora attempts a basic sanity check that the options specified in the tree match what the kernel configuration engine expects. This may produce errors related to mismatches:

+ ./check_configs.awk configs/kernel-4.13.0-i686-PAE.config temp-kernel-4.13.0-i686-PAE.config
+ '[' -s .mismatches ']'
+ echo 'Error: Mismatches found in configuration files'
Error: Mismatches found in configuration files
+ cat .mismatches
Found CONFIG_I2C_DESIGNWARE_CORE=y  after generation, had CONFIG_I2C_DESIGNWARE_CORE=m in Fedora tree
+ exit 1

In this example, the Fedora configuration specified CONFIG_I2C_DESIGNWARE_CORE=m, but the kernel configuration engine set it to CONFIG_I2C_DESIGNWARE_CORE=y. The kernel configuration engine is ultimately what gets used, so the solution is either to change the option to what the kernel expects (CONFIG_I2C_DESIGNWARE_CORE=y in this case) or to further investigate what is causing the unexpected configuration setting.
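To see what that comparison is doing, here is a rough Python rendering of the mismatch check. The real check is the check_configs.awk script; this sketch only illustrates the idea.

```python
import re

# Illustrative reimplementation of the mismatch check, not Fedora's
# actual check_configs.awk script.
def parse_config(text):
    """Map CONFIG_* option names to values ('n' for '# ... is not set')."""
    opts = {}
    for line in text.splitlines():
        m = re.match(r"(CONFIG_\w+)=(.+)", line)
        if m:
            opts[m.group(1)] = m.group(2)
            continue
        m = re.match(r"# (CONFIG_\w+) is not set", line)
        if m:
            opts[m.group(1)] = "n"
    return opts

def find_mismatches(tree_text, generated_text):
    """Return (name, tree_value, generated_value) wherever the values differ."""
    tree = parse_config(tree_text)
    generated = parse_config(generated_text)
    return [(name, tree[name], generated[name])
            for name in sorted(tree)
            if name in generated and generated[name] != tree[name]]

tree = "CONFIG_I2C_DESIGNWARE_CORE=m\n# CONFIG_SPI is not set\n"
generated = "CONFIG_I2C_DESIGNWARE_CORE=y\n# CONFIG_SPI is not set\n"
for name, had, got in find_mismatches(tree, generated):
    print(f"Found {name}={got} after generation, had {name}={had} in Fedora tree")
```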

Once the kernel configuration options are set to your liking, you can follow standard kernel build procedures to build your custom kernel.

Goldilocks Security: Bad, Won’t Work, and Plausible

Posted by Russel Doty on July 20, 2017 11:03 PM

Previous posts discussed the security challenge presented by IoT devices, using IP Video Cameras as an example. Now let’s consider some security alternatives:

Solution 1: Ignore Security

This is the most common approach to IoT security today. And, to a significant degree, it works. In the same way that ignoring fire safety usually works – only a few businesses or homes burn down each year!

Like fire safety, the risks from ignoring IoT security grow over time. Like fire safety, the cost of the relatively rare events can be catastrophic. Unlike fire safety, an IoT event can affect millions of entities at the same time.

And, unlike traditional IT security issues, IoT security issues can result in physical damage and personal injury. Needless to say, I do not recommend ignoring the issue as a viable approach to IoT security!

Solution 2: Secure the Cameras

Yes, you should secure IP cameras. They are computers sitting on your network – and should be treated like computers on your network! Best practices for IT security are well known and readily available. You should install and configure them securely, update them regularly, and monitor them continuously.

If you have a commercial implementation of an IP video security system you should have regular updates and maintenance of your system. You should be demanding strong security – both physical security and IT security – of the video security system.

You did have IT involved in selection, implementation and operation of the video security system, didn’t you? You did make security a key part of the selection process, just as you would for any other IT system, didn’t you? You are doing regular security scans of the video security system and monitoring all network traffic, aren’t you? Good, you have nothing to worry about!

If you are like many companies, you are probably feeling a bit nervous right now…

For home and small business customers, a secure the camera approach simply won’t work.

  • Customer ease of use expectations largely prevent effective security.
  • Customer knowledge and expertise doesn’t support secure configuration or updates to the system.
  • The IoT vendor business model doesn’t support security: Low cost, short product life, a great feature set, ease of use, and access over the Internet all conspire against security.
  • There is a demonstrated lack of demand for security. People have shown, by their actions and purchasing decisions, that effective security is not a priority. At least until there is a security breach – and then they are looking for someone to blame. And often someone to sue…

Securing the cameras is a great recommendation, but it generally will not work in practice. Unfortunately. Still, it should be a requirement for any Industrial IoT deployment.

Solution 3: Isolation

If ignoring the problem doesn’t work and fixing the problem isn’t viable, what is left? Isolation. If the IP cameras can’t be safely placed on the Internet, then isolate them from the Internet.

Such isolation will both protect the cameras from the Internet and protect the Internet from the cameras.

The challenge is that networked cameras have to be on the network to work.

Even though the cameras are designed to be directly connected to the Internet, they don’t have to be directly connected to the Internet. The cameras can be placed on a separate isolated network.

In my next post, I will go into detail on how to achieve this isolation using an IoT Gateway between the cameras and all the other systems.


New badge: Flock 2017 Organizer !

Posted by Fedora Badges on July 20, 2017 03:07 PM
This badge is awarded to anyone who helped with planning and preparation for Flock 2017.

New badge: Flock 2017 Attendee !

Posted by Fedora Badges on July 20, 2017 03:07 PM
You attended Flock 2017, the Fedora Contributor Conference.

Summer is coming

Posted by Josh Bressers on July 20, 2017 12:27 PM
I'm getting ready to attend Black Hat. I will miss BSides and Defcon this year, unfortunately, due to some personal commitments. And as I'm packing up my gear, I started thinking about what these conferences have really changed. We've been doing this every summer for longer than many of us can remember now. We make our way to the desert, and we attend talks by what we consider the brightest minds in our industry. We meet lots of people. Everyone has a great time. But what are the actionable outcomes that come from these things?

The answer is nothing. They've changed nothing.

But I'm going to put an asterisk next to that.

I do think things are getting better, for some definition of better. Technology is marching forward, security is getting dragged along with a lot of it. Some things, like IoT, have some learning to do, but the real change won't come from the security universe.

Firstly we should understand that the world today has changed drastically. The skillset that mattered ten years ago doesn't have a lot of value anymore. Things like buffer overflows are far less important than they used to be. Coding in C isn't quite what it once was. There are many protections built into frameworks and languages. The cloud has taken over a great deal of infrastructure. The list can go on.

The point of such a list is to ask the question, how much of the important change that's made a real difference came from our security leaders? I'd argue not very much. The real change comes from people we've never heard of. There are people in the trenches making small changes every single day. Those small changes eventually pile up until we notice they're something big and real.

Rather than trying to fix the big problems, our time is better spent ignoring the thought leaders and just doing something small. Conferences are important, but not for listening to the leaders. Go find the vendors and attendees who are doing new and interesting things. They are the ones that will make a difference; they are literally the future. Even the smallest bug bounty, feature, or pull request can make a difference. The end goal isn't to be a noisy gasbag; instead, it should be all about being useful.



New to Fedora: wordgrinder

Posted by Ben Cotton on July 20, 2017 11:21 AM

Do you ever wish you had a word processor that just processed words? Font selection? Pah! Styling? Just a tiny bit, please. Or maybe you read Scott Nesbitt’s article on Opensource.com and thought “I’d like to try this!” If this sounds like you, then it may interest you to know that WordGrinder is now available on Fedora 25, 26, and Rawhide.

View of WordGrinder in a terminal

WordGrinder

I should clarify that it's only available on some architectures (x86_64, i686, aarch64, and armv7hl). WordGrinder depends on LuaJIT, which is only available on those platforms.

This is my first new Fedora package, and I have to say I’m kind of proud of myself. I tried to volunteer someone else for it, but he didn’t know how to build RPMs so I ended up volunteering myself. In the process, I had to patch the upstream release to build on Fedora, and then patch my patch to get it to build on Rawhide. In true Fedora fashion, I submitted my patch upstream and it was accepted. So not only did I make a new package available, but I also made an improvement to a project written in a language that I don’t know.

Yay open source!

The post New to Fedora: wordgrinder appeared first on Blog Fiasco.

Three must haves in Fedora 26

Posted by Harish Pillay 9v1hp on July 20, 2017 08:57 AM

I've been using Fedora ever since it came out back in 2003. The developers of Fedora and the greater community of contributors have been doing an amazing job of incorporating features and functionality that subsequently find their way into the downstream Red Hat Enterprise Linux distributions.

There is a lot to cheer Fedora for: GNOME, NetworkManager, systemd, and SELinux, just to name a few.

Of all the cool stuff, I particularly like to call out three must haves.

a) Pomodoro – A GNOME extension that I use to ensure that I get the right amount of time breaks from the keyboard. I think it is a simple enough application that it has to be a must-have for all. Yes, it can be annoying that Pomodoro might prompt you to stop when you are in the middle of something, but you have the option to delay it until you are done. I think this type of help goes a long way in managing the well-being of all of us who are at our keyboards for hours.

b) Show IP: I really like this GNOME extension, for it gives me at a glance any of the long list of IPs that my system might have. This screenshot shows ten different network endpoints; the IP number at the top is the public IP of the laptop. While I can certainly use the command “ifconfig”, it is nice to have the needed info right on the screen while I am on the desktop.

c) usbguard: My current laptop has three USB ports and one SD card reader. When it is docked, the docking station has a bunch more USB ports. The challenge with USB ports is that they are generally completely open: one can insert essentially any USB device and expect the system to act on it. While that is a convenience, the possibility of abuse is increasing given rogue USB devices such as the USB Killer, so it is probably a better idea to deny, by default, all USB devices that are plugged into the machine. Fortunately, since 2007, the Linux kernel has had the ability to authorise USB devices on a device-by-device basis, and the tool usbguard allows you to do this via the command line or via a GUI, usbguard-applet-qt. All in, I think this is another must-have for all users. It should be set up with default deny, and the UI should be installed by default as well. I hope Fedora 27 onwards will do that.
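usbguard is driven by a rules file. A default-deny setup whitelists known-good devices and lets everything else fall through to a blocking default; an illustrative rules file might look like this (the device ID below is made up for illustration; generate a real baseline from your currently attached devices with usbguard generate-policy):

```
# /etc/usbguard/rules.conf (illustrative)
# Allow one known keyboard by vendor:product ID (example ID, not a real policy)
allow id 046d:c31c
# Anything not matched falls through to the daemon's implicit policy target,
# which should be set to "block" in usbguard-daemon.conf for default-deny.
```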

So, thank you Fedora developers and contributors.


Still plugging away

Posted by Suzanne Hillman (Outreachy) on July 20, 2017 01:06 AM

Website


I'm playing around with Wix for my website, in part because it's a giant pain to change things around in my (still official) Pelican-based website, and in part because it's useful to have the very 'what you see is what you get' perspective that Wix offers. I'm still deciding on a good balance between 'offer an overview' (missing from the Pelican version) and 'not enough detail' (true of much of the Wix version right now).

Pelican version of my site

For the moment, any major changes that I think are important I'm trying out in Wix first, and then figuring out in Pelican. What's frustrating me most right now is the apparent lack of grid support in Pelican, since grids would make so many things look nicer and easier to follow. Indeed, that's why I don't have much of an overview in Pelican right now.

I'm hoping I can get pelican-alchemy to work as a theme, as it appears to support Bootstrap, which itself supports grids. Unfortunately, I can't figure out how to get it to stop ignoring the settings I have in the style.css file. And because it's not as professional-looking without that, and it's hard to see what things look like without publishing them first, it's slow going. I just want a clean style and grids!

Alternately, I need to continue to move things over to Wix and just give up on Pelican. But it’s a lot of work. And slow. Which is why I’m trying to get obvious wins over to Pelican in the meantime.

Projects

I'll be meeting up with the person who is working on Querki next week, to get a decent basic understanding of his goals and needs, as well as to figure out the reasoning behind some of the current decisions.

I need to get in touch with people about doing a contextual interview with them about putting their recycling out for pickup. This is for the project I’m working on with the Northeastern student.

I’m also hoping to get a contextual interview with the developer who originally had concerns about user dropdowns. He has provided some screenshots of the kinds of places he runs into the problem, so I need to integrate those into our shared google doc, and figure out some next steps if he’s not willing to do a contextual interview.

I also need to grab some time to continue my review of the accessibility document in PatternFly.

Job Hunting

I am thoroughly confused about the status of my Red Hat application. Theoretically, I was supposed to hear something after 5 days when I applied through Mo. Of course, I was also supposed to have three applications through her, and only one managed to actually associate with her name. As of right now, it still says 'manager review' — whatever that means. That's better than the other two, which say "no longer under consideration". Confusingly, the job titles are all very different from what I actually applied for (the one I'm "under review" for talks about doing development, which… not so much).

I’ve also got an application in with Wayfair, whose UX team is fairly large and has openings at multiple levels of skill. We shall see.

I was contacted by someone at Onward Search, yet another UX recruiting agency. He seemed pretty impressed with my background, and optimistic about being able to find me some possibilities. We’ll see — I’m working with a _lot_ of UX recruiting companies at this point.

Use a DoD smartcard to access CAC enabled websites

Posted by Fedora Magazine on July 19, 2017 04:41 PM

By now you've likely heard the benefits of two-factor authentication. Enabling multi-factor authentication can increase the security of accounts you use to access various social media websites like Twitter, Facebook, or even your Google account. This post goes a bit further.

The U.S. Armed Services span millions of military and civilian employees. If you're a member of these services, you've probably been issued a DoD CAC smartcard to access various websites. With the smartcard come compatibility issues, specific instructions tailored to each operating system, and a host of headaches. It's difficult to find reliable instructions for accessing military websites from Linux operating systems. This article shows you how to set up your Fedora system to log in to DoD CAC enabled websites.

Installing and configuring OpenSC

First, install the opensc package:

sudo dnf install -y opensc

This package provides the necessary middleware to interface with the DoD Smartcard. It also includes tools to test and debug the functionality of your smartcard.
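For example, you can check that the middleware detects your reader and card before touching Firefox. These tools ship with the opensc package; the output will vary by system:

```
$ opensc-tool --list-readers
$ pkcs11-tool --module /lib64/pkcs11/opensc-pkcs11.so --list-slots
```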

With that installed, next set it up under the Security Devices section of Firefox. Open the menu in Firefox, and navigate to Preferences -> Advanced.

In the Certificates tab, select Security Devices. From this page select the Load button on the right side of the page. Now set a module name (“OpenSC” will work fine) and use this screen to browse to the location of the shared library you need to use.

Browse to the /lib64/pkcs11/ directory, select opensc-pkcs11.so, and click Open. If you're currently a “dual status” employee, you may wish to select the onepin-opensc-pkcs11.so shared library instead. If you have no idea what “dual status” means, carry on and simply select the former library.

Click OK to finish the process.

Now you can navigate to your chosen DoD CAC enabled site and log in. You'll be prompted to enter the PIN for your CAC, then select a certificate to use. If you're logging into a normal DoD website, select the Authentication certificate. If you're logging into a webmail service such as https://web.mail.mil, select the Digital Signing certificate. NOTE: “Dual status” personnel should use the Authentication certificate.

New version of buildah 0.2 released to Fedora.

Posted by Dan Walsh on July 19, 2017 01:01 PM
New features and bugfixes in this release

Updated Commands

buildah run
     Add support for -- to end option parsing
     Add a way to disable PTY allocation
     Handle run without an explicit command correctly
buildah build-using-dockerfile (bud)
     Ensure volume mount points get created, and with the correct permissions
buildah containers
     Add a -a/--all option to list containers not created by buildah
     Add a JSON output option
buildah add/copy
     Support for glob syntax
buildah commit
     Add a flag to remove containers on commit
buildah push
     Improve man page and help information
buildah images
     Update commands
     Add a JSON output option
buildah rmi
     Update commands

New Commands

buildah version
     Identify version information about the buildah command
buildah export
     Allows you to export a container image

Updates

Buildah docs: clarify --runtime-flag of run command
Update to match newer storage and image-spec APIs
Update containers/storage and containers/image versions


Holidays

Posted by Remi Collet on July 19, 2017 06:34 AM

My holidays start today, time for me to take some rest, in offline mode.

So, the repository won't be updated before the 1st of August.

Getting Ready for ‎GUADEC 2017

Posted by Julita Inca Chiroque on July 19, 2017 05:14 AM

Only a few days left until GUADEC 2017 takes place in Manchester! 😀

 

Thanks so much to the GNOME Foundation for placing its trust and confidence in my involvement and commitment to the community over the past seven years. So, this time I will talk about the ways of reaching newcomers during the last year.

It is also a pleasure for me to help my friend Sam Thursfield organize the GNOME Games as part of the celebration of the Twentieth Anniversary Party of GNOME. Balloons are my favorite tools to connect people and I will definitely carry lots of them with me! Please come prepared with the history of GNOME and the authors of GNOME apps for the trivia questions.

This is a special occasion! I will definitely share Pisco from Peru, and I am packing in advance (so as not to forget): my power adapter plug converter, flight tickets, another card as backup for pictures, some pound sterling coins, my passport, and a pack of eyelashes 😉

See you then GNOME! Can’t wait to see again lovely GNOME people! ❤


Filed under: FEDORA, GNOME Tagged: ballons, fedora, GNOME, Gnome foundation, GNOME people, GUADEC, GUADEC 2017, Julita Inca, Julita Inca Chiroque, Manchester, Sam organizer, talk at GUADEC, trip

Episode 56 - Devil's Advocate and other fuzzy topics

Posted by Open Source Security Podcast on July 18, 2017 08:50 PM
Josh and Kurt talk about forest fires, fuzzing, old time Internet, and Net Neutrality. Listen to Kurt play the Devil's Advocate and manage to change Josh's mind about net neutrality.


<iframe allowfullscreen="" height="90" mozallowfullscreen="" msallowfullscreen="" oallowfullscreen="" scrolling="no" src="http://html5-player.libsyn.com/embed/episode/id/5551879/height/90/width/640/theme/custom/autonext/no/thumbnail/yes/autoplay/no/preload/no/no_addthis/no/direction/backward/render-playlist/no/custom-color/6e6a6a/" style="border: none;" webkitallowfullscreen="" width="640"></iframe>

Show Notes



F25-20170718 updated isos Released

Posted by Ben Williams on July 18, 2017 06:25 PM

We, the Fedora Respins-SIG, are happy to announce new F25-20170718 Updated Lives (with kernel 4.11.8-200).

This will be the Final Set of updated isos for Fedora 25. We are converting our builders to start providing Updated Fedora 26 isos in the near future.

With F25 we are now using Livemedia-creator to build the updated lives.

To build your own please look at  https://fedoraproject.org/wiki/Livemedia-creator-_How_to_create_and_use_a_Live_CD

This new build of the F25 Updated Lives will save you about 850 MB of updates after install.

As always the isos can be found at http://tinyurl.com/Live-respins2


Easy way to fix non functional ctrl key

Posted by Luya Tshimbalanga on July 18, 2017 05:18 PM
Ctrl keys refused to work on your laptop?
  • Tried pressing Ctrl + Alt + Fn? Mixed results.
  • Rebooted the hardware? No dice.
  • Pressed Ctrl + Left click on the touchpad? That worked.

I am not sure what exactly caused the problem as the issue surprisingly affects more models than expected.

Fedora Community on Telegram

Posted by Jiri Eischmann on July 18, 2017 03:23 PM

I noticed today that the official Fedora chat group on Telegram had passed the mark of 1000 users. I can’t believe how rapidly it has grown. I created the group for attendees of Flock 2015 and it was supposed to be a single-purpose thing. But after the event people were like “hey, let’s rename it to Fedora and keep it for general chat about Fedora”. Fast forward and we have 1000 users and a lot of other Fedora-related groups popped up.

It’s not an easy job to moderate such a large group. The number of admins has grown to 7 and there is even a separate private chat for communication among admins. Big kudos to Justin Flory who took the leadership here early after Flock and I’ve been mostly just enjoying the position of the group creator and honorable admin.

Fedora Project also has its official news channel on Telegram which is followed by almost 500 users. There are also at least 11 national chat groups, and for example the Russian one has over 300 users. There are also specialized groups (for ambassadors, for packagers,…).

Telegram recently raised the maximum number of users per (super)group to 10,000, so the Fedora community still has some room to grow 🙂


LATAM Organizational FAD

Posted by Brian "bex" Exelbierd on July 18, 2017 12:23 PM

LATAM Organizational FAD

Cusco, Peru

13-15 July 2017

Ambassadors from six (or seven, depending on how you count) countries came together in Cusco, Peru to discuss how to innovate in the LATAM community. Alex (alexove) coordinated the group and located a fantastic meeting space courtesy of Universidad Global. Organizing this FAD was hard work, as plane tickets are difficult to arrange and frequently have large price swings in LATAM. It was worth it though, as I think the FAD has some very real and practical outcomes. In attendance were 7 ambassadors and me as a representative from the council, there to support the FAD. We were occasionally joined by additional ambassadors by phone and some student assistants.

  • Adrian Soliard (asoliard), Argentina
  • Alex Irmel Oviedo Solis (alexove), Peru
  • Eduard Lucena (x3mboy), Chile/Venezuela
  • Itamar Reis Peixoto (itamarjp), Brazil
  • José A. Reyes H. (josereyesjdi), Panama
  • Samuel José Gutiérrez Avilés (searchsam), Nicaragua
  • Tonet Jallo (tonet666p), Peru

LATAM Ambassadors The LATAM Ambassadors and some student assistants hard at work on Day 1.

I am always impressed by the backgrounds and skills of our contributors. This group is no exception and was filled with students, system administrators, programmers and project managers working with a variety of environments and language stacks.

On the first day we were welcomed to the University by the Rector and other officials. Then we got down to work with a SWOT analysis of LATAM. We looked at the Strengths and Weaknesses of the region and our external Opportunities and Threats. This generated a great deal of information and a lot of it showed just how connected the LATAM Ambassadors consider their work with the rest of the project. Some (but not all!) of the SWOT output included:

  • Strengths
    • Diversity (Technical skills, ethnicity, culture, etc.)
    • Availability of tools in Fedora (IRC, Pagure, etc.)
  • Weaknesses
    • Language Barrier
    • Lack of presence in colleges

Identifying and recognizing these items makes it easier to brainstorm ways to move the project and the community forward. For example, Day 2 featured a lot of discussion about how to break down the segregation that occurs when people are divided by language. Many members of the LATAM community are functionally fluent in English, especially in a technical context. Many of our duplicated systems, for example having one ask.fedoraproject.org database for English and a separate one for Spanish, force community members to actively choose which “community” to participate in. Participating in both is often a hassle and is easily forgotten. It is filled with friction. The Ambassadors have brainstormed some ideas about how to better integrate the LATAM community with the larger world-wide Fedora Community. I hope we will all see those proposed in the project soon.

A very productive discussion was held around event management and publicity. The team has come up with a simple 4x3 process (4 steps before an event and 3 steps afterward) that should improve quality, communications, accountability, and publicity for events in LATAM. I hope that other regions will look at it when it is published and offer suggestions and improvements. Some regions may even want to adopt it as it seems fairly complete.

One of my roles as the council representative was to answer questions about how budgeting works and to present the council’s goals and the new mission statement. In this area we have two major changes that need to be explained clearly. The first is how the new mission statement represents the goal for a lot (but not all) of the activity the project undertakes. For ambassadors this means that the council would like to see that most events support the goals of the mission statement. From a budget perspective, the council is trying to accomplish two things with the new budget structure. First, we are trying to eliminate the idea that we have to do certain things solely because they have been done in the past. We are at a point where we can safely evaluate all of our resource usage and make sure we aren’t on auto-pilot toward an outdated goal. Second, we are reserving some money for good ideas that can come from anywhere. What we have learned in the past is that it is hard to move money from a region that isn’t spending it, but has it allocated, to support an idea from another group. So we are asking the regions to request additional funds as they have new ideas. This way non-Ambassador groups in the project also have a pool of money they can make requests against.

LATAM Ambassadors Stepping away from the laptops and using the whiteboard is a key productivity enhancer.

The LATAM team is very concerned about the state of documentation. I used some of my time sharing with them what has been going on in the docs project. Their ideas for how to energize contribution once we can work with topics have me very excited. There may also be some coding contributions coming out of LATAM to help make this all happen.

I also heavily participated in a conversation about swag. LATAM has not been able to centrally produce swag in the past, which has led to shortages, reimbursement problems, and quality issues. We have a framework to move forward with centralized production. It would be ideal to see LATAM and other regions work together on centralized production of swag, as it will drive down prices for everyone. We can also leverage some Red Hat provided administrative resources to make distribution faster and easier. As LATAM starts work on their framework, I hope other regions will join the conversation and work with them to make swag better for everyone.

For those who are wondering, the discussion for all days was a mix of Spanish and English with not everything being translated. As the only non-Spanish speaker this didn’t bother me. My job is to support the community by helping with information and administrative solutions. I followed most of the conversation and the group was fantastic about bringing me up to speed as they went. This allowed me to provide new ideas or offer my opinion and input.

Slice of Cake #14

Posted by Brian "bex" Exelbierd on July 18, 2017 11:54 AM

A slice of cake

In the last week as FCAIC I:

  • Attended the LATAM organizational FAD. It was a fantastic opportunity to work with a group of ambassadors and to share what I know. My biggest takeaway was a set of ideas that I want to cross-pollinate with other parts of the project. Read more in my event report, which will be posted today or tomorrow.
  • Continued working on Flock. In particular most of the air travel is now booked. I will be working with the funding committee to release the final funding we can provide this week. I’ve also gotten additional information from the CfP committee and sent final submission notices.

À la mode

  • Posted this late as I needed about 12 hours of sleep to recover from a flight :(.

Cake Around the World

I’ll be traveling some and hope you’ll ping me for coffee if you’re nearby.

  • Flock on Cape Cod, Massachusetts, USA from 29 August - 1 September.

Avoiding TPM PCR fragility using Secure Boot

Posted by Matthew Garrett on July 18, 2017 06:48 AM
In measured boot, each component of the boot process is "measured" (ie, hashed and that hash recorded) in a register in the Trusted Platform Module (TPM) built into the system. The TPM has several different registers (Platform Configuration Registers, or PCRs) which are typically used for different purposes - for instance, PCR0 contains measurements of various system firmware components, PCR2 contains any option ROMs, PCR4 contains information about the partition table and the bootloader. The allocation of these is defined by the PC Client working group of the Trusted Computing Group. However, once the boot loader takes over, we're outside the spec[1].
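The "measure" step is the TPM extend operation: a PCR can never be written directly, only extended, so its final value depends on every measurement and on their order. A minimal Python model of SHA-256 extend semantics (this only models the arithmetic; it does not talk to a real TPM, and the component names are illustrative):

```python
import hashlib

def extend(pcr: bytes, component: bytes) -> bytes:
    """Model of a PCR extend: new PCR = H(old PCR || H(component))."""
    digest = hashlib.sha256(component).digest()
    return hashlib.sha256(pcr + digest).digest()

# PCRs start zeroed at boot; each stage measures the next before running it.
pcr = b"\x00" * 32
for component in [b"firmware", b"bootloader", b"kernel"]:
    pcr = extend(pcr, component)

# The same components in a different order give a different final value,
# so any modified or reordered stage changes the PCR, and the TPM will
# refuse to release secrets sealed to the expected value.
swapped = extend(extend(extend(b"\x00" * 32, b"firmware"), b"kernel"), b"bootloader")
assert swapped != pcr
```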

One important thing to note here is that the TPM doesn't actually have any ability to directly interfere with the boot process. If you try to boot modified code on a system, the TPM will contain different measurements but boot will still succeed. What the TPM can do is refuse to hand over secrets unless the measurements are correct. This allows for configurations where your disk encryption key can be stored in the TPM and then handed over automatically if the measurements are unaltered. If anybody interferes with your boot process then the measurements will be different, the TPM will refuse to hand over the key, your disk will remain encrypted and whoever's trying to compromise your machine will be sad.

The problem here is that a lot of things can affect the measurements. Upgrading your bootloader or kernel will do so. At that point if you reboot your disk fails to unlock and you become unhappy. To get around this your update system needs to notice that a new component is about to be installed, generate the new expected hashes and re-seal the secret to the TPM using the new hashes. If there are several different points in the update where this can happen, this can quite easily go wrong. And if it goes wrong, you're back to being unhappy.

Is there a way to improve this? Surprisingly, the answer is "yes" and the people to thank are Microsoft. Appendix A of a basically entirely unrelated spec defines a mechanism for storing the UEFI Secure Boot policy and used keys in PCR 7 of the TPM. The idea here is that you trust your OS vendor (since otherwise they could just backdoor your system anyway), so anything signed by your OS vendor is acceptable. If someone tries to boot something signed by a different vendor then PCR 7 will be different. If someone disables secure boot, PCR 7 will be different. If you upgrade your bootloader or kernel, PCR 7 will be the same. This simplifies things significantly.

I've put together a (not well-tested) patchset for Shim that adds support for including Shim's measurements in PCR 7. In conjunction with appropriate firmware, it should then be straightforward to seal secrets to PCR 7 and not worry about things breaking over system updates. This makes tying things like disk encryption keys to the TPM much more reasonable.

However, there's still one pretty major problem, which is that the initramfs (ie, the component responsible for setting up the disk encryption in the first place) isn't signed and isn't included in PCR 7[2]. An attacker can simply modify it to stash any TPM-backed secrets or mount the encrypted filesystem and then drop to a root prompt. This, uh, reduces the utility of the entire exercise.

The simplest solution to this that I've come up with depends on how Linux implements initramfs files. In its simplest form, an initramfs is just a cpio archive. In its slightly more complicated form, it's a compressed cpio archive. And in its peak form of evolution, it's a series of compressed cpio archives concatenated together. As the kernel reads each one in turn, it extracts it over the previous ones. That means that any files in the final archive will overwrite files of the same name in previous archives.
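The overwrite rule this proposal relies on can be modelled as a left-to-right merge in which later archives win. A toy Python sketch (the file names and contents are invented for illustration; the real work is done by the kernel's cpio extractor):

```python
# Each dict stands in for one cpio archive: {path: file content}.
user_initramfs = {"init": b"distro-generated init", "etc/fstab": b"user config"}
signed_stub = {"init": b"trusted secret-fetching init"}

# The kernel extracts archives in order; a file in a later archive
# replaces a file of the same name from an earlier one.
merged = {}
for archive in [user_initramfs, signed_stub]:
    merged.update(archive)

# The appended signed stub's init wins; unrelated files survive.
assert merged["init"] == b"trusted secret-fetching init"
assert merged["etc/fstab"] == b"user config"
```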

My proposal is to generate a small initramfs whose sole job is to get secrets from the TPM and stash them in the kernel keyring, and then measure an additional value into PCR 7 in order to ensure that the secrets can't be obtained again. Later disk encryption setup will then be able to set up dm-crypt using the secret already stored within the kernel. This small initramfs will be built into the signed kernel image, and the bootloader will be responsible for appending it to the end of any user-provided initramfs. This means that the TPM will only grant access to the secrets while trustworthy code is running - once the secret is in the kernel it will only be available for in-kernel use, and once PCR 7 has been modified the TPM won't give it to anyone else. A similar approach for some kernel command-line arguments (the kernel, module-init-tools and systemd all interpret the kernel command line left-to-right, with later arguments overriding earlier ones) would make it possible to ensure that certain kernel configuration options (such as the iommu) weren't overridable by an attacker.

There's obviously a few things that have to be done here (standardise how to embed such an initramfs in the kernel image, ensure that LUKS knows how to use the kernel keyring, teach all relevant bootloaders how to handle these images), but overall this should make it practical to use PCR 7 as a mechanism for supporting TPM-backed disk encryption secrets on Linux without introducing a huge support burden in the process.

[1] The patchset I've posted to add measured boot support to Grub uses PCRs 8 and 9 to measure various components during the boot process, but other bootloaders may have different policies.

[2] This is because most Linux systems generate the initramfs locally rather than shipping it pre-built. It may also get rebuilt on various userspace updates, even if the kernel hasn't changed. Including it in PCR 7 would entirely break the fragility guarantees and defeat the point of all of this.


casync Video

Posted by Lennart Poettering on July 17, 2017 10:00 PM

Video of my casync Presentation @ kinvolk

The great folks at kinvolk have uploaded a video of my casync presentation at their offices last week.

<iframe allowfullscreen="allowfullscreen" frameborder="0" height="315" src="https://www.youtube.com/embed/JnNkBJ6pr9s" width="560"></iframe>

The slides are available as well.

Enjoy!

Representative IoT Device: IP Video Camera

Posted by Russel Doty on July 17, 2017 09:58 PM

One of the most flexible, powerful, and useful IoT sensors is a video camera. Video streams can be used directly. They can also be analyzed using modern software and an incredible range of information extracted from the images: motion detection for eventing and alerts, automobile license recognition for parking systems and theft detection, facial recognition, manufacturing quality control, part location and orientation for robotics, local environment for autonomous vehicles, crop analysis for health and pests, and new uses that haven’t been thought of yet!

The IoT revolution for video cameras is the IP (Internet Protocol) camera – a video camera with integrated computer that can talk directly to a network and provide video and still images in a format that can be directly manipulated by software. An IP camera is essentially a computer with an image sensor and a network interface. A surprisingly powerful computer which can do image processing, image analysis, image conversion, image compression, and send multiple real-time video streams over the Internet. The IP cameras use standard processors, operating systems, and toolkits for video processing and networking.

Modern IP security cameras have high resolution – 3MP-5MP – excellent image quality, the ability to see in complete darkness, and good mechanical construction that can withstand direct exposure to the elements for many years. Many of these IP Video Cameras have enough processing power to be able to do motion detection inside the camera – a rather advanced video analysis capability! They can be connected to the network over WiFi or Ethernet. A popular capability is PoE or Power over Ethernet, which allows a camera to use a single Ethernet cable for both network and power. For ease of use these IP cameras are designed to automatically connect to back-end servers in the cloud and then to display the video stream on smartphones.

These IP cameras are available with full support and regular updates from industrial suppliers at prices ranging from several hundred to a few thousand dollars per camera. They are commonly sold in systems that include cameras, installation, monitoring and recording systems and software, integration, and service and support. There are a few actual manufacturers of the cameras, and many OEMs place their own brand names on the cameras.

These same cameras are readily available to consumers for less than $100 through unofficial, unsupported, “grey market” channels.

IP cameras need an account for setup, configuration and management. They contain an embedded webserver with full control of the camera. Virtually all cameras have a root level account with username of admin and password of admin. Some of them even recommend that you change this default password… One major brand of IP cameras also has two hardcoded maintenance accounts with root access; you can’t change the password on these accounts. And you can discover the username and password with about 15 seconds of Internet research.

The business model that allows you to purchase a high quality IP camera for <$100 does not support lifetime updates of software. It also does not support high security – ease of use and avoiding support calls are the highest priorities. Software updates can easily cause problems – and the easiest way to avoid problems caused by software updates is to avoid software updates. The result is a “fire and forget” model where the software in the IP camera is never updated after the camera is installed. This means that security vulnerabilities are never addressed.

Let’s summarize:

  • IP video cameras are powerful, versatile and flexible IoT sensors that can be used for many purposes.
  • High quality IP cameras are readily available at low cost.
  • IP video cameras are powerful general purpose computers.
  • The business model for IP video cameras results in cameras that are seldom updated and are typically not configured for good security.
  • IP video cameras are easy to compromise and take over.
    • Can be used to penetrate the rest of your network.
    • Can be used to attack the Internet.
  • There are tens of millions of IP video cameras installed.

So far we have outlined the problem. The next post will begin to explore how we can address the security issues – including obvious approaches that won’t work…


Mozilla tries voice recognition systems

Posted by mythcat on July 17, 2017 08:06 PM
Voice recognition systems improve the more people use them, but they are closed systems, so the data benefits no one other than Apple, Amazon, or Google.
Mozilla’s Project Common Voice website includes an option to donate your voice.
If you opt to do so, a number of sentences will pop up in your web browser for you to say. Once recorded, you can play them back to check they are acceptable before submitting them to help the voice recognition engine learn.
You read each sentence to help the machine learn how real people speak.
The project also collects optional demographic data. You can try this project on the official website.

Encrypting drives with LUKS

Posted by Kushal Das on July 17, 2017 09:10 AM

Encrypting hard drives should be a common step in our regular computer usage. If nothing else, this will help you sleep well, in case you lose your computer (theft) or that small USB disk you were carrying in your pocket. In this guide, I’ll explain how to encrypt your USB disks so that you have peace of mind, in case you lose them.

But, before we dig into the technical details, always remember the following from XKCD.

What is LUKS?

LUKS, or Linux Unified Key Setup, is a disk encryption specification, first introduced in 2004 by Clemens Fruhwirth. Notice the word specification: instead of trying to implement something of its own, LUKS is a standard way of doing drive encryption across tools and distributions. You can even open LUKS drives on Windows using the LibreCrypt application.

For the following example, I am going to use a standard 16 GB USB stick as my external drive.

Formatting the drive

Note: check the drive name/path twice before you press enter for any of the commands below. A mistake might destroy your primary drive, and there is no way to recover the data. So, execute with caution.

In my case, the drive is detected as /dev/sdb. It is always a good idea to format the drive before you start using it. You can use the wipefs tool to clean any signature from the device:

$ sudo wipefs -a /dev/sdb1

Then you can use the fdisk tool to delete the old partitions and create a new primary partition.

The next step is to create the LUKS partition.

$ sudo cryptsetup luksFormat /dev/sdb1

WARNING!
========
This will overwrite data on /dev/sdb1 irrevocably.

Are you sure? (Type uppercase yes): YES
Enter passphrase: 
Verify passphrase:

Opening up the encrypted drive and creating a filesystem

Next, we will open up the drive using the passphrase we just gave, and create a filesystem on the device.

$ sudo cryptsetup luksOpen /dev/sdb1 reddrive
Enter passphrase for /dev/sdb1
$ ls -l /dev/mapper/reddrive
lrwxrwxrwx. 1 root root 7 Jul 17 10:18 /dev/mapper/reddrive -> ../dm-5

I am going to create an EXT4 filesystem here. Feel free to create whichever filesystem you want.

$ sudo mkfs.ext4 /dev/mapper/reddrive -L reddrive
mke2fs 1.43.4 (31-Jan-2017)
Creating filesystem with 3815424 4k blocks and 954720 inodes
Filesystem UUID: b00be39d-4656-4022-92ea-6a518b08f1e1
Superblock backups stored on blocks: 
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done   

Mounting, using, and unmounting the drive

The device is now ready to use. You can mount it manually with the mount command. Any modern desktop will ask you to unlock the device using the passphrase when you connect it (or double-click it in the file browser).

I will show the command line option. I will create a file hello.txt as an example.

$ sudo mount /dev/mapper/reddrive /mnt/red
$ su -c "echo hello > /mnt/red/hello.txt"
Password:
$ ls -l /mnt/red
total 20
-rw-rw-r--. 1 root root     6 Jul 17 10:26 hello.txt
drwx------. 2 root root 16384 Jul 17 10:21 lost+found
$ sudo umount /mnt/red
$ sudo cryptsetup luksClose reddrive

When I attach the drive to my system, the file browser asks me to unlock it using the following dialog. Remember to choose the option to forget the password immediately, so that the file browser does not store it.

On passphrases

The FAQ entry on the cryptsetup page gives us hints and suggestions about passphrase creation.

If paranoid, add at least 20 bit. That is roughly four additional characters for random passphrases and roughly 32 characters for a random English sentence.
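The FAQ's figures fall out of simple entropy arithmetic: a uniformly random passphrase carries log2(alphabet size) bits per character, so 20 extra bits cost about four extra random printable-ASCII characters, while English text at well under one bit per character needs the quoted ~32. A quick Python check (the alphabet sizes here, 95 printable ASCII characters and 26 lowercase letters, are my assumptions):

```python
import math

def chars_for_bits(bits: float, alphabet_size: int) -> int:
    """Characters of a uniformly random passphrase needed for `bits` of entropy."""
    return math.ceil(bits / math.log2(alphabet_size))

print(chars_for_bits(20, 95))  # printable ASCII → 4
print(chars_for_bits(20, 26))  # lowercase letters only → 5
```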

Key slots aka different passphrases

In LUKS, we get 8 different key slots (for passphrases) for each device (partition). You can see them using the luksDump subcommand.

$ sudo cryptsetup luksDump /dev/sdb1 | grep Slot
Key Slot 0: ENABLED
Key Slot 1: DISABLED
Key Slot 2: DISABLED
Key Slot 3: DISABLED
Key Slot 4: DISABLED
Key Slot 5: DISABLED
Key Slot 6: DISABLED
Key Slot 7: DISABLED
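If a script needs the slot status rather than eyeballing it, this luksDump text is easy to parse. A small Python sketch; the sample string simply mirrors the grep output above (in practice you would feed it the real output of cryptsetup luksDump):

```python
sample = """\
Key Slot 0: ENABLED
Key Slot 1: DISABLED
Key Slot 2: DISABLED
Key Slot 3: DISABLED
Key Slot 4: DISABLED
Key Slot 5: DISABLED
Key Slot 6: DISABLED
Key Slot 7: DISABLED
"""

# Map each slot number to True (ENABLED) or False (DISABLED).
slots = {}
for line in sample.splitlines():
    prefix, status = line.rsplit(": ", 1)
    slots[int(prefix.split()[-1])] = (status == "ENABLED")

enabled = [n for n, on in sorted(slots.items()) if on]
print(enabled)  # → [0]
```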

Adding a new key

The following command adds a new key to the drive.

$ sudo cryptsetup luksAddKey /dev/sdb1 -S 5
Enter any existing passphrase: 
Enter new passphrase for key slot: 
Verify passphrase:

You will have to use any of the existing passphrases to add a new key.

$  sudo cryptsetup luksDump /dev/sdb1 | grep Slot
Key Slot 0: ENABLED
Key Slot 1: DISABLED
Key Slot 2: DISABLED
Key Slot 3: DISABLED
Key Slot 4: DISABLED
Key Slot 5: ENABLED
Key Slot 6: DISABLED
Key Slot 7: DISABLED

Removing a passphrase

Remember that removing a passphrase is based on the passphrase itself, not by the key slot number.

$ sudo cryptsetup luksRemoveKey /dev/sdb1
Enter passphrase to be deleted: 
$ sudo cryptsetup luksDump /dev/sdb1 | grep Slot
Key Slot 0: ENABLED
Key Slot 1: DISABLED
Key Slot 2: DISABLED
Key Slot 3: DISABLED
Key Slot 4: DISABLED
Key Slot 5: DISABLED
Key Slot 6: DISABLED
Key Slot 7: DISABLED

Now, in case you don’t know the passphrase, you can use luksKillSlot.

$ sudo cryptsetup luksKillSlot /dev/sdb1 3
Enter any remaining passphrase:

Overview of the disk layout

The disk layout looks like the following. The header, or phdr, contains various details like the magic value, version, and cipher name, followed by the 8 keyblocks (marked as kb1, kb2, … in the drawing), and then the encrypted bulk data block. We can see all of those details in the C structure.

struct luks_phdr {
        char            magic[LUKS_MAGIC_L];
        uint16_t        version;
        char            cipherName[LUKS_CIPHERNAME_L];
        char            cipherMode[LUKS_CIPHERMODE_L];
        char            hashSpec[LUKS_HASHSPEC_L];
        uint32_t        payloadOffset;
        uint32_t        keyBytes;
        char            mkDigest[LUKS_DIGESTSIZE];
        char            mkDigestSalt[LUKS_SALTSIZE];
        uint32_t        mkDigestIterations;
        char            uuid[UUID_STRING_L];

        struct {
                uint32_t active;

                /* parameters used for password processing */
                uint32_t passwordIterations;
                char     passwordSalt[LUKS_SALTSIZE];

                /* parameters used for AF store/load */
                uint32_t keyMaterialOffset;
                uint32_t stripes;
        } keyblock[LUKS_NUMKEYS];

        /* Align it to 512 sector size */
        char                _padding[432];
};

Each (active) keyblock contains an encrypted copy of the master key. When we enter the passphrase, it unlocks the master key, that in turn unlocks the encrypted data.
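Because the header fields are fixed-size, the start of a LUKS1 header can be decoded with Python's struct module. This is only a sketch: it parses the first three fields of a synthetic header built inside the example, using the field widths from the struct above (6-byte magic, 16-bit version, 32-byte cipher name; LUKS stores multi-byte integers big-endian on disk):

```python
import struct

# Field widths follow struct luks_phdr above.
HEAD = ">6sH32s"  # magic[6], uint16 version, cipherName[32], big-endian

# Synthetic header bytes for illustration; a real header would be read
# from the start of the LUKS partition.
header = struct.pack(HEAD, b"LUKS\xba\xbe", 1, b"aes".ljust(32, b"\x00"))

magic, version, cipher = struct.unpack_from(HEAD, header)
print(magic)                   # b'LUKS\xba\xbe'
print(version)                 # 1
print(cipher.rstrip(b"\x00"))  # b'aes'
```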

But, remember, all of this is of no use if you have a very simple passphrase. We have another XKCD to explain this.

I hope this post encourages you to use encrypted drives more. All of my computers have their drives encrypted (I do that while installing the operating system). This means the system cannot boot properly without decrypting the drive. On a related note, remember to turn off your computer completely (not hibernation or suspend mode) when you’re traveling.

Southeast Linux Fest (SELF) 2017 Ambassadors report

Posted by Fedora Community Blog on July 17, 2017 08:15 AM

 

Ambassadors Event Report

Southeast Linux Festival – Charlotte North Carolina
June 9 – 11, 2017
event website: http://www.southeastlinuxfest.org/

Attending Ambassadors

Ben and Kathy Williams (kk4ewt/cewillia) (Fedora event coordinators)
Andrew and Julie Ward (award3535/jward78) (Event report author)
Nick Bebout (nb)
Dan Mossor (danofsatx)
Rosnel Echevarria (reher)

Summary

Fedora has been involved in this particular event since its first festival in 2009 and has continued to be a vital part of the event through 2017. The event is the only large-scale Linux and open source festival for the southeastern United States. Even though there is a large number of Linux User Groups throughout the South, this is the only event that draws the various communities together in celebration of Linux and open source software. There was a similar event in Orlando, Florida in 2015 called FOSSETCON, but the event coordinator announced that the event would occur every two years instead of annually. Hopefully that event has not come apart and will take place this fall as normally scheduled. Most of the Ambassadors arrived on Thursday, 8 June, the night before the event commenced.

Percona

The event proper began on the 9th at 9 a.m. with the first set of speakers’ sessions. We started setting up the Fedora booth at 7:35 a.m. that day, given the uncertainty about attendance numbers and the fact that attendees were already showing up early. We had the booth put together by 8:20 a.m. and started producing Fedora 25 desktop media. We always provide a variety of items and demonstrate the different desktops Fedora offers, and we had some remaining F25 pressed media available for distribution. The booth was also equipped with the IBM laptop running a multi-desktop environment (available for demonstrating the various desktop environments) and the ever-popular OLPC, plus a selection of stickers and case badges for visitors stopping by. The Fedora pens were the popular swag item for the first two days, until our supply was depleted. The first day was busier than expected; the previous year's first-day attendance had been minimal because it fell on a weekday instead of a weekend. Friday turned out to be a busy day for all of us. Even though Friday was an Expo Hall flex day (optionally open on a per-exhibitor basis), we were glad to be open and ready to demonstrate and assist with any Fedora issues. We took the opportunity to walk around later that morning while several talks were in progress and noticed that there were no other operating system vendors there besides BSD and Pogo. The absence of Ubuntu, which had been a staple at this event along with us, was quite peculiar and alarming. There were a lot of familiar faces with booths, including Pogo, Linode, and Percona, and some new attendees such as Black Duck. This gave us a unique opportunity to promote Fedora. Usually at these events you have the hard-lined Ubuntu folks, and since they were not in attendance we did not get the "why should I use your product" questions.
There was still a contingent of Ubuntu users who were genuinely interested in trying Fedora now that the Unity desktop is no longer their default. This time we got a lot of "how do I get this to work with Fedora" questions, as well as questions like "what is the difference between MATE and KDE?" Ben, Nick, Dan, Ross, and I all fielded many technical questions about desktop and driver problems, ranging from MacBook Pro wireless card drivers to video driver problems encountered while loading and running Fedora.

Friday

The first day we were all busy answering questions and giving demonstrations. We were actually the busiest booth of the day; attendees were very interested in what we had available and quite impressed with the knowledge of our Ambassadors. The majority in attendance on Friday were a mixture of novice and experienced Linux users, there to learn and to get help with their personal desktop environments. Almost every person we met was not in search of corporate or enterprise functionality but was there to improve their own experience and knowledge. You could call them the "Average Joe" or "Average Jane": individuals looking for something better to use at home and curious how it works with the applications available on Linux. Most were simply looking for a better operating system than Windows but were lost on what applications were available that were similar to the ones they had been using on Windows.

Two of those individuals who were lost in this sea of applications came up from Jacksonville, Florida (from the LUG I attend) with a lot of confusion about how to find the software they were looking for. One had a specific need and did not know where to find the applications he needed; he felt overwhelmed by how software is listed on various other platforms. When you search for an application, you tend to get a scattered listing from various sources but no examples or specifications. He was quite frustrated with trying to find the right application for his specific need. After discussing at length what the end product needed to be, I said I could help with that and showed him the Labs section of Fedora, specifically the Robotics and Scientific Lab bundles, which he was not aware of. He also did not know about the various desktop environments available with Fedora. I gave him all the various live desktops on a removable hard drive he provided, along with the Robotics and Scientific Lab media.

By midday on Friday we had already given out approximately 100 F25 pressed media DVDs and about the same number of the various desktop environment discs Julie had created using the DVD duplicator. The Gnome desktop seemed to be the most popular, with Cinnamon second at about half of Gnome's numbers. There were many questions about Gnome and how it differed from the other desktop environments. Mostly, users gravitated toward whichever desktop they were most comfortable with. We demonstrated all of them at various times, depending on what each visitor was looking for. Most liked the feel of Cinnamon and Mate, while others really liked the Gnome environment. I showed a few individuals what my desktop looked like and the tools I used to get the appearance and feel I wanted. Many individuals were not aware of the Gnome Tweak Tool and its ability to change the environment's appearance from the default. Since we had all the desktops available, it was easy to find out what individuals were looking for, and many of our own personal laptops were on hand to show how easy it is to personalize each type of desktop. Since I was running Gnome and Ben and Ross were both running Mate, there were real differences in appearance to compare. I think that when people left our booth that day, we had succeeded in offering the choice of Freedom and Friends to everyone we met.

As the day continued, Ben and Nick had some events on the schedule. Nick hosted the GPG key signing session, and Ben conducted the amateur radio study guide cram session in preparation for the exam the next day. Both events were scheduled after the Expo area had closed, so both had a late night, on top of the speakers’ dinner that evening.

Saturday

Julie and Kathy working the booth

The next morning (the 10th) we began to set up, anticipating it would be busier than the previous day. The booth was set up and ready to go by 8:20 a.m. By then we already had visitors at the booth; they began asking questions at 8:15. The first scheduled talk was not until 9 a.m., and the expo area was not due to open until 9 a.m. either, but we are always ready to help and to answer any questions our visitors may have. The morning turned out to be the busiest we have had in a long time. This, I believe, was due to the fact that there were no other non-corporate operating systems present, which made us the only one available to the users. We also had some repeat visitors who always make it to the event and spent a lot of time discussing upcoming Fedora changes and features.

Some individuals showed up at the booth needing assistance with configuration and driver issues. Ben Williams helped one individual successfully load and configure his laptop, and we received a lot of praise from him for correcting the problem and installing one of Fedora's desktop environments. Several people asked about wireless card and video driver issues; every question was either answered or demonstrated on the spot with the available equipment, or the person was given a definitive answer. The one event that stood out was a returning visitor from the previous year: a young boy who is entirely fascinated with Fedora. Last year Nick helped him choose a desktop environment, and his father was quite interested in the SoaS spin, so Nick gave the father his contact information, and it wasn't long before they started communicating about getting Fedora onto his computer. This year the family returned and spent a significant amount of time at the booth. This young man, Carter, is only 8. With some help from Nick, he tried all of the different desktop environments, and once Carter picked the one he liked (I believe it was Cinnamon) we provided the media for him to install. Carter was quite pleased to see how easy it was to run a game (Minecraft) on a Fedora PC. He spent a lot of time at our booth playing the game until his father said it was time to leave and learn more at the event; several times during the day Carter stopped by just to play with the different desktops and a little Minecraft. As the rest of the day wound down, we were all busy discussing Fedora and its upcoming features. The next event was in Ben Williams's court: the amateur radio license exam.
Twenty-eight people took the exam (General and Technician), with an estimated passing rate of approximately 50%. Nick also held the GPG key signing event at 8 p.m. A long day for all.

Sunday

Nick Bebout, a Fedora contributor, at the Southeast Linux Fest (SELF) 2017

Sunday, the last day of the event, still had a full day of talks scheduled until about 4 p.m. Ben and Nick both had talks that day. The booth was ready by 8:30 a.m. and we were ready to get started. Although foot traffic was not what it had been on the previous days, we had more repeat visitors, including Carter and his father, and were asked many more questions. One visitor in particular came back to thank us for the previous day's assistance: the individual we discussed earlier who was lost in a sea of applications. He let us know that he had installed Cinnamon the previous night and started working with it. He was quite impressed and came back to tell us he was quite happy with the software. (I received an email the day after the event ended; quoted: “I really enjoyed the show and meeting with you guy.

I got the Fedora Cinnamon loaded it looks really good. The Fedora Lab’s looks like it will do everything I need for my engineering work. I am waiting for the 16GB memory I ordered for the I7 Asus so I can get everything setup properly. I got some Fedora Live disks from your bud to give out at Jaxlug and JaxDlug. You guys have a good week talk later.”) As mentioned earlier, both of these individuals are from northeast Florida (Jacksonville) and made the trip to Charlotte specifically to attend this event, since there is no other event in the Southeast; they both last attended FOSSETCON in 2015 but found this event far more informative and well organized.

The event ended later in the afternoon. Most of the major exhibitors had begun to pack up in mid-afternoon when the crowd started to dwindle, and so did we. The remaining media was given to individuals who would bring it to their local LUGs, including the northeast Florida pair (who would provide the media to JaxLug and JaxDlug). The event officially ended at 3:45 p.m. on Sunday, 11 June 2017.

To answer the question of why Fedora attends Southeast Linux Fest: to start, the obvious answer was Carter and his father. Having an enthusiast so interested in what we do and the product we represent goes right to the Four Foundations, Friends and Features in particular. He spent time using each of the available desktops and chose the one that was right for him (remember, he is only 8); his father guided him only when he asked for help, and he made his own decisions about what he wanted to accomplish with his laptop. Every one of us at the booth was quite impressed with this young man. The other obvious point was media production of our desktop variants. The duplicator produced 225 desktop environment discs for distribution, covering all available spins, and we gave out another 200 F25 media DVDs during the event. All locally duplicated spins were given out, and the rest of the media brought to the event went to individuals who would further distribute it to local colleges and Linux User Group meetings. Here are some of the other things we accomplished:

  • Aided in the installation of Fedora on three laptops
  • Demonstrated the various lab environments available

    Nick and Ross discussing next release dates

  • Answered numerous questions on device driver installation and configuration
  • Demonstrated the various Desktop environments (Cinnamon the most popular)
  • Discussed upcoming releases and features
  • Demonstrated the F26 Alpha release
  • Demonstrated the abilities of Gnome and the Tweak Tool
  • Demonstrated installing software through the Gnome Software GUI instead of the terminal
  • Produced and handed out Live USB media for those who were truly interested in Fedora

The event, in our opinion, is geared more toward desktop users, from novices to moderately technically savvy users and enthusiasts. Even though some attendees were corporate-level system administrators, they were more interested in items and software for their personal use than in enterprise-level engineering or administration. We also had a wide range of expertise on hand: our Ambassadors covered system administration (Ben, Ross, and Dan), security and network security (Andrew), and cryptology (Nick). There were no questions we could not answer with respect to Fedora! As always, we had a survey available for those who wished to leave comments or suggestions; the results will be available separately for review.

The Fedora Southeast Linux Fest (SELF) 2017 gang

The Fedora SELF Gang

The post Southeast Linux Fest (SELF) 2017 Ambassadors report appeared first on Fedora Community Blog.

Enhancing smart backups with Duply

Posted by Fedora Magazine on July 17, 2017 08:00 AM

Welcome to Part 2 in a series on taking smart backups with duplicity. This article builds on the basics of duplicity with a tool called duply.

Duply is a frontend for duplicity that integrates smoothly with recurring tools like cron or systemd. Its headline features are:

  • keeps recurring settings in profiles per backup job
  • automates import/export of keys between profile and keyring
  • enables batch operations, e.g. backup_verify_purge
  • runs pre/post scripts
  • checks preconditions for flawless duplicity operation

The general form for running duply is:

duply PROFILE COMMAND [OPTIONS]

Installation

duply is available in the Fedora repositories. To install it, use the sudo command with dnf:

sudo dnf install duply

Create a profile

duply stores configuration settings for a backup job in a profile. To create a profile, use the create command.

$ duply documents create

Congratulations. You just created the profile 'documents'.
The initial config file has been created as 
'/home/link/.duply/documents/conf'.
You should now adjust this config file to your needs.

IMPORTANT:
  Copy the _whole_ profile folder after the first backup to a safe place.
  It contains everything needed to restore your backups. You will need 
  it if you have to restore the backup from another system (e.g. after a 
  system crash). Keep access to these files restricted as they contain 
  _all_ informations (gpg data, ftp data) to access and modify your backups.

  Repeat this step after _all_ configuration changes. Some configuration 
  options are crucial for restoration.

The newly created profile includes two files: conf and exclude. The main file, conf, contains comments for variables necessary to run duply. Read over the comments for any settings unique to your backup environment. The important ones are SOURCE, TARGET, GPG_KEY and GPG_PW.

To convert the single invocation of duplicity from the first article, split it into 4 sections:

duplicity --name duply_documents --encrypt-sign-key **************** --include $HOME/Documents --exclude '**'  $HOME   s3+http://**********-backup-docs
          [                         OPTIONS                        ] [                 EXCLUDES             ] [SOURCE] [             TARGET           ]

Comment out the lines starting with TARGET, SOURCE, GPG_KEY and GPG_PW by adding # in front of each line. Add the following lines to conf:

SOURCE=/home/link
TARGET=s3+http://**********-backup-docs
GPG_KEY=****************
GPG_PW=************
AWS_ACCESS_KEY_ID=********************
AWS_SECRET_ACCESS_KEY=****************************************

The second file, exclude, stores file paths to include/exclude from the backup. In this case, add the following to $HOME/.duply/documents/exclude.

+ /home/link/Documents
- **

Running duply

Run a backup with the backup command. An example run appears below.

$ duply documents backup
Start duply v2.0.2, time is 2017-07-04 17:14:03.
Using profile '/home/link/.duply/documents'.
Using installed duplicity version 0.7.13.1, python 2.7.13, gpg 1.4.21 (Home: ~/.gnupg), awk 'GNU Awk 4.1.4, API: 1.1 (GNU MPFR 3.1.5, GNU MP 6.1.2)', grep 'grep (GNU grep) 3.0', bash '4.4.12(1)-release (x86_64-redhat-linux-gnu)'.
Autoset found secret key of first GPG_KEY entry 'XXXXXXXXXXXXXXXX' for signing.
Checking TEMP_DIR '/tmp' is a folder and writable (OK)
Test - Encrypt to 'XXXXXXXXXXXXXXXX' & Sign with 'XXXXXXXXXXXXXXXX' (OK)
Test - Decrypt (OK)
Test - Compare (OK)
Cleanup - Delete '/tmp/duply.15349.1499213643_*'(OK)
Backup PUB key 'XXXXXXXXXXXXXXXX' to profile. (OK)
Write file 'gpgkey.XXXXXXXXXXXXXXXX.pub.asc' (OK)
Backup SEC key 'XXXXXXXXXXXXXXXX' to profile. (OK)
Write file 'gpgkey.XXXXXXXXXXXXXXXX.sec.asc' (OK)

INFO:

duply exported new keys to your profile.
You should backup your changed profile folder now and store it in a safe place.


--- Start running command PRE at 17:14:04.115 ---
Skipping n/a script '/home/link/.duply/documents/pre'.
--- Finished state OK at 17:14:04.129 - Runtime 00:00:00.014 ---

--- Start running command BKP at 17:14:04.146 ---
Reading globbing filelist /home/link/.duply/documents/exclude
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: Tue Jul  4 14:16:00 2017
Reuse configured PASSPHRASE as SIGN_PASSPHRASE
--------------[ Backup Statistics ]--------------
StartTime 1499213646.13 (Tue Jul  4 17:14:06 2017)
EndTime 1499213646.40 (Tue Jul  4 17:14:06 2017)
ElapsedTime 0.27 (0.27 seconds)
SourceFiles 1205
SourceFileSize 817997271 (780 MB)
NewFiles 1
NewFileSize 4096 (4.00 KB)
DeletedFiles 0
ChangedFiles 0
ChangedFileSize 0 (0 bytes)
ChangedDeltaSize 0 (0 bytes)
DeltaEntries 1
RawDeltaSize 0 (0 bytes)
TotalDestinationSizeChange 787 (787 bytes)
Errors 0
-------------------------------------------------

--- Finished state OK at 17:14:07.789 - Runtime 00:00:03.643 ---

--- Start running command POST at 17:14:07.806 ---
Skipping n/a script '/home/link/.duply/documents/post'.
--- Finished state OK at 17:14:07.823 - Runtime 00:00:00.016 ---

Remember, duply is a wrapper around duplicity. Because you specified --name during the backup creation in part 1, duply picks up the local cache for the documents profile. Now duply runs an incremental backup on top of the full one created last week.

Restoring a file

duply offers two commands for restoration. Restore the entire backup with the restore command.

$ duply documents restore ~/Restore
Start duply v2.0.2, time is 2017-07-06 22:06:23.
Using profile '/home/link/.duply/documents'.
Using installed duplicity version 0.7.13.1, python 2.7.13, gpg 1.4.21 (Home: ~/.gnupg), awk 'GNU Awk 4.1.4, API: 1.1 (GNU MPFR 3.1.5, GNU MP 6.1.2)', grep 'grep (GNU grep) 3.0', bash '4.4.12(1)-release (x86_64-redhat-linux-gnu)'.
Autoset found secret key of first GPG_KEY entry 'XXXXXXXXXXXXXXXX' for signing.
Checking TEMP_DIR '/tmp' is a folder and writable (OK)
Test - Encrypt to 'XXXXXXXXXXXXXXXX' & Sign with 'XXXXXXXXXXXXXXXX' (OK)
Test - Decrypt (OK)
Test - Compare (OK)
Cleanup - Delete '/tmp/duply.12704.1499403983_*'(OK)

--- Start running command RESTORE at 22:06:24.368 ---
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: Thu Jul 6 21:46:01 2017
--- Finished state OK at 22:06:44.216 - Runtime 00:00:19.848 ---

Restore a single file or directory with the fetch command.

$ duply documents fetch Documents/post_install ~/Restore
Start duply v2.0.2, time is 2017-07-06 22:11:11.
Using profile '/home/link/.duply/documents'.
Using installed duplicity version 0.7.13.1, python 2.7.13, gpg 1.4.21 (Home: ~/.gnupg), awk 'GNU Awk 4.1.4, API: 1.1 (GNU MPFR 3.1.5, GNU MP 6.1.2)', grep 'grep (GNU grep) 3.0', bash '4.4.12(1)-release (x86_64-redhat-linux-gnu)'.
Autoset found secret key of first GPG_KEY entry 'XXXXXXXXXXXXXXXX' for signing.
Checking TEMP_DIR '/tmp' is a folder and writable (OK)
Test - Encrypt to 'XXXXXXXXXXXXXXXX' & Sign with 'XXXXXXXXXXXXXXXX' (OK)
Test - Decrypt (OK)
Test - Compare (OK)
Cleanup - Delete '/tmp/duply.14438.1499404312_*'(OK)

--- Start running command FETCH at 22:11:52.517 ---
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: Thu Jul 6 21:46:01 2017
--- Finished state OK at 22:12:44.447 - Runtime 00:00:51.929 ---

duply includes quite a few commands. Read the documentation for a full list.

Other features

Timer runs become easier with duply than with duplicity. The systemd user session lets you create automated backups for your data. To do this, modify ~/.config/systemd/user/backup.service, replacing ExecStart=/path/to/backup.sh with ExecStart=/usr/bin/duply documents backup (systemd expects an absolute path here). The wrapper script backup.sh is no longer required.
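For reference, a minimal sketch of the two user units might look like this (the unit names, the daily schedule, and the /usr/bin/duply path are illustrative assumptions, not taken from part 1):

```ini
# ~/.config/systemd/user/backup.service
[Unit]
Description=duply backup of the documents profile

[Service]
Type=oneshot
ExecStart=/usr/bin/duply documents backup
```

```ini
# ~/.config/systemd/user/backup.timer
[Unit]
Description=Run the duply backup daily

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```

Enable the pair with systemctl --user enable --now backup.timer.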

duply also makes a great tool for backing up a server. You can create system wide profiles inside /etc/duply to backup any part of a server. Now with a combination of system and user profiles, you’ll spend less time worrying about your data being backed up.

Monthly report on the Fedora-fr.org French-language documentation, issue 1

Posted by Charles-Antoine Couret on July 17, 2017 06:00 AM

I've decided to try publishing a monthly report on the progress of the French-language Fedora-fr documentation. This report essentially compiles the weekly updates sent to the mailing list.

First, we catalogued the pages to be processed according to their current state, to determine the scope of the work needed on each one. Each page will then be updated to be compatible with recent Fedora versions, even if that means removing passages about older releases that are no longer supported. This will make the documentation clearer and more useful.

Work then focused on three main areas:

  • A user's first steps;
  • Hardware;
  • Networking.

The first was arguably the priority: new Fedora users could not get started with our documentation, which was far too old. The installation procedure has been deeply reworked, GNOME 3 has changed significantly since its early days, and so has the upgrade procedure. Dual-booting with Windows via GRUB2 has also changed, becoming considerably simpler.

Finally, dnf has replaced yum for package management, which makes many procedures obsolete. The pages must adapt to this new reality to make readers' lives easier.

That is why I tackled these articles first, so that a curious newcomer can be as autonomous as possible.

The second area stems from how hardware has evolved over the last five or six years. SSDs are gradually taking over, as are Optimus-style systems, and both the free and proprietary graphics drivers have changed a lot (for the better, of course). Updating these topics serves readers' present-day needs better. Nicolas worked a great deal on this, and I thank him, because it was becoming necessary.

We may need to complement this with articles about Wayland or libinput, for example, which have a significant impact on the graphics stack and the system's input handling.

The third area reflects the improvements NFS has received since then, as well as Fedora's firewall, firewalld. Since the latter is essentially only used in the Fedora / RHEL ecosystem, the existing French-language documentation is rather sparse, so it is important to offer guidance on its basic operation.

Nicolas again did a lot of work here. Thanks to him.

As of now, after a little over a month of work by the two of us (plus a few reviewers and contributors whose help is always appreciated), nearly 25 articles have been processed. And there is still much to do: in our initial estimate of the work ahead, nearly 45 articles remain to be handled. Many other pages were not included in this pass; a new iteration will take place to deal with them.

And that is without counting the pages to be created to cover new topics and meet users' needs, such as Wayland or Flatpaks.

In any case, I am pleased with the progress made on the documentation. There is still a lot of work ahead, but it seems possible for the documentation to be in a very acceptable state by Fedora 27 or the end of 2017. After that, we will need to keep it continuously up to date and add articles as needs arise.

I invite you to give us a hand: to do so, I suggest following the procedure for contributing to the documentation and, if possible, joining our weekly workshops every Monday evening from 9 p.m. (Paris time) on the #fedora-doc-fr IRC channel on the Freenode server. Nothing stops you from contributing outside the workshops; all help is welcome. So don't hesitate!

Closing the GNOME Peru Challenge 2017

Posted by Julita Inca Chiroque on July 17, 2017 04:12 AM

It’s been three months since a group of students from different universities decided to learn more about GNU/Linux in a local community. This idea started while LinuXatUNI had been organized and powered by Fedora and the GNOME project.

Thanks to the financial support of GNOME, we were able to celebrate a breakfast and a little parade. We had newbies and people who had been interested in becoming part of Fedora and GNOME for a year. Here you can see our initial students. After this experience, we set out a plan, registered in the wiki, and officially started on May 7th.

Today, July 16th, as the wiki pointed out, was our last meeting as a group to fix a bug. Voluntarily, during this period, all our Sundays were reserved to accomplish that goal. Throughout this challenge, mini-tasks were set to be done as well, along with presentations at events to promote the use of GNU/Linux with Fedora and GNOME: FLISOL Lima, a talk at the UPN tech conference, a presentation to the CFD OpenSource community at PUCP, and Linux training for the Peruvian Navy.

At the end, this was the chart of activities versus participants: Toto and Solanch were the winners of the first GNOME Peru Challenge 2017 😀

Thanks to BacktrackAcademy for rewarding them with a one-year membership!

Besides their effort, they are great people with good will, and I hope they continue learning GNU/Linux on their own!

I think we need to improve in many areas. One is the venue: we did not have a dedicated place. We held the workshops on Sundays because we work and study besides this challenge; unfortunately, universities are usually not open on Sundays, so we sometimes met at Toto's, Randy's, or Mario's house rather than having a lab every Sunday. Another is the syllabus: as I mentioned, we did not get the chance to learn JavaScript, Vala, GTK, GLib, and other GNOME technologies in depth. Special thanks also to Carlos Soriano, who helped us once in this effort. In any case, this training was a great opportunity to build a stronger GNU/Linux community in Lima, Peru. Thanks so much to each participant of this challenge! 🙂


Filed under: FEDORA, GNOME, τεχνολογια :: Technology Tagged: C Julita Inca Chiroque, community, fedora, Fedora + GNOME community, GNOME, gnome 3, GNOME Challenge, GNOME Perú, GNOME Peru Challenge, GNOME Peru Challenge 2017, Julita Inca, Julita Inca Chiroque, Lima, Linux community, Perú

Fedora and GNOME at the Marine

Posted by Julita Inca Chiroque on July 17, 2017 01:55 AM

Our local Linux community, “LinuXatUNI = Fedora + GNOME”, received an invitation to give a talk on Linux security at the Peruvian Navy (“Marina de Guerra del Perú”).
I started with a review of Linux history and some basic terminal commands, and finally explained ACLs. People in the audience who answered questions satisfactorily received some Fedora and GNOME merchandise! Then Randy Real showed how to scan ports and explained the importance of command history, useful for auditing tools and applications. Felipe Moreno explained SQL injection, demonstrating it both manually and automatically. Solanch Ccasa was in charge of Squid, covering basic configuration and features. Ronaldo Cabezas (Toto) also prepared material on iptables, but incredibly, time ran out, since it had already been three hours in a row. We appreciate invitations like this, which push our community to train people in the Linux world beyond universities. We are now in organizations, sharing knowledge voluntarily to build a better society that uses Linux!… Here are some pictures that show our exceptional experience. Thanks so much again, Peruvian Navy 🙂



Filed under: FEDORA, GNOME, τεχνολογια :: Technology Tagged: fedora, GNOME, gnome 3, Julita Inca, Julita Inca Chiroque, linux, Linux at UNI, LinuXatUNI, Marina, marina de guerra del peru, militar, security, seguridad, training, training Linux

Git auto fetch script I run every day

Posted by Lukas "lzap" Zapletal on July 17, 2017 12:00 AM


I am a “shutdowner”, meaning I always shut down my laptop (now a workstation) at the end of the day. I have a script for that. It sleeps 5 seconds first so I can change my mind (e.g. when I dig through my shell history incorrectly and hit Enter too quickly; it has really happened), and then it simply:

  • puts my monitors into standby mode
  • applies all OS updates
  • runs duplicity backup on my home folder
  • fetches git repos
  • filesystem sync call
  • fstrim root volume
  • poweroff

I learned the trick I want to write about today from a colleague of mine, Mirek Suchý, though I think he runs it from cron (he is not a “shutdowner” guy). The idea is simple:

  • find all directories containing .git/ and, in each of them, run:
  • git fetch --all
  • git gc

So whenever I do git pull on a repo that I don't use much (e.g. the Ruby language), I don't have to wait extra seconds for all the commits to be pulled. Clever. Now I've improved it a bit.

With my Ryzen 1700 CPU (8 cores, 16 threads), I can leverage GNU parallel to do this in parallel. That will be faster, but by how much? Let's test against the git repo I use the most: www.theforeman.org.

# git -c pack.threads=1 gc --aggressive
1m25.175s

# git -c pack.threads=16 gc --aggressive
0m16.321s

Initially I thought that running 16 GNU parallel worker processes of parallel will be fine, but git gc is really slow on one core (see above), so I usually end up with several very slow garbage tasks while all the others finished downloading. The sweet spot for git is around 4 threads where it always gives reasonable times even for bigger repos.

But I think a little bit of CPU overcommit won’t hurt, therefore I’ve decided to go with 8x4, which might sound crazy (32 threads in theory), but in practice garbage collection is executed only on the few repositories I work on regularly.

Lots of words, I know. Here is the snippet:

find ~/work -name '.git' -type d | \
    parallel -j 6 'pushd "{}"; git fetch --all; git -c pack.threads=4 gc --aggressive --no-prune --auto; popd'

I think I could go further, but this already works well for me, and while my PC is doing this I am already heading away from it. No biggie. Final notes on the git flags I use:

  • --aggressive - a much slower collection that gives better results
  • --no-prune - I don’t want to lose any commits at any point in time
  • --auto - git will decide when to actually run gc

Video: Nested, nested KVM in Fedora 26

Posted by Scott Dowdle on July 16, 2017 10:15 PM

Thinkpad T440s in Fedora

Posted by Alberto Rodriguez (A.K.A bt0) on July 16, 2017 06:25 PM

My Lenovo ThinkPad T440s has the following product number:

$ cat /sys/class/dmi/id/product_name
  20ARS0LF0A

So all the procedures shown here are valid for this product.

First, if you haven’t already installed thinkfan, lm_sensors and hdapsd:

$ sudo dnf install thinkfan hdapsd lm_sensors
$ sudo sensors-detect
$ sudo sh -c 'find /sys/devices -type f -name "temp*_input" | xargs -I {} echo "hwmon {}" >> /etc/thinkfan.conf'
$ sudo systemctl enable thinkfan

The next step is to install tlp and prepare other tools for managing kernel modules:

sudo dnf install tlp tlp-rdw kernel-devel akmods kmodtool

We need to enable external repositories:

$ sudo dnf install https://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm
$ sudo dnf install http://repo.linrunner.de/fedora/tlp/repos/releases/tlp-release-1.0-0.noarch.rpm

Now for the last part of the installation, but read this warning carefully:

The next instructions are not generally recommended: we use the --nogpgcheck flag, so not only are we using an external repository, we are also not checking the signatures of the packages.

If you are comfortable with this just:

sudo dnf install akmod-tp_smapi akmod-acpi_call --enablerepo tlp-updates-testing --nogpgcheck

The --enablerepo tlp-updates-testing flag is necessary on Fedora 25 and later.

Now you can manage and use your batteries very well 🙂

Many thanks to linrunner and the TLP project.

Flatpak and Snap are not the solution to the problem, they make it even worse

Posted by Fedora-Blog.de on July 16, 2017 03:00 PM

At the moment there is a heated discussion about whether or not Flatpak support should be expanded further in Fedora 27.

Personally, I consider Flatpak and Snap to be completely the wrong approach to solving the problem of providing software for different distributions. In my eyes, both have the big problem that essentially every Snap/Flatpak package ships its own versions of the shared objects it needs, so that at some point you end up with something similar to the DLL hell of Windows: dozens of different versions of a shared object, most of which, in the worst case, are also vulnerable to attacks.

Moreover, most technically inexperienced users will probably assume that the system's package manager (e.g. dnf or PackageKit) also updates the shared objects of the Flatpak packages, which is precisely not the case. Instead, users have to rely on the providers of the Flatpaks/Snaps they use to promptly update vulnerable versions of the shared objects used by their apps; a bet I personally would not want to make. This already works rather poorly for Windows applications, so why should it go better this time?

No, in my eyes Flatpak and Snap are what is proverbially called "driving out the devil with Beelzebub": solving one problem while creating another. It becomes easier to provide applications for different distributions, but this needlessly opens additional attack vectors by wresting control over part of the installed software away from the package manager, forcing users to trust that third parties, whose trustworthiness is essentially unknown, do their homework!

IMHO is the opinion column of Fedora-Blog.de.
IMHO = In My Humble Opinion.

Xfce: Start the iBus daemon automatically

Posted by Fedora-Blog.de on July 16, 2017 12:28 PM
Please also note the remarks about the HowTos!

To have the iBus daemon, which is needed among other things for emoji input, start automatically, the following steps are necessary:

First, the script /etc/X11/xinit/xinitrc.d/45-autoexec.sh must be created using

sudoedit /etc/X11/xinit/xinitrc.d/45-autoexec.sh

and filled with the following content:

#!/bin/bash
if [ -x "$HOME/.autoexec" ]; then
       . $HOME/.autoexec
fi

Next, the ~/.autoexec script must be created

nano ~/.autoexec

and filled with the following content:

# input framework launch
XIM_PROF=ibus
ln -sf /etc/X11/xinit/xinput.d/${XIM_PROF}.conf ${HOME}/.xinputrc
source ${HOME}/.xinputrc

if [ -n "${GTK_IM_MODULE}" ]; then
   export GTK_IM_MODULE
else
   export GTK_IM_MODULE=xim
fi 
if [ -n "${QT_IM_MODULE}" ]; then
   export QT_IM_MODULE
else
   export QT_IM_MODULE=xim
fi 

echo "XIM_PROGRAM=${XIM_PROGRAM}"
${XIM_PROGRAM} ${XIM_ARGS} &

Starting with the next reboot, the iBus daemon should now be started automatically.

(Source)

Install Play it Slowly with Flatpak

Posted by Mathieu Bridon (bochecha) on July 16, 2017 08:30 AM

Play it Slowly is a cool app which lets you play any audio file at any speed or pitch.

Some time ago Alexandre Franke told me about how he uses this app, and lamented the fact he couldn't easily install it on Fedora. At the time, I made a Copr repo with RPM packages of Play it Slowly, which seemed to make him happy.

But then came Flatpak, and I figured that providing Play it Slowly for everyone was even better than just for Fedora users, in addition to the sandboxing and dependency management advantages.

It took me a while, because I had completely forgotten about this app: it doesn't have regular releases, and the Copr packages worked just fine. :P

But finally, I pushed it to Flathub, and you can now trivially install it from there:

$ flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
$ flatpak install flathub ch.x29a.playitslowly

I made a few changes to the code, which improve the Flatpak integration, but really just are good practices which help even outside of Flatpak:

  • install the icons in the right folder (#15)
  • add an appdata file (#16)
  • rename the app id (#17 and #18)

Those were all merged upstream, but I had to add them to the Flatpak build until there is a new release.

I hope you'll find it useful, let me know how it works for you.

Getting Fedora 26

Posted by Julita Inca Chiroque on July 16, 2017 08:12 AM

I had Fedora 25 and, using the terminal, upgraded my system to Fedora 26! 😀 My experience was based on the recommendations from the Fedora Magazine, which involve a DNF plugin for upgrading. We are now ready to get F26! Lucky me, I did not need the --allowerasing flag 😉 Finally, a reboot is in view! The reboot took a long time (approximately an hour), but it was worth it to see the new lightning interface. Here is my overnight face on Cheese, checking that my document is still there. It is priceless; thanks so much again, GNOME! ❤😀


Filed under: FEDORA, GNOME, τεχνολογια :: Technology Tagged: fedora, fedora 26, GNOME, GNOME 3.24, GNOMEPeruChallenge, GNOMEPeruChallenge2017, Julita Inca, Julita Inca Chiroque, release Fedora, upgrade fedora

Fedora 26 Release Party Novi Sad

Posted by nmilosev on July 16, 2017 08:01 AM


Another awesome Fedora Release Party in Novi Sad! Thank you all who came to hang out and hear what is new in Fedora 26.

We had several interesting talks about what is new in Fedora 26, how to package RPMs, how to use Fedora Server edition, how to become a Fedora contributor and the ever-so-important talk about privacy, encryption and security in mobile IM applications.

Gallery: https://nmilosev.github.io/f26rpns-gallery/

Slides: https://github.com/nmilosev/f26rpns-gallery/tree/gh-pages/talks

See you in 6 months!

Fedora 26 + i3 + tilda + tmux combo!!!

Posted by Alberto Rodriguez (A.K.A bt0) on July 15, 2017 04:54 PM

i3wm is a tiling window manager that has a lot of customization options and low hardware requirements. One of the most complete guides to i3 on Fedora is the amazing Fedora Magazine article “Getting started with the i3 window manager” by William Moreno.

Getting started with the i3 tiling window manager


In my case, I only want to talk about one custom set of applications that we can use together:

sudo dnf install i3 i3status dmenu i3lock feh tmux tilda network-manager-applet

These packages are:

  • i3: Window Manager
  • i3status: Bar of system information
  • dmenu: application launcher
  • i3lock: Screen Locker
  • feh: to set a wallpaper
  • tmux: terminal multiplexer
  • tilda: Customizable Terminal emulator
  • network-manager-applet: Applet to manage network connections

Changing to i3 in GDM:

After configuring the basics of i3, I added the following lines to my .config/i3/config:

exec --no-startup-id feh --bg-scale ~/Pictures/fox.jpg # cute wallpaper
exec --no-startup-id nm-applet # network manager applet 
exec --no-startup-id tilda  #tilda terminal emulator

Tilda

Start tilda for the first time using dmenu ($mod+d, then type tilda) and configure it as you wish; my particular setup is the following:

 

 

and the result:

cheers!!!

qemu/kvm libvirt and trim with Fedora 25

Posted by Jens Kuehnel on July 15, 2017 01:15 PM

Hi,

after more than 10 years of using VMware Workstation (starting with VMware Workstation 5), I’m in the process of moving to KVM/libvirt, and I want to use qcow2 with trim support.

I’m using Fedora 25 with virt-manager to create my virtual machines. A lot of pages describe this very well, like Chris Irwin’s. But I ran into another problem.

To support trim you need to make sure you have at least machine type 2.1, but I want the latest version, 2.7. This is the default, so normally you don’t need to change it with virsh edit DOMAIN:

<type arch='x86_64' machine='pc-i440fx-2.7'>hvm</type>

One parameter that you need to change manually (and that cannot be done with virt-manager) tells qemu that discard/trim requests should be forwarded to the underlying image. It looks like this:

<driver name='qemu' type='qcow2' discard='unmap'/>
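Put together, a full disk element with that driver line might look something like this (a sketch; the image path and target device are placeholders, not from the original post):

```xml
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' discard='unmap'/>
  <source file='/var/lib/libvirt/images/guest.qcow2'/>
  <target dev='sda' bus='scsi'/>
</disk>
```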

All of this can be found on the Internet. A problem that I faced with RHEL 7 and others is that virt-manager creates the disk controller as an antique LSI/NCR controller, and RHEL 7 does not support it. To fix this you have to set this model on the SCSI controller:

<controller type='scsi' index='0' model='virtio-scsi'>

With this, RHEL 7, Windows 10 and FreeBSD 11 machines can be configured to trim their images themselves.

Migrating your Windows 10 from virtio to virtio-scsi is quite easy. Add a new virtio-scsi disk to the existing installation. Install the driver for virtio-scsi and reboot. Now your system supports this driver. After this reboot, shut the machine down again, change the disk type of all disks to virtio-scsi as above, and reboot again. Your system should start up with its virtio-scsi enabled disks.

To run the trim command and clean up unused space, run this in an Administrator PowerShell:

Optimize-Volume -DriveLetter c -ReTrim -Verbose

Rinse and repeat for all drive letters.

As always, no warranty that this does not break your system.

Google startup program .

Posted by mythcat on July 15, 2017 12:17 PM
This is a new way to start your work with Google.
This is a great idea from Google; maybe some teams will find this help useful.
The goals of the Google team in doing this are:
FEATURES:
  • GCP and Firebase Credits 
  • Office Hours 
  • 24/7 Support 
  • G Suite Online Training 
  • Advertising for Startups 
The credits come with:
SPARK PACKAGE Get $20,000 in credit for 1 year. Credit can be applied to all Google Cloud Platform and Firebase products.
SURGE PACKAGE Get $100,000 in credit for 1 year. Credit can be applied to all Google Cloud Platform and Firebase products.

oh-my-zsh in Fedora

Posted by Alberto Rodriguez (A.K.A bt0) on July 15, 2017 04:10 AM

Oh My Zsh is an AMAZING open source, community-driven framework for managing your zsh configuration.

Prerequisites

To use it on Fedora, we need to install zsh and some user utilities:

$ sudo dnf install zsh util-linux-user git

Installation

The next step is just to download and run the installation script install.sh:

$ sh -c "$(wget https://raw.githubusercontent.com/robbyrussell/oh-my-zsh/master/tools/install.sh -O -)"

Once the installation is finished you will be using the brand new oh-my-zsh 🙂

Plugins

Oh-my-zsh has a lot of useful plugins; they can be activated in ~/.zshrc. By default only the git plugin is enabled. In my case, I’m using the following plugins:

  • git
  • python
  • pyenv
  • dnf
  • fedora

Plugins in my ~/.zshrc

The complete list of available plugins is here.
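Enabling extra plugins is a one-line change in ~/.zshrc; a sketch of the selection listed above:

```shell
# ~/.zshrc: oh-my-zsh plugins to load (a zsh array, space separated)
plugins=(git python pyenv dnf fedora)
```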

Themes

Oh-my-zsh has a lot of themes; the examples are here. One of my favorites is the agnoster theme.

Agnoster theme

 

To configure the theme you also need to edit ~/.zshrc. I use a random selection of themes.

Random themes

Updating

Occasionally oh-my-zsh asks you to check for updates (this can be disabled in your ~/.zshrc), and if you want to do a manual update just run:

upgrade_oh_my_zsh

Uninstall

Oh-my-zsh is not for everyone, so if you want to uninstall it, just type:

uninstall_oh_my_zsh

And if you also want to uninstall zsh:

$ sudo dnf remove zsh

Special thanks to:

Robby Russell and the people of Planet Argon for the wonderful/amazing/awesome Oh-My-zsh

Fedora 26 server 64bit - tested VM.

Posted by mythcat on July 14, 2017 10:07 PM
I installed Fedora 26 the simple way, with the netinstall image (64-bit, 484 MB) from here.
I used the latest VirtualBox to test this Fedora 26 net image.
It took some time because the hardware used has no dedicated video card and only an i5 processor. The basic idea of this test was to see how the installation goes.
It's interesting to watch: the number of packages installed per unit of time, the startup steps for the base installation, and the work environment.
The later steps are more complex because it matters what you want to do with this Linux system. It depends on how much you want to adapt it to your hardware and whether you will make it a web server, an FTP/SFTP server, or a graphics or video rendering station.
The total installation time in VirtualBox was one hour and seven minutes. The resulting video was sped up for faster viewing (from 72 to 172 frames).
The focus was on the first steps of installing Fedora, not on setting up a specific Linux server.
I used these Linux commands under the root account to install and set up Fedora 26:
#dnf update 
#dnf upgrade
#dnf grouplist
#dnf grouplist -v
#dnf install @cinnamon-desktop
#dnf -y group install "Fedora Workstation"
#dnf install setroubleshoot
#sestatus
#sestatus -v
#getenforce
#dnf install clamtk
#echo "exec /usr/bin/cinnamon-session" >> ~/.xinitrc
#startx
Here is the recorded video of this test install: https://www.youtube.com/watch?v=PC_9uukubC4

Open Source Summit Japan

Posted by Brian "bex" Exelbierd on July 14, 2017 04:51 PM

Open Source Summit

Tokyo, Japan

31 May - 2 June 2017

Open Source Summit is the new name for LinuxCon/ContainerCon/etc. This was my first technical conference in Japan and possibly the first Open Source Summit ever. Future Open Source Summits will be held in North America and Europe later this year. This conference was apparently attended by over 1000 people; however, it featured a small exhibition floor and was split across two floors of a building in Tokyo, Japan. This split made the attendance feel lower to me.

Fedora was represented as part of a shared Red Hat communities booth. Our videos about Fedora Modularity and Diversity were playing in heavy rotation with a few other videos about Red Hat and other communities. I worked with local engineers to answer questions about Fedora and Modularity as they arose. We got some interest, however booth traffic in general was fairly light.

I think the primary reason for this was that other than the folks from SUSE, BlackDuck, and 1-2 others, every booth was focused on Automotive Linux. From a Fedora perspective, we don’t have much of an Automotive Linux story. We did talk a lot about Fedora Atomic, when people’s use cases made that appropriate. You could easily argue that automobiles are a special case of Internet of Things (IoT) which is well served by the OSTree technology built into Atomic Host.

A big take away for me was that we need to do a better job of figuring out how to grow our community in Japan. I had sent a general email to the ambassador list, but did not get a response from anyone in Japan. I did not follow up, like I should have, with direct mail to the ambassadors there, however it concerns me that email with Japan in the subject line was not answered. Assuming this conference remains on the list for next year, I will try to better engage with the local ambassador community. Anything we can do to use these kinds of conferences as leverage for growing our contributor base among working professionals is a good thing. This is especially true at a conference like this one where hardware drivers and other work may need to be done in Fedora to set it up for later inclusion in other Enterprise Linux distributions.

By sharing a booth with Red Hat we had no direct conference costs. Even the stickers we handed out were provided directly by Red Hat. This allowed us to participate in this conference without spending any Fedora public budget.

Swag!

OSCAL (Open Source Conference Albania)

Posted by Brian "bex" Exelbierd on July 14, 2017 04:15 PM

OSCAL (Open Source Conference Albania)

Tirana, Albania

13-14 May 2017

This was my first time going to Open Source Conference Albania (OSCAL). OSCAL is a two day event organized by the local hackerspace and its strong community. I was very impressed with the level of organization and quality on display. The conference was organized by 14 people with the help of 38 volunteers. 69 speakers presented multiple parallel tracks to about 300 attendees.

The vast majority of the attendees seemed to be young professionals or students. The presentations ranged from a presentation on the current concerns of the FSFE to a detailed discussion of the Rust programming language. Audience sizes varied a lot with the preference seeming to be for skill-oriented (and often introductory) content over theoretical or informational content. My views are based on my observations so I encourage you to check with the organizers for more detailed information.

I presented a talk titled Building Applications Doesn’t Mean Writing It All From Scratch. My goal was to talk about writing glue code to bind together larger open source projects to accomplish a goal. I used the example of the new Fedora Docs Infrastructure. I believe I was successful in conveying my message, however the audience was small.

This isn't me :) I didn’t have a picture of my talk, so I am using this one of Justin Flory instead.

I spent some time at the Fedora Booth and I encourage you to read the event report by the ambassadors directly. Many of the questions I heard were focused on the gaming that was on display or on how to use Fedora productively as a student. These are two audiences that are typically further out on the adoption curve than Fedora targets. They are great people, and we should think about the best way to serve these audiences. It seems that a SIG or other working group would allow those interested in these audiences to best organize and serve them. It would also feed nicely into the structure suggested by the new mission statement.

However, I wonder if we missed out on the opportunity to engage the professionals in the audience by highlighting gaming. As I wasn’t at the booth full-time, I look to the Ambassadors to determine this. I spoke to a small number of the professionals between sessions in the lecture halls. They are concerned with the speed of change in Fedora and whether the skills they have are transferable. They want to grow in ways that will enhance their careers and were eager to learn about how contributing to Fedora could help them improve their skills or gain new ones. These topics are important, and we should think about if and how to address them.

This event also showed that we need to think about talks that use Fedora as the example but speak directly to the needs of the audience. This may mean that our Ambassadors need to find the people in our community to go deliver the talks that the audience wants. This may also mean we need to recruit the right people to be at the booth to make sure we have a high likelihood of being able to continue the conversations that our talks set in motion. Focus seems to be the key here.

Overall, our representatives did a great job.

The Booth

News: Send files on WhatsApp.

Posted by mythcat on July 14, 2017 12:20 PM
The WhatsApp software lets users share multimedia content, but you can't send files directly to other users.
If you wanted to send unsupported file formats like .apk, .zip etc, you had to rename the file to a supported file format.
For example, if you wanted to send the APK for your favorite app, you’d rename it so that it ends with .txt. I tested this with the Whatsapp™ For PC extension from the Opera add-ons, and it works great.
I sent a document file (.doc) of approximately 5 MB without any interruption of the connection, and the transfer to the application was fluid.
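The rename trick is just a pair of mv commands; a minimal sketch (the file names are my own example):

```shell
# Work in a scratch directory
cd "$(mktemp -d)"
echo 'demo payload' > myapp.apk   # stand-in for a real APK

# Sender: give the file a supported extension before attaching it
mv myapp.apk myapp.apk.txt

# Recipient: restore the original extension after downloading
mv myapp.apk.txt myapp.apk
```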

Note:

You should also know that there’s a 100 MB limit on any attachment.
Executable files are not allowed (like .exe or .dll).
Meanwhile, the messaging app Telegram is still leading the pack when it comes to file size restrictions: it has supported 1.5 GB files since its launch.

All systems go

Posted by Fedora Infrastructure Status on July 14, 2017 12:13 PM
Service 'The Koji Buildsystem' now has status: good: Everything seems to be working.

What’s new in the Anaconda Installer for Fedora 26 ?

Posted by Fedora Magazine on July 14, 2017 08:00 AM

Fedora 26 is available now, providing a wide range of improvements across the entire operating system. Anaconda — the Fedora installer — has many new features and improvements implemented for Fedora 26. The most visible addition is the introduction of Blivet GUI, providing power users an alternate way to configure partitioning. Additionally, there are improvements to automated installation with kickstart, a range of networking improvements, better status reporting when your install is under way, and much more.

Enhanced storage configuration with Blivet GUI

The main highlight of Anaconda in Fedora 26 is the integration of Blivet GUI storage configuration tool into the installation environment. Previously, there were two options for storage configuration — automatic partitioning and manual partitioning. Automatic partitioning is mostly useful for really simple configurations, like installing Fedora on an empty hard drive or alongside of another existing operating system. The existing manual partitioning tool provides more control over partition layouts and size, enabling more complicated setups.

The previously-available manual partitioning tool is quite unique. Instead of creating all the storage components manually, the user just specifies future mountpoints and their properties. For example, you simply create two “mountpoints” for /home and / (root), specify properties like encryption or RAID and Anaconda properly configures all the necessary components below. This top-down model is really powerful and easy to use but might be too simple for some complicated storage setups.

This is where Blivet GUI can help — it is a storage configuration tool that works in the standard way. If you want an LVM storage on top of RAID, you need to create it manually from the building blocks — from bottom up. With a good knowledge of custom partitioning layouts, complicated custom storage setups are easily created with BlivetGUI.

Using Blivet GUI in Anaconda

Blivet GUI has been available from Fedora repositories as a standalone desktop application since Fedora 21 and now comes also to Anaconda as a third option for storage configuration. Simply choose Advanced Custom (Blivet-GUI) from the Installation Destination window in Anaconda.

Installation Destination window in Anaconda

Installation Destination window in Anaconda

Blivet GUI has full integration into the Anaconda installation workflow. Only the selected disks in the Installation Destination window show in BlivetGUI. Changes remain unwritten to the disks until you leave the window and choose Begin Installation. Additionally, you can always go back and use one of the other partitioning methods. However, Blivet GUI discards changes if you switch to a different partitioning method.

Storage configuration using Blivet GUI in Anaconda

Storage configuration using Blivet GUI in Anaconda

Adding a new device using Blivet GUI in Anaconda

Adding a new device using Blivet GUI in Anaconda

Automated install (kickstart) improvements

Kickstart is the configuration file format for automation of the installation process. A Kickstart file can configure all options available in the graphical and text interfaces and much more. View the Kickstart documentation for more information about Kickstart and how to use it.

Support for --nohome, --noswap and --noboot options in auto partitioning

When you don’t want to specify your partitions, you can let anaconda do that for you with the autopart command. It will automatically create a root partition, a swap partition and a boot partition. With Fedora Workstation, the installer creates a /home partition on large enough drives. To make the auto partitioning more flexible, anaconda now supports the --nohome, --noswap and --noboot options that disable the creation of the given partition.
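In a kickstart file this might look like the following sketch (the option names are from this article; the LVM type choice is just an example):

```
# Automatic partitioning without separate /home and swap partitions
autopart --type=lvm --nohome --noswap
```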

Strict validation of the kickstart file with inst.ksstrict

It is not uncommon for sysadmins to have complicated kickstart files, and sometimes kickstart files have errors. At the beginning of the installation, anaconda checks the kickstart file and produces errors and warnings. Errors result in the termination of the installation. Warnings are recorded in the log and the installation continues.

To ensure a kickstart file doesn’t produce any warnings, enable the new strict validation with the boot option inst.ksstrict. This treats warnings in kickstart in the same way as errors.
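On the installer's kernel command line this could look something like the following (the kickstart URL is a placeholder):

```
inst.ks=http://example.com/ks.cfg inst.ksstrict
```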

Snapshot support

Sometimes it is helpful to save an old installation or have a backup of a freshly installed system for recovery. For these situations the snapshot kickstart command is now available. View the pykickstart documentation for full usage instructions for this new command. This feature is currently supported on LVM thin pools only. To request support for other partition types, please file an RFE bug on bugzilla.
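A hypothetical kickstart usage might look like this (the volume group and snapshot names are placeholders; check the pykickstart documentation for the exact syntax):

```
# Snapshot the freshly installed root thin volume for later recovery
snapshot fedora/root --name=fresh-install --when=post-install
```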

Networking improvements

Networking is a critical part of Anaconda, as many installations are partially or fully network based. Anaconda also supports installation to network attached storage devices such as iSCSI, FCoE and multipath. For this reason Anaconda needs to support complex networking setups, not only to start the installation but also to correctly set up networking for the installed system.

For this Fedora cycle we have mostly bug fixes, adaptation to the NetworkManager rebase, enhancements to the network kickstart test suite to discover issues caused by NM changes, and some changes in the components we use in general. We also added support for various, mostly enterprise-driven, features:

  • Support for IPoIB (IP over InfiniBand) devices in the TUI.
  • Support for setting up a bridge device at an early stage of installation (e.g. to fetch a kickstart).
  • A new inst.waitfornet boot option for waiting for connectivity at a later stage of installation, for cases where the default waiting for DHCP configuration is not sufficient due to a special network environment (DHCP server) setup.

Other improvements

Anaconda and Pykickstart documentation on Read the Docs

The Anaconda and Pykickstart documentation have a new home on ReadTheDocs:

The Pykickstart documentation now also contains a full, detailed kickstart command reference, both for Fedora and RHEL.

Progress reporting for all installation phases

Do you also hate it when Anaconda says “processing post installation setup tasks” for many minutes (or even tens of minutes!) without any indication what’s actually going on and how much longer it might take?

The cause of the previous lack of status reporting was simple: during the final parts of the RPM installation transaction, RPM post & posttrans scriptlets are running, and that can take a significant amount of time. And until recently there was no support from RPM and DNF for progress reporting from this installation phase.

But this has been rectified: RPM & DNF now provide the necessary progress reporting, so Anaconda can finally report what’s actually happening during the full installation run. 🙂

Run Initial Setup TUI on all usable consoles

Initial Setup is a utility to configure a freshly installed system on the first start. Initial Setup provides both graphical and text-mode interfaces and is basically just a launcher for the configuration screens normally provided by Anaconda.

During a “normal” installation everything is configured in Anaconda and Initial Setup does not run. However, the situation is different for the various ARM boards supported by Fedora. Here the installation step is generally skipped and users boot from a Fedora image on an SD card. In this scenario Initial Setup is a critical component, enabling users to customize the pre-made system image as needed.

The Initial Setup text interface (TUI) is generally used on ARM systems. During the Fedora 25 time frame two nasty issues showed up:

  • some ARM boards have both serial and graphical consoles, with no easy way to detect which one the user is using
  • some ARM board consoles appear functional, but throw errors when Initial Setup tries to run the TUI on them

To solve these issues, the Initial Setup TUI is run on all consoles that appear to be usable. This solves the first issue – the TUI will run on both the serial and graphical consoles. It also solves the second issue, as consoles that fail to work as expected are simply skipped.

Built in help is now also available for the TUI

Previously, only the graphical installation mode featured help. Now help is also accessible in the TUI, from every screen that offers the ‘h to help’ option.

Help displayed in the TUI

Help displayed in the TUI

New log-capture script

The new log-capture script is an addition from community contributor Pat Riehecky. This new script makes it easy to gather many installation relevant log files into a tarball, which is easily transferred outside of the installation environment for detailed analysis.

The envisioned use case is running the log-capture script in kickstart %onerror scriptlets.

 

Structured installation tasks

Anaconda does a lot of things during the installation phase (configures storage, installs packages, creates users & groups, etc.). To make the installation phase easier to monitor and to debug any issues, the individual installation tasks are now distinct units (e.g. user creation, user group creation, root user configuration) that can be part of task groups (e.g. user & group configuration).

The end result: it is now easy to see in the logs how long each task took to execute, which task is currently running, and how many tasks still need to be executed before the installation is done.

User interaction config file

Anaconda supports the new user interaction config file, a special configuration file that records the screens and (optionally) the settings manipulated by the user.

The main idea behind the user interaction config file is that a user generally comes into contact with multiple separate applications (Anaconda, GNOME Initial Setup, Initial Setup, a hypothetical language selector on a live CD, etc.) during an installation run, and it would make sense to present each configuration option (say, language or timezone selection) only once, not multiple times. This should help to reduce the number of screens a user needs to click through, making the installation faster.

Anaconda records visited screens and hides screens marked as visited in an existing user interaction config file. Once other pre- and post-installation tools (such as GNOME Initial Setup) start picking up support, it should be easy to spot, as users should no longer be asked to configure the same setting twice. And we might not have to wait long: a Fedora 27 change proposal for adding GNOME Initial Setup support already exists.