Fedora People

Flock interviews: User Feedback on Modularity

Posted by Fedora Community Blog on August 23, 2017 04:33 PM

As you probably know, there is an annual convention called Flock. This year’s is happening in Hyannis, MA, on Cape Cod, and will begin the morning of Tuesday, August 29. Sessions will continue each day until midday on Friday, September 1.

I have asked all of the session leaders from Flock some questions.

And now you are about to read one of the responses.

User Feedback on Modularity by Mary Clarke

Briefly describe your session:

I will actually have six one-hour sessions during the course of Flock. They will be in the hour before the lunch break and the hour after the lunch break on Tuesday, Wednesday, and Thursday. The reason we are doing this is that my sessions are not typical talks; in fact, they are not talks at all. You could describe them as focus groups intended to obtain end-user feedback. These sessions are intended to be highly interactive: we will demo functionality and ask attendees to respond with their thoughts. I have provided more information through my answers below.

What does your talk focus on?

My sessions focus on demoing functionality related to a new prototype called Boltron.

What is the goal of your session at Flock?

I am interested in learning attendees’ initial impressions of the functionality Boltron provides, how they think they could leverage that functionality, and what changes, if any, they would like to see in it.

What does it affect in the project?

The feedback from my sessions will provide the engineers working on Boltron with some very valuable insights into what they got right and what still needs to be tweaked in order to meet users’ needs. This information will be prioritized and worked on by the engineers for the next release of Boltron.

Without giving too much away, what can attendees expect to learn or do in your session?

Attendees can expect to see a demo of Boltron functionality and/or download Boltron and walk through that functionality themselves. Then, they will be asked to participate in a discussion about the functionality as a means to obtain their opinions on what is currently implemented and what they feel should change about that implementation.

Who should attend?

Anyone who currently installs or updates RPMs for their organization.

What do you do in Fedora/how long have you been involved in the project?

I am a UX Designer and have only been involved in Fedora for about a year.

What attracts you to this type of work or part of the project?

I have built a career out of my passion for helping teams build products or systems that are easy to use and provide end users with exactly what they are looking for.

 

The post Flock interviews: User Feedback on Modularity appeared first on Fedora Community Blog.

Running OpenShift Origin built from source

Posted by Adam Young on August 23, 2017 02:52 PM

Kubernetes is moving from Third Party Resources to the Aggregated API Server.  In order to work with this and continue to deploy on OpenShift Origin, we need to move from working with the shipped and stable version that is in Fedora 26 to the development version in git.  Here are my notes to get it up and running.

Process

It took a couple of tries to realize that the Go build process needs a fairly bulky virtual machine. I ended up using one that has 8 GB of RAM and a 50 GB disk. In order to minimize churn, I also went with a CentOS 7 deployment.

Once I had a running VM here are the configuration changes I had to make.

Use nmcli to bring eth0 up, and set ONBOOT so it comes up at boot. This can also be done by editing the network config files or using older tools.
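A minimal sketch of those two steps, assuming the connection is also named eth0 (adjust for your interface):

```shell
# Bring the connection up now, and enable it at boot (the nmcli
# equivalent of setting ONBOOT=yes in the ifcfg file):
nmcli connection up eth0
nmcli connection modify eth0 connection.autoconnect yes
```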

yum update -y
yum groupinstall "Development Tools"
yum install -y origin-clients

Ensure that the Docker daemon is running with the argument --insecure-registry 172.30.0.0/16 by editing the file /etc/sysconfig/docker. My OPTIONS line looks like this:

OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false --insecure-registry 172.30.0.0/16'
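A small hedged helper (the function name is mine, not from the post) to confirm the flag is actually present before bouncing the daemon:

```shell
# Check that a Docker sysconfig file carries the insecure-registry flag.
# The `--` keeps grep from treating the pattern as an option.
has_insecure_registry() {
  grep -q -- '--insecure-registry 172.30.0.0/16' "$1"
}

# Typical use against the real file, restarting Docker so the flag
# takes effect:
#   has_insecure_registry /etc/sysconfig/docker && sudo systemctl restart docker
```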

Followed the directions from here in order to set up the development environment.

cd $GOPATH/src/github.com/openshift/origin
hack/env hack/build-base-images.sh
OS_BUILD_ENV_PRESERVE=_output/local/releases hack/env make release

Note that the KubeVirt code I want to run on top of this requires a later version of Go, so I upgraded to go1.8.3 linux/amd64 via the tarball install method.
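The tarball install method looks roughly like this (the URL and paths are the conventional ones for Go 1.8.3; double-check them against golang.org before use):

```shell
# Fetch Go 1.8.3, unpack under /usr/local, and put it on PATH:
curl -LO https://storage.googleapis.com/golang/go1.8.3.linux-amd64.tar.gz
sudo tar -C /usr/local -xzf go1.8.3.linux-amd64.tar.gz
export PATH=$PATH:/usr/local/go/bin
go version   # should now report go1.8.3 linux/amd64
```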

The hash that gets generated by the build depends on when you run it. To see the images, run:

docker images

I expand the terminal to full screen because there are lots of columns of data. Here is a subset:

REPOSITORY TAG IMAGE ID CREATED SIZE
openshift/hello-openshift 413eb73 911092241b5a 35 hours ago 5.84 MB
openshift/hello-openshift latest 911092241b5a 35 hours ago 5.84 MB
openshift/openvswitch 413eb73 c53aae019d81 35 hours ago 1.241 GB
openshift/openvswitch latest c53aae019d81 35 hours ago 1.241 GB
openshift/node 413eb73 af6135fc50c9 35 hours ago 1.239 GB
openshift/node latest af6135fc50c9 35 hours ago 1.239 GB

The tag is the second column. This is what I use in order to install. I don’t use “latest”, as that changes over time, and it might accidentally succeed using a remote image when the local build failed.
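Since the tag changes with every build, it can also be scripted out rather than copied by hand. This is a sketch of mine (not from the post), run here against a captured sample instead of the live command:

```shell
# Pick the first non-"latest" tag for openshift/node from `docker images`
# style output; in real use, pipe `docker images` in instead of $sample.
sample='openshift/node 413eb73 af6135fc50c9 35 hours ago 1.239 GB
openshift/node latest af6135fc50c9 35 hours ago 1.239 GB'
tag=$(printf '%s\n' "$sample" | awk '$1 == "openshift/node" && $2 != "latest" {print $2; exit}')
echo "$tag"   # -> 413eb73
```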

I want to be able to edit configuration values. I also want the etcd store to persist across reboots. Thus:

sudo mkdir /var/lib/origin/etcd
sudo chown ayoung:ayoung /var/lib/origin/etcd

And then my final command line to bring up the cluster is:

oc cluster up --use-existing-config --loglevel=5 --version=413eb73 --host-data-dir=/var/lib/origin/etcd/ 2>&1 | tee /tmp/oc.log

Notes:

Below are some of my troubleshooting notes. I am going to leave them in here so they show up in future searches for people who have the same problems. They are rough, and you don’t need to read them.

hack/env make release errored

[WARNING] Copying _output/local/releases from the container failed!
[WARNING] Error response from daemon: lstat /var/lib/docker/devicemapper/mnt/fb199307b2f95649066c42f55e5487c66eb3421e5407c8bd6d2f0a7058bc8cd5/rootfs/go/src/github.com/openshift/origin/_output/local/releases: no such file or directory

Tried with OS_BUILD_ENV_PRESERVE=_output/local but no difference.

Should have been
OS_BUILD_ENV_PRESERVE=_output/local/releases hack/env make release

This did not work (basename error)
export PATH="${PATH}:$( source hack/lib/init.sh; echo "${OS_OUTPUT_BINPATH}/$( os::util::host_platform )/" )"

But I was able to do
export PATH=$PATH:$PWD/_output/local/bin/linux/amd64/

and then

oc cluster up --version=latest

failed due to docker error
-- Checking Docker daemon configuration ... FAIL
Error: did not detect an --insecure-registry argument on the Docker daemon
Solution:

Used https://wiki.centos.org/SpecialInterestGroup/PaaS/OpenShift-Quickstart to fix it.

oc seems to be running OK now, but it is not using my git commit.

Told to run:

Then

oc cluster up --version=8d96d48

GSoC 2017 - Mentor Report from 389 Project

Posted by William Brown on August 23, 2017 02:00 PM

GSoC 2017 - Mentor Report from 389 Project

This year I have had the pleasure of being a mentor for the Google Summer of Code program, as part of the Fedora Project organisation. I was representing the 389 Directory Server Project and offered students the opportunity to work on our command line tools written in Python.

Applications

From the start we had a large number of really talented students apply to the project. One of the hardest parts of the process was choosing a student, given that I wanted to mentor all of them. Sadly I only have so many hours in the day, so we chose Ilias, a student from Greece. What really stood out was his interest in learning about the project, and his desire to really be part of the community after the project concluded.

The project

The project was very deliberately “loose” in its specification. Rather than giving Ilias a fixed goal of “you will implement X, Y and Z”, I chose to set a “broad and vague” task. Initially I asked him to investigate a single area of the code (the MemberOf plugin). As he investigated this, he started to learn more about the server, ask questions, and open doors for himself to the next tasks of the project. As these smaller questions and self discoveries stacked up, I found myself watching Ilias start to become a really complete developer, who could be called a true part of our community.

Ilias’ work was exceptional, and he has documented it in his final report here.

Since his work is complete, he is now free to work on any task that takes his interest, and he has picked a good one! He has now started to dive deep into the server internals, looking at part of our backend internals and how we dump databases from id2entry to various output formats.

What next?

I will be participating next year. Sadly, I think the Python project opportunities may be more limited, as we have to finish many of these tasks to release our new CLI toolset. This is almost a shame, as the Python components are a great place to start: they ease a new contributor into the broader concepts of LDAP and the project structure as a whole.

Next year I really want to give this opportunity to an under-represented group in tech (women, people of color, etc.). I personally have been really inspired by Noriko and I hope to have the opportunity to pass on her lessons to another aspiring student. We need more engineers like her in the world, and I want to help create that future.

Advice for future mentors

Mentoring is not for everyone. It’s not a task where you can just send a couple of emails a day and be done.

Mentoring is a process that requires engagement with the student, and communication and the relationship are key to this. What worked well was meeting early in the project and working out what communication worked best for us. We found that email questions and responses worked well (given we are on nearly opposite sides of the Earth), along with IRC conversations to help clear up any other questions. It would not be uncommon for me to spend at least 1 or 2 hours a day working through emails from Ilias and discussions on IRC.

A really important aspect of this communication is how you do it. You have to balance positive communication and encouragement with criticism that is constructive and helpful. Empathy is a super important part of this equation.

My number one piece of advice would be that you need to create an environment where questions are encouraged and welcome. You can never be dismissive of questions. If ever you dismiss a question as “silly” or “dumb”, you will hinder a student from wanting to ask more questions. If you can’t answer the question immediately, send a response saying “hey I know this is important, but I’m really busy, I’ll answer you as soon as I can”.

Over time you can use these questions to help teach lessons for the student to make their own discoveries. For example, when Ilias would ask how something worked, I would send my response structured in the way I approached the problem. I would send back links to code, my thoughts, and how I arrived at the conclusion. This not only answered the question but gave a subtle lesson in how to research our codebase to arrive at your own solutions. After a few of these emails, I’m sure that Ilias has now become self-sufficient in his research of the code base.

Another valuable skill is that over time you can help build confidence through these questions. To start with, Ilias would ask “how to implement” something, and I would answer. Over time, he would start to provide ideas on how to implement a solution, and I would say “X is the right one”. As time went on I started to answer his questions with “What do you think is the right solution, and why?”. These exchanges and justifications have (I hope) helped him become more confident in his ideas, the presentation of them, and the justification of his solutions. It has led to this excellent exchange on our mailing lists, where Ilias is discussing the solutions to a problem with the broader community, and working toward a really great answer.

Final thoughts

This has been a great experience for myself and Ilias, and I really look forward to helping another student next year. I’m sure that Ilias will go on to do great things, and I’m happy to have been part of his journey.

Flock interviews: Multi-Arch Container Layered Image Build System

Posted by Fedora Community Blog on August 23, 2017 01:43 PM

As you probably know, there is an annual convention called Flock. This year’s is happening in Hyannis, MA, on Cape Cod, and will begin the morning of Tuesday, August 29. Sessions will continue each day until midday on Friday, September 1.

I have asked all of the session leaders from Flock some questions.

And now you are about to read one of the responses.

Multi-Arch Container Layered Image Build System by Adam Miller

What does your talk focus on?

My talk will focus on the Fedora Layered Image Build System (FLIBS), the challenges multi-arch has brought to the container ecosystem, how we will integrate FLIBS with other Fedora initiatives, and what this all means for Fedora users and contributors.

Without giving too much away, what can attendees expect to learn or do in your session?

They will learn how we will deliver multi-arch containers to our users from start to finish.

Who should attend?

Anyone interested in container technologies on architectures other than just x86_64 or interested in the future of building container content in Fedora in general. I welcome users and contributors alike.

What is the goal of your session at Flock?

The goal of this session is to educate Fedora users and contributors about our initiative to build Fedora content using container technologies, with respect to hardware architectures beyond just x86_64. We will also walk through the design of the system that has been built, as well as discuss future plans to enable contributors to deliver content to users more rapidly and with more flexibility.

What does it affect in the project?

Fedora Layered Image Build System (FLIBS) is driven primarily by the Fedora Atomic Working Group in order to provide a full-stack “containerized” Fedora from bare metal to application runtime, using container technologies. The Atomic Host being the operating system and lowest level of the stack, and the container images to be used at runtime being further up the stack. Our goal is to offer Fedora building blocks for every step of the way.

This also touches on Fedora Modularity, Factory 2.0, and Fedora CI in various ways: there are plans to allow Modules to optionally be shipped as containers in the future, we’re going to leverage features of Factory 2.0 in order to keep the content delivered to users constantly up to date, and we will be integrating with the Fedora CI effort in order to ensure the quality of the content we want to ship more rapidly before actually doing so.

What do you do in Fedora/how long have you been involved in the project?

I have been a contributing member of the Fedora Project since 2008. I’m a Packaging Mentor, a Proven Packager, a member of Fedora Release Engineering, a member of the Fedora Atomic Working Group, and an elected member of the Fedora Engineering Steering Committee since the Fedora 24 Cycle. Also, I’ve been on the Fedora Engineering Team at Red Hat since April 2015.

What attracts you to this type of work or part of the project?

I find container technologies fascinating and I believe them to be the future of how we manage infrastructure. As such, I would like to help Fedora work towards the goal of continuing to be leading edge, and to participate heavily in delivering those technologies to our users.

The post Flock interviews: Multi-Arch Container Layered Image Build System appeared first on Fedora Community Blog.

Spend until you're secure

Posted by Josh Bressers on August 23, 2017 12:20 PM
I was watching a few Twitter conversations about purchasing security last week and had yet another conversation about security ROI. This has me thinking about what we spend money on. In many industries we can spend our way out of problems, not all problems, but a lot of problems. With security if I gave you a blank check and said "fix it", you couldn't. Our problem isn't money, it's more fundamental than that.

Spend it like you got it
First let's think about how some problems can be solved with money. If you need more electricity capacity, or more help during a busy time, or more computing power, it's really easy to add capacity. If you need more compute power, you can either buy more computers or just spend $2.15 in the cloud. If you need to dig a big hole, for a publicity stunt on Black Friday, you just pay someone to dig a big hole. It's not that hard.

This doesn't always work though, if you're building a new website, you probably can't buy your way to success. If a project like this falls behind it can be very difficult to catch back up. You can however track progress which I would say is at least a reasonable alternative. You can move development to another group or hire a new consultant if the old one isn't living up to expectations.

More Security
What if we need "more" security? How can we buy our way into more security for our organization? I'd start by asking the question: can we show any actual value for our current security investment? If you stopped spending money on security tomorrow, do you know what the results would be? If you stopped buying toilet paper for your company tomorrow you can probably understand what will happen (if you have a good facilities department I bet they already know the answer to this).

This is a huge problem in many organizations. If you don't know what would happen if you lowered or increased your security spending you're basically doing voodoo security. You can imagine many projects and processes as having a series of inputs that can be adjusted. Things like money, time, people, computers, the list could go on. You can control these variables and have direct outcomes on the project. More people could mean you can spend less money on contractors, more computers could mean less time spent on rendering or compiling. Ideally you have a way to find the optimal levels for each of these variables resulting in not only a high return on investment, but also happier workers as they can see the results of their efforts.

We can't do this with security today because security is too broad. We often don't know what would happen if we add more staff, or more technology.

Fundamental fundamentals
So this brings us to why we can't spend our way to security. I would argue there are two real problems here. The first being "security" isn't a thing. We pretend security is an industry that means something but it's really a lot of smaller things we've clumped together in such a way that ensures we can only fail. I see security teams claim to own anything that has the word security attached to it. They claim ownership of projects and ideas, but then they don't actually take any actions because they're too busy or lack the skills to do the work. Just because you know how to do secure development doesn't automatically make you an expert at network security. If you're great at network security it doesn't mean you know anything about physical security. Security is a lot of little things, we have to start to understand what those are and how to push responsibility to respective groups. Having a special application security team that's not part of development doesn't work. You need all development teams doing things securely.

The second problem is we don't measure what we do. How many security teams tell IT they have to follow a giant list of security rules, but they have no idea what would happen if one or more of those rules were rolled back? Remember when everyone insisted we needed to use complex passwords? Now that's considered bad advice and we shouldn't make people change their passwords often. It's also a bad idea to insist they use a variety of special characters now. How many millions have been wasted on stupid password rules? The fact that we changed the rules without any fanfare means there was no actual science behind the rules in the first place. If we even tried to measure this I suspect we would have known YEARS ago that it was a terrible idea. Instead we just kept doing voodoo security. How many more of our rules do you think will end up being rolled back in the near future because they don't actually make sense?

If you're in charge of a security program the first bit of advice I'd give out is to look at everything you own and get rid of whatever you can. Your job isn't to do everything, figure out what you have to do, then do it well. One project well done is far better than 12 half finished. The next thing you need to do is figure out how much whatever you do costs, and how much benefit it creates. If you can't figure out the benefit, you can probably stop doing it today. If it costs more than it saves, you can stop that too. We must have a razor focus if we're to understand what our real problems are. Once we understand the problems we can start to solve them.

Flock interviews: Get Together with Local Fedorans: A UX Design Case

Posted by Fedora Community Blog on August 23, 2017 10:40 AM

As you probably know, there is an annual convention called Flock. This year’s is happening in Hyannis, MA, on Cape Cod, and will begin the morning of Tuesday, August 29. Sessions will continue each day until midday on Friday, September 1.

I have asked all of the session leaders from Flock some questions.

And now you are about to read one of the responses.

 

Get Together with Local Fedorans: A UX Design Case by Suzanne Hillman

What is the goal of your session at Flock?

To explain a UX process that I used in an Outreachy internship in a way that others within the Fedora community can use for their own projects.

Without giving too much away, what can attendees expect to learn or do in your session?

Participants can expect to learn a general UX process that they can use on their own projects. I intend to provide a cheatsheet for them to refer to, and hope that people will ask questions relating to their specific projects.

Who should attend?

Anyone interested in making their software easier for others to use.

What does your talk focus on?

UX, specifically that of Regional Fedora Hubs (part of the in-process Fedora Hubs project).

What does it affect in the project?

The user experience of Fedora projects.

What do you do in Fedora/how long have you been involved in the project?

I’m part of the Fedora-Hubs team, although I’ve not been strongly involved lately due to job hunting. I was part of it for about 6 months, although I also worked at Red Hat from 2003 to 2010 in QE, and was therefore tangentially involved with Fedora during my time there.

What attracts you to this type of work or part of the project?

I’m very interested in making open source software easier to use. I think it has a huge amount of potential, but it’s not user-friendly and that turns people off.

 

The post Flock interviews: Get Together with Local Fedorans: A UX Design Case appeared first on Fedora Community Blog.

Designing Fedora Badges @ Flock 2017

Posted by riecatnor on August 23, 2017 09:30 AM

Exciting news: I will be attending Flock 2017 in Hyannis, MA, this year! I will be holding a Fedora Badges Do Session with Masha Leonova. The session will be held on the first day of the conference, Tuesday, August 29th, at 1:30PM in 4-Centerville A + B.

We will be starting from the top: a short overview, then making sure everyone has Inkscape installed and ready to go, downloading Badges design resources, setting them up, and testing everything to get ready to design. The process for designing Fedora Badges has changed slightly as we welcomed a transition to Pagure earlier this year. Now we have easy uploading for files, and tags – all in a shiny new format!

Next we will go step by step through the process of designing a Fedora Badge. We will highlight and detail important points such as research, creating your designs to follow Badge aesthetics, and exporting your final PNG at the correct size. Then we will get to work actually designing. Masha, myself, and any other experienced badge artists attending will help you through every step of the process, all the way from picking an issue to uploading the final artwork. Come have fun learning, using your creative side, and of course – earn the Badger Padawan badge!

A side note for interested developers: there are a lot of badges with approved artwork ready to be pushed to the website. Although we will not be specifically instructing on creating YAML files for awarding badges, I would love to host any developers during this session who want to help out or jump in so we can get these pushed. For all the weathered Fedora Badges developers, it would be great for you to revisit some of the issues so we can update their status. Are they now possible, or will they never be possible? Let us know, so we can make some art or close some issues!

My excitement to see my Fedora friends, to learn, and to put in focused work on Fedora Badges is growing as we get closer to the date. Looking forward to seeing you all soon 🙂

Relevant links/resources for the Designing Fedora Badges session:

Fedora Badges

Fedora Badges on Pagure

Fedora Badges Resources

Fedora Badges Tutorial Blog Post by Masha Leonova


The slide deck presented at FrOSCon 2017. It is a small update...

Posted by Fabian Deutsch on August 23, 2017 08:27 AM


The slide deck presented at FrOSCon 2017. It is a small update on the KubeVirt front.

Take a look if you want to understand the motivation, and key pillars of KubeVirt.

3 ways to trick out your terminal emulator

Posted by Fedora Magazine on August 23, 2017 08:00 AM

The command line is one of the most well-loved parts of a Linux distribution. Maybe that’s not just because of what you can do with it, but how you can use it. Terminal windows are notorious for customization, and there are several different ways you can make your terminal your own. You can change the theme color, adjust transparency, use different fonts, or even switch terminal emulators. This article shows you three ways to customize your terminal emulator in Fedora.

Change your fonts

Some people are fine with the default fonts, but what if you have a favorite or want to try something new? There are a lot of monospaced fonts packaged in Fedora. Check out this round-up of six fonts you can try out and customize in your terminal to make it your own.

6 great monospaced fonts for code and terminal in Fedora


Power up with powerline

Ever seen one of those cool status lines in someone’s terminal? Powerline is one tool that makes it possible. You can have helpful status markers, like seeing which git branch you’re in or whether you’re inside a Python virtual environment. There are also plugins available for Vim and tmux. Learn how to enable it in Fedora with this quick how-to.

Add power to your terminal with powerline

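If you want to try it right away, the basic bash setup on Fedora is only a couple of lines (the activation path is the one the Fedora powerline package ships; see the linked article for full details and the tmux/Vim plugins):

```shell
# Install powerline and enable it for new bash sessions:
sudo dnf install powerline
echo '. /usr/share/powerline/bash/powerline.sh' >> ~/.bashrc
```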

Try the Tilix terminal emulator

If you’ve been in the same terminal for a while and want to try something new, why not look at Tilix? Tilix is a tiling terminal emulator that lets you split your terminal window in different ways at once. It also follows the GNOME Human Interface Guidelines to be as user-friendly as possible. Learn how to get started with Tilix in Fedora 26 in this article.

Try Tilix — a new terminal emulator in Fedora

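Tilix is packaged in Fedora, so trying it out is a one-liner (package name assumed to match the project name, as it does in the Fedora repositories):

```shell
# Install Tilix, then launch it from the overview or by running `tilix`:
sudo dnf install tilix
```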

Tips and tricks?

If you’ve never thought about customizing your terminal emulator before, hopefully these tips will let you start playing around with your own unique configurations! Is there something you think we missed or is there a cool tweak you know of you want to share? Let us know how you’re tricking out your terminal in the comments below.


Photo by Ilya Pavlov on Unsplash.

Flock interviews: Designing Fedora Badges

Posted by Fedora Community Blog on August 23, 2017 06:35 AM

As you probably know, there is an annual convention called Flock. This year’s is happening in Hyannis, MA, on Cape Cod, and will begin the morning of Tuesday, August 29. Sessions will continue each day until midday on Friday, September 1.

I have asked all of the session leaders from Flock some questions.

And now you are about to read one of the responses.

Designing Fedora Badges by Marie Nordin

What is the goal of your session at Flock?

The goal of my session at Flock is to educate and guide attendees through the process of creating Fedora Badges artwork. I would like to help Fedorans who are interested in being artistically creative find a fun and useful outlet with active mentors to guide them. Another goal of my session is to empower contributors to have the ability to create badge artwork for their events and projects within Fedora.

What does it affect in the project?

Fedora Badges is a fun motivator for people to become more involved with the Fedora project. It reaches most areas of the project and it has the ability to create a large amount of contributions. My hope is that my session will create more Fedorans who are capable of making Fedora Badge artwork. This would allow for more badges to be published and in turn, create more contributions and participation throughout the entire project.

What does your talk focus on?

The Fedora Badges Design session focuses on creating badge artwork and showing that any person can do that. I will guide attendees step by step through the process, helping them overcome any obstacles, giving them all the necessary resources, and showing them how to keep contributing in the future.

Without giving too much away, what can attendees expect to learn or do in your session?

Attendees of my session can expect to learn exactly how to create artwork for Fedora Badges using Inkscape, Pagure, and badges design resources. They will learn techniques to overcome challenges in the creation process, how to improve their designs building from artwork that already exists, and how/where to find guidance for their designs.

Who should attend?

Anyone who wants to:

  • Be creative
  • Learn how to use Inkscape (or sharpen their skills)
  • Learn about graphic design
  • Contribute to a fun and artistic part of Fedora

What do you do in Fedora/how long have you been involved in the project?

I help to run and guide the Fedora Badges project and I contribute to the Fedora Design team as my extra time allows. I first got involved with the Fedora Project with an internship in 2013 through Outreachy. I completed the three month internship under the guidance of my mentor, Mizmo. During that time I designed over 100 badges and a style guide for badge creation. I have continued to stay involved by making artwork, mentoring newcomers, triaging tickets, and speaking/teaching about Fedora Badges.

What attracts you to this type of work or part of the project?

As an artist and graphic designer, I am naturally very attracted to the Fedora Badges project. I enjoy the creativity and collaboration that the badges project allows. I became involved with badges because of the design, and I have stayed because of the community. Each time I have been privileged enough to attend Flock, I find more and more people who are excited and happy about what the Fedora Badges project does for this community. This is a great motivator to keep doing what I am doing to make Fedora Badges a success!

The post Flock interviews: Designing Fedora Badges appeared first on Fedora Community Blog.

New badge: FLISOL 2017 Organizer !

Posted by Fedora Badges on August 23, 2017 04:08 AM
FLISOL 2017 Organizer: You helped organize the Fedora booth for FLISOL 2017! Thanks for your help.

New badge: FLISOL 2017 Attendee !

Posted by Fedora Badges on August 23, 2017 04:08 AM
FLISOL 2017 Attendee: You visited the Fedora booth at FLISOL 2017!

New badge: Stroopwafel (Cookie VII) !

Posted by Fedora Badges on August 23, 2017 03:53 AM
Stroopwafel (Cookie VII): A legend among Fedorans, a true person of the community... you have earned the sacred stroopwafel.

New badge: DotNet SIG Member !

Posted by Fedora Badges on August 23, 2017 03:38 AM
DotNet SIG Member: You're a proud member of the DotNet Special Interest Group.

modulemd 1.3.0

Posted by Petr Šabata on August 22, 2017 10:11 PM

I almost forgot! Last week I released modulemd-1.3.0, the module metadata format specification and its reference Python implementation.

This release defines just three new fields, but all of them are pretty important.

  • context, which carries a short hash generated from the module’s name, stream, version and its runtime dependencies. This is a prerequisite for a concept we call “Module Stream Expansion” and serves to uniquely identify a particular module build with its expanded dependencies, differentiating it from any other possible builds with the same name, stream and version. This might sound confusing, but the basic idea is that, in a future version, we would like to allow module maintainers to specify dependencies like “any platform” or “any platform but f27 with any python 3 on top”. The Module Build Service would then build all variants, each with the same name, stream and version but with unique expanded dependencies and a unique context flag. System management utilities would then select the variant that is installable on the particular system, based on modules already deployed, default configuration, or any additional information provided for the transaction. See the linked thread for details. context is included in the module’s identifier string, too. Did I mention we finally have a solid naming scheme proposal?
  • arch, which defines the specific hardware architecture the main components (as opposed to those pulled in via the multilib mechanism) of this module artifact are compatible with. This differs from the current common concept of basearch in the sense that this is not the architecture family but rather its specific variant. Not i386 but i586 or i686. Not armhfp but armv7hl. Meant to assist system deployment tools, just like context, arch is also part of the module’s identifier string.
  • eol, which is a simple ISO 8601 date denoting the day the module reaches its End of Life. In Fedora we expect to define these in PDC for each module stream, with the build system filling the field in. Policies regarding module life cycles and service levels have yet to be defined, but we already have a field for them. Yay.

All of these are technically optional and none of them should be filled in manually by the packager. modulemd leaves certain details undefined, as they’re not all that important for the format: how exactly we construct the context hash, what hashing algorithm should be used, or whether this should be the same hash we use for the package NVR uniqueness guarantee in koji. The only thing that matters is that it’s unique among other modules with the same NSV.
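The hash construction is deliberately left open, but as a toy sketch of how a build system might derive a context value (the algorithm, the field serialization, and the 8-character length here are my assumptions, not part of modulemd):

```shell
#!/bin/sh
# Illustrative only: hash the name/stream/version plus the sorted runtime
# dependencies and keep a short prefix. modulemd does not prescribe this.
name=httpd; stream=2.4; version=20170822
deps=$(printf 'platform:f27\nperl:5.26\n' | sort | tr '\n' ';')
context=$(printf '%s;%s;%s;%s' "$name" "$stream" "$version" "$deps" \
          | sha256sum | cut -c1-8)
echo "$context"
```

Sorting the dependencies keeps the value order-independent, so two builds sharing an NSV but differing in expanded dependencies end up with different context values, which is the only property the format actually requires.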

Let’s see how this works out.

Let's translate F-Droid into French

Posted by Jean-Baptiste Holcroft on August 22, 2017 10:00 PM

Over the past few weeks, I have noticed that F-Droid has several additional sub-projects in its Weblate project. Since it is an example worth following, I preferred to write this article rather than translate!

Here is the current state of the F-Droid project on Weblate:

Fedora election results, 08/17

Posted by Charles-Antoine Couret on August 22, 2017 08:14 PM

As I reported a short while ago, Fedora held elections to partially renew its governing bodies.

As always, the ballot uses range voting. Each candidate can be given a number of points, with a maximum equal to the number of candidates and a minimum of 0. This makes it possible to express approval of one candidate and disapproval of another unambiguously, and nothing prevents giving two candidates the same score.
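As an illustration, counting under this system is nothing more than summing each candidate's points across the ballots (the names and scores below are made up):

```shell
#!/bin/sh
# One line per ballot entry: candidate score. The same voter may give
# two candidates the same score.
cat > votes.txt <<'EOF'
alice 3
bob 3
carol 0
alice 2
bob 1
carol 3
EOF

# Sum points per candidate, highest total first.
awk '{ total[$1] += $2 } END { for (c in total) print total[c], c }' votes.txt | sort -rn
```

Here alice wins with 5 points (3 + 2), even though one ballot gave bob the same top score.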

The results for the Council are (only the first candidate is elected):

  # votes |  names
- --------+----------------------
     505  | Justin W. Flory (jwf / jflory7)
- --------+----------------------
     492  | Dennis Gilmore (dgilmore / ausil)
     433  | Till Maas (tyll / till)
     354  | Langdon White (langdon)
     320  | Nick Bebout (nb)

For reference, the maximum possible score was 5 * 178 (for 178 voters), i.e. 890.

The results for FESCo are (only the first four are elected):

  # votes |  names
- --------+----------------------
     458  | Dennis Gilmore (dgilmore / ausil)
     438  | Till Maas (tyll / till)
     431  | Stephen Gallagher (sgallagh/sgallagh)
     330  | Randy Barlow (bowlofeggs/bowlofeggs)
- --------+----------------------
     296  | Dominik Mierzejewski (Rathann/rathann)

For reference, the maximum possible score was 5 * 150 (for 150 voters), i.e. 750.

The results for FAmSCo are (only the first three are elected):

  # votes |  names
- --------+----------------------
     613  | Nick Bebout (nb/nb)
     551  | Itamar Reis Peixoto (itamarjp/itamarjp)
     541  | Sumantro Mukherjee (sumantro / sumantrom)
- --------+----------------------
     523  | Alex Oviedo Solis (alexove/alexove)
     490  | Eduardo Echeverria (echevemaster/echevemaster)
     469  | Daniel Lara (danniel/Danniel)
     467  | Eduard Lucena (x3mboy)
     449  | Ben Williams (Southern_Gentlem/jbwillia)
     445  | Sirko Kemter (gnokii/gnokii)
     443  | Andrew Ward (award3535)

For reference, the maximum possible score is 10 * 148 (148 voters), i.e. 1480.

We can note that, overall, turnout was similar across the ballots, at around 150-175 voters. The scores are also rather spread out, with a few candidates often well ahead in each ballot.

Congratulations to the participants and to those elected; may the Fedora Project move forward. :-)

Bodhi 2.10.1 released

Posted by Bodhi on August 22, 2017 06:00 PM

2.10.1

Bug fixes

  • Adjust the Greenwave subject query to include the original NVR of the builds (#1765).

Release contributors

The following developers contributed to Bodhi 2.10.1:

  • Ralph Bean

CONECIT 2017: Conferences, Workshops and Jungle Tours

Posted by Julita Inca Chiroque on August 22, 2017 04:37 PM

The CONECIT 2017 Conference was held at UNAS, Tingo Maria, on 14-18 August, and the feedback from the majority of attendees so far has been positive. The organizing committee comprised 76 student organizers, led by Isai Ventura and Fatima Rouillon as heads of the staff committee.

The CONECIT 2017 Conference welcomed 7 international professionals to give talks on IT, including Linux consulting with “Maddog”. Thirteen Peruvian professionals were also invited to share experiences with students from different universities in Peru. Workshops and contests were also part of this edition of CONECIT.

I arrived on Sunday the 13th to refine my presentation carefully and quietly prepare for my first keynote ever… The unexpected cancellation of Ignacio's flight from IEEE Mexico (because of bad weather) left me in charge of the opening session on HPC (High Performance Computing), where I highlighted the importance of Linux in this field. I also emphasized that the related subjects matter too: not only programming, but also math and physics.

One of the most awaited presentations was MadDog's. He exhorted the students not to settle for only what the university offers; we must be self-learners, and he encouraged us to use Linux in our projects and, of course, in our lives. He kindly accepted to take a picture with every single student who was in line waiting for him! 🙂

Workshops and contests were also part of the event during the week, and thanks to the organizers for supporting the presentation by our local Fedora + GNOME team from Lima 🙂 We had been running tests since the previous day, and again two hours before the Fedora + GNOME workshop. It was nice to see university students and teachers so interested in learning Linux 😀
Contests in programming, Arduino, robotics, and startups, as well as a football competition, took place mostly at night after the conferences. UNI and UNAC were the 2017 winners. The Peruvian jungle has a variety of delicious food, and the speakers enjoyed not only exquisite dishes but also the exotic drinks offered by downtown restaurants and bars.

Adventure and amazing nature are synonymous with the Peruvian jungle. Here in Tingo Maria I visited places such as the Oilbird Cave, the Tree of Wishes, national parks, the Sleeping Beauty, the Lagoon of Miracles, the Love Tree, and the sulfurous waters 😀 Finally, none of this magic could have happened without the support and love of friends! Thanks to our local Fedora + GNOME team from Lima, to the friends from Pucallpa, and to the new ones too! Hope to see you soon in Linux-related projects! 😉 Gracias Totales :v


Filed under: FEDORA, GNOME, τεχνολογια :: Technology Tagged: CONECIT, CONECIT 2017, CONECIT Tingo Maria, conferences, fedora, Fedora + GNOME, Fedora + GNOME community, Fedora + GNOME group, GNOME, HPC, HPC in the jungle, I Conecit 2017 Tingo María, Julita Inca, Julita Inca Chiroque, Jungle Tours, Tingo Maria, Tingo Maria 2017, WHPC, Workshops

Embracing open source cloud: Local government in Tirana switches to open source cloud solution

Posted by Justin W. Flory on August 22, 2017 08:30 AM

This article was originally published on Opensource.com.


Open source software has come a long way since the turn of the century. Each year, more and more people are embracing open source technology and development models. Not just people, though: corporations and governments are exploring open source solutions too. From the White House to the Italian army, open source is appearing more frequently in the public sector. But perhaps the newest addition to the list is the municipality of Tirana, Albania.

On June 11th, the local government in the municipality of Tirana migrated their private cloud to Nextcloud, an open source cloud and office productivity suite. The decision to move to an integrated cloud / office suite came after internal discussion about security and performance. Because Nextcloud is entirely open source, it stood out as a powerful option for the municipality to consider.

Why switch to Nextcloud?

The municipality was looking for ways to optimize costs without sacrificing security. Many people deeply familiar with open source are already aware of its security benefits. Instead of relying on a private firm to assure the code is secure, open source software lets anyone audit the code (or pay someone else to do so), find flaws, and submit bugs or patches to get them fixed. This was something Ermir Puka and other members of the IT team in Tirana considered when choosing a cloud solution for the 600 employees of the municipality.

“The IT directory at the municipality of Tirana thought the movement to Nextcloud, which is an open source platform, gives us flexibility since we won’t be dependent from providers who offer proprietary solutions. We can also develop it ourself, according to our needs, if we have the staff with the necessary qualifications to do such a thing,” Puka said.

Nextcloud also stood out not only for its use as a file sharing tool, but also for the other features that make it helpful as an office productivity suite. With Nextcloud, you can edit documents simultaneously with Collabora Online, share calendars with co-workers, use an intranet messaging system, and use it on your phone too. A large selection of open source apps are available to add to a Nextcloud installation.


The municipality of Tirana celebrates the launch of their open data portal, opendata.tirana.al. Photo from Twitter, @erionveliaj.

According to the European Commission's Joinup platform, Tirana is one of the first municipalities in southeastern Europe implementing open source technology in the public sector. This continues the municipality’s growing interest in open source, following the recent announcement of their open data portal (see it at opendata.tirana.al) and the decision to collaborate with the local open source community by contributing GIS data to OpenStreetMap. “We also hope to give a good example in the region and maybe other municipalities can follow our example,” Puka added. This further shows the municipality’s dedication to saving money on software licenses, protecting user privacy, and innovating with open source technology.

About Tirana


Members of Open Labs Albania collaborating with the municipality of Tirana on providing GIS data for OpenStreetMaps. Photo from Twitter, @erionveliaj.

Tirana is located in Albania, in southeastern Europe, on the Mediterranean Sea just above Greece. The open source community in Tirana is growing each year. This is in part due to the Open Labs Albania community in the city. Open Labs Albania is a not-for-profit hackerspace that promotes free and open source technologies, open data, open technological standards, and online privacy. You can read more about them in their manifesto.

This continues a trend of exciting news for open source in the region. Earlier this year, in March, they held the first-ever overnight hackathon for the UN Sustainable Development Goals, with an emphasis on sustainable projects with open source licenses. They also host Linux Weekend, an annual mini-conference to help onboard students and interested technologists into Linux and open source. However, their most well-known event is Open Source Conference Albania (OSCAL), an annual conference gathering open source developers and community members from across the world. Open Labs has also provided advice and support for some of the municipality's research into using open source software.

Get in touch

If you’re interested in learning more or sharing your thoughts, you can view the public announcement on the European Commission website or visit the thread on the Open Labs forums.

The post Embracing open source cloud: Local government in Tirana switches to open source cloud solution appeared first on Justin W. Flory's Blog.

Systemd units with Docker

Posted by Casper on August 22, 2017 04:17 AM

Once again the systemd-Docker pair shows its effectiveness... formidable. I was able to set up a service in a few minutes without any headache or unforeseen complication; admittedly the DNS zone and reverse proxy parts were already in place, but still. I will sum up the story in 4 steps.

Veni

The first reflex when you want to set up a new service is to search the official Docker registry for a decent image, and check its Dockerfile to see whether it hides any aberrations. I was looking for an image of the Searx service, and I came across a rare gem.

Vidi

The Docker image is based on Alpine Linux 3.6; it has no volumes, hence no persistent data, and it exposes only one port. It doesn't get any simpler. As a bonus, it doesn't ship the build dependencies: it just compiles the program without doing anything twisted before or after. The run.sh script apparently makes a few adjustments to the program's configuration, notably generating an authentication key, which is regenerated at every start. So be it.

Vici \o/

All that's left is to make this concrete in a terminal...

# docker pull wonderfall/searx

The systemd unit will have the following features:

  1. Start the process in the container, creating the container
  2. The container uses OpenDNS's DNS server (works around a bug with NetworkManager and dnsmasq)
  3. The container listens on port 8089 of the localhost interface (important)
  4. The base URL is passed as an environment variable
  5. The reload signal is forwarded to the confined process
  6. On stop, the process is killed in its container and the container is left behind
  7. A new container with a new name will be created at the next restart

# cat /etc/systemd/system/searx-casper-site.service
[Unit]
Description=Searx search engine
After=docker.service

[Service]
Restart=always
ExecStart=/usr/bin/docker run -i --dns 208.67.222.222 -p 127.0.0.1:8089:8888 \
          -e BASE_URL=https://search.casperlefantom.net \
          wonderfall/searx:latest
ExecReload=/usr/bin/kill -HUP $MAINPID

[Install]
WantedBy=multi-user.target

I insist on the fact that your containers must listen only on the localhost interface, for security reasons. If the container listens on the ethernet interface, its listening port will be reachable from anywhere on the Internet, and firewalld won't be able to do anything for you. I have 2 reverse proxies in front; I would be quite annoyed if they could be bypassed >-)

# systemctl daemon-reload
# systemctl enable searx-casper-site.service
# systemctl start searx-casper-site.service

The program's logs are automatically collected by journald. Removing containers that no longer have an active process is left to a manual cleanup operation. Unfortunately I haven't found another solution; disks that are too slow are quite problematic and cause filesystem errors during use.

Gaming Willpower

Posted by Sarup Banskota on August 22, 2017 12:00 AM

Watching my daily lifestyle evolve over the last two years, I’ve recently developed an amateur interest in human habits and willpower.

Habits are routines we go through without consciously thinking about them, because we’re used to doing them successfully for a long time. Willpower is the fuel we can use to drive not-yet-habitual routines to completion. Habit is what drives us towards another cup of sugary drink against our meal plan’s wishes; willpower is what helps prevent it.

Habits are formed through a repetitive anticipation trigger → routine → satisfaction cycle. A Starbucks builds the anticipation of good coffee, and we’re programmed to walk towards it and grab a cuppa, scoring familiar satisfaction.

Strong willpower is often needed for routines whose results and success rates are not clear or instant. It’s easier to form the habit of using a toothpaste every day, because we anticipate the cool-fresh feeling within minutes. It is difficult to form the habit of doing an intensive workout every day at 6pm, because the anticipation is of pain and the result (weight loss, sexy legs, abs) is far away, making it less attractive and harder to visualise.

Stanford University psychologist Kelly McGonigal wrote an entire book on this topic. I read parts of it, and the key takeaway for me was that just like physical power:

  • Willpower has a daily cap. Within a willpower day, we gotta budget what we spend it on. Otherwise, we will lose our willpower to trivial things, and lose focus on what’s most important
  • Willpower can be trained and the cap can be increased. Much like regularly working out at the gym, if we consciously work on gaming our willpower allocation, we get better at it, and more of the activities that demanded higher willpower will inch towards becoming habits
  • Willpower days can be short - they don’t necessarily have to be as long as human days - naps and small victories can help replenish willpower and restart the cycle

After observing my own behaviour for 6 months, I discovered that there are willpower peaks on waking up and on successfully completing a chunk of work.

Naturally, I wanted to take advantage of willpower peaks to get difficult things done. I also wanted to discover ways to boost my willpower when it was dropping. Through some reading and subsequent experimentation, what I found to work well for me is to list down which activities in my day consume more willpower than others, and to organise them around observed peaks. Ironically, willpower is also needed to follow this organised plan, because there is usually inertia that prevents one from frequently changing what they’re currently doing.

Here are some example activities (classifications can vary from person to person):

  • Habit activities (no willpower needed) - brushing teeth, replying to instant messages, calling a loved one to deliver good news, eating easily accessible food when a little hungry
  • Low willpower activities - catching the bus to work, cleaning up after cooking, making your bed before leaving for work, gorging on food that’s not easily accessible
  • High willpower productive activities - waking up, working out, solving a difficult math problem, writing a blogpost 😉
  • High willpower chores - this is a special category comprising tasks that need high willpower, but which I’d rather not do now, in the interest of spending time on other important things - groceries, paying bills, booking flights or hotels. However, not doing them soon enough makes them expensive in the near future, and with time they often pile up and become urgent. The bright side about chores is that they’re often small victories.

So a good plan could start as follows:

  1. Wake up (only if you know you’ve slept enough) - this usually consumes substantial willpower. To make up, we need to achieve a small victory now: maybe lay out clothes for office, make the bed, get the laundry started
  2. Follow it up with a few habit activities - brush teeth, breakfast (kept within easy reach). This period also allows time for forming a list of what are some unique high willpower activities for the day (leftover difficult work from yesterday etc)
  3. Time for a high willpower activity! Get cracking on JIRA-615 😉 Hopefully it works out and serves as a small victory. Small victories will usually elevate willpower levels again
  4. In the event of a high willpower activity not working out, that’s when we have to be careful - small failures can be pretty dangerous for the mood. Therefore, a suitable thing to do now is a chore that doesn’t involve decision making, e.g. making a known bill payment, making a pre-decided lunch (more on this later). As we already know by now, a brainless chore will provide a small victory, and elevate willpower levels

Recently, I’m trying to get better at keeping an inventory of brainless chores. When I experience a small failure and willpower is low, I pick one of the chores and strike it off to unlock a small victory.

Making weekly decisions in bulk during high willpower period is helpful. For example, recently I’ve been trying to plan in advance what meals I’m going to prepare through the week, and shop for ingredients with a defined shopping list at a time when I’m seeking a small victory. This saves me a few food decisions during the work week, keeps me well fed, and I get a free weekly brainless chore to exchange for small victory.

Call me crazy, but now I also go one level further by distributing breakfast and dinner ingredients in the home refrigerator and lunch in the office one. This allows for easy access when I need them, which means low willpower towards preparing meals. This in turn means I avoid skipping meals. This one habit has allowed me to not skip a single meal in the last week (usually I skip at least breakfast or lunch, or both).

If you’re feeling like giving the willpower gaming a try, here are the two key takeaways from this post:

  1. Plan your day around your personal willpower peaks for maximum productivity. When willpower is high, do high willpower tasks. When willpower is low, aim for brainless tasks that lead to small victories
  2. Always aim to move towards making habits. Habits follow an anticipation trigger → routine → satisfaction cycle. By making triggers easily visible, and satisfaction better defined, you convert high willpower activities to lower willpower ones. Through practice, you can make the routine brainless - voila you just made a habit

Customizing the KubeVirt Manifests

Posted by Adam Young on August 21, 2017 05:02 PM

My cloud may not look like your cloud. The contract between the application deployment and the Kubernetes installation is a set of manifest files that guide Kubernetes in selecting, naming, and exposing resources. In order to make the generation of the Manifests sane in KubeVirt, we’ve provided a little bit of build system support.

The manifest files are templatized in a jinja style. I say style, because the actual template string replacement is done using simple bash scripting. Regardless of the mechanism, it should not be hard for a developer to understand what happens. I’ll assume that you have your source code checked out in $GOPATH/src/kubevirt.io/kubevirt/
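As a standalone illustration of that jinja-style-by-way-of-bash idea (this is not the actual hack/build-manifests.sh code), a {{ key }} placeholder can be filled with a single sed call:

```shell
#!/bin/sh
# A tiny template in the same {{ key }} style as the manifests.
cat > demo.yaml.in <<'EOF'
externalIPs:
  - "{{ master_ip }}"
EOF

master_ip=192.168.200.2
# Replace every occurrence of the placeholder with the shell variable.
sed -e "s/{{ *master_ip *}}/${master_ip}/g" demo.yaml.in > demo.yaml
cat demo.yaml
```

The real script does this for each configured key, but the principle is the same: plain text substitution, no template engine required.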

The template files exist in the manifests subdirectory. Mine looks like this:

haproxy.yaml.in             squid.yaml.in            virt-manifest.yaml.in
iscsi-demo-target.yaml.in   virt-api.yaml.in         vm-resource.yaml.in
libvirt.yaml.in             virt-controller.yaml.in
migration-resource.yaml.in  virt-handler.yaml.in

The simplest way to generate a set of actual manifest files is to run make manifests

make manifests
./hack/build-manifests.sh
$ ls -l manifests/*yaml
-rw-rw-r--. 1 ayoung ayoung  672 Aug 21 10:17 manifests/haproxy.yaml
-rw-rw-r--. 1 ayoung ayoung 2384 Aug 21 10:17 manifests/iscsi-demo-target.yaml
-rw-rw-r--. 1 ayoung ayoung 1707 Aug 21 10:17 manifests/libvirt.yaml
-rw-rw-r--. 1 ayoung ayoung  256 Aug 21 10:17 manifests/migration-resource.yaml
-rw-rw-r--. 1 ayoung ayoung  709 Aug 21 10:17 manifests/squid.yaml
-rw-rw-r--. 1 ayoung ayoung  832 Aug 21 10:17 manifests/virt-api.yaml
-rw-rw-r--. 1 ayoung ayoung  987 Aug 21 10:17 manifests/virt-controller.yaml
-rw-rw-r--. 1 ayoung ayoung  954 Aug 21 10:17 manifests/virt-handler.yaml
-rw-rw-r--. 1 ayoung ayoung 1650 Aug 21 10:17 manifests/virt-manifest.yaml
-rw-rw-r--. 1 ayoung ayoung  228 Aug 21 10:17 manifests/vm-resource.yaml

Looking at the difference between, say the virt-api template and final yaml file:

$ diff -u manifests/virt-api.yaml.in manifests/virt-api.yaml
--- manifests/virt-api.yaml.in	2017-07-20 13:29:00.532916101 -0400
+++ manifests/virt-api.yaml	2017-08-21 10:17:10.533038861 -0400
@@ -7,7 +7,7 @@
     - port: 8183
       targetPort: virt-api
   externalIPs :
-    - "{{ master_ip }}"
+    - "192.168.200.2"
   selector:
     app: virt-api
 ---
@@ -23,14 +23,14 @@
     spec:
       containers:
       - name: virt-api
-        image: {{ docker_prefix }}/virt-api:{{ docker_tag }}
+        image: kubevirt/virt-api:latest
         imagePullPolicy: IfNotPresent
         command:
             - "/virt-api"
             - "--port"
             - "8183"
             - "--spice-proxy"
-            - "{{ master_ip }}:3128"
+            - "192.168.200.2:3128"
         ports:
           - containerPort: 8183
             name: "virt-api"
@@ -38,4 +38,4 @@
       securityContext:
         runAsNonRoot: true
       nodeSelector:
-        kubernetes.io/hostname: {{ primary_node_name }}
+        kubernetes.io/hostname: master

make manifests, it turns out, just calls a bash script, ./hack/build-manifests.sh. This script uses two files to determine the values used for template string substitution. First, the defaults: hack/config-default.sh. This is where master_ip gets the value 192.168.200.2. This file also gives priority to the $DOCKER_TAG environment variable. However, if you need to customize values further, you can create and manage them in the file hack/config-local.sh. Any of the keys from the -default file that are specified in hack/config-local.sh will use the value from the latter file. The set of keys with their defaults (as of this writing) that you can customize are:

binaries="cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virt-api cmd/virtctl cmd/virt-manifest"
docker_images="cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virt-api cmd/virt-manifest images/haproxy images/iscsi-demo-target-tgtd images/vm-killer images/libvirt-kubevirt images/spice-proxy cmd/virt-migrator cmd/registry-disk-v1alpha images/cirros-registry-disk-demo"
optional_docker_images="cmd/registry-disk-v1alpha images/fedora-atomic-registry-disk-demo"
docker_prefix=kubevirt
docker_tag=${DOCKER_TAG:-latest}
manifest_templates="`ls manifests/*.in`"
master_ip=192.168.200.2
master_port=8184
network_provider=weave
primary_nic=${primary_nic:-eth1}
primary_node_name=${primary_node_name:-master}

Not all of these are for manifest files. The docker_images key selects the set of Docker images to generate in a command called from a different section of the Makefile. The network_provider is used in the Vagrant setup, and so on. However, most of the values are used in the manifest files. So, if I want to set a master IP address of 10.10.10.10, I would have a hack/config-local.sh file that looks like this:

master_ip=10.10.10.10
$  diff -u manifests/virt-api.yaml.in manifests/virt-api.yaml
--- manifests/virt-api.yaml.in	2017-07-20 13:29:00.532916101 -0400
+++ manifests/virt-api.yaml	2017-08-21 10:42:28.434742371 -0400
@@ -7,7 +7,7 @@
     - port: 8183
       targetPort: virt-api
   externalIPs :
-    - "{{ master_ip }}"
+    - "10.10.10.10"
   selector:
     app: virt-api
 ---
@@ -23,14 +23,14 @@
     spec:
       containers:
       - name: virt-api
-        image: {{ docker_prefix }}/virt-api:{{ docker_tag }}
+        image: kubevirt/virt-api:latest
         imagePullPolicy: IfNotPresent
         command:
             - "/virt-api"
             - "--port"
             - "8183"
             - "--spice-proxy"
-            - "{{ master_ip }}:3128"
+            - "10.10.10.10:3128"
         ports:
           - containerPort: 8183
             name: "virt-api"
@@ -38,4 +38,4 @@
       securityContext:
         runAsNonRoot: true
       nodeSelector:
-        kubernetes.io/hostname: {{ primary_node_name }}
+        kubernetes.io/hostname: master
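The precedence rule (local values win over defaults) can be pictured with plain shell sourcing; the two files below only mimic how hack/config-default.sh and hack/config-local.sh relate to each other, they are not the real build scripts:

```shell
#!/bin/sh
cat > config-default.sh <<'EOF'
master_ip=192.168.200.2
docker_prefix=kubevirt
EOF
cat > config-local.sh <<'EOF'
master_ip=10.10.10.10
EOF

# Source the defaults first, then the local overrides: because the
# local file is read last, its assignments simply win.
. ./config-default.sh
[ -f ./config-local.sh ] && . ./config-local.sh

echo "$master_ip $docker_prefix"
```

Keys absent from the local file (docker_prefix here) keep their default values.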

Do you have a laptop that isn't fully supported yet?

Posted by Justin M. Forbes on August 21, 2017 05:01 PM
Sometimes it is a lot easier to debug hardware support issues in person than over IRC or Bugzilla. If you have a laptop with hardware that isn't working quite right, and happen to be heading to Flock, bring it with you. I will be in the Kernel regression and perf testing session to help debug some of these. If you can't make that session, feel free to find me any time during the conference. If you don't have Fedora installed on these laptops, I will have USB keys with me to boot a live image for debugging purposes.

Edit images with GNU Parallel and ImageMagick

Posted by Fedora Magazine on August 21, 2017 08:00 AM

Imagine you need to make changes to thousands or millions of images. You might write a simple script or batch process to handle the conversion automatically with ImageMagick. Everything is going fine, until you realize this process will take more time than expected.

After rethinking the process, you realize this task is taking so long because the serial method processes one image at a time. With that in mind, you want to modify your task to work in parallel. How can you do this without reinventing the wheel? The answer is simple: use GNU Parallel and the ImageMagick utility suite.

About GNU Parallel and ImageMagick

The GNU Parallel program can be used to execute jobs faster. If you use xargs or tee, you’ll find parallel easy to use. It’s written to have the same options as xargs. If you write loops in the shell, you’ll find parallel can often replace most of the loops and finish the work faster, by running several jobs in parallel.

The ImageMagick suite of tools offers many ways to change or manipulate images. It can deal with lots of popular formats, such as JPEG, PNG, GIF, and more.

The mogrify command is part of this suite. You can use it to resize an image, blur, crop, despeckle, dither, draw on, flip, join, re-sample, and much more.

Using parallel with mogrify

These packages are available in the Fedora repositories. To install, use the sudo command with dnf:

sudo dnf install ImageMagick parallel

Before you start running the commands below, be aware the mogrify command overwrites the original image file. If you want to keep the original image, use the convert command (also part of ImageMagick) to write to a different image file. Or copy your originals to a new location before you mogrify them.

Try this one-line script to resize all your JPEG images to half their original size:

cd ~/Pictures; find . -type f | egrep "\.jpg$" | parallel mogrify -resize 50% {}

If you wanted to convert these files instead, appending -new to each output filename:

cd ~/Pictures; find . -type f | egrep "\.jpg$" | parallel convert -resize 50% {} {.}-new.jpg
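Conceptually, parallel fans each input line out to a worker that runs one command per file. A hedged Python sketch of that idea (the run_for_each() helper is made up for illustration; the real tool does far more, including the {} and {.} replacement strings):

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

# Rough model of `parallel cmd {}`: run one command per file, several at a time.
# In the article, cmd would be something like ["mogrify", "-resize", "50%"].
def run_for_each(cmd, files, jobs=4):
    def worker(path):
        return subprocess.run(cmd + [path]).returncode
    with ThreadPoolExecutor(max_workers=jobs) as pool:
        return list(pool.map(worker, files))
```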

Resize all your JPEG images to a maximum dimension of 960×600:

cd ~/Pictures; find . -type f -iname "*.jpg" | parallel mogrify -resize 960x600 {}

Convert all your JPEG images to PNG format:

cd ~/Pictures; find . -type f | egrep "\.jpg$" | parallel mogrify -format png {}

For more information about the mogrify command and examples of usage, refer to this link. Enjoy!


Photo by John Salzarulo on Unsplash.

Recap: Workshop of GNOME on Fedora at CONECIT 2017

Posted by Julita Inca Chiroque on August 21, 2017 04:45 AM

CONECIT 2017 (held at UNAS in Tingo Maria, Peru) included several workshops in this edition, and one of them was about Linux, with GNOME and Fedora. I must thank the organizers and the volunteers who helped me before the workshop. Special thanks to Jhon Fitzgerald for his help installing Fedora 26 and updating packages and programs.

The workshop gathered students mostly from Ica, Pucallpa, Cañete and Tingo Maria. We started by showing and explaining some foundations of GNOME and Fedora: I talked about the history of the projects, their interfaces and applications, the GSoC programs, and channels of communication such as IRC, Bugzilla and GNOME Builder. During the workshop I had the pleasure of being helped by the local Lima team of GNOME + Fedora. Thanks Solanch Ccasa, Lizbeth Lucar, Toto Cabezas and Leyla Marcelo.

We had only two hours, and the low bandwidth only allowed us to show the Newcomers guide and to download and install the Builder package. After that I gave prizes to the participants who finished first the exercises I set on each topic mentioned. As you can see in the pictures, I am glad that the number of women in Linux workshops is increasing. We shared a delicious cake for the 20th birthday of GNOME.

Thanks to the GNOME Foundation and Fedora LATAM for the financial support that let us spread some knowledge about these projects. Thanks to CONECIT for trusting in our work, especially one of the organizers of this great event. Thanks so much, Fatima Rouilon!


Filed under: FEDORA, τεχνολογια :: Technology Tagged: cake 20th GNOME, CONECIT, CONECIT 2017, fedora, GNOME, Julita Inca, Julita Inca Chiroque, Mad dog, Maddog, workshop

Just finished, almost done.

Posted by veon on August 20, 2017 08:00 PM

flock-almost done

The last revision of the slides for the workshop has been completed.

What do I talk about?

Oh yes, right,

It is with great pleasure that I announce my first involvement with Flock 2017 in Hyannis, Massachusetts, this time also as a speaker.

I will come from Italy with my mentor and websites coordinator, Robert Mayr <robyduck>, along with the person who initially inspired me to actively participate in the Fedora Project, Gabriele Trombini <mailga>.

Together with robyduck, I will be presenting a Fedora Websites workshop: an overview of the principal features of the Fedora websites, plus work on real issue tickets. Attendees will learn how the Fedora websites are made, with which tools, and how they can contribute.

What do I expect from Flock? Definitely a unique experience: meeting many of the developers and contributors of the Fedora Project. It will be a full immersion in ideas and experiences.

The Fedora Project has already given me so much and I hope I can learn a lot more.

A dream come true thanks to the Fedora Project.

Paper review: Shadow Kernels

Posted by Levente Kurusa on August 20, 2017 12:00 PM

Shadow Kernels: A General Mechanism For Kernel Specialization in Existing Operating Systems

Application selectable kernel specializations by Chick et al.

Abstract

Chick et al. start their paper by noting that existing operating systems share one single kernel .text section between all running tasks, and that this fact is at odds with recent research which has repeatedly shown that profile-guided optimizations are beneficial. Their solution involves remapping the running kernel’s page tables on context switches and exposing to user space the ability to choose which “shadow kernel” to bind a process to. The authors have implemented their prototype using the Xen hypervisor and argue that, thus, it can be extended to any operating system that runs on Xen.

Introduction

The authors argue that in a traditional monolithic operating system, system calls are fast because they don’t require swapping page tables and flushing the TLB’s caches. However, the disadvantage of such a system is that per-process optimization of the kernel is impossible. To fix the discrepancy between relevant research on profile-guided optimizations and the apparent lack of adoption, they introduce “shadow kernels”, a per-process optimization mechanism for the kernel’s .text section.

Motivation

The authors of this paper highlight three benefits of “shadow kernels” that have motivated them.

Firstly, the already mentioned recent research into profile-guided optimization. One of the unsolved issues of such optimization is that it must be based on a representative workload. They argue that shadow kernels allow applications executing on the same machine to each execute with their own kernel, optimized with profile-guided optimization specific to that program. Thus the representative-workload problem is solved: you presumably know the profile of your own program, and you no longer need to care about other processes running on the same machine.

Secondly, scoping probes. It is well known that Linux has multiple instrumentation primitives, for instance Kprobes and DTrace. The authors argue that when one process wants to be instrumented, every other process in the system is also impacted by the overhead of installing the primitive. In contemporary operating systems it is simply impossible to restrict the scope of a probe to a single process, or a group of processes. Shadow kernels again present a solution here, by replacing the pages of the affected process’ kernel .text region.

Finally, the third factor that has motivated the authors is the overall optimization of the kernel and its fast paths. They argue that while security checks belong in the kernel, there is a strong case for trusted processes that do not necessarily need the protections in place; for those, the additional checks are a bottleneck to their performance. With shadow kernels, it is possible to remove security checks from the address space of one process while leaving them intact in all other processes.

Design

The most important parts of the shadow kernel design can be nicely summed up in the authors’ own words: An application can spawn a new shadow kernel through a call to a kernel module. This creates a copy-on-write version of the currently running kernel, which is mapped into the memory of the process that created it. As a process registers probes, the specialization mechanism makes modifications to the kernel’s instruction stream. Due to the use of copy-on-write, every page that is modified is then physically copied, leaving the original kernel text untouched. Modified functions are replaced using standard mechanisms: either rewriting the entire block, if the replacement is shorter or the same length, or using an unconditional jump that is easy to branch-predict.

<figure> <figcaption>Overview of the design of shadow kernels</figcaption> </figure>

The figure above gives a brief overview of the architectural details of this novel technique.
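To make the copy-on-write idea concrete, here is a toy model in Python (purely illustrative; the authors' implementation manipulates real page tables via Xen, nothing like this): pages are shared with the base kernel until the shadow patches one.

```python
# Purely illustrative copy-on-write model of the kernel .text section.
class KernelText:
    def __init__(self, pages):
        self._shared = pages      # pages shared with the kernel we forked from
        self._private = {}        # pages this (shadow) kernel has copied

    def spawn_shadow(self):
        # A new shadow initially shares every page with its parent.
        return KernelText(self._shared)

    def patch(self, index, content):
        # Copy-on-write: modifying a page gives the shadow a private copy.
        self._private[index] = content

    def read(self, index):
        return self._private.get(index, self._shared[index])
```

Only the shadow that installed a probe sees the patched page; the base kernel (and every other shadow) keeps reading the original.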

One of the more interesting problems with this approach is dealing with kernel code that is not bound to a single process (think kworkers, interrupts, and schedulers). The authors mention that it is difficult to simply remap the pages, because other processes may want to augment the same page in a different way. The solution they propose is giving up isolation and using the code of a “union” shadow kernel that contains all of the probes.

Implementation

Probably one of the most fascinating things I’ve read in this paper is the fact that their entire implementation is 250 lines of code, implemented entirely as a Linux kernel module. Much of the implementation is thus specific to the Linux kernel, and I don’t think describing it here would be of much value; anyone interested can read the paper for more detail about how the implementation adheres to the design outlined above.

Evaluation

Furthering the motivation, the authors show that probing the most frequently called kernel function across all CPUs reduces single-thread performance by 30%, worsening to 50% if you probe the top three functions. They tested their setup by monitoring the performance of memcached while installing probes in an unrelated process. From this result, it is clear that a technique to solve this problem is worthwhile.

Flock 2017 – I’m waiting for you, Cape Cod!

Posted by Robert Mayr on August 19, 2017 07:55 PM

I am very happy I was able to organize my family and holidays to attend Flock again. This will be my third edition after 2013 and 2015, where I had a great experience and made a lot of friends, so I am sure this year will be even better ;)
flock2017-945x400
The flight itself will already be very nice, because this year I will travel with Gabriele Trombini (mailga) and a Flock first-timer, Andrea Masala (veon). Cape Cod is a really nice venue, and although I will be very busy during the conference, I hope we will have a couple of hours for some sightseeing.
I will be co-speaker in a session I have normally given in past years, but I am happy Andrea will handle it this year for me. He helped out a lot during the last two releases and I hope he will do even more in the near future. Our workshop will be rather interesting, because we will put our hands on real tickets, look at how to fix them, and also answer questions about how we handle, develop, or debug the websites we manage.
My talk, given with Gabriele, is about the Mindshare initiative, a Council objective for 2017, which aims to retool outreach teams. You will probably already understand this will not affect only ambassadors, but all outreach teams in the Fedora world. If you are interested in knowing more, or in giving your feedback on our plans, then come to my talk; I will be happy to continue discussions even after the talk, maybe in front of a cold beer :D
Other sessions will see me directly involved, for example the Council session, but I will also attend the Ambassadors workshop session. Not only because it is directly related to the Mindshare talk, but because as the current FAmSCo chair I am very interested in this session.

See you all there, and thanks to Fedora for making this possible.

KDE PIM in Randa 2017

Posted by Daniel Vrátil on August 19, 2017 01:06 PM

The Randa Meetings is an annual meeting of KDE developers in a small village in the Swiss Alps. It is the most productive event I have ever attended (since there’s nothing much else to do but hack from morning until night and eat Mario’s chocolate :-)) and it’s very focused – this year’s main topic is making KDE more accessible.

Several KDE PIM developers will be present as well – and while we will certainly want to hear others’ input regarding the accessibility of Kontact, our main goal in Randa will be to port away from KDateTime (the KDE4 way of handling date and time in software) to QDateTime (the Qt way). This does not sound very interesting, but it’s a very important step for us, as afterward we will finally be free of all legacy KDE4 code. It is no simple task, but we are confident we can finish the port during the hackfest. If everything goes smoothly, we might even have time for some more cool improvements and fixes in Kontact ;-)

I will also close the KMail User Survey right before the Randa meetings so that we can go over the results and analyze them. So, if you haven’t answered the KMail User Survey yet, please do so now and help spread the word! There are still 3 more weeks left to collect as many answers as possible. After Randa, I will be posting a series of blog posts regarding results of the survey.

And finally, please support the Randa Meetings by contributing to our fundraiser – the hackfest can only happen thanks to your support!

Konqi can't wait to go to Randa again!

You can read reports from my previous adventures in Randa Meetings in 2014 and 2015 here:

  • “Hacking my way through Randa” (2014): http://www.dvratil.cz/2014/08/hacking-my-way-through-randa/
  • “KDE PIM in Randa” (2015): http://www.dvratil.cz/2015/08/kde-pim-in-randa/

Post-GUADEC distractions

Posted by Matthias Clasen on August 18, 2017 09:25 PM

Like everybody else, I had a great time at GUADEC this year.

One of the things that made me happy is that I could convince Behdad to come, and we had a chance to finally wrap up a story that has been going on for much too long: Support for color Emoji in the GTK+ stack and in GNOME.

Behdad has been involved in the standardization process around the various formats for color glyphs in fonts since the very beginning. In 2013, he posted some prototype work for color glyph support in cairo.

This was clearly not meant for inclusion; he was looking for assistance turning this into a mergeable patch. Unfortunately, nobody picked this up until I gave it a try in 2016. But my patch was not quite right, and things stalled again.

We finally picked it up this year. I produced a better cairo patch, which we reviewed, fixed and merged during the unconference days at GUADEC. Behdad also wrote and merged the necessary changes for fontconfig, so we can have an “emoji” font family, and made pango automatically choose that font when it finds Emoji.

After guadec, I worked on the input side in GTK+. As a first result, it is now possible to use Control-Shift-e to select Emoji by name or code.

(Video: https://blogs.gnome.org/mclasen/files/2017/08/c-s-e.webm)

This is a bit of an easter egg though, and only covers a few Emoji like ❤. The full list of supported names is here.

A more prominent way to enter Emoji is clearly needed, so I set out to implement the design we have for an Emoji chooser. The result looks like this:

As you can see, it supports variation selectors for skin tones, and lets you search by name. The clickable icon has to be enabled with a show-emoji-icon property on GtkEntry, but there is a context menu item that brings up the Emoji chooser, regardless.

I am reasonably happy with it, and it will be available both in GTK+ 3.92 and in GTK+ 3.22.19. We are bending the api stability rules a little bit here, to allow the new property for enabling the icon.

Working on this dialog gave me plenty of opportunity to play with Emoji in GTK+ entries, and it became apparent that some things were not quite right. Some Emoji just did not appear sometimes. This took me quite a while to debug, since I was hunting for a rendering issue when, in the end, it turned out to be insufficient support for variation selectors in pango.

Another issue that turned up was that pango sometimes placed the text caret in the middle of Emoji, and Backspace deleted them piecemeal, one character at a time, instead of all at once. This required fixes in pango’s implementation of the Unicode segmentation rules (TR29). Thankfully, Peng Wu had already done much of the work for this; I just fixed the remaining corner cases to handle all Emoji correctly, including skin tone variations and flags.

So, what’s still missing ? I’m thinking of adding optional support for completion of Emoji names like :grin: directly in the entry, like this:

(Video: https://blogs.gnome.org/mclasen/files/2017/08/emoji-completion.webm)

But this code still needs some refinement before it is ready to land. It also overlaps a bit with traditional input method functionality, and I am still pondering the best way to resolve that.

To try out color Emoji, you can either wait for GNOME 3.26, which will be released in September, or you can get:

  • cairo from git master
  • fontconfig from git master
  • pango 1.40.9 or .10
  • GTK+ from the gtk-3-22 branch
  • a suitable Emoji font, such as EmojiOne or Noto Color Emoji

It was fun to work on this, I hope you enjoy using it! ❤

New badge: FrOSCon 2017 Attendee !

Posted by Fedora Badges on August 18, 2017 07:55 PM
FrOSCon 2017 AttendeeYou visited the Fedora booth at FrOSCon 2017!

Shipping PKCS7 signed metadata and firmware

Posted by Richard Hughes on August 18, 2017 04:28 PM

Over the last few days I’ve merged in the PKCS7 support into fwupd as an optional feature. I’ve done this for a few reasons:

  • Some distributors of fwupd were disabling the GPG code as it’s GPLv3, and I didn’t feel comfortable saying just use no signatures
  • Trusted vendors want to ship testing versions of firmware directly to users without first uploading to the LVFS.
  • Some firmware is inherently internal use only and needs to be signed using existing cryptographic hardware.
  • The gpgme code scares me.

Did you know GPGME is a library based around screen scraping the output of the gpg2 binary? When you perform an action using the libgpgme APIs you’re literally injecting a string into a pipe and waiting for it to return. You can’t even use libgcrypt (the thing that gpg2 uses) directly as it’s way too low level and doesn’t have any sane abstractions or helpers to read or write packaged data. I don’t want to learn LISP S-Expressions (yes, really) and manually deal with packing data just to do vanilla X509 crypto.

Although the LVFS instance only signs files and metadata with GPG at the moment, I’ve added the missing bits into python-gnutls so it could become possible in the future. If this is accepted then I think it would be fine to support both GPG and PKCS7 on the server.

One of the temptations for X509 signing would be to get a certificate from an existing CA and then sign the firmware with that. From my point of view that would be bad, as it would cause any firmware signed by any certificate in my system trust store to be marked as valid, when really all I want to do is check for a specific certificate (or a few) that I know will be providing certified working firmware. Although I could achieve this to some degree with certificate pinning, it’s not so easy if there is a hierarchical trust relationship or anything more complicated than a simple 1:1 relationship.

So that this is possible, I’ve created a LVFS CA certificate, and also a server certificate for the specific instance I’m running on OpenShift. I’ve signed the instance certificate with the CA certificate and am creating detached signatures with an embedded (signed-by-the-CA) server certificate. This seems to work well, and means we can issue other certificates (or CRLs) if the server ever moves or the trust is compromised in some way.
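For readers who want to experiment with the general idea, a detached PKCS#7 signature can be produced and verified with openssl (illustrative only; fwupd itself uses GnuTLS, and all file names here are made up):

```shell
# Illustrative only: fwupd/LVFS use GnuTLS, and these file names are made up.
# 1. Create a throwaway key and self-signed certificate:
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=example-signer" -keyout key.pem -out cert.pem

# 2. Produce a detached PKCS#7 signature over a firmware blob:
echo "firmware-payload" > firmware.bin
openssl smime -sign -binary -in firmware.bin \
    -signer cert.pem -inkey key.pem -outform DER -out firmware.bin.p7b

# 3. Verify the detached signature against the blob:
openssl smime -verify -binary -inform DER -in firmware.bin.p7b \
    -content firmware.bin -CAfile cert.pem -purpose any -out /dev/null
```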

So, tl;dr: (should have been at the top of this page…) if you see a /etc/pki/fwupd/LVFS-CA.pem appear on your system in the next release you can relax. Comments, especially from crypto experts welcome. Thanks!

Bodhi 2.10.0 released

Posted by Bodhi on August 18, 2017 02:49 PM

Compatibility changes

This release of Bodhi has a few changes that are technically backward incompatible in some senses, but it was determined that each of these changes is justified without raising Bodhi’s major version, often because the affected features did not work at all or were unused. Justifications for each are given inline.

  • dnf and iniparse are now required dependencies for the Python bindings. Justification: Technically, these were needed before for some of the functionality, and the bindings would traceback if that functionality was used without these dependencies being present. With this change, the module will fail to import without them, and they are now formal dependencies.
  • Support for EL 5 has been removed in this release. Justification: EL 5 has reached end of life.
  • The pkgtags feature has been removed. Justification: It did not work correctly and enabling it was devastating (#1634).
  • Some bindings code that could log into Koji with TLS certificates was removed. Justification: It was unused (b4474676).
  • Bodhi’s short-lived ci_gating feature has been removed, in favor of the new Greenwave integration feature. Thus, the ci.required and ci.url settings no longer function in Bodhi. The bodhi-babysit-ci utility has also been removed. Justification: The feature was never completed and thus no functionality is lost (#1733).

Features

  • There are new search endpoints in the REST API that perform ilike queries to support case insensitive searching. Bodhi’s web interface now uses these endpoints (#997).
  • It is now possible to search by update alias in the web interface (#1258).
  • Exact matches are now sorted first in search results (#692).
  • The CLI now has a --mine flag when searching for updates or overrides (#811, #1382).
  • The CLI now has more search parameters when querying overrides (#1679).
  • The new case insensitive search is also used when hitting enter in the search box in the web UI (#870).
  • Bodhi is now able to query Pagure for FAS groups for ACL info (f9414601).
  • The Python bindings’ candidates() method now automatically initializes the username (6e8679b6).
  • CLI errors are now printed in red text (431b9078).
  • The graphs on the metrics page now have mouse hovers to indicate numerical values (#209).
  • Bodhi now has support for using Greenwave to gate updates based on test results. See the new test_gating.required, test_gating.url, and greenwave_api_url settings in production.ini for details on how to enable it. Note also that this feature introduces a new server CLI tool, bodhi-check-policies, which is intended to be run via cron on a regular interval. This CLI tool communicates with Greenwave to determine if updates are passing required tests or not (#1733).
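Based on the setting names above, enabling the Greenwave integration in production.ini might look like this (the values are placeholders; consult the production.ini shipped with Bodhi for the authoritative defaults):

```ini
# Hypothetical example values; only the setting names come from the release notes.
test_gating.required = True
test_gating.url = https://bodhi.example.com/gating-info
greenwave_api_url = https://greenwave.example.com/api/v1.0
```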

Bug fixes

  • The autokarma check box’s value now persists when editing updates (#1692, #1482, and #1308).
  • The CLI now catches a variety of Exceptions and prints user readable errors instead of tracebacks (#1126, #1626).
  • The Python bindings’ get_releases() method now uses a GET request (#784).
  • The HTML sanitization code has been refactored, which fixed a couple of issues where Bodhi didn’t correctly escape things like e-mail addresses (#1656, #1721).
  • The bindings’ docstring for the comment() method was corrected to state that the email parameter is used to make anonymous comments, rather than to enable or disable sending of e-mails (#289).
  • The web interface now links directly to libravatar’s login page instead of POSTing to it (#1674).
  • The new/edit update form in the web interface now works with the new typeahead library (#1731).

Development improvements

  • Several more modules have been documented with PEP-257 compliant docblocks.
  • Several new tests have been added to cover various portions of the code base, and Bodhi now has
    89% line test coverage. The goal is to reach 100% line coverage within the next 12 months, and
    then begin to work towards 100% branch coverage.

Release contributors

The following developers contributed to Bodhi 2.10.0:

  • Ryan Lerch
  • Matt Jia
  • Matt Prahl
  • Jeremy Cline
  • Ralph Bean
  • Caleigh Runge-Hottman
  • Randy Barlow

F26-20170815 Updated ISOs released

Posted by Ben Williams on August 18, 2017 02:43 PM

We, the Fedora Respins SIG, are happy to announce new F26-20170815 updated Lives (with kernel 4.12.5-300).
This is the first set of updated ISOs for Fedora 26.

With this release we include F26-MD-20170815, a multi-desktop ISO in support of FOSSCON (a free and open source software conference held annually in Philadelphia, PA).

With F26 we are still using Livemedia-creator to build the updated lives.

To build your own please look at  https://fedoraproject.org/wiki/Livemedia-creator-_How_to_create_and_use_a_Live_CD

This new build of F26 updated Lives will save you about 600 MB of updates after install.

As always the isos can be found at http://tinyurl.com/Live-respins2


Report for COSCUP 2017

Posted by Tong Hui on August 18, 2017 10:02 AM

Early in the month, as a GNOME Foundation member, I participated in the 12th COSCUP (Conference for Open Source Coder, User & Promoter). Over the years COSCUP has made a significant contribution to promoting free and open source software in Taiwan. This dozen years of FOSS promotion has helped Taiwan’s contributor base grow faster than that of any other Asian country, so I wanted to learn what has made Taiwan’s FOSS efforts so successful, and also to advocate GNOME at this conference.

Thousands of participants joined COSCUP 2017, with more than 80 talks and workshops by hundreds of free and open source community contributors and promoters.

As a GNOME Foundation member, together with Bin Li, I had the task of promoting GNOME and collaborating with the local free desktop community at this COSCUP.

I also gave a short talk together with Mandy Wang at this COSCUP. We talked about how I recruited my girlfriend into FOSS and ‘trained’ her to become a GNOME contributor.

<figure class="wp-caption aligncenter" id="attachment_1329" style="width: 840px"><figcaption class="wp-caption-text">My talk with Mandy Wang (Photo by Vagabond, CC BY-SA 2.0)</figcaption></figure>

China-Taiwan contributors Meet-up

At the BoF (Birds of a Feather) session at this COSCUP, Mandy and I, from mainland China, met with Franklin Weng (KDE-TW), zerng07 and freedomknight from Taiwan, who work mostly on the localization of GNOME and KDE. We had a local free desktop meet-up that night.

Firstly we reviewed what we had done in past years, and discussed what difficulties we had met and how we solved them. Then we chatted about what we should do, and need to do, to promote the free desktop in China and Taiwan.

Chatting with the Taiwanese contributors taught me a great deal, which will help us do more than before.

<figure class="wp-caption aligncenter" id="attachment_1328" style="width: 840px"><figcaption class="wp-caption-text">With some staff of COSCUP 2017 (Photo by Vagabond CC BY-SA 2.0)</figcaption></figure>

Finally, thanks to all the hundreds of volunteers working at COSCUP for making this event wonderful and awesome!

Fedora Design Interns 2017

Posted by Maria Leonova on August 18, 2017 09:02 AM

Here’s an update on internships (older post linked to here). Quick recap: there have been 2 long-term interns for the Fedora design team since February, and one short-term guy, who came for 2 weeks at the beginning of June. The guys have been doing an amazing job; I can’t stress enough how happy I am to have them around.

So let me give you a short overview of their work:

Martin Modry


Martin has created some lovely designs before he moved on to pursue other endeavors in life 😉 Here are some examples of his work:

Badges

Artwork

He’s created several designs for L10N roles, his work is now continued by Mary in this ticket. He’s shown true understanding of the design issues, and worked directly with ticket creators.

l10n_gen3

Martin Petr

Martin Petr worked with us for 2 weeks, 6 hours a day, which allowed him to tackle many projects for Fedora Design and different teams at Red Hat. As always we started off with badges work, soon moving on to other design issues.

Badges

Artwork

He’s created really cool icons for the Lightning Talks group; they chose the red one in the top row for their page. It works best when resized smaller, and incorporates references (e.g. to lightning) as well as a neat design solution.

lightning_all.png

He also helped create the Fedora Release Party poster, which has been widely used; for example, see here. Martin worked on a Fedora Telegram theme, and even started to mock up updated graphics for this year’s devconf.cz site. Martin has an eye for the latest trends in design and is super creative.

Many other people and I are looking forward to him coming back and staying with us for 2 more weeks at the end of September!

Tereza Hlavackova

Terka has been around the longest – since the end of February, and going strong! She’s done an impressive amount of work and I really love her designs. She’s a great help with badges, as well as with some other artwork issues.

Badges

Artwork

Some of her designs include the FAF, podcast, and Fedora Diversity icons. She’s done a great job working with requestors and going through design iterations. Terka’s been away for some time, and I’m looking forward to her coming back, too!

Conclusions and future projects

Altogether I find the internship program extremely helpful for myself, for the Fedora Design team, and for some Red Hat teams as well. Both Martins and Terka are great designers, and I hope that they, in their turn, also benefit from working in a professional environment, using open source products and communicating with real customers. Not every design issue can be solved easily; some require discussions and iterations, and these guys have been handling them beautifully.


Installing Ring in Fedora 26

Posted by Fedora Magazine on August 18, 2017 08:00 AM

Many communication platforms promise to link people together by video, voice, and data. But almost none of them promise or respect user privacy and freedom to a useful extent.

Ring is a universal communication system for any platform. It is also a fully distributed system that protects users’ confidentiality. One protective feature is that it doesn’t store users’ personal data in a centralized location. Instead, it decentralizes this data through a combination of OpenDHT and Ethereum blockchain technology. In addition to being distributed, it has other unique features for communication:

  • Cross platform (works on Linux, Windows, MacOS, and Android)
  • Uses only free and open source software
  • Uses standard security protocols and end-to-end encryption
  • Works with desktop applications (like GNOME Contacts)

In July the Savoir-faire Linux team released the stable 1.0 version of Ring. Although Ring isn’t included in Fedora due to some of its requirements, the Savoir-faire Linux team graciously provides a package for the Fedora community.

How to install Ring

To install, open a terminal and run the following commands:

sudo dnf config-manager --add-repo https://dl.ring.cx/ring-nightly/fedora_26/ring-nightly.repo
sudo dnf install ring

If you’re using an older version of Fedora, or an entirely different platform, check out the download page.

How to set up a RingID

Now that it’s installed, you’re ready to create an account (or link a pre-existing one). The RingID allows other users to locate and contact you while still protecting your privacy. To create one:

  1. First, click on Create Ring Account.
  2. Next, add the required information.
  3. Finally, click Next.

The tutorial page offers more information on setting up this useful app. For example, you can learn how to secure your account and add devices that are all notified when you receive a call.

 

All systems go

Posted by Fedora Infrastructure Status on August 18, 2017 05:37 AM
Service 'Fedora Wiki' now has status: good: Everything seems to be working.

Minor service disruption

Posted by Fedora Infrastructure Status on August 18, 2017 05:30 AM
Service 'Fedora Wiki' now has status: minor: Recovering database server connectivity issues, expect some slowness

Major service disruption

Posted by Fedora Infrastructure Status on August 18, 2017 05:23 AM
Service 'Fedora Wiki' now has status: major: Looking into database server connectivity issues

Light - when xbacklight doesn't work

Posted by Jakub Kadlčík on August 18, 2017 12:00 AM

Do you have any issues with controlling backlight on your laptop? Try light!

I recently upgraded my laptop from F24 to F26, checked out the new GNOME features, killed them, and switched to Qtile like I always do. Everything worked, so I moved on to other things. Later that day I put my laptop on my nightstand and went to bed. After a while of scrolling through Facebook I decided to sleep, so I repeatedly pressed the function button to turn the backlight off, but nothing happened. WTF? Maybe I hadn’t committed my key bindings with xbacklight, so they got lost during the reinstall? Nah, they are here. Well, maybe I can just restart the Qtile session. Nah, still doesn’t work … It took only a little while for me to … get out of bed, take my laptop and, while cursing, sit back down at the desk.

Long story short, I figured out that xbacklight was the problem.

[jkadlcik@chromie ~]$ xbacklight
No outputs have backlight property

I had never encountered this error before, so I googled it. From the results you might learn that it is completely normal and that you just need to symlink something with a cryptic name in /sys/devices and add some lines to /etc/X11/xorg.conf. Eh, I don’t want to do that. Besides, I haven’t had an xorg.conf for about half a decade. You can also find an open bug report from 2016, so waiting for a fix might take a while.

Then I finally found a blog post describing the solution I liked most. It suggests using a handy little tool called light as an xbacklight alternative. It worked like magic!

Installation

The only problem was that light had not been packaged for Fedora yet. Since I was so happy with the tool, I decided to do my part and package it. Now you can easily install it from Copr:

sudo dnf copr enable frostyx/light
sudo dnf install light

There is also a pending package review so you might be able to install it directly from Fedora repositories soon.

Usage

# Increasing brightness
xbacklight -inc 10
light -A 10

# Decreasing brightness
xbacklight -dec 10
light -U 10
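
Since the post mentions key bindings, here is a minimal sketch of how the brightness keys could be bound to light in a Qtile config. This is only an illustration, not part of the original post; the key names and commands assume a standard Qtile setup (circa the 2017 API) with light installed:

```python
# Fragment of ~/.config/qtile/config.py: bind the brightness
# function keys to light instead of xbacklight.
from libqtile.config import Key
from libqtile.command import lazy

keys = [
    Key([], "XF86MonBrightnessUp", lazy.spawn("light -A 10")),
    Key([], "XF86MonBrightnessDown", lazy.spawn("light -U 10")),
]
```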

my solution to zeno's paradox

Posted by Frank Ch. Eigler on August 17, 2017 10:01 PM

You've probably heard of Zeno's Paradox - the famous one about Achilles and the tortoise. It's a 2000+ year old puzzle about the nature of infinity. An equivalent formulation is roughly this:

  • Imagine someone running from point A to Z. At some time t, the person will be half way between A and Z, let's call it B.
  • The person will run from point B to Z. After time t/2, the person will be half way between B and Z, let's call it C.
  • The person will run from point C to Z. After time t/4, the person will be half way between C and Z, let's call it D.
  • One can continue this pattern of subdivision infinitely.
  • Therefore, the person will never reach Z.

It's hard to believe that this little puzzle was taken so seriously by those clever Greeks. Formally modeling it in math is easy - arithmetic of infinite convergent series is taught in high schools - so it's clear that at time 2t, the runner will reach Z. But the infinity is bothersome enough that even 2000 years later we take the problem seriously. Some even bring up silly stuff like quantum mechanics and uncertainty principles to try to work around it.
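
The convergence is easy to check numerically. A quick sketch (the first half-way time t is set to 1 purely for illustration):

```python
# Partial sums of t + t/2 + t/4 + ... : the total time to reach Z
# is finite (2t), despite the infinitely many subdivisions.
t = 1.0
total = sum(t / 2 ** k for k in range(60))
print(total)  # converges to 2t
```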

But I came across another way to approach the problem - to sever the Gordian Knot, so to speak. That is to recognize an implication of the basic fact that argumentation about a situation is not the same thing as the situation itself.

In this case, the argumentation can indeed go on infinitely, as one talks about shorter and shorter distances & time intervals. But the error in logic is the last step of the list above. The "therefore" doesn't hold, because the only thing that's infinite is all this argumentation. The situation is quite simple and evolves independently of how a goofy observer might want to talk about it - or to imagine breaking it up.

In other words, just because someone chooses a degenerate, infinite, useless way to talk about a situation, the situation itself can be perfectly finite, reasonable, intuitive. There is no paradox.

In other words, the map (argumentation) is not the same thing as the territory (subject of the argument).

GUADEC 2017 Notes

Posted by Petr Kovar on August 17, 2017 05:08 PM

With GUADEC 2017 and the unconference days over, I wanted to share a few conference and post-conference notes with a broader audience.

First of all, as others have reported, at this year’s GUADEC it was great to see an actual increase in the number of attendees compared to previous years. This shows us that, 20 years on, the community as a whole is still healthy and doing well.

<figure class="wp-caption aligncenter" id="attachment_405" style="width: 660px"><figcaption class="wp-caption-text">At the conference venue.</figcaption></figure>

While the Manchester weather was quite challenging, the conference was well-organized and I believe we all had a lot of fun both at the conference venue and at social events, especially at the awesome GNOME 20th Birthday Party. Kudos to all who made this happen!

<figure class="wp-caption aligncenter" id="attachment_406" style="width: 660px"><figcaption class="wp-caption-text">At the GNOME 20th Birthday Party.</figcaption></figure>

As I reported at the GNOME Foundation AGM, the docs team has been slightly more quiet recently than in the past and we would like to reverse this trend going forward.

<figure class="wp-caption aligncenter" id="attachment_411" style="width: 660px"><figcaption class="wp-caption-text">At the GNOME 20th Birthday Party.</figcaption></figure>
  • We held a shared docs and translation session for newcomers and regulars alike on the first two days of the post-GUADEC unconference. I was happy to see new faces showing up as well as having a chance to work a bit with long-time contributors. Special thanks goes to Kat for managing the docs-feedback mailing list queue, and Andre for a much needed docs bug triage.

    <figure class="wp-caption aligncenter" id="attachment_413" style="width: 660px"><figcaption class="wp-caption-text">Busy working on docs and translations at the unconference venue.</figcaption></figure>

  • Shaun worked on a new publishing system for help.gnome.org that could replace the current library-web scripts requiring release tarballs to get the content updated. The new platform would be a Pintail-based website with (almost) live content updates.
  • Localization-wise, there was some discussion around language packs, L10n data installation, and initial-setup, spearheaded by Jens Petersen. While in gnome-getting-started-docs we continue to replace size-heavy tutorial video files with lightweight SVG files, there is still a lot of other locale data that we should aim to install on the user’s machine automatically once we know the user’s locale preference, though that is not what the user experience looks like nowadays. Support for this will, I believe, require more input from PackageKit folks as well as from downstream installer developers.
  • The docs team also announced a change of leadership, with Kat passing the team leadership to me at GUADEC.
  • In other news, I announced a docs string freeze pilot that we plan to run post-GNOME 3.26.0 to allow translators more time to complete user docs translations. Details were posted to the gnome-doc-list and gnome-i18n mailing list. Depending on the community feedback we receive, we may run the program again in the next development cycle.
  • The docs team also had to cancel the planned Open Help Conference Docs Sprint due to most core members being unavailable around that time. We’ll try to find a better time for a docs team meetup some time later this year or early 2018. Let me know if you want to attend, the docs sprints are open to everybody interested in GNOME documentation, upstream or downstream.
<figure class="wp-caption aligncenter" id="attachment_412" style="width: 660px"><figcaption class="wp-caption-text">At the closing session.</figcaption></figure>

Last but not least, I’d like to say thank you to the GNOME Foundation and the Travel Committee for their continued support and for sponsoring me again this year.

PHP version 7.0.23RC1 and 7.1.9RC1

Posted by Remi Collet on August 17, 2017 12:12 PM

Release Candidate versions are available in the remi-test repository for Fedora and Enterprise Linux (RHEL / CentOS) to allow more people to test them. They are available as Software Collections, for parallel installation, which is a perfect solution for such tests (x86_64 only), and also as base packages.

RPM of PHP version 7.1.9RC1 are available as SCL in remi-test repository and as base packages in the remi-test repository for Fedora 26 or remi-php71-test repository for Fedora 23-25 and Enterprise Linux.

RPM of PHP version 7.0.23RC1 are available as SCL in remi-test repository and as base packages in the remi-test repository for Fedora 25 or remi-php70-test repository for Fedora 23-24 and Enterprise Linux.

PHP version 5.6 is now in security-only mode, so no more RCs will be released.

PHP version 7.2 is in development phase, version 7.2.0beta3 is also available.

Installation: read the Repository configuration and choose your version.

Parallel installation of version 7.0 as Software Collection:

yum --enablerepo=remi-test install php70

Parallel installation of version 7.1 as Software Collection:

yum --enablerepo=remi-test install php71

Update of system version 7.0:

yum --enablerepo=remi-php70,remi-php70-test update php\*

Update of system version 7.1:

yum --enablerepo=remi-php71,remi-php71-test update php\*

Notice: version 7.1.7RC1 is also available in Fedora rawhide (for QA).

The RC version is usually the same as the final version (no changes are accepted after the RC, except for security fixes).

Software Collections (php70, php71)

Base packages (php)

5 apps to install on your Fedora Workstation

Posted by Fedora Magazine on August 17, 2017 08:00 AM

A few weeks ago, Fedora 26 was released. Every release of Fedora brings new updates and new applications into the official software repositories. Whether you were already a Fedora user and upgraded or you are a first-time user, you might be looking for some cool apps to try out on your Fedora 26 Workstation. In this article, we’ll round up five apps that you might not have known were available in Fedora.

Try out a different browser

By default, Fedora includes the Firefox web browser. But as of Fedora 25, Chromium (the open source version of Chrome) is packaged in Fedora as well. You can learn how to install and start using Chromium below.

How to install Chromium in Fedora

<iframe class="wp-embedded-content" data-secret="tnjeTElMB9" frameborder="0" height="338" marginheight="0" marginwidth="0" sandbox="allow-scripts" scrolling="no" security="restricted" src="https://fedoramagazine.org/install-chromium-fedora/embed/#?secret=tnjeTElMB9" title="“How to install Chromium in Fedora” — Fedora Magazine" width="600"></iframe>

Sort and categorize your music

Do you have a Fedora Workstation filled with local music files? When you open them in a music player, is the metadata missing or just plain wrong? MusicBrainz is the Wikipedia of music metadata, and you can take back control of your music by using Picard. Picard is a tool that works with the MusicBrainz database to pull in correct metadata to sort and organize your music. Learn how to get started with Picard on Fedora Workstation below.

Picard brings order to your music library

<iframe class="wp-embedded-content" data-secret="lljIa7PX0q" frameborder="0" height="338" marginheight="0" marginwidth="0" sandbox="allow-scripts" scrolling="no" security="restricted" src="https://fedoramagazine.org/picard-brings-order-music-library/embed/#?secret=lljIa7PX0q" title="“Picard brings order to your music library” — Fedora Magazine" width="600"></iframe>

Get ready for the eclipse

August 21st is the big day for the total solar eclipse in North America. Want to get a head start by knowing the sky before it starts? You can map out the sky by using Stellarium, an open source planetarium application available in Fedora now. Learn how to install Stellarium before the skies go dark in this article.

Track the night sky with Stellarium on Fedora

<iframe class="wp-embedded-content" data-secret="YgCLgONqF0" frameborder="0" height="338" marginheight="0" marginwidth="0" sandbox="allow-scripts" scrolling="no" security="restricted" src="https://fedoramagazine.org/stellarium-on-fedora/embed/#?secret=YgCLgONqF0" title="“Track the night sky with Stellarium on Fedora” — Fedora Magazine" width="600"></iframe>

Control your camera from Fedora

Have an old camera lying around? Or maybe you want to upgrade your webcam by using an existing camera? Entangle lets you take control of your camera, all from the comfort of your Fedora Workstation. You can even adjust aperture, shutter speed, ISO settings, and more. Check out how to get started with it in this article.

Tether a digital camera using Entangle

<iframe class="wp-embedded-content" data-secret="pUjGLOPXP7" frameborder="0" height="338" marginheight="0" marginwidth="0" sandbox="allow-scripts" scrolling="no" security="restricted" src="https://fedoramagazine.org/tether-digital-camera-fedora/embed/#?secret=pUjGLOPXP7" title="“Tether a digital camera using Entangle” — Fedora Magazine" width="600"></iframe>

Share Fedora with a friend

One of the last things you might want to do with your Fedora Workstation is share it! With Fedora Media Writer, you can create a USB stick loaded with any Fedora edition or spin of your choice and share it with a friend. Learn how to start burning your own USB drives in the how-to article below.

How to make a Fedora USB stick

<iframe class="wp-embedded-content" data-secret="YKp7rYathj" frameborder="0" height="338" marginheight="0" marginwidth="0" sandbox="allow-scripts" scrolling="no" security="restricted" src="https://fedoramagazine.org/make-fedora-usb-stick/embed/#?secret=YKp7rYathj" title="“How to make a Fedora USB stick” — Fedora Magazine" width="600"></iframe>

Creating heat maps using the new syslog-ng geoip2 parser

Posted by Peter Czanik on August 17, 2017 06:26 AM

The new geoip2 parser in syslog-ng 3.11 is not only faster than its predecessor, but also provides much more detailed geographical information about IP addresses. In addition to the usual country name and longitude/latitude information, it provides the continent, time zone, postal code, and even county name. Some of these are available in multiple languages. Learn how you can utilize this information by parsing logs from iptables using syslog-ng, storing them in Elasticsearch, and displaying the results in Kibana!

Before you begin

First of all, you need some iptables log messages. In my case, I used logs from my Turris Omnia router. You could use logs from another device running iptables. Alternatively, with a small effort, you can replace iptables with an Apache web server or any other application that saves IP addresses as part of its log message.

You will also need a syslog-ng version that has the new geoip2 parser, which was released as part of version 3.11.1.

As syslog-ng packages in Linux distributions do not include the Elasticsearch destination of syslog-ng, you either need to compile it yourself or use one of the unofficial packages, as listed at https://syslog-ng.org/3rd-party-binaries/.

Last but not least, you will also need Elasticsearch and Kibana installed. I used version 5.5.1 of the Elastic stack, but any other version should work just fine.

What is new in GeoIP

The geoip2 parser of syslog-ng uses the maxminddb library to look up geographical information. It is considerably faster than its predecessor and also provides a lot more detailed information.

As usual, the packaging of the maxminddb tools differs between Linux distributions. Make sure that a tool to download and update database files is installed, together with the mmdblookup tool. On most distributions you need to run the former at least once, as usually only the old type of databases is available packaged. The latter can help you list what kind of information is available in the database.

Here is a shortened example:

[root@localhost-czp ~]# mmdblookup --file /usr/share/GeoIP/GeoLite2-City.mmdb --ip 1.2.3.4

  {
    "city": 
      {
        "geoname_id": 
          3054643 <uint32>
        "names": 
          {
            "de": 
              "Budapest" <utf8_string>
            "en": 
              "Budapest" <utf8_string>
            "es": 
              "Budapest" <utf8_string>
            "fr": 
              "Budapest" <utf8_string>
            "ja": 
              "ブダペスト" <utf8_string>
            "pt-BR": 
              "Budapeste" <utf8_string>
            "ru": 
              "Будапешт" <utf8_string>
            "zh-CN": 
              "布达佩斯" <utf8_string>
          }
      }
[...]
    "location": 
      {
        "accuracy_radius": 
          100 <uint16>
        "latitude": 
          47.500000 <double>
        "longitude": 
          19.083300 <double>
        "time_zone": 
          "Europe/Budapest" <utf8_string>
      }
[...]

As you can see from the above command line, I use the freely available GeoLite2-City database. syslog-ng also supports the commercial variant, which is more precise and up to date.

In my configuration example below, I chose to simply store all available geographical data, but normally that is a waste of resources. You can figure out the hierarchy of names based on the JSON output of mmdblookup.
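
For example, the hierarchy from the shortened output above can be walked like this in Python. The record dict below simply mirrors that JSON for illustration; the corresponding syslog-ng name-value pairs would be geoip2.city.names.en, geoip2.location.time_zone, and so on:

```python
# A dict mirroring the shortened mmdblookup output above.
record = {
    "city": {"names": {"en": "Budapest", "de": "Budapest"}},
    "location": {"latitude": 47.5, "longitude": 19.0833,
                 "time_zone": "Europe/Budapest"},
}

# Walk the hierarchy to pick out only the fields you need.
city = record["city"]["names"]["en"]
tz = record["location"]["time_zone"]
print(city, tz)  # Budapest Europe/Budapest
```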

Configure Elasticsearch

The installation and configuration of Elasticsearch and Kibana are beyond the scope of this blog. The only thing I want to note here is that before sending logs from syslog-ng to Elasticsearch, you have to configure mapping for geo information.

If you follow my configuration examples below, you can use the following mapping. I use “syslog-ng” as the index name.

{
   "mappings" : {
      "_default_" : {
         "properties" : {
            "geoip2" : {
               "properties" : {
                  "location2" : {
                     "type" : "geo_point"
                  }
               }
            }
         }
      }
   }
}

Configure syslog-ng

Complete these steps to get your syslog-ng ready for creating heat maps:

1. First of all, you need some logs. In my test environment I receive iptables logs from my router over a TCP connection to port 514. These are filtered on the sender side, so no other logs are included. If you do not have filtered logs, in most cases you can filter for firewall logs based on the program name.

source s_tcp {
  tcp(ip("0.0.0.0") port("514"));
};

2. Process log messages. The first step of processing is using the key-value parser. It creates name-value pairs from the content of the message. You can store all or part of these name-value pairs in a database and search them at a field level instead of the whole message. A prefix for the name is used to make sure that the names do not overlap.

parser p_kv {kv-parser(prefix("kv.")); };

The source IP of the attacker is stored into the kv.SRC name-value pair.
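
As an illustration only (this is not syslog-ng's actual implementation), the key-value extraction can be sketched in a few lines of Python; the sample iptables-style log line below is made up:

```python
def parse_kv(message, prefix="kv."):
    """Extract name=value tokens from a log message into a dict,
    prefixing each name the way syslog-ng's kv-parser does."""
    pairs = {}
    for token in message.split():
        if "=" in token:
            name, _, value = token.partition("=")
            pairs[prefix + name] = value
    return pairs

sample = "IN=eth0 OUT= SRC=198.51.100.7 DST=192.0.2.1 PROTO=TCP DPT=22"
fields = parse_kv(sample)
print(fields["kv.SRC"])  # 198.51.100.7
```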

3. Let’s analyze the kv.SRC name-value pair further, using the geoip2 parser. As usual, we use a prefix to avoid any naming problems. Note that the location of the database might be different on your system.

parser p_geoip2 { geoip2( "${kv.SRC}", prefix( "geoip2." ) database( "/usr/share/GeoIP/GeoLite2-City.mmdb" ) ); };

4. The next step is necessary to ensure that location information is in the form expected by Elasticsearch. It looks slightly more complicated than with the first version of the GeoIP parser, as more information is available and it is now structured.

rewrite r_geoip2 {
    set(
        "${geoip2.location.latitude},${geoip2.location.longitude}",
        value( "geoip2.location2" ),
        condition(not "${geoip2.location.latitude}" == "")
    );
};
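
The string this rewrite rule builds is the "lat,lon" form that Elasticsearch accepts for geo_point fields. The equivalent logic in plain Python, for illustration (geo_point is a hypothetical helper, not part of syslog-ng):

```python
def geo_point(latitude, longitude):
    """Join coordinates into the "lat,lon" string expected by the
    Elasticsearch geo_point mapping; skip empty lookups, mirroring
    the condition() guard in the rewrite rule."""
    if not latitude or not longitude:
        return None
    return "{},{}".format(latitude, longitude)

print(geo_point("47.500000", "19.083300"))  # 47.500000,19.083300
```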

5. In the Elasticsearch destination, we assume that both the cluster and the index are named “syslog-ng”. We set flush-limit to a low value, as we do not expect a high message rate; a low flush-limit makes sure that we see logs in Kibana in near real time. By default it is set to a much higher value, which is better for performance. Unfortunately, a timeout is not implemented in the Java destinations, so with the default setting and a low message rate you might need to wait an hour before anything shows up in Elasticsearch.

destination d_elastic {
 elasticsearch2 (
  cluster("syslog-ng")
  client-mode("http")
  index("syslog-ng")
  type("test")
  flush-limit("1")
  template("$(format-json --scope rfc5424 --scope nv-pairs --exclude DATE --key ISODATE)")
 )
};

6. Finally we need a log statement which connects all of these building blocks together:

log {
  source(s_tcp);
  parser(p_kv);
  parser(p_geoip2);
  rewrite(r_geoip2);
  destination(d_elastic);
};

Configuration to copy & paste

To make your life easier, I have compiled these configuration snippets in one place for a better copy & paste experience. Append it to your syslog-ng.conf, or place it in a separate .conf file under /etc/syslog-ng/conf.d/ if your Linux distribution supports that.

source s_tcp {
  tcp(ip("0.0.0.0") port("514"));
};

parser p_kv {kv-parser(prefix("kv.")); };

parser p_geoip2 { geoip2( "${kv.SRC}", prefix( "geoip2." ) database( "/usr/share/GeoIP/GeoLite2-City.mmdb" ) ); };

rewrite r_geoip2 {
    set(
        "${geoip2.location.latitude},${geoip2.location.longitude}",
        value( "geoip2.location2" ),
        condition(not "${geoip2.location.latitude}" == "")
    );
};

destination d_elastic {
 elasticsearch2 (
  cluster("syslog-ng")
  client-mode("http")
  index("syslog-ng")
  type("test")
  flush-limit("1")
  template("$(format-json --scope rfc5424 --scope nv-pairs --exclude DATE --key ISODATE)")
 )
};

log {
  source(s_tcp);
  parser(p_kv);
  parser(p_geoip2);
  rewrite(r_geoip2);
  destination(d_elastic);
};

Visualize your data

By now you have configured syslog-ng to parse iptables logs, add geographical information to them, and store the results in Elasticsearch. The next step is to verify that logs arrive in Elasticsearch. You should see messages in Kibana where many field names start with “kv.” and “geoip2.”

Once you have verified that logs are arriving in Elasticsearch, you can start creating some visualizations. There are numerous tutorials on how to do this from Elastic and others.

You can see a world map below visualizing the IP addresses that attempt to connect to my router. You can easily create such a map just by clicking on the “geoip2.location2” field in the “Available fields” list in Kibana, and then clicking on the “Visualize” button when it appears below the field name.

<figure class="wp-caption aligncenter" id="attachment_2415" style="width: 600px">world map<figcaption class="wp-caption-text">Map of IP addresses from attempted connections.</figcaption></figure>

Even though I left out many details, this post is already quite lengthy, so I am going to point you to some further reading:

If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or you can even chat with us. For a long list of possibilities, check our contact page at https://syslog-ng.org/contact-us/. On Twitter, I am available as @PCzanik.

The post Creating heat maps using the new syslog-ng geoip2 parser appeared first on Balabit Blog.

LxQT Test Day: 2017-08-17

Posted by Alberto Rodriguez (A.K.A bt0) on August 17, 2017 02:29 AM

Thursday, 2017-08-17, is the LxQT Test Day! As part of this planned Change for Fedora 26, we need your help to test LxQT!

All the instructions are on the wiki page, so please read through and come help us test! As always, the event will be in #fedora-test-day on Freenode IRC.

Original note here:

LxQT Test Day: 2017-08-17

<iframe class="wp-embedded-content" data-secret="oCw5tobFpO" frameborder="0" height="338" marginheight="0" marginwidth="0" sandbox="allow-scripts" scrolling="no" security="restricted" src="https://communityblog.fedoraproject.org/lxqt-test-day-2017-08-17/embed/#?secret=oCw5tobFpO" title="“LxQT Test Day: 2017-08-17” — Fedora Community Blog" width="600"></iframe>

Da FAI

Posted by Casper on August 16, 2017 11:22 PM

For those who have a Freebox, I found a fun little trick. If you are like me and occasionally want to run quick diagnostics of all your network components (short outages, and so on), displaying the Freebox's uptime in a terminal with a single command will certainly be of interest.

You probably know the address that displays the full report:

http://mafreebox.free.fr/pub/fbx_info.txt

So we can already display the full report in a terminal:

casper@falcon ~ % export FBX=http://mafreebox.free.fr/pub/fbx_info.txt
casper@falcon ~ % curl $FBX

It's a start, but it still floods the terminal quite a bit. We can do better...

casper@falcon ~ % curl $FBX 2>/dev/null | grep "mise en route" | cut -d " " -f10,11,12,13
4 heures, 33 minutes

Well, I didn't invent anything, but I hope this trick will be useful to you some day. Feel free to leave a thumbs up, a comment, whatever you like, and above all subscribe to be automatically notified when a new video comes out!

Going to retire Fedora's OmegaT package

Posted by Ismael Olea on August 16, 2017 10:00 PM

OmegaT logo

Well, the time has come and I must face my responsibility on this.

My first important package in Fedora was OmegaT. AFAIK, OmegaT is the best FLOSS computer-aided translation (CAT) tool available. Over time, OmegaT has enjoyed very active development, with a significant (to me) handicap: new releases add new features with new dependencies on Java libraries not available in Fedora. As you know, updating the package requires adding each of those libraries as new packages, but I can’t find the time for such an effort. That’s the reason the last Fedora version is 2.6.3, while the latest upstream releases are 3.6.0 / 4.1.2.

So, I give up. I want to retire the package from Fedora because I’m sure I will not be able to update it anymore.

I’ll wait a few days for someone to express interest in taking ownership. Otherwise, I’ll start the retirement process.

PS: OTOH, I plan to publish OmegaT as a flatpak package via Flathub. It seems to me it would be a lot easier to maintain that way. I’m aware Flathub is out of the scope of Fedora :-/

PPS: I sent an announcement to the Fedora devel mailing list.