Fedora People

Measuring security: Part 2 - The cost of doing business

Posted by Josh Bressers on September 25, 2017 12:48 AM
If you've not read my last post on measuring security you probably should. It talks about how to measure the security of things that make money. That post is mostly focused on things like products that directly generate revenue. This time we're going to talk about a category I'm calling the cost of doing business.

The term "cost of doing business" is something I made up so I could group these ideas in some sensible way. At least sensible to me. You probably can't use this with other humans in a discussion, they won't know what you're talking about. If I had a line graph of spending I would put revenue generating on one side, the purse cost centers on the other side. The cost of doing business is somewhere in the middle. These are activities that directly support whatever it is the organization does to make new money. Projects and solutions that don't directly make money themselves but do directly support things being built that make money.

The cost of doing business includes things like compliance, sending staff to meetings, and sometimes regulatory requirements. These things don't directly generate revenue, but you can't move forward without doing them. There aren't a lot of options in many cases. If you don't have PCI compliance, you can't process payments, you can't make any money, and the company won't last long. If you don't attend certain meetings, nobody can get any work done. Regulated industries must follow their requirements or the company can often just be shut down. Sometimes there are things we have to do, even if we don't want to do them.

In the next post we'll talk about what I call "infrastructure": the things that are seen as cost centers and are often a commodity service (like electricity or internet access). I just want to clarify the difference here. Infrastructure is something where you have a choice; you can decide not to do it and accept a possible negative (or positive) consequence. Infrastructure is what keeps the lights on at a bare minimum. The cost of doing business must be paid to get yourself to the next step in a project; there is no choice, which changes what we measure and how we measure it.

The Example

Let's pick on PCI compliance, as it's a pretty easy example to understand. If you don't do this, it's quite likely your company won't survive, assuming you need to process card payments. If you're building a new web site that will process payments, you have to get through PCI compliance; there is no choice, and the project cannot move forward until this is complete. The goal now isn't so much measuring the return on an investment as it is being a good steward of the resources given to us. PCI requirements and audits are not cheap. If you are seen as making poor decisions and squandering your resources, it's quite likely the business will get grumpy with you.

Compliance and security aren't the same thing. There is some overlap, but it must be understood that you can be compliant and still get hacked. That overlap is a great place to focus when measuring what we do. Did your compliance program make you more secure? Can you show how another group used a compliance requirement to make something better? What if something compliance required saved money on how the network was architected? There are a lot of side benefits to pay attention to. Make sure you note the things that are improvements, even if they aren't necessarily security improvements.

I've seen examples where compliance was used to justify two-factor authentication (2FA) in an organization. There are few things more powerful than 2FA that you can deploy. Showing that compliance helped move an initiative like this forward, and also showing how the number of malicious logins drops substantially, is a powerful message. Just turning on 2FA isn't enough. Make sure you show why it's better and how the attacks are slowed or stopped. Make sure you can show there were few issues for users (the people who struggle will complain loudly). If there is massive disruption for your users, figure out why you didn't know this would happen; it means someone screwed something up. It's important to measure the good and the bad. We rarely measure failure, which is a problem. Nobody has a 100% success rate; learn from your failures.

What about attending a meeting or industry conference? Do you just go, file the expense report, and do nothing? That sounds like a waste of time and money. Make sure you have concrete actions. Write down what happened, why it was important you were there, how you made the situation better, and what you're going to do next. How did the meeting move your project forward? Did you learn something new, or make some plans that will help in the future? Make sure the person paying your bills sees this. Make them happy to be providing you the means to keep the business moving forward.

The Cost

The very first step in measuring what we're doing is to do your homework and understand cost. Not just the upfront cost, but the cost of machines, disk, people, services, anything you need to keep the business moving forward. If there are certain requirements needed for a solution, make sure you understand and document them. If a certain piece of software or service has to be used, show why. Show what part of the business can function because of the cost you're covering. Remember, these are going to be specific requirements you can't escape. These are not commodity services and solutions. And of course the goal is to move forward.

If you inherit an existing solution, take a good look at everything and make sure you know exactly what the resource cost of the solution is. The goal here isn't always to show a return on investment, but to show that the current solution makes sense. Just because something costs less money doesn't mean it's cheaper. If your cut-rate services will put the project in jeopardy, you're going to be in trouble someday. Be able to show this is a real threat. It's possible a decision will be made to take on this threat, but that's not always your choice. Always be able to answer the questions "if we do this, what happens" and "if we don't do this, what happens".

Conclusion
This topic is tricky. I keep thinking about it, and even as I wrote this post it changed quite a lot from what I started to write. If you have something that makes money, it's easy to justify investment. If you have something that's a pure cost center, it's easy to minimize cost. This middle ground is tricky. How do you show value for something you have to do but that isn't directly generating revenue? If you work for a forward-looking business, you probably won't have to spend a ton of time getting these projects funded. Growing companies understand the cost of doing business.

I have seen some companies that aren't growing as quickly fail to see value in the cost of doing business. There's nothing wrong with this sometimes, but as a security leader your job is to make your leadership understand what isn't happening because of this lack of investment. Sometimes if you keep a project limping along, barely alive, you end up causing a great deal of damage to the project and your staff. If leadership won't fund something, it means they don't view it as important and neither should you. If you think it is important, you need to sell it to your leadership. Sometimes you can't and won't win though, and then you have to be willing to let it go.

Fedora 26 - test kernel.

Posted by mythcat on September 24, 2017 09:08 PM
You can test the kernel on your Fedora distro and earn a fun science badge:
Science (Kernel Tester I).
$ git clone https://git.fedorahosted.org/git/kernel-tests.git
$ cd kernel-tests
$ sh runtests.sh

These are my Fedora 26 test logs:
  • 4.14.0-0.rc1.git2.1.fc28.x86_64  FAIL logs
  • 4.13.0-0.rc7.git0.1.fc28.i686+PAE PASS logs
  • 4.14.0-0.rc1.git3.1.fc28.i686 PASS logs
  • 4.13.3-300.fc27.x86_64 FAIL logs
  • 4.13.3-300.fc27.i686+PAE PASS logs
  • 4.12.14-300.fc26.x86_64 PASS logs
  • 4.12.14-300.fc26.i686+PAE PASS logs
  • 4.12.14-200.fc25.x86_64 PASS logs
  • 4.12.14-200.fc25.i686+PAE PASS logs

Kernel 4.13 Test Day 2017-09-27

Posted by Fedora Community Blog on September 24, 2017 06:52 PM

Wednesday, 2017-09-27, is Kernel 4.13 Test Day! As Fedora 27 will be using 4.13, we want to test it across all architectures and different variants of F27.

Why test the kernel?

The Test Day will focus on testing the new kernel, although we have a couple of known bugs. Feel free to explore and triage them too for the compose.
We would also appreciate testing for F28/4.14. As most of you know, with 'No-Alpha', Rawhide should be of Alpha quality. Regression reports will help us too.

We hope to see whether it’s working well enough and catch any remaining issues.

We need your help!

All the instructions are on the wiki page, so please read through and come help us test! As always, the event will be in #fedora-test-day on Freenode IRC.

Share this!

Help promote the Test Day and share the article in your own circles! Use any of the buttons below to help spread the word.

The post Kernel 4.13 Test Day 2017-09-27 appeared first on Fedora Community Blog.

Mirroring free and open source software matters

Posted by Vedran Miletić on September 24, 2017 04:53 PM

Featured image: Patrick Tomasso | Unsplash (photo)

Post theme song: Mirror mirror by Blind Guardian

A mirror is a local copy of a website that's used to speed up access for users residing in the area geographically close to it and to reduce the load on the original website. Content distribution networks (CDNs), which are a newer concept and perhaps more familiar to younger readers, serve the same purpose, but do it in a way that's transparent to the user. When using a mirror, the user will see explicitly which mirror is being used, because the domain will differ from the original website's; in the case of CDNs, the domain remains the same, and the DNS resolution (which is invisible to the user) selects a different server.

Free and open source software has been distributed via (FTP) mirrors, usually residing at universities, basically since its inception. The story of Linux mentions a directory on ftp.funet.fi (FUNET is the Finnish University and Research Network) where Linus Torvalds uploaded the sources, which was soon after mirrored by Ted Ts'o on MIT's FTP server. The GNU Project's history contains an analogous process of making local copies of software for faster downloading, which was especially important in the times of pre-broadband Internet, and it continues today. Many Linux distributions, including this author's favorites Debian and Fedora, use mirroring (see here and here) to be more easily available to users in various parts of the world. If you look carefully at those lists, you can observe that universities and institutes host a significant number of mirrors, which is both a historical legacy and an important role of these research institutions today: researchers and students in many areas depend on free and open source software for their work, and it's much easier (and faster!) if that software is downloadable locally.

Furthermore, my personal experience leads me to believe that hosting a mirror as a university is a great way to reach potential students in computer science. For example, I heard of TU Vienna thanks to ftp.tuwien.ac.at and, had I been willing to do a PhD outside of Croatia at the time, I would certainly have looked into the programs they offered. As another example, Stanford has some very interesting courses/programs at the Center for Computer Research in Music and Acoustics (CCRMA). How do I know that? They went even a bit further than mirroring: they offered software packages for Fedora at Planet CCRMA. I bet I wasn't the only Fedora user who played/worked with their software packages and in the process got interested in checking out what else they do aside from packaging those RPMs.

That being said, we wanted to do both at the University of Rijeka: serve the software to the local community and reach potential students and collaborators. Back in late 2013 we started by setting up a mirror for Eclipse; it first appeared at inf2.uniri.hr/mirrors and later moved to mirrors.uniri.hr, where it still resides. LibreOffice was also added early in the process, and Cygwin quite a bit later. Finally, we started mirroring CentOS's official and alternative architectures as the second mirror in Croatia (but the first one in Rijeka!), the first Croatian one being hosted by Plus Hosting in Zagreb.
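
Setting up such a mirror is conceptually simple: most of the work is a periodic rsync from an upstream tier, plus getting listed on the project's mirror list. As a rough sketch (the upstream module and local path below are illustrative, not our exact configuration), a cron entry could look like this:

# Sync the CentOS tree every six hours; -avSH preserves hard links and sparse files.
0 */6 * * * rsync -avSH --delete rsync://msync.centos.org/CentOS/ /srv/mirrors/centos/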

<iframe frameborder="0" height="274" scrolling="no" src="https://www.facebook.com/plugins/post.php?href=https%3A%2F%2Fwww.facebook.com%2Finf.uniri%2Fposts%2F1519708868068588&amp;width=500" style="border: none; overflow: hidden;" width="500"></iframe>

The University's mirrors server already syncs a number of other projects on a regular basis, and we will make sure we are added to their mirror lists in the coming months. As mentioned, this is both an important historical legacy role of a university and a way to serve the local community, and a university should be glad to do it. In our case, it certainly is.

AWS EMR – Big Data in Strata New York

Posted by Hernan Vivani on September 23, 2017 10:37 PM

Will you be in New York next week (Sept 25th – Sept 28th)?


Come meet the AWS Big Data team at Strata Data Conference, where we’ll be happy to answer your questions, hear about your requirements, and help you with your big data initiatives.

See you there!


CentOS mirrored at the University of Rijeka

Posted by HULK Rijeka on September 23, 2017 08:55 PM

One of the things in the world of free software that almost every newcomer notices is the participation of numerous universities and institutes around the world in the distribution of source code, software packages, and installation images of Linux distributions. (Of course, alongside universities and institutes, there are also numerous IT companies that use the free software distribution process as a kind of "stress test" of their own network infrastructure.) To illustrate the participation of universities, we can look at the mirror list of the GNU project, which includes a number of servers within the USA on the .edu domain. Assessing the situation outside the USA takes a bit more effort, but in the list one can easily spot the Brazilian University of Campinas, the Finnish FUNET (the equivalent of Croatia's CARNet), the Greek University of Crete, the Dutch University of Twente, and many others.

Universities first need a place where software can be downloaded quickly and easily for the needs of their own researchers and students; then, given that a large number of them are publicly funded, they can offer the same service to the local community. Prompted by foreign examples and at the initiative of the author of this text, the University of Rijeka has recently begun offering, as part of its set of mirrors, the second CentOS mirror in Croatia, which alongside the standard distribution also includes packages for alternative processor architectures.

<iframe frameborder="0" height="274" scrolling="no" src="https://www.facebook.com/plugins/post.php?href=https%3A%2F%2Fwww.facebook.com%2Finf.uniri%2Fposts%2F1519708868068588&amp;width=500" style="border: none; overflow: hidden;" width="500"></iframe>

Since every initiative that makes free software more accessible delights us, we welcome the mirroring of any free software project in Croatia, and especially of popular Linux distributions such as CentOS.

A minor addition for todo.txt

Posted by Adam Young on September 23, 2017 05:57 PM

I had a simple todo list I managed using shell scripts and git, but I wanted something for my cell phone. The todo.txt application fills that need now. And I was able to reuse something from my old approach to make it a little more command-line friendly.

I want to be able to add a todo from the command line without thinking about syntax or dates or anything. Here is the code:


#!/bin/sh

# Start the entry with today's date in the YYYY-MM-DD format todo.txt expects.
TODOLINE=`date +%Y-%m-%d`
until [ -z "$1" ] # until all parameters are used up...
do
  TODOLINE="$TODOLINE $1"
  shift
done

# Append the entry to the todo file kept in Dropbox.
pushd $HOME/Dropbox/todo

echo $TODOLINE >> todo.txt

popd

I have it in ~/bin/add_todo, and ~/bin is a part of $PATH. To remember something, I type:

add_todo Meet with Jim on Monday

And in the corresponding document I get:

2017-09-23 Meet with Jim on Monday

The thing I like about the script is that it treats each command line argument as part of the message; no quotes are required.

All systems go

Posted by Fedora Infrastructure Status on September 22, 2017 10:00 PM
Service 'Mailing Lists' now has status: good: Everything seems to be working.

Further on the UX hiring process

Posted by Suzanne Hillman (Outreachy) on September 22, 2017 09:55 PM

Hi again!

The previous post on this topic offered an overall summary of what I’ve been learning in my conversations with folks. Now I’d like to go into a little more detail on some of the topics.

So what should I learn?

Identifying the best areas to focus on is probably one of the hardest tasks, especially for folks who cannot afford to get a degree or do a bootcamp like General Assembly. The guidance offered through official programs is not to be underestimated!

What do you already know?

You almost certainly have experience in _something_ that falls into UX design. Whether it’s researching how to do something, drawing things in your spare time, talking to someone new, explaining a skill or idea to someone else, or trying to use a new piece of software: these are all applicable to UX in some way or another.

The way I like to think about UX research and interactive design breaks down like this (see my quick and dirty handout from a recent talk I did):

<figure></figure>

Everything informs everything else, from the information you gather at the beginning, to the analysis with other folks, to the early sketchy design possibilities you create, through to iterating on your design based on feedback you get from stakeholders and users.

When these designs need to be produced in higher and higher fidelity as your team gets closer to something that works well for the stakeholders, there will likely be continued iterations based on what’s actually feasible and plausible. (I am not as experienced in the visual design aspect of the UX process, so I cannot offer as much structure around that part.)

What do you like to do, what do you need to learn?

Figure out what you know how to do or could easily learn. With that information, you can focus on what you know how to do and how to integrate it into a project, and then on improving any areas you specifically want to learn.

I personally need more practice in visual design and data visualization: I’m not especially familiar with visual design or otherwise making things visually approachable, and these both seem useful to at least have a basis in.

I’m working on identifying the best ways for me to improve these skills, and found that working on badges with Fedora folks helped a bit. Among other things, it meant that I had the opportunity to ask what people did when they did specific things that I might otherwise not have encountered (such as specific keystrokes in design programs).

For other folks, it might be wise to learn the basics of HTML and CSS. Even if you do not wish to write the code for your designs, it is immensely helpful to understand how programming works.

Depending on one's level of familiarity with these, something like https://www.codecademy.com/ might be your best bet. It offers free courses that let you see what you are doing as you go along. You might also appreciate https://codepen.io, which updates with your changes as you go along, and which supports HTML, CSS, and JavaScript.

If you're not familiar with how to phrase things, maybe you want to work on writing content for your designs. Pretend that you are talking to someone who has never run into the thing you are talking about, or to someone who is too busy to give you more than 30 seconds to a minute to read whatever you have to say. Figure out the most concise, but clear, way to say whatever you need to say. Even if you don't want to write the content for your designs, it's really important to be able to express yourself simply and clearly. Words are important, along with visuals and structure.

If you are looking to get into research, it would behoove you to learn some quantitative research, not just qualitative. One of the major skills that folks looking for quantitative researchers want is the ability to tell whether the company is measuring success effectively.

Possible places to get cheap but decent classes include Lynda and Coursera. I’ve done some Coursera courses, specifically “Human-Centered Design: An Introduction”, ”Design Principles: An Introduction”, and “Information Design”.

Whatever it is that you need to learn more about, there is probably a way to do it online (remember to check Youtube!). However, it is often the things one needs the most help in that are the hardest to figure out how to learn on one’s own. Knowing the terminology is important for any successful google search!

(Note: I suspect that offering classes in basic aspects of each piece of the UX process would be a good value for the UXPA Boston group, given the content of the previous paragraph. Not everyone learns from videos or written instruction very well.)

Do a project. Any project.

In my experience, the best way to learn is to find a specific design project — really any design project is fine to start out — and start working on it. If you have friends who write programs, see if they want your help. If you have friends with lots and lots of ideas, ask them to let you help design one of them. If neither of these are the case, consider an area in which you wish that something existed, or in which you wish a piece of software were easier to use. At this point, it matters less if your project goes live — although that’s always preferred if possible — and more that you are working on something.

Take lots of screenshots and notes and keep track of what you’ve tried, what worked, and what didn’t work. These will be useful when it comes time to create your portfolio!

Remember: the point of your first project is to learn, rather than to succeed, and most people learn the best from failure. Failing at something isn’t actually bad. Indeed, it’s almost expected, since you’re new at it. Figuring out where things went wrong is the important part.

That said, it can be difficult to know what to do at any stage of a project, especially if you’ve never tackled one before. This is where having someone you can check in with is invaluable. Not only is UX design not really a solitary activity, but having someone to help nudge you on the right path when you get stuck is fantastic.

If you have a mentor, that’s great. If not, see if you can find other folks who are also job hunting to work with. Chances are good that you are each better at different pieces of the project, and this will provide you both with additional experience.

For possible mentors, join http://designmentors.org/ (credit to David Simpson for this!) and get in touch with someone who looks useful for your needs.

If you’re still struggling to figure out a design idea, this page might be helpful.

If you’re not sure how to approach a project, this site talks about the whiteboard design challenge that sometimes happens in interviews, and is a decent overview of what a design project could involve.

(Note: Offering folks ways to get in touch with others who are looking for their design projects to work on might be a useful feature. Similarly, ways to find mentors.)

Which tools?

In general, you will need to use a tool of some sort for your design project. Paper prototypes are amazing, no doubt about it. Unfortunately, they are difficult to test out remotely, and rely on excellent drawing skills and handwriting to be easily used for prototypes.

There are a large number of options for tools in the UX design space.

Mockups/Prototyping

Some are focused on being easy to use for making low- and medium-fidelity mockups and prototypes (Balsamiq was my first tool, for example; Axure is easy to start with, but a bit more complicated when you want to turn designs into prototypes). Some are specifically meant to help folks turn their designs into prototypes (like Invision, which is free and supports uploading existing designs) and often support collaboration quite easily. Others are more on the visual design side of things, although sometimes they still include fairly easy ways to make mockups and prototypes (Sketch is extremely popular, but Mac-only).

Adobe’s creative cloud service includes a lot of commonly used graphic design tools, whether photoshop (for which Gimp is a decent free and open source substitute, if poorly named), illustrator (vector graphics; try Inkscape for a free and open source substitute), indesign (as far as I can tell it’s about design for publishing online and off? Not sure of the best free equivalent) or the recently added experience design (XD beta, again not sure of an equivalent, although I think it may be meant to compete with Sketch).

The ones I’ve listed above are the most frequently mentioned in job applications, especially Sketch and Adobe creative cloud. Axure and Invision are also quite common. There are a _lot_ of other newer (and often free/beta) options, although I’ve not done much exploring of those.

(note: classes/mentors for basic introductions to the most common design tools might be useful, especially for those who are not already familiar with Adobe Creative Cloud. Not everyone learns from videos/written instruction well)

Other tools and techniques

You may also want to investigate tools for mind mapping (I like MindMeister, free for a small number of maps), which can be useful to keep track of relevant ideas and concepts. Or for remote affinity mapping (I like Realtimeboard, free for a small number of boards) and other sticky-note/whiteboard-based activities.

There are a lot of other techniques that could be good to learn, including task flows and journey maps.

Many companies want folks with experience in the agile framework, so learning what that is and the various ways that design folk have figured out how to integrate into it would be useful.

If you are not already familiar with style guides and pattern libraries, getting a basic understanding of those would be useful.

Ok, I’ve done my first design. Now what?

First, congratulations! That’s often the hardest part.

Review your work

Take a look at what you did with an eye toward improving. What do you want to learn more about? What do you need help with? Where do you feel you excelled?

Read

Take a look at various blogs in UX, as now that you’ve done your first project, you will likely start finding that those start making more sense to you. I found that reading various blogs and watching videos was overwhelming before I’d done a project, because I had no idea what was relevant.

Twitter has a lot of fantastic UX folks, although who you want to follow may be partly location-based. I like Jared Spool, Joe Natoli, Luke Wroblewski, Mule Design Studio, Dana Chisnell, Sarah Mei, and What Users Do.

http://52weeksofux.com/ is an excellent overview site that I really need to revisit myself, now that I’ve got some experience in UX.

I’m also fond of UX Mastery, and the Nielsen Norman Group.

There’s also a lot of good books out there!

(note: a curated list of useful links and books would be really helpful!)

Portfolio

Your best bet would be to summarize what you did, whether as part of your portfolio or as preparation for your portfolio. Keep your eye out for things you would have done differently next time, as well as things you think worked out well. You want to describe your process, and at the same time tell a story about what you did and why. Remember to be clear on what you did and what your teammates did: as I’ve mentioned above, UX is typically a team process.

If you want to write the HTML and CSS yourself, that’s fine. However, beware of the problem of running down rat holes to make things look perfect, and never actually creating a portfolio that you can share. That’s a major reason I’m moving away from a static website to Wix.com — it’s so much easier to do good design if I’m not also trying to write the code.

Tell a story?

I’ve had lots and lots of people say to tell a story, so I’ll share something about that. I had no idea what that actually _meant_ until I had a chance to a) dig deeper into what specifically folks were thinking about and b) see examples of this. One of my major problems is that writing a portfolio for a UX researcher is _hard_. You tend to have fewer pretty things to show folks than the typical graphic design portfolio might, and you may or may not have the design skills to make your portfolio pretty.

To the best of my understanding, your story needs to include as much guidance for your reader as possible. Like everything else, use your nascent UX skills on your portfolio: guide your reader through it.

Guide your reader

Use Gestalt principles to help your reader know where to go next, and I recommend an overview (this links to my in-progress update for my website) of your major goals and results to act as guideposts.

From this page: Include as much as possible of the STAR method in your portfolio to communicate what the situation is (goal of the project), what tasks and actions you accomplished (your UX toolkit of wireframing, usability testing, sitemaps…) and what the end results were (analytics, final designs, customer testimonials).

Note that I’m still struggling with the best way to explain the end results in some of my projects, because they either were one shot things (through hackathons) or are on pause while underlying things are completed.

I’ve got a portfolio, now what?

Get someone to look at it! Just as in everything else, you want someone else to take a look because there will be something you’ve missed, or ways in which you are not as clear as you’d like.

If that’s not an option, take a week or two, and then take another look at it. You’ll probably find typos and brainos (places where what you wrote doesn’t actually make sense), even though you are the one who originally wrote it.

(note: I expect that offering folks portfolio feedback would be really helpful! I’ve personally gotten in touch with someone from designmentors.org and have a review pending)

Do more design work!

Find more projects to work on. Now that you have your first one under your belt, this will go more smoothly, and you likely will find it easier to identify areas to work on.

If you happen to be able to find an internship in UX (say, Outreachy), take it! Guidance is amazing.

Start looking for jobs

This will help you get an idea of what the market looks like right now. It may help you decide what tools or skills to learn, or identify things you specifically _don’t_ want to do. And hey, you might find a job that looks good!

Network!

Honestly, I should have already said this, but this is easier when you have a little experience. At least in my case, having some basic knowledge makes it easier to talk to folks about UX.

Better yet is if you have a specific goal in talking to folks. For example, since I’ve been collecting data about the hiring process in Boston, I’ve had no trouble contacting folks about interviewing them. You may be able to take the tactic of asking folks about what they do in UX, potentially allowing for the opportunity to learn more about UX at their company.

Business (MBA) folk do something called an informational interview. In some cases, this appears to mean talking to folks about UX at their company. In others, it might involve the possibility of going to someone’s company and actually seeing how it works. As far as I can tell, your best bet is to see if you know anyone working at a company that includes UX folks and see if you can get any of them to introduce you. You can also message people on LinkedIn without a connection, but that may not work as well.

Present on your project

If you have the opportunity to present on a project you’ve done, take it. Presenting skills are very important in UX, and practice does help. Talking in front of a group of people can be scary, especially if you’re also trying to get them to hire you. Practice in a safer space, first, if you can.

Be visible online

If you don’t already exist online, you really should. Start a blog (I’m quite fond of Medium) about your UX experiences/learning/thoughts. Be active on twitter. Be visible in your UXness.

What next?

I’ll be chatting with more folks over the coming weeks, and will be speaking to the UXPA Boston board the first week of October. Watch this space!

How to install Steam on Fedora

Posted by Fernando Espinoza on September 22, 2017 05:00 PM

The Steam application is a video game platform that is increasingly popular among desktop and laptop users. This is due, among other things, to the video games we can play without having to wait for a specific platform or operating system. The good thing about Steam is that it has an official client for... Continue reading →


Why all the DAC_READ_SEARCH AVC messages?

Posted by Dan Walsh on September 22, 2017 04:26 PM

If you follow the SELinux policy bugs reported in Bugzilla, you might have noticed a spike in messages about random domains being denied DAC_READ_SEARCH.

Let's quickly look at what the DAC_READ_SEARCH capability is. In Linux the power of "root" was broken down into 64 distinct capabilities: things like being able to load kernel modules or bind to ports less than 1024. Well, DAC_READ_SEARCH is one of these.

DAC stands for Discretionary Access Control, which is what most people understand as standard Linux permissions: every process has an owner and group, and all file system objects are assigned owner, group, and permission flags. DAC_READ_SEARCH allows a privileged process to ignore parts of DAC for read and search.

man capabilities

...

       CAP_DAC_READ_SEARCH

              * Bypass file read permission checks and directory read and execute permission checks;

There is another CAPABILITY called DAC_OVERRIDE

       CAP_DAC_OVERRIDE

              Bypass file read, write, and execute permission checks.

As you can see, DAC_OVERRIDE is more powerful than DAC_READ_SEARCH, in that it can write and execute content ignoring DAC rules, as opposed to just reading the content.

Well why did we suddenly see a spike in confined domains needing DAC_READ_SEARCH?

Nothing in policy changed, but the kernel changed. Basically the kernel was made a little more secure. Let's look at an example. There is a small program called unix_chkpwd (chkpwd_t) which you end up executing when you log into the system. This program reads /etc/shadow. On Fedora/RHEL/CentOS, and probably other GNU/Linux systems, /etc/shadow has mode 0000. This means NO processes on the system, even those running as root (UID=0), are allowed to read or write /etc/shadow unless they have a DAC capability.

Well, as policy evolved we saw that chkpwd_t needed to read /etc/shadow; it generated a DAC_OVERRIDE AVC, so a policy developer added that SELinux allow rule to chkpwd_t. And for years things worked properly, but then the kernel changed...

If a process tried to read /etc/shadow, it would be allowed if it had either DAC_OVERRIDE or DAC_READ_SEARCH. 


Older kernels had pseudo code like:

if DAC_OVERRIDE or DAC_READ_SEARCH:
    read a file with 0000 mode

The new kernel switched to:

if DAC_READ_SEARCH or DAC_OVERRIDE:
    read a file with 0000 mode

Since chkpwd_t had DAC_OVERRIDE, the older kernels never checked DAC_READ_SEARCH, and therefore DAC_READ_SEARCH was never added to policy. Now the kernel checks DAC_READ_SEARCH first, so we see the AVC being generated even though the final access was allowed.

This has generated a lot of noise in people's SELinux logs, but really nothing got denied. After the AVC was generated, the access was still allowed.
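
If you want to check for these messages on your own system, you can search the audit log (assuming auditd is running):

ausearch -m avc --start recent | grep -i dac_read_search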

The policy package maintainers have been updating the policy to allow these new DAC_READ_SEARCH checks, and I have suggested that they start dropping a lot of the DAC_OVERRIDE rules from domains, since many of them, including chkpwd_t, don't need to write /etc/shadow and should be able to get by with only reading it. This should eventually make SELinux policy more secure.
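
For illustration, the change amounts to swapping allow rules like the following (a sketch in standard type enforcement syntax; the actual rules live in the distribution's policy sources):

# Old rule: the domain carried the more powerful capability.
allow chkpwd_t self:capability dac_override;

# New rule: reading /etc/shadow only needs the weaker capability.
allow chkpwd_t self:capability dac_read_search;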



Maildrop.cc

Posted by Fernando Espinoza on September 22, 2017 03:05 PM

Hello, friends of the blogosphere. Today I bring you good news: I came across a very interesting and, above all, very useful email service. This web service is known as "maildrop"; it helps us create a public mailbox that we can use for anything, and above all to avoid filling our real inbox with spam or unwanted messages... Continue reading →


Tip: Changing the qemu product name in libguestfs

Posted by Richard W.M. Jones on September 22, 2017 01:03 PM

20:30 < koike> Hi. Is it possible to configure the dmi codes for libguestfs? I mean, I am running cloud-init inside a libguestfs session (through python-guestfs) in GCE, the problem is that cloud-init reads /sys/class/dmi/id/product_name to determine if the machine is a GCE machine, but the value it read is Standard PC (i440FX + PIIX, 1996) instead of the expected Google Compute Engine so cloud-init fails.

The answer is yes, using the guestfs_config API that lets you set arbitrary qemu parameters:

g.config('-smbios',
         'type=1,product=Google Compute Engine')
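
In context, a minimal python-guestfs session might look like this (the disk image path is just an example; note that the config call must come before launch):

import guestfs

g = guestfs.GuestFS(python_return_dict=True)
# Pass extra qemu parameters; this must happen before g.launch().
g.config('-smbios', 'type=1,product=Google Compute Engine')
g.add_drive_opts('/var/tmp/disk.img', format='raw')  # example image
g.launch()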

Join the Magazine team

Posted by Fedora Magazine on September 22, 2017 08:00 AM

The recent Flock conference of Fedora contributors included a Fedora Magazine workshop. Current editorial board members Ryan Lerch, Justin W. Flory, and Paul W. Frields covered how to join and get started as an author. Here are some highlights of the workshop and discussion that took place.

Writing process

The process of writing an article for Fedora Magazine is simple and involves only a few steps.

  1. The writer pitches an idea which summarizes the topic and its objectives.
  2. Once the pitch is approved, the writer creates a draft.
  3. The editors work with the author to get the article finished and scheduled for publishing.

The Fedora Magazine editorial board meets every Tuesday, which is where approvals and scheduling happen. If you follow the process, you can usually expect your article to be published within a week or two, depending on the queue and time-critical articles.

The full process of writing an article is covered here on the Magazine itself.

Tips for better articles

The Magazine provides articles about and outreach for Fedora. As a result, articles focus mainly on Fedora users, rather than just contributors. However, the Community Blog provides a focus on contributors, and might be the right place for some news. Ask yourself a few questions as you think about your pitch or article:

  • Who’s your target audience? What kind of readers are you talking to?
  • What do you want them to know or achieve by reading the article?

Also, articles don’t have to be big and complex. If you think your article is going beyond 500-600 words, you may want to break it up into two or more shorter, simpler articles. The Magazine also hosts series of articles, as long as authors are willing to write several entries before publishing the first.

The Magazine features a dedicated page full of tips and advice on writing better articles: Tips for article style, grammar, content, and SEO.

Get your red hot pitches

During the workshop, some fantastic pitch ideas were discussed.

  • Kernel benchmarking in Fedora
  • Add-ons to improve your privacy on Thunderbird
  • How to use Thunderbird mail filters
  • Installing Mycroft.ai
  • How to use Webex on Fedora
  • Awesome GNOME extensions for developers
  • Installing Hawkular on Fedora
  • How to create a stratum 0 time source

The editorial board has already pre-approved each of the above pitch ideas. One of them might be a good start for your first Magazine article. To claim a pitch, just follow the process described above.

Join the team

The Magazine’s reach and readership grows steadily every release. You too can be part of this team — it’s easy to get started and fun to work on. Your contributions directly impact the growth of the Magazine. Why not join today?


Photo by Aaron Burden on Unsplash

Flock 2017 - Event Report

Posted by Giannis Konstantinidis on September 22, 2017 12:00 AM

Flock to Fedora is the premier Fedora event, held annually in either EMEA or NA. A few weeks ago, I had the opportunity to travel to Cape Cod, USA, and participate in the latest edition of the conference.

I hold multiple roles within the Fedora Project: FAmSCo Vice-Chairman, FAmA Member, Ambassador Mentor and Ambassador. In addition, I am involved with Mozilla as a ReMo and Tech Speaker and therefore sometimes I feel I act as a bridge between the Fedora Project and Mozilla.

During the Ambassadors Workshop (photo by Mariana Balla, CC BY-SA)

I attended the “Diversity Team Hackfest” whose goals included planning diversity-specific events and defining the interactions between the Diversity Team and other Fedora Sub-Projects and SIGs. I was delighted to see the Mozilla Community Participation Guidelines and the Mozilla Diversity & Inclusion Strategy not only being referenced, but also highlighted as best practices.

I suggested the Diversity Team should focus on helping people -regardless of their background- on-board Fedora Sub-Projects and SIGs and encourage existing contributors to join the Diversity Team so that the activities of the latter may be gradually expanded.

With Open Labs Members (photo by Mariana Balla, CC BY-SA)

I simply could not miss "Fedora Ambassadors: The Future", either. Ambassadors from various countries sat down to share concerns and exchange ideas and best practices. Plenty of food for thought, I must say.

I referenced the EMEA Event Plan and proposed that APAC, NA, and LATAM adopt the same model. I also pointed out that contributors do not need to be ambassadors to submit regional funding requests, although it is recommended. An important problem we are facing is that there is no way of collecting event metrics, which prevents the Fedora leadership from having a clear overview. Mozilla's ReMo, for example, solved this a long time ago.

To conclude, Flock brought me together with hundreds of Fedora contributors from across the globe. We collaborated extensively, brainstormed, tackled issues, and certainly enjoyed every moment. When it comes to free and open-source software, communities are the biggest strength, and we have proven that ourselves.

News: The new Krita 3.3.0.

Posted by mythcat on September 21, 2017 09:02 PM
The new Krita comes for Linux users as a 64-bit build: krita-3.3.0-rc1-x86_64.appimage.
As you know, AppImage is a format for distributing portable software on Linux without needing superuser permissions to install the application. This new Krita release comes with several improvements and features:
  • support for the Windows 8 event API;
  • hardware-accelerated display functionality that can optionally use ANGLE on Windows instead of native OpenGL;
  • fixes for some visual glitches when using hi-dpi screens;
  • several new command line options;
  • performance improvements and fixes to selections;
  • an improved system information dialog for bug reports.
You can read more about this release here.

Optionsbleed: Don’t get your panties in a wad

Posted by Jeroen van Meeuwen on September 21, 2017 07:30 PM
You’re a paranoid schizophrenic if you think optionsbleed affects you in any meaningful way beyond what you should have already been aware of, unless you run systems with multiple tenants that upload their own crap to document roots and you’ll happily serve as-is, yet pretend to provide your customers with security; this is a use-after-free… Continue reading Optionsbleed: Don’t get your panties in a wad

Samba 4.7.0 (Samba AD for the Enterprise)

Posted by Andreas Schneider on September 21, 2017 04:00 PM

Enterprise distributions like Red Hat or SUSE are required to ship with MIT Kerberos. The reason is that several institutions and governments have a hard requirement for this specific Kerberos implementation. It is the reason why the distributions by these vendors (Fedora, RHEL, openSUSE, SLES) only package the Samba file server and not the AD component.

It was clear that to get Samba AD into RHEL some day, we needed to port it to MIT Kerberos.

In 2013 we started to think about this. The first question that arose was: how do we run the tests if we port to MIT Kerberos? We want to start the krb5kdc daemon. This was more or less the birth of the cwrap project! Think of cwrap as "The Matrix", where reality is simulated and everything is a lie. It allows us to create an artificial environment emulating a complete network to test Samba. It took nearly a year till we were able to integrate the first part of cwrap, socket_wrapper, into Samba.

Then the work to port Samba AD to MIT Kerberos started. We created a simple abstraction of the Samba KDC routines so we could switch between Heimdal and MIT Kerberos, then created a MIT KDB module and were able to start the krb5kdc process.

In 2015 we had more than 140 patches for Samba AD ready and pushed most of them upstream in April. We still had 70 test suites failing. We started to implement missing features and fixed tests to work with the MIT Kerberos tools. During that time we often had setbacks because features we required were missing in MIT Kerberos, so we started to implement those missing features in MIT Kerberos as well.

In September of 2015 we started to implement the missing pieces in 'samba-tool' to provision a domain with MIT Kerberos involved. By the end of the year we had implemented the backup key protocol using GnuTLS (which also needed to add features for us first).

From January till July 2016 we implemented more features in MIT Kerberos to get everything working. In August we had most of the stuff working; just the trust support wasn't. From there we discovered bug after bug in how trusts are handled and fixed them one by one. We had to do major rewrites of code in order to get everything working correctly. The outcome was great: we improved our trust code and got MIT Kerberos working in the end.

2017-04-30

That’s the day when I pushed the final patchset to our source code repository!

It took Günther Deschner, Stefan Metzmacher, and me more than 4 years to implement Samba AD with MIT Kerberos. Finally, with the release of Samba 4.7.0, it is available and ready to use for everyone.

Fedora 27 will be the first version with Samba AD.

Using the WordPress App with the Community Blog

Posted by Fedora Community Blog on September 21, 2017 01:00 PM

Occasionally the CommBlog has had an issue with the WordPress interface that allows you to edit articles: the visual editor gets stuck and the interface stops working. Instead, I use the WordPress Desktop App. This application allows you to edit and manage a wordpress.com blog, but it also allows you to edit and manage a self-hosted WordPress blog through Jetpack. This app will allow you to post on the CommBlog from your laptop without any problem. The only requirement is a wordpress.com account.

Installation

There is no package or installation required; just download the binary and run it. First, download the latest version of the app from the WordPress desktop app site. Choose the tar.gz file. At this moment, the latest version is 2.7.1.

After downloading the file, unpack it:

tar xzf wordpress-com-linux-x64-2-7-1-tar.gz

This will give you a folder called WordPress.com-linux-x64, which has the binary app called WordPress.com inside of it.

WordPress Folder Content

You also need the dependency libXss.so.1, which is provided by libXScrnSaver. This is available via a dnf install:

sudo dnf install libXScrnSaver

Finally, you just need to execute the WordPress.com file.
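
For example, from the directory where you unpacked the archive (folder name taken from the listing above):

cd WordPress.com-linux-x64
./WordPress.com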

Configuration

To configure it, just execute the app and follow the steps shown on the screen.

Login:

WordPress.com App Login Screen

You will be prompted to use the My Sites widget to add a new site:

My Sites changer widget

Click on “Add New Site” and in the next screen select “Add an existing WordPress site with Jetpack” and write communityblog.wordpress.com:

Jetpack association


You will be prompted to log in with your FAS account (in the default browser), and then it will make the link between your wordpress.com account and the Jetpack plugin on the CommBlog. After a few minutes you will see the blog configured in the app:

Final Screen with Blog added

You will need to close your browser and then you can use the app to write and manage the CommBlog without any problem.

The Cons

Right now the only problem I have found is that I can't upload media to the Media Library. My workaround has been to upload the media files from the browser and then switch to the app to write the articles.

The post Using the WordPress App with the Community Blog appeared first on Fedora Community Blog.

Fix Xmarks Bookmark Sync in the Opera browser.

Posted by mythcat on September 21, 2017 12:27 PM
Xmarks Bookmark Sync is a good web tool for managing all your browser bookmarks.
The official Xmarks website provides extensions only for Firefox, Google Chrome, Internet Explorer, and Safari.
This can be fixed in the Opera browser with another extension named Download Chrome Extension.
Using that extension, you can install many Google Chrome extensions in the Opera browser.

Cockpit 151

Posted by Cockpit Project on September 21, 2017 09:00 AM

Cockpit is the modern Linux admin interface. We release regularly. Here are the release notes from version 151.

Support loading SSH keys from arbitrary paths

The User menu's Authentication dialog now supports entering arbitrary paths to SSH keys for adding to the SSH authentication agent. Previously it only offered keys present in the standard ~/.ssh directory.

See it in action:

<iframe allowfullscreen="" frameborder="0" height="720" src="https://cockpit.fedorapeople.org/add-keys.webm" width="960"></iframe>

Support X-Forwarded-Proto HTTP header for Kubernetes

Newer Kubernetes versions support reading the X-Forwarded-Proto HTTP header, which helps to determine whether or not a client used SSL to connect to an intermediate proxy, load balancer, or similar. Cockpit’s Kubernetes (Cluster) dashboard now sets this header. Earlier versions have already done that when hosted in OpenShift.

Fix Kubernetes connection hangs

The previous Cockpit release 150 introduced a regression when connecting to Kubernetes clusters. In some cases, like specifying a wrong server name or when the Cluster did not send Authentication Provider information, the connection attempt would hang indefinitely. This version corrects this bug.

Try it out

Cockpit 151 is available now.

libinput and the HUION PenTablet devices

Posted by Peter Hutterer on September 21, 2017 04:52 AM

HUION PenTablet devices are graphics tablet devices aimed at artists. These tablets tend to aim for the lower end of the market, and driver support is often somewhere between meh and disappointing. The DIGImend project used to take care of them, but with that out of the picture, the bugs bubble up to userspace more often.

The most common bug at the moment is a lack of proximity events. On pen devices like graphics tablets, we expect a BTN_TOOL_PEN event whenever the pen goes in or out of the detectable range of the tablet ('proximity'). On most devices, proximity does not imply touching the surface (that's BTN_TOUCH or a pressure-based threshold); on anything that's not built into a screen, proximity without touching the surface is required to position the cursor correctly. libinput relies on proximity events to provide the correct tool state, which in turn is relied upon by compositors and clients.

The broken HUION devices only send BTN_TOOL_PEN once, when the pen first goes into proximity, and then never again until the device is disconnected. To make things more fun, HUION re-uses USB IDs, so we cannot even reliably detect the broken devices and apply the usual hardware quirks. So far, libinput support for HUION devices has thus been spotty. The good news is that libinput git master (and thus libinput 1.9) will have a fix for this. The one thing we can rely on is that tablets keep sending events at the device's scanout frequency. So in libinput we now add a timeout to the tablets and assume proximity-out has happened when it expires. libinput fakes a proximity-out event and waits for the next event from the tablet, at which point we'll fake a proximity-in before processing the events. This is enabled on all HUION devices now (re-using USB IDs, remember?) but not on any other device.
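
Schematically, the quirk behaves like the pseudocode below (written in Python for readability; the names are invented for illustration and are not libinput's actual C internals):

PROX_TIMEOUT = 0.050  # seconds of silence before we assume proximity-out

def handle_tablet_event(tablet, event):
    if tablet.faked_prox_out:
        queue_proximity_in(tablet)    # fake prox-in before handling new events
        tablet.faked_prox_out = False
    process(tablet, event)
    tablet.timer.rearm(PROX_TIMEOUT)  # events arrived: reset the silence timer

def on_timer_expired(tablet):
    queue_proximity_out(tablet)       # device went quiet: fake prox-out
    tablet.faked_prox_out = True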

One down, many more broken devices to go. Yay.

Documentation and Modularity at Flock 2017

Posted by Fedora Community Blog on September 20, 2017 07:05 PM

If I had to choose one buzzword for Flock 2017 at Cape Cod, it would be ‘modularity’. Modules, module building, module testing, and module explaining seemed to be all over the place. I attended to give a workshop (with Aneta ŠP) about a proposed way to inject new life into the Fedora Documentation Project.

Not to be outdone, we had modules featuring quite importantly in our workshop, too. But not the kind of modules that make the rest of Fedora modular. Ours were much simpler conceptually. We presented them mostly as a means to an end for efficient authoring of documentation based on user stories.

Documentation Based on User Stories (My Participation)

The main idea behind our docs workshop stemmed from the following: while it’s great to have a lot of docs that users can refer to, it’s very hard to properly maintain them. How do we reduce the amount of docs, so that we only offer stuff that is really critical for users? At the same time, how do we eliminate some of the maintenance burden on docs contributors? From downstream work on Red Hat documentation, we have good experience with basing docs on user stories (yes, those agile user stories). This ensures good targeting as well as strong focus on the most important things.

Docs Modularity

Where does documentation modularity come into the picture? To achieve some basic consistency and to make docs work more accessible to one-off or first-time contributors, we have developed a set of templates. These greatly simplify the way writers can author user story-based documentation. To accompany that, we wrote a concise guide for potential contributors. It contains all the info they may need to start using the templates (see the GitHub repo). Our workshop aimed to introduce all these concepts and resources. We opted for a ‘learning by doing’ model. To that end, we invited all attendees to get their hands dirty with real-world docs. Together, we ‘modularized’ several user stories.

modular documentation workshop

People liked the general ideas, and we even got a bunch of pull requests out of the workshop. So, I believe we’re on a good path. Fedora docs now also use new tooling (AsciiBinder replaced Publican) and a new source format (AsciiDoc replaced DocBook). Both publishing and contributing should be easier.

Look at the new website at https://docs.fedoraproject.org/. Join the docs team at https://fedoraproject.org/wiki/Docs_Project. And browse our repos at https://pagure.io/projects/fedora-docs/*. You can read more about the ideas and the workshop itself in the pre-Flock interview: Turning Legacy Docs into User-Story-Based Content. See also Aneta’s blog post: Two Docs Workshops at Flock 2017. Thanks to the Fedora Project for sponsoring us to attend – I think we turned the workshop into a success.

Interesting Sessions

Thursday afternoon was packed with sessions that introduced Fedora modularity, from basic concepts to practical examples of building and testing modules. It started with Langdon White’s, Ralph Bean’s, and Adam Šámalík’s short introductory talks, grouped into Modularity – the future, building, and packaging. Adam then continued with When to go fully modular?, Tomáš Tomeček with Let’s create a module, and Petr Hráček wrapped it up with Let’s create tests for modules/containers.

Learn More about Fedora Modularity

Even though the sessions were recorded, they were really hands-on. So, it might serve you better to take a look at some of the resources that people presented. First of all, go read the introduction to Fedora modularity at Building a modular Linux OS (with multiple versions of components on different lifecycles). The page provides all the info to get you started, including nice graphical schemes and links to the modularity mailing list and IRC channel.

If you want to start building modules, you will need to have a look at the module metadata definition template and the modulemd library used for manipulating it: https://pagure.io/modulemd.

Continuous Integration

Stef Walter’s presentation, Continuous Integration and Delivery of our Operating System, offered a very nice introduction into CI/CD. It included reasoning for its use, and the status of implementation in the Fedora (Atomic Host) building infrastructure. It wasn’t a very technical talk, which made it accessible even to people in the audience who had no previous experience with CI/CD.

Stef explained what integration tests can and cannot do, and in what ways they can free up human resources for working on more interesting things that robots cannot do (yet). The imaginative slides showed how a package travels through a CI pipeline and where along that path the system tests it and potentially quarantines it if broken.

Overall

The last time I attended Flock was in Prague in 2014. It was a much more massive event, with more people, more talks, and crucially, more talk. Really, the majority of the program consisted of presentations that talked about various aspects of the Fedora Project. While there were some workshops and offshoot hands-on sessions, it was totally different from Flock 2017.

The focus on ‘do sessions’ made this year’s Flock feel more like a hackathon than a conference, and I liked it. There was a sense of purpose in the air right from the start. One thing that, in my view, greatly helped to maintain this focus was the fact that the event and the hotel shared the same premises.

Before I arrived, I had considered that a nice touch that would save us some time and effort. But it proved to be a fantastic way of keeping all the attendees engaged all the time. My only nitpick would be that the actual conference rooms and space were depressingly cold, windowless, and a bit past their ’90s prime.

The post Documentation and Modularity at Flock 2017 appeared first on Fedora Community Blog.

Two Docs Workshops at Flock 2017

Posted by Fedora Community Blog on September 20, 2017 05:07 PM

This year’s Flock saw two documentation workshops. One focused on reviving Fedora documentation as modular docs based on user stories. The other had participants helping to document Atomic Host features.

Reviving Fedora Documentation: Modular Docs Based on User Stories

At the workshop named Turning Legacy Docs into User-Story-Based Content, participants got hands-on experience with modular documentation. They also learned how this writing approach fits in with plans to revive Fedora docs.

Modular documentation is documentation based on modules, which the writer combines into assemblies (think articles). Each assembly documents a user story. See Modular Documentation Reference Guide for details.

After a short introductory talk, we rolled up our sleeves and started writing. Five people showed up for the workshop, and we had four pull requests at the end. We would have been able to submit more, but the workshop naturally transitioned into a conversation about modular docs tooling, writing workflows, and the word bolus. The last one isn’t really relevant to mod docs, so let me summarize just the first two.

A few important benefits of writing modular docs based on user stories:

  • Easier onboarding thanks to templates.
  • Experienced writers can help junior team members by pre-preparing the structure of assemblies and modules. These junior people, who might otherwise feel intimidated to start editing a large guide, just fill in small files (documentation modules).
  • Convenient writing workflow involving different people at different stages. One person files an issue on pagure proposing a user story to document, another fleshes out the details (what modules the assembly should consist of), and others then choose modules they feel comfortable with.

Of course, modular writing has its challenges as well. For example, managing the docs can be more difficult because you need to navigate a repo with a large set of modules. But because long non-modular guides are also very difficult to manage, the transition to modular docs doesn’t seem to make things worse.

To learn more about modular docs based on user stories, see Documentation Based on User Stories on opensource.com and the repo for the Modular Documentation Reference Guide. Among other things, the guide provides guidelines on writing assemblies and modules. Contributions are welcome!

Atomic Host Docs Challenge: Who Can Write the Most Docs?

The Atomic Host has a number of interesting features, but many of them aren’t documented. The organizers of the workshop named Fedora Atomic Doc Work showed the participants a work-in-progress guide and asked them to fill in the sections that were empty.

Atomic Doc Work workshop at Flock 2017, Hyannis, MA.

After a brief introduction to explain the publication tool chain and the basics of AsciiDoc, we set to work. The task was to:

  1. Fork and clone the repo for Atomic Docs.
  2. Pick an issue, and comment on it to let others know someone has claimed it.
  3. Write the docs!

A lot of the issues included existing resources (docs on GitHub, Red Hat docs, etc.) that we could use as a starting point to create the new Atomic docs. For example, see this issue for adding a section about how Atomic Host revolutionizes OSes.

To encourage everyone to try their best, Atomic-themed swag awaited the most significant contributors. Prizes went to people who submitted the most pull requests, worked on issues without any pre-prepared external resources, and to the most productive external (non-Red Hat) contributor.

A number of issues for the Atomic Host docs are still open and up for grabs. If you’d like to help, feel free to claim one for yourself. You might want to start with the Atomic Host Documentation Contribution Guide, especially if you are new to AsciiDoc and AsciiBinder.

The post Two Docs Workshops at Flock 2017 appeared first on Fedora Community Blog.

Import RPM repository GPG keys from other keyservers temporarily

Posted by Major Hayden on September 20, 2017 03:24 PM

I’ve been working through some patches to OpenStack-Ansible lately to optimize how we configure yum repositories in our deployments. During that work, I ran into some issues where pgp.mit.edu was returning 5xx errors for some requests to retrieve GPG keys.

Ansible was returning this error:

curl: (22) The requested URL returned error: 502 Proxy Error
error: http://pgp.mit.edu:11371/pks/lookup?op=get&search=0x61E8806C: import read failed(2)

How does the rpm command know which keyserver to use? Let’s use the --showrc argument to show how it is configured:

$ rpm --showrc | grep hkp
-14: _hkp_keyserver http://pgp.mit.edu
-14: _hkp_keyserver_query   %{_hkp_keyserver}:11371/pks/lookup?op=get&search=0x

How do we change this value temporarily to test a GPG key retrieval from a different server? There’s an argument for that as well: --define:

$ rpm --help | grep define
  -D, --define='MACRO EXPR'        define MACRO with value EXPR

We can assemble that on the command line to set a different keyserver temporarily:

# rpm -vv --define="%_hkp_keyserver http://pool.sks-keyservers.net" --import 0x61E8806C
-- SNIP --
D: adding "63deac79abe7ad80e147d671c2ac5bd1c8b3576e" to Sha1header index.
-- SNIP --

Let’s verify that our new key is in place:

# rpm -qa | grep -i gpg-pubkey-61E8806C
gpg-pubkey-61e8806c-5581df56
# rpm -qi gpg-pubkey-61e8806c-5581df56
Name        : gpg-pubkey
Version     : 61e8806c
Release     : 5581df56
Architecture: (none)
Install Date: Wed 20 Sep 2017 10:17:11 AM CDT
Group       : Public Keys
Size        : 0
License     : pubkey
Signature   : (none)
Source RPM  : (none)
Build Date  : Wed 17 Jun 2015 03:57:58 PM CDT
Build Host  : localhost
Relocations : (not relocatable)
Packager    : CentOS Virtualization SIG (http://wiki.centos.org/SpecialInterestGroup/Virtualization) <security@centos.org>
Summary     : gpg(CentOS Virtualization SIG (http://wiki.centos.org/SpecialInterestGroup/Virtualization) <security@centos.org>)
Description :
-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: rpm-4.11.3 (NSS-3)

mQENBFWB31YBCAC4dFmTzBDOcq4R1RbvQXLkyYfF+yXcsMA5kwZy7kjxnFqBoNPv
aAjFm3e5huTw2BMZW0viLGJrHZGnsXsE5iNmzom2UgCtrvcG2f65OFGlC1HZ3ajA
8ZIfdgNQkPpor61xqBCLzIsp55A7YuPNDvatk/+MqGdNv8Ug7iVmhQvI0p1bbaZR
0GuavmC5EZ/+mDlZ2kHIQOUoInHqLJaX7iw46iLRUnvJ1vATOzTnKidoFapjhzIt
i4ZSIRaalyJ4sT+oX4CoRzerNnUtIe2k9Hw6cEu4YKGCO7nnuXjMKz7Nz5GgP2Ou
zIA/fcOmQkSGcn7FoXybWJ8DqBExvkJuDljPABEBAAG0bENlbnRPUyBWaXJ0dWFs
aXphdGlvbiBTSUcgKGh0dHA6Ly93aWtpLmNlbnRvcy5vcmcvU3BlY2lhbEludGVy
ZXN0R3JvdXAvVmlydHVhbGl6YXRpb24pIDxzZWN1cml0eUBjZW50b3Mub3JnPokB
OQQTAQIAIwUCVYHfVgIbAwcLCQgHAwIBBhUIAgkKCwQWAgMBAh4BAheAAAoJEHrr
voJh6IBsRd0H/A62i5CqfftuySOCE95xMxZRw8+voWO84QS9zYvDEnzcEQpNnHyo
FNZTpKOghIDtETWxzpY2ThLixcZOTubT+6hUL1n+cuLDVMu4OVXBPoUkRy56defc
qkWR+UVwQitmlq1ngzwmqVZaB8Hf/mFZiB3B3Jr4dvVgWXRv58jcXFOPb8DdUoAc
S3u/FLvri92lCaXu08p8YSpFOfT5T55kFICeneqETNYS2E3iKLipHFOLh7EWGM5b
Wsr7o0r+KltI4Ehy/TjvNX16fa/t9p5pUs8rKyG8SZndxJCsk0MW55G9HFvQ0FmP
A6vX9WQmbP+ml7jsUxtEJ6MOGJ39jmaUvPc=
=ZzP+
-----END PGP PUBLIC KEY BLOCK-----

Success!

If you want to override the value permanently, create a ~/.rpmmacros file and add the following line to it:

%_hkp_keyserver http://pool.sks-keyservers.net

Photo credit: Wikipedia

The post Import RPM repository GPG keys from other keyservers temporarily appeared first on major.io.

Flock 2017 – test, test, test

Posted by Fedora Community Blog on September 20, 2017 03:00 PM

FLOCK 2017 – Testing Testing Testing

I attended Flock for the first time this year. I didn’t know what to expect. We had prepared a workshop about Meta-Test-Family (MTF) to present there.

Stef’s presentation: Ansible testing invocation

I attended as many Ansible presentations as possible. The first was led by Stef Walter, a colleague of mine from the Cockpit development team. Some things I already knew, because I’m part of the QA team, and some were new to me, because I am not in direct touch with upstream Ansible.

Known facts for me

  • Ansible allows us to schedule tests and provides reliable results.
  • Ansible will contain standard test roles, which allow us to create simpler tests. For me the beakerlib role is especially important, because we use beakerlib regularly.

What’s new for me

  • These Ansible scripts carry metadata (called tags) that let us specify the proper environment for running the commands.
  • There is an inventory that helps you generate the YAML and prepares machines for you. This is very interesting and promising, especially the machine preparation: it lets you debug tests locally inside the same environment that a CI system will prepare for you.

I also invited attendees to the workshop for this project, and we discussed how to use Ansible with MTF, especially for container test subjects. It is very important to be clear about what the test subject actually is; this vocabulary sometimes causes lots of confusion (artifact, test, subject, container, docker, image, etc. 🙂).

 

MTF – meta test family workshop

We held a workshop about MTF. MTF began as a project to help you write tests for modules (and eventually any final build artifact). It allows you to write the same tests for containers, RPM repositories (module composes), or whatever comes next (ISO images, OpenShift).

We started with a short presentation about the project, and we expected to have some tests as an output of this workshop. But it turned out the attendees were more interested in talking :-), so we changed it into a discussion. We heard lots of feedback and ideas about what we should implement and how to bring it to more people.

Used technologies

  • docker – for container testing
  • systemd-nspawn – for module testing; it is similar to docker but allows systemd to run inside
  • Avocado – the primary testing framework, although MTF does not force you to write tests in Avocado. It is your choice; Avocado is the simplest, but you can use various frameworks around MTF – unittest, py.test, nosetest, behave. A minimal example is sketched below.
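For a taste of what such a test looks like, here is a minimal, self-contained Avocado test. It is my own sketch, not an MTF example, and the httpd package is just a stand-in:

from avocado import Test
from avocado.utils import process


class HttpdInstalledTest(Test):
    """A toy smoke test; MTF layers module/container setup on top of tests like this."""

    def test_package_installed(self):
        # rpm -q exits with a non-zero status when the package is missing
        result = process.run('rpm -q httpd', ignore_status=True)
        self.assertEqual(result.exit_status, 0, 'httpd is not installed')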

Basic ideas that are already implemented and were discussed:

  • Continuous integration for the project itself, to prove that it keeps working. This is connected to another topic: supporting alternative Linux distributions.
  • Making it possible to test each artifact separately. We can use MTF to test just containers, or just RPM repositories, and have specific tests for them. Some artifact tests are too specific to generalize to other artifacts.
  • The previous point also led us to use it for testing plain Fedora (the non-modular one). You can use it for writing tests for your package.
  • MTF helps you write multi-host tests (container, RPM repository). This is a side effect of using the lower level of the library: you can have as many instances of the container/nspawn classes as you want.

 

 

Conclusion

It was a perfect conference, and I was proud to be part of the Fedora community. I liked the atmosphere and enjoyed talking to many interesting people in person.

I’ll be glad to see you all at Flock 2018!

Honza

The post Flock 2017 – test, test, test appeared first on Fedora Community Blog.

Bluetooth on Fedora: joypads and (more) security

Posted by Bastien Nocera on September 20, 2017 01:31 PM
It's been a while since I posted about Fedora specific Bluetooth enhancements, and even longer that I posted about PlayStation controllers support.

Let's start with the nice feature.

Dual-Shock 3 and 4 support

We've had support for Dual-Shock 3 (aka Sixaxis, aka PlayStation 3 controllers) for a long while, but I've added a long-standing patchset to the Fedora packages that changes the way devices are setup.

The old way was: plug in your joypad via USB, disconnect it, and press the "P" button on the pad. At this point, and since GNOME 3.12, you would have needed the Bluetooth Settings panel opened for a question to pop up about whether the joypad can connect.

This is broken in a number of ways. If you were trying to just charge the joypad, then it would forget its original "console" and you would need to plug it in again. If you didn't have the Bluetooth panel opened when trying to use it wirelessly, then it just wouldn't have worked.

Set up is now simpler. Open the Bluetooth panel, plug in your device, and answer the question. You just want to charge it? Dismiss the query, or simply don't open the Bluetooth panel, it'll work dandily and won't overwrite the joypad's settings.


And finally, we also made sure that it works with PlayStation 4 controllers.



Note that the PlayStation 4 controller has a button combination that allows it to be visible and pairable, except that if the device trying to connect with it doesn't behave in a particular way (probably the same way the 25€ RRP USB adapter does), it just wouldn't work. And it didn't work for me on a number of different devices.

Cable pairing for the win!

And the boring stuff

Hey, do you know what happened last week? There was a security problem in a package that I glance at sideways sometimes! Yes. Again.

A good way to minimise the problems caused by problems like this one is to lock the program down. In much the same way that you'd want to restrict thumbnailers, or even end-user applications, we can forbid certain functionality from being available when launched via systemd.

We've finally done this in recent fprintd and iio-sensor-proxy upstream releases, as well as for bluez in Fedora Rawhide. If testing goes well, we will integrate this in Fedora 27.

Building a more Inclusive Open Source Community at Fedora.

Posted by Fedora Community Blog on September 20, 2017 01:00 PM

(This post is about the Fedora Diversity Team and what we were up to at Flock 2017, held at Cape Cod, USA.)

Apart from the keynotes, hackfests and some delicious seafood at this Flock, the Diversity Team had a session and decided to gain direction, move a little faster, break a few barriers and do some other amazing things. I’m writing it down here for people who weren’t there in the room with us.

Just to let you know, things written like this are action items for us. Join us and help out.


Adding to our Community
When we started out, to lift the Diversity Team from ground zero, we decided to begin by organizing Fedora Women Day. Having women contributors on our team made it easier to reach out to the community as well. Now that we have a grounding, we aim to reach more of the groups that make up that diversity.

When we go around inviting more people to our community, there is something that we need to take care of. To truly make an impact, we need more people from each community to act as a link between Fedora and them. We don’t want to be like “Hey, here is a celebration event for you by Fedora, come contribute.”

No. We will not do that. What we are desperately trying to do is actually recognize the unique barriers that each of these communities in different geographic locations faces.

We would like to ask them and learn: what is it that we can do better? What is it that we can do differently so that it will be easier for you to contribute to our community?

We would want to establish communication routes that don’t just lead them to an isolated bubble, but to some place where their concerns are voiced and heard.

The other thing that we want to make sure when we go around asking people to join us, is to also make sure we have something substantial for them to contribute to.

Find communication *links* between different communities and us
Make sure we do not lead them to voids


Event Planning
The team feels that coming up with a structure/procedure for organizing an event would make people feel they can do it too. If we smooth out major hassles like budget, swag, target audience, etc., we can definitely hope to have people saying “Hey, this looks easy to organize. I can do that too in my community”


But, but. The *structure* that we add to an event also shouldn’t restrict the organizers from making changes that suit their community. We need a framework that is there for their convenience and open to refinement.

On this, a council member present at the session was happy to add:
“There is always more bandwidth and money for great ideas at Fedora”

Another thing: how do we decide whether to go ahead with an event proposal by one of our ambassadors or a community member? We talked about developing a set of questions to answer about the event, which would help us analyze its impact, reach, budget and other logistics. So if someone needs our help, they need to show us how we can help them achieve their goal. A blog post link on the event wiki page is what we thought of. (This was discussed in further detail in the session on Fedora ambassadors.)

Adding more events to our calendar
Coming up with a structure for our events, so that they are easier to organize by our ambassadors and other community members.


Collaborating with other events

While talking, it hit us: so why are we doing it all on our own again?
What we realized from our discussion at Flock is that we can leverage existing communities to catalyze our efforts. Girls who Code and PyCon Girls were some that we could think of that we could definitely work along with. This will help us expand our reach, and organize events with much less effort and budget.

However when we collaborate, sizing the budgets will be something that the Diversity team will have to take calls on depending on the impact and the audience of the event.

Find existing local communities and collaborate with them for our events


What we have out there.
Having an open session also helped us to recognize some of our unintentional errors as a team. The descriptions on our wiki page aren’t worded right in places. Though International Disability Day is a worldwide event, the contributors, as well as the team, felt the word disability comes across as very strong and often misleading. It is also rather broad, leading to ineffective targeting of our audience. What we would rather like to do is to spearhead our efforts at a very specific community, taking time to understand their barriers and working with them to overcome them.

To revisit our wiki pages and update them to reflect our ongoing efforts more accurately.


Impact. Did we actually make some?
Another thing we would want to focus on in the upcoming events is how we measure the impact that we are creating. We have badges for the events so far, and we can maybe track the number of active FAS accounts a certain time after an event, but we definitely see scope for exploration here.

A simple self-assessment here could be “Do I want to come back or do I want to organize this event? Why or why not”

Come up with metrics to measure the success of an event

Diversity Team

Most of our efforts aim at smooth onboarding of new contributors to the different teams within Fedora. Amongst all these thoughts and ideas, something that also makes us wonder is: what does it mean to be a member of the Fedora Diversity Team?

How do we define our roles and responsibilities? How do we add people to our team who share the same vision as us? Do they need to be active Fedora users to join the team? (The answer to this was decided to be no, as the person writing this blog wasn’t one either when she started!)

In the coming days, we will also be looking to decide upon a member of our team to hold the position of Diversity Team Representative for the Fedora Council.

Defining a process to onboard members to the Diversity Team
Choosing a Diversity Team Representative


That was almost all about what we discussed at Flock this year. If some of these discussions interest you or you think you can help us execute some of the action items we got here (or add more to that list), do drop by and say Hello! We are a bunch of nice people, I assure you.

From Left: Mariana, Sachin, Jona (Queen of Albania), Bex, Justin, Amita, Robert, Me, Marina, Sanquii and Pravin

IRC Channel: #fedora-diversity
Mailing List: diversity@lists.fedoraproject.org
Wiki Page: https://fedoraproject.org/wiki/Diversity


The post Building a more Inclusive Open Source Community at Fedora. appeared first on Fedora Community Blog.

Flock 2017 – A Marketing talk about a new era to come.

Posted by Fedora Community Blog on September 20, 2017 11:00 AM

I had two sessions at Flock this year: one of my own, and another supporting Robert Mayr in the Mindshare session, in case there was any need for discussion.
Here I’m talking about my session, Marketing – tasks and visions (I will publish the report about the second one after Robert’s, for completeness).

The real target of a Flock conference (which is a contributor conference, not a show where people must demonstrate how cool they are; we know it!) is to bring and show something new, whether ideas, software, changes and so on, and to discuss with other contributors whether they’re really innovative, useful and achievable.

We have four foundations, and two of them fit this concept, Friends and First, because I’d always like to see news shared with friends.

This year’s talk focused on Marketing activities and how we can make them easier and smarter.

My presentation (after the usual “who am I” and “where you can find us”) started by showing what we’re doing: the statement Marketing has in Fedora (What we do), the release tasks (Tasks), the release activities (Release Activities), and the tickets in Pagure (Tickets), along with other more general activities.

Of course, because I’m not able to hide it, I declared that we’re not following all of these things very well, mainly due to the lack of people who usually work on them.

After a quick review of how to become a member, I showed one of the (several) ways to organize a marketing department in a private company (there is no standard; I only picked the most comparable one) and compared it to the current Project structure in order to point out the differences.

As a result I opened the “visions” question, listing the improvements that, in my head, are really needed in order to have the most flexible and smart structure, which I sketched out afterwards.

This was the spark that lit the fire of the discussion, which is exactly what I wanted to get from the people present, who usually give me lots of new material that I love to work on.

We saw that maybe some of the release-related work could be brought to docs, if possible; the need to gather data from events that CommOps could share; how to improve talking points; how to make Ambassadors more involved... Essentially, all these ideas converged on a new life for the outreach groups, which would be summarized in the second talk I shared with Robert Mayr.

I had a couple more slides to show, but people seemed well involved, so I didn’t want to stop them; listening to creative people brainstorming still fascinates me.

The discussion went on until our time was gone, so I stopped it with the absolute certainty that the Project will keep giving me material for many years to come.

Thanks to everyone who attended.

The post Flock 2017 – A Marketing talk about a new era to come. appeared first on Fedora Community Blog.

FAF in container

Posted by ABRT team on September 20, 2017 09:00 AM

FAF is a framework for aggregating and analyzing crash reports, and it has never been easier to deploy.

Why deploy your own FAF

Firstly, standard users probably do not want to deploy their own FAF - FAF is there to collect an enormous amount of crashes from lots of users. Deploying your own FAF might be interesting for sysadmins who control hundreds of machines. It may also be quite interesting on a container platform such as OpenShift.

Watch this demo to see how to deploy FAF and report into it quickly.

FAF in container

Quickstart

We advise using official faf-image.

docker run --name faf -dit abrt/faf-image

However, you probably also want to mount volumes at /var/lib/postgres and /var/spool/faf to make the database and FAF’s data persistent.

docker run --name faf -v /var/lib/faf-docker/faf:/var/spool/faf \
 -v /var/lib/faf-docker/postgres:/var/lib/postgres/ -dit abrt/faf-image

If you run FAF for the first time, then there is no database. You have to initialize it.

docker exec faf init_db

Then FAF is ready for use.

Reporting into deployed FAF

You can see incoming reports in the web UI, which is accessible at http://<container_IP>/faf.

Finding out container IP address:


docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' faf

Also, to send reports into your own FAF, you have to set up libreport on all machines from which you wish to report. To do so, open the file /etc/libreport/plugins/ureport.conf and enter:

URL = http://<container_IP>/faf

For servers, you may want to enable automatic reporting. Those reports are called μReports. Run:

abrt-auto-reporting yes

Supported operating systems

This image is ready to accept reports from all active releases of Fedora.

To add a new version of an already supported operating system, run:

docker exec -u faf faf faf releaseadd -o OPSYS --opsys-release RELEASE

For example, to add Fedora 99, run:

docker exec -u faf faf faf releaseadd -o fedora --opsys-release 99

You can even write a plugin for your own OS. Once you have written and installed the plugin, you can enable it using:

docker exec -u faf faf faf opsysadd OPSYS

Less informative reports

A problem that happened in C/C++ code comes to FAF in the form of a build_id and an offset. However, for users it is much more useful to have a file name and a line number. For a better understanding, see the following two images, both describing the same report.

The first image shows a report with only build_ids and offsets; the second shows the same report with file names and line numbers.

We call the process of transforming build IDs and offsets into file names and line numbers retracing. To be able to do it, we need the packages from which these crashes come.

For retrace.fedoraproject.org we have all Fedora packages stored locally, which consumes several TB of storage. This Docker image does not download any packages and therefore does not retrace any symbols. Note that this only affects compiled code; for example, unhandled Python exceptions are not affected.

GSoC2017 Final — Migrate Plinth to Fedora Server

Posted by Fedora Community Blog on September 20, 2017 08:43 AM

Here is a summary of my work during the last three months of Google Summer of Code.

About Me

I’m Mandy (Mengying) Wang. I studied Software Engineering at the Shanghai Institute of Technology and graduated two months ago. I’m going to study for a master’s degree after a gap year. You can learn more about me on my Twitter: @MandyMY_ .

Task

Plinth is a web interface for administering the functions of the FreedomBox, a Debian-based project, and the main goal of this idea is to make it available on Fedora.

My Work

Finished

  • Modified the source code module by module to convert it to be RPM-based, including replacing the apt command code with dnf command code (or making it fit both), changing the Deb-based packages into the RPM-based packages that play the same roles, and testing after each module was finished (see the sketch after this list for the general idea).
  • Added a guide for the RPM-based package to the Plinth User Guide and created a wiki page for it in Fedora.
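To give a flavour of that conversion work, here is a minimal illustrative sketch (my own, not Plinth’s actual code) of hiding the two package managers behind one function; the helper name and the package names are made up for the example:

import subprocess

# Hypothetical helper: route one "install" call to the right package manager.
PACKAGE_COMMANDS = {
    'apt': ['apt-get', 'install', '--yes'],
    'dnf': ['dnf', 'install', '-y'],
}


def install_packages(package_names, backend='dnf'):
    """Install the given packages with apt-get on Debian or dnf on Fedora."""
    subprocess.run(PACKAGE_COMMANDS[backend] + list(package_names), check=True)


# The same role often maps to differently named packages, e.g.:
#   install_packages(['ldap-utils'], backend='apt')        # Debian
#   install_packages(['openldap-clients'], backend='dnf')  # Fedora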

This is the welcome page which is run in Fedora:

welcome

To Do

  • Some packages needed by Plinth have no suitable replacement or effective solution in Fedora, other than copying them from Debian directly. For example:
    • Javascript — many pages can’t load properly because of this.
    • LDAP — we can’t complete the setup because of this.
  • Make an RPM package for Plinth from source and set up a repo for it in Copr.

Links

Experience

As for why Fedora: Fedora is the Linux distribution I use the most, so I want to know more about it and make contributions to it, and I believe GSoC is a good chance to integrate into a community, because I had a similar experience with GNOME during Outreachy. When I went to Taipei for COSCUP 2017 in early August, I joined the offline meeting of Fedora Taiwan and advertised GSoC to others.

I must say the last three months in GSoC were a quite valuable experience for me. The idea was not as easy as I thought; I learned a lot about the differences between .rpm and .deb during this period, and my VPN was blocked in the second phase. Fortunately, I dealt with most of the problems I met through my own attempts and my mentor’s guidance.

At last, thanks to Google and Fedora for giving me this opportunity, and thanks to my mentor, our admin, and the people from Fedora and Debian who gave me help.

 


This work by Mandy Wang is licensed under a Creative Commons Attribution-ShareAlike 4.0 International

The post GSoC2017 Final — Migrate Plinth to Fedora Server appeared first on Fedora Community Blog.

Heroku client on Fedora 26: pushing a Java project to the cloud with git

Posted by Bernardo C. Hermitaño Atencio on September 20, 2017 03:41 AM

What is Heroku?

Heroku is a cloud platform as a service. Initially it was built to support only the Ruby programming language, but support has since been extended to Java, Node.js, Scala, Clojure, Python, and PHP.


The official page offers no RPM-based client for the Fedora family, but there is an installation method via npm (Node Package Manager). So we will install a Heroku client with npm, and then use git to push an app to a free Heroku repository in the cloud.

1. Install npm

#dnf install npm

2. Install the Heroku client with npm

#npm install -g heroku-cli

3. Update the Node.js version, since after installation the Heroku client requires Node.js 8.x or later; it is suggested to uninstall the older version that comes with npm.

#curl --silent --location https://rpm.nodesource.com/setup_8.x | sudo bash -
#dnf install nodejs

4. After installing the client, check the version and log in to Heroku


#heroku -v
#heroku login

5. Now use Git: initialize a git repository in the project directory.

#git init

6. Stage the files before pushing to the cloud

#git add .
#git status 
#git

7. Create the first commit in git


#git commit -m "Proyecto inicial"

8. Create a repository in the Heroku cloud


#heroku apps:create apps-micro-prueba

Log in to your free Heroku account and check the repository created for the application.


9. Push the project to the new repository with the following command, then check that the application works.

#git push heroku master



Improved multimedia support with Pipewire in Fedora 27

Posted by Fedora Magazine on September 20, 2017 01:48 AM

Pipewire — a new project of underlying Linux infrastructure to handle multimedia better — has just been officially launched. The project’s main goal is to improve the handling of both audio and video. Additionally, Pipewire introduces a security model to allow easy interaction with multimedia devices from containerized and sandboxed applications, i.e. Flatpak apps.

The Pipewire website clearly states the goals of the project:

PipeWire is a project that aims to greatly improve handling of audio and video under Linux. It aims to support the usecases currently handled by both PulseAudio and Jack and at the same time provide same level of powerful handling of Video input and output. It also introduces a security model that makes interacting with audio and video devices from containerized applications easy, with supporting Flatpak applications being the primary goal. Alongside Wayland and Flatpak we expect PipeWire to provide a core building block for the future of Linux application development.

An initial version of Pipewire is available now in the Fedora 27 prereleases. This initial version only uses Pipewire for video, not audio. Check out the announcement post by Christian Schaller, as well as the Pipewire website for general information about the project, and the Pipewire Wiki for the documentation.

Ember, Prettier, ESLint and Suave

Posted by Sarup Banskota on September 20, 2017 12:00 AM

It pays to keep your JavaScript well-formatted, and having it adhere to a code quality standard. Depending on the application and who’s working on it, one may want to setup different code style guides. Over time, my standard tools of choice have evolved to Prettier for code formatting, and a custom selection of ESLint rules provided by the Ember Suave ESLint plugin.

After working on several Ember apps and pretty much setting up the same linting strategy, I’ve decided to make a starter template Ember app to use for new Ember apps. If you’re starting a new Ember project, you can pick it up from sarupbanskota/ember-linted-app.

If you already have an Ember project and would like to bring world domination to the code styling, you can follow along with me as I outline how I set up ember-linted-app.


Create new Ember app

Start by creating a new Ember app. I prefer to skip npm at first, and then run a yarn install within the repository. That leaves us with a yarn.lock file which we’ll commit in.

ember new ember-linted-app --skip-npm && cd ember-linted-app && yarn install

Install ESLint and remove JSHint

For Node 4+, ESLint can be setup with:

ember install ember-cli-eslint@4

Since ember-linted-app is on a version below Ember 2.5.0, it follows the instructions provided in the ember-cli-eslint README.

// ember-cli-build.js
const EmberApp = require('ember-cli/lib/broccoli/ember-app');

module.exports = function(defaults) {
  const app = new EmberApp(defaults, {
    'ember-cli-qunit': {
      useLintTree: false
    }
  });

  return app.toTree();
};

For those on Ember 2.5.0+, you just need to run:

yarn remove ember-cli-jshint --dev

Remember to do a quick check in case there are stray .jshintrc files in your project; you want to remove them.

Install Prettier

Prettier rules are available through an ESLint plugin.

yarn add prettier eslint-plugin-prettier --dev

Following this, you want to configure the .eslintrc.js file to stick to prettier rules:

{
  "plugins": [
    "prettier"
  ],
  "rules": {
    "prettier/prettier": "error"
  }
}

Next, any ESLint rules that may conflict with Prettier must be turned off.

yarn add eslint-config-prettier --dev

Setup pre-commit hook

A combination of lint-staged and husky can make your future linting process automagical.

yarn add lint-staged husky --dev
// package.json
{
  "scripts": {
    "precommit": "lint-staged"
  },
  "lint-staged": {
    "*.{js,json}": [
      "prettier --write",
      "git add"
    ]
  }
}

If you want to pre-commit differently, there are more options explained on the Prettier README.

Install Ember Suave

Just like Prettier, Ember Suave is available as an ESLint plugin too.

yarn add eslint-plugin-ember-suave --dev

Since Prettier is doing the code formatting, you don’t want to use Ember Suave’s base rules, just the recommended rules. Unfortunately, the recommended rules extend from base rules, so you’ll just manually mention the rules you’re interested in, in .eslintrc.js.

The final ESLint config should be as follows:

// .eslintrc.js
module.exports = {
  root: true,
  parserOptions: {
    ecmaVersion: 2017,
    sourceType: "module"
  },
  extends: ["prettier"],
  env: {
    browser: true
  },
  plugins: ["prettier", "ember-suave"],
  rules: {
    // Formatting
    "prettier/prettier": "error",

    // ES6
    "arrow-parens": ["error", "always"],
    "generator-star-spacing": [
      "error",
      {
        before: false,
        after: true
      }
    ],
    "no-var": "error",
    "object-shorthand": ["error", "always"],
    "prefer-spread": "error",
    "prefer-template": "error",

    // Overrides for Ember
    "new-cap": [
      "error",
      {
        capIsNewExceptions: ["A"]
      }
    ],

    "ember-suave/no-const-outside-module-scope": "error",
    "ember-suave/no-direct-property-access": "error",
    "ember-suave/prefer-destructuring": "error",
    "ember-suave/require-access-in-comments": "error",
    "ember-suave/require-const-for-ember-properties": "error"
  }
};

Fixing violations on existing code

We should now expect two categories of violations - code formatting ones reported by Prettier, and those based on Ember Suave rules.

For the code formatting ones, an npm script can help us format our existing files:

//package.json

"scripts": {
  "prettify": "find app config tests -name '*.js' -type f | xargs prettier --write"
}

Run it with:

npm run prettify

Commit these changes, and the code should now conform to Prettier rules!

On a new Ember project, the violations from Ember Suave are few in number, so I just manually fix them. I intend to write Babel plugins to help automate this; haven’t had a chance yet. On larger projects, this ends up being a gradual exercise, and often I’ll comment out the specific ESLint rule until I’ve managed to fix all violations.

To help ease the transition, you can make use of Lebab (yep, that’s Babel in reverse!) plugins. Among the safe transforms, I’ve had success with arg-spread, obj-method and obj-shorthand transforms. Within typical Ember codebases, the let and template transforms also usually do alright, although they’re listed as being currently unsafe.


Cheers! Hopefully this article helped you lint your Ember code better - shout out on Twitter if you had comments or if you run into problems.

U2F Protocol Overview

Posted by Nathaniel McCallum on September 19, 2017 04:04 PM

U2F Protocol Overview

This document serves as an overview of the U2F multi-factor authentication specifications as published by the FIDO Alliance. It does not replace the official specifications but seeks to provide a quick run-through of how the protocol is built. U2F basically works via two main commands: register and authenticate.

When a server wants to use a U2F device for authentication, it asks the host (usually the client computer) to issue a register command to the device. This command creates a new signing key pair; the public key is returned to the server along with an opaque handle (for later reference) and an attestation certificate (proving trust of the U2F device to the server).

Then, when it wishes to authenticate the user, the server asks the host to issue an authenticate command to the device. The device then signs a nonce from the server and returns the signature. This proves that the user is in possession of the key. It also returns a counter, proving to the server that the device has not been cloned.

U2F provides a high degree of security and a user-friendly workflow. It is supported by Chrome (natively) and Firefox (via an extension).

U2F Layers

The U2F Protocol happens across two layers:

1. The Transport Layer
2. The APDU Layer

All multi-byte integers are in network byte order (big endian).

The Transport Layer

The Transport Layer is responsible for breaking up packets into frames for transport from the computer to the U2F device over HID or Bluetooth. U2F-over-NFC does not use a transport layer. A full packet looks like this:

U2F Packet:
  Channel Identifier (CID): 4 bytes (HID only)
  Command (CMD): 1 byte
  Data Length: 2 bytes
  Data: 0+ bytes

The Channel Identifier (CID) is only used for the HID transport and is completely omitted over Bluetooth.

The CMD byte always has the first bit set and contains the number assigned to this particular command. Most commonly, this command will be the MSG command.

If the packet size is less than the maximum transmission unit (MTU), then it is sent directly. Otherwise, the packet is broken up into MTU-sized frames. The first frame is just the beginning of the packet truncated to fit in the frame. Subsequent frames are simply the additional fragments of the packet with a prefix SEQ byte (and, over HID, the CID). The SEQ byte always has the first bit unset, which allows you to distinguish continuation frames from the start of a new packet. It begins with a zero value and increments for each frame sent. A subsequent frame looks like this:

U2F Frame:
  Channel Identifier (CID): 4 bytes (HID only)
  Sequence (SEQ): 1 byte
  Data: 1+ bytes
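To make the framing concrete, here is a small Python sketch of how a host might split a packet into HID frames. It is my own illustration (not taken from any U2F library) and assumes the usual 64-byte HID report size:

import struct


def split_into_frames(cid, cmd, data, mtu=64):
    """Split one U2F packet into HID frames: an initial frame with a 7-byte
    header (CID, CMD, data length), then continuation frames with a 5-byte
    header (CID, SEQ)."""
    # All multi-byte integers are big endian, per the spec
    first = struct.pack('>IBH', cid, cmd, len(data)) + data[:mtu - 7]
    frames = [first.ljust(mtu, b'\x00')]
    data = data[mtu - 7:]
    seq = 0
    while data:
        frames.append((struct.pack('>IB', cid, seq) + data[:mtu - 5]).ljust(mtu, b'\x00'))
        data = data[mtu - 5:]
        seq += 1  # SEQ starts at zero and increments; its first bit stays unset
    return frames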

Commands

Below you will find a survey of the most common Transport Layer commands. For the full details on all available commands, please see the U2F specifications.

MSG (CMD = 0x83)

The MSG command contains an APDU request or reply and represents the next layer in communication. This will be discussed in the APDU Layer section below.

PING (CMD = 0x81)

The PING command simply returns the packet to the sender without modification.

ERROR (CMD = 0x84)

The ERROR command is only ever sent as a response. The data contains a one-byte error code. This is only used to communicate errors at the Transport Layer. The APDU Layer has its own error handling.

INIT (CMD = 0x86, HID Only)

The INIT command is only used for HID. Its purpose is to allow the U2F device to assign a unique CID and to get some information about the hardware device. The request contains an 8-byte nonce. The reply data looks like this (fields are 1 byte unless otherwise noted):

U2F INIT Packet Data:
  Nonce: 8 bytes
  Channel Identifier (CID): 4 bytes
  Protocol Version: 1 byte
  Major Device Version: 1 byte
  Minor Device Version: 1 byte
  Build Device Version: 1 byte
  Capability Flags: 1 byte

The request and reply are sent over the reserved broadcast channel (CID = 0xffffffff).
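As a sketch of that exchange (again my own illustration, reusing the packet layout shown earlier):

import os
import struct

BROADCAST_CID = 0xffffffff
CMD_INIT = 0x86


def build_init_packet():
    """Build an INIT packet: an 8-byte random nonce sent on the broadcast channel."""
    nonce = os.urandom(8)
    return nonce, struct.pack('>IBH', BROADCAST_CID, CMD_INIT, len(nonce)) + nonce


def parse_init_reply(nonce, data):
    """Check the echoed nonce and extract the newly assigned CID from the reply."""
    if data[:8] != nonce:
        raise ValueError('reply does not match our nonce')
    (cid,) = struct.unpack('>I', data[8:12])
    return cid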

KEEPALIVE (CMD = 0x82, Bluetooth Only)

The KEEPALIVE command is only ever sent from Bluetooth U2F devices to the host. Its purpose is to identify to the host that the request is still being processed or is waiting for user confirmation. Its data contains a single byte indicating the reason for the delayed response.

The APDU Layer

When a MSG command packet is received by the device, the data contains an APDU request. Likewise, the host expects an APDU reply. I will not describe the APDU format in detail. Let it suffice to say that APDU is similar to the transport layer’s packet. It can represent a command (via the CLA and INS bytes), parameters to that command (the P1 and P2 bytes) and a miscellaneous data payload. Likewise, the APDU Layer has its own error handling for errors that occur within this layer.

Commands

VERSION (CLA = 0x00, INS = 0x03)

The VERSION command takes no input parameters and no input data. It simply replies with the version of the U2F Raw Message Format supported. There is currently only one version, which is represented as the UTF-8 encoding of the string U2F_V2 without a NULL terminator.

REGISTER (CLA = 0x00, INS = 0x01)

The REGISTER command takes no input parameters. The input data is in the following format:

U2F REGISTER Request:
  Challenge Parameter: 32 bytes
  Application Parameter: 32 bytes

The U2F device will create a new P-256 signing keypair used for future authentications and will return back the following data:

U2F REGISTER Reply:
  Reserved (0x05): 1 byte
  Public Key: 65 bytes
  Key Handle Length: 1 byte
  Key Handle: 0-255 bytes
  Attestation Certificate: X.509 (DER)
  ECDSA (P-256) Signature: 71-73 bytes

The Public Key field contains the public key of the newly-created signing key pair. The Key Handle is an opaque field identifying the newly created key pair. The final two fields need some additional explanation.

Remote servers need a way to trust the U2F device. This comes in the form of an attestation key pair which may or may not be different from the newly-created signing key pair. The attestation key pair signs the following payload and inserts the signature into the reply (above):

U2F REGISTER Signature:
  Reserved (0x00): 1 byte
  Application Parameter: 32 bytes
  Challenge Parameter: 32 bytes
  Key Handle: 0-255 bytes
  Public Key: 65 bytes

If the remote server trusts the attestation certificate, then it can also trust the newly-generated signing key pair and key handle by validating the attestation signature. Usually, the server will trust the attestation certificate via an intermediary certification authority.
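As an illustration of that validation, here is a sketch of how a server might rebuild and verify the signed payload using recent versions of the Python cryptography package. The function is mine; it omits error handling and certificate chain checks:

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec


def verify_registration(cert_der, app_param, challenge_param, key_handle,
                        user_public_key, signature):
    """Rebuild the signed payload from the table above and verify it against
    the attestation certificate's key; raises InvalidSignature on failure."""
    attestation_key = x509.load_der_x509_certificate(cert_der).public_key()
    signed = b'\x00' + app_param + challenge_param + key_handle + user_public_key
    attestation_key.verify(signature, signed, ec.ECDSA(hashes.SHA256()))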

AUTHENTICATE (CLA = 0x00, INS = 0x02)

The AUTHENTICATE command takes a parameter (in P1) and input data in the following format:

U2F AUTHENTICATE Request:
  Challenge Parameter: 32 bytes
  Application Parameter: 32 bytes
  Key Handle Length: 1 byte
  Key Handle: 0-255 bytes

If P1 == 0x07 (check only), the device only verifies if the Application Parameter and Key Handle are valid for this U2F device. Otherwise, if P1 == 0x03 (enforce user presence and sign) or P1 == 0x08 (don’t enforce user presence and sign), the U2F device then signs a payload in the following format:

U2F AUTHENTICATE Signature:
  Application Parameter: 32 bytes
  User Presence Flags: 1 byte
  Counter: 4 bytes
  Challenge Parameter: 32 bytes

The counter is used to detect key theft from the U2F device. It may be global or per key handle. This payload is signed by the signing key pair.

Then a response is returned in the following format:

U2F AUTHENTICATE Reply:
  User Presence Flags: 1 byte
  Counter: 4 bytes
  ECDSA (P-256) Signature: 71-73 bytes

The User Presence field contains flags identifying the level of user presence testing.
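A quick sketch of parsing that reply on the server side (the names are mine; a real server would also verify the signature, as in the registration example, and reject a counter that has not increased since the last authentication):

import struct


def parse_authenticate_reply(reply):
    """Split the reply into user-presence flags, the 4-byte big-endian counter,
    and the trailing DER-encoded ECDSA signature."""
    flags, counter = struct.unpack('>BI', reply[:5])
    user_present = bool(flags & 0x01)  # bit 0 signals a successful presence test
    return user_present, counter, reply[5:]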

Minor service disruption

Posted by Fedora Infrastructure Status on September 19, 2017 02:47 PM
Service 'Mailing Lists' now has status: minor: Mailing lists email sending crashed since late Sunday. Cause fixed, emails going out now.

F26-20170918 Updated Live isos released

Posted by Ben Williams on September 19, 2017 01:30 PM

The Fedora Respins SIG is pleased to announce the latest release of updated Fedora 26 Live ISOs, carrying the 4.12.13-300 kernel and including fixes for the following CVEs:

– Fix CVE-2017-12154 (rhbz 1491224 1491231)

– Fix CVE-2017-12153 (rhbz 1491046 1491057)

– Fix CVE-2017-1000251 (rhbz 1489716 1490906) (BlueBorne)

These can be found at http://tinyurl.com/live-respins. Seeders are welcome and encouraged; however, the addition of extra trackers is strictly prohibited. These ISOs save about 850M of updates on new installs. We would also like to thank the following IRC nicks for helping test these ISOs: brain83, dowdle, linuxmodder, vwbusguy, short-bike, Southern_Gentlem


Launching Pipewire!

Posted by Christian F.K. Schaller on September 19, 2017 01:18 PM

In quite a few blog posts I have been referencing Pipewire, our new Linux infrastructure piece to handle multimedia under Linux better. Well, we are finally ready to formally launch Pipewire as a project, and we have created a Pipewire website and logo.

To give you all some background, Pipewire is the latest creation of GStreamer co-creator Wim Taymans. The original reason it was created was that we realized that as desktop applications moved towards primarily being shipped as containerized Flatpaks, we would need something for video similar to what PulseAudio was doing for audio. As part of his job here at Red Hat, Wim had already been contributing to PulseAudio for a while, including implementing a new security model for PulseAudio to ensure we could securely have containerized applications output sound through PulseAudio. So he set out to write Pipewire, although initially the name he used was PulseVideo. As he was working out the core design of PipeWire he came to the conclusion that designing Pipewire to do only video would be a mistake, as a major challenge he was familiar with from working on GStreamer was how to ensure perfect audio and video synchronisation. If both audio and video could be routed through the same media daemon, then ensuring audio and video worked well together would be a lot simpler, and frameworks such as GStreamer would need to do a lot less heavy lifting to make it work. So just before we started sharing the code publicly, we renamed the project to Pinos, named after Pinos de Alhaurín, a small town close to where Wim lives in southern Spain. In retrospect, Pinos was probably not the world's best name :)

Anyway as work progressed Wim decided to also take a look at Jack, as supporting the pro-audio usecase was an area PulseAudio had never tried to do, yet we felt that if we could ensure Pipewire supported the pro-audio usecase in addition to consumer level audio and video it would improve our multimedia infrastructure significantly and ensure pro-audio became a first class citizen on the Linux desktop. Of course as the scope grew the development time got longer too.

Another major usecase for Pipewire for us was that we knew that with the migration to Wayland we would need a new mechanism to handle screen capture, as the way it was done under X was very insecure. So Jonas Ådahl started working on creating an API we could support in the compositor and use Pipewire to output. This is meant to cover everything from single frame captures like screenshots, to local desktop recording and remoting protocols. It is important to note here that what we have done is not just implement support for a specific protocol like RDP or VNC; we have ensured there is an advanced infrastructure in place to support any protocol on top of it. For instance, we will be working with the Spice team here at Red Hat to ensure SPICE can take advantage of Pipewire and the new API. We will also ensure Chrome and Firefox support this so that you can share your Wayland desktop through systems such as Blue Jeans.

Where we are now
So after multiple years of development we are now landing Pipewire in Fedora Workstation 27. This initial version is video only as that is the most urgent thing we need supported for Flatpaks and Wayland. So audio is completely unaffected by this for now and rolling that out will require quite a bit of work as we do not want to risk breaking audio on your system as a result of this change. We know that for many the original rollout of PulseAudio was painful and we do not want a repeat of that history.

So I strongly recommend grabbing the Fedora Workstation 27 beta to test pipewire and check out the new website at Pipewire.org and the initial documentation at the Pipewire wiki. Especially interesting is probably the pages that will eventually outline our plans for handling PulseAudio and JACK usecases.

If you are interested in Pipewire please join us on IRC in #pipewire on freenode. Also if things goes as planned Wim will be on Linux Unplugged tonight talking to Chris Fisher and the Unplugged crew about Pipewire, so tune in!

How to write bootable Windows .iso image on USB key with Linux

Posted by Luca Ciavatta on September 19, 2017 10:00 AM

WinUSB is a simple tool to create a Windows USB install stick from Linux distros in a simple way. The application supports Windows 8 and 10 and can use either an ISO or a DVD as a source.

Install WinUSB on Linux and create a USB stick Windows installer

The WinUSB package is available on the official webpage at http://en.congelli.eu/prog_info_winusb.html and it’s already packaged for the main Linux distros.

For Arch:

https://aur.archlinux.org/packages/winusb/

For Fedora/OpenSUSE:

Download package from here and double click on file.

For Ubuntu:


$ sudo add-apt-repository ppa:colingille/freshlight
$ sudo apt-get update
$ sudo apt-get install winusb

This package contains two programs: WinUSB-gui, a graphical interface which is very easy to use, and winusb, the command line tool. The GUI version is foolproof: just select the .iso source and the destination drive, then press ‘install’. Easy.

The command line version is a CLI tool, so winusb --help gives you all the info.

Sometimes, the program gives errors, something like ‘Installation failed! Exit code: 32512 Log:’, or the same with exit code 3512.

In that case, the solution is to format the USB key with your favorite distro tool using a FAT filesystem (not NTFS or ext!) and to use the CLI WinUSB tool like this:


$ sudo ln -s /usr/sbin/grub2-install /usr/sbin/grub-install #tweak for Fedora systems
$ sudo winusb -v --install Win10_x64.iso /dev/sdb

As you see above, on Fedora you must do one more step, creating a symbolic link, in order to complete the install process in the right way. And don’t forget that you need grub2-efi-modules for grub2-install on EFI, so, just in case:


$ sudo dnf install grub2-efi-modules

 

Alternatives. The mantained fork WoeUSB and the dd command

If you have other issues with WinUSB, you can use WoeUSB, a simple tool that enables you to create your own USB stick Windows installer from an .iso image or a real DVD. It is a maintained fork of WinUSB.

Last, but not least, you can use the never-ending dd command.


$ sudo dd if=Win10_x64.iso of=/dev/sdb bs=4M; sync

Obviously, remember to put the path to your Windows .iso image instead of Win10_x64.iso and to replace sdb with the drive you are using.

The post How to write bootable Windows .iso image on USB key with Linux appeared first on cialu.net.

New badge: F27 i18n Test Day Participant !

Posted by Fedora Badges on September 19, 2017 08:18 AM
You helped test i18n features in Fedora 27! Thanks!

Take part in the test day dedicated to internationalization

Posted by Charles-Antoine Couret on September 19, 2017 06:00 AM

Today, Tuesday, September 19, is a day dedicated to one specific kind of testing: the internationalization of Fedora. During the development cycle, the quality assurance team dedicates a few days to certain components or new features in order to surface as many problems as possible on the subject.

It also provides a list of specific tests to run. You just have to follow them, compare your result with the expected result, and report it.

What does this testing consist of?

As with every version of Fedora, updating its tools often brings new strings to translate and new tools related to language support (Asian languages in particular).

To encourage the use of Fedora in every country of the world, it is best to make sure that everything related to Fedora's internationalization is tested and works, notably because part of it must be functional from the installation LiveCD itself (that is, without updates).

Today's tests cover:

  • That ibus works correctly for handling keyboard input;
  • Font customization;
  • Automatic installation of language packs for installed software, based on the system language;
  • Working default translations of applications;
  • Default Chinese Serif fonts (a Fedora 27 change);
  • Testing libpinyin 2.1 for fast Chinese Pinyin input (a Fedora 27 change).

Of course, given these criteria, unless you know a Chinese language you may not be able to run the whole test suite. But as French speakers, many of these issues concern us, and reporting problems matters: no other language community is going to catch French-language integration problems for us.

How can you take part?

You can go to the test day page to list the available tests and report your results. The wiki page summarizes how the day is organized.

If you need help while running the tests, feel free to drop by IRC for a hand on the #fedora-test-days and #fedora-fr channels (in English and French respectively) on the Freenode server.

If you hit a bug, you need to report it on Bugzilla. If you don't know how, feel free to consult the corresponding documentation.

Also, even though a single day is dedicated to these tests, you can still run them a few days later without any problem! The results will remain broadly relevant.

Writing LaTeX well in Vim

Posted by Ankur Sinha "FranciscoD" on September 18, 2017 11:40 PM

Vim is a great text editor if one takes a bit of time to learn how to use it properly. There's plenty of documentation on how to use Vim correctly and efficiently, so I shan't cover that here. vimtutor is an excellent resource to begin with.

Similarly, LaTeX is a brilliant documentation system, especially for scientific writing, if one takes the time to learn it. Unlike the usual Microsoft Word type systems, LaTeX is a set of commands/macros. Once the document is written using these, it must be compiled to produce a PDF document. It may appear daunting at first, but once one is familiar with it, it makes writing a breeze. Now, there are editors especially designed for LaTeX, but given that I use Vim for just about all my writing, I use it for LaTeX too.

On Fedora, you can install Vim using DNF: sudo dnf install vim-enhanced vim-X11. I install the X11 package too to use the system clipboard.

LaTeX tools

To begin with, there are a few command-line tools one can use in addition to the necessary latex, pdflatex, bibtex, biber, and similar commands:

  • latexmk is a great tool that figures out the compilation sequence required to generate the document, and it does it for you.
  • lacheck and chktex are both linters for LaTeX that make writing a lot easier.
  • detex strips a tex document of LaTeX commands to produce only the text bits.
  • diction and style give the author an idea of the readability of the text.

One can use any text editor and then these utilities to improve their LaTeX writing experience.
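As a sketch, a typical session with these tools might look like this (assuming a document called paper.tex):

$ latexmk -pdf paper.tex    # figure out and run the full compile sequence
$ chktex paper.tex          # lint the LaTeX source for common mistakes
$ detex paper.tex | style   # readability statistics on the prose alone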

On Fedora, install these with DNF: sudo dnf install latexmk /usr/bin/lacheck /usr/bin/chktex /usr/bin/detex diction. (Yes, you can tell DNF what file you want to install too!)

Built-in Vim features

Vim already contains quite a few features that make writing quite easy (a small configuration sketch follows this list):

  • Omni completion provides good suggestions based on the text under the cursor.
  • There's in-built spell checking already.
  • Folding logical bits makes the document easier to read and navigate through.
  • Syntax highlighting makes it a lot easier to read code by marking different commands in different colours.
  • There are different flavours of line numbers that make moving about a document much simpler.
  • At some point, the conceal feature was added, which further improves the readability of documents.
  • Buffers, tabs, windows are available in Vim too, of course.
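To illustrate, here is a minimal ~/.vimrc sketch that switches some of these built-in features on; the option names are standard Vim, but the particular values are just my own preferences:

set spell spelllang=en_gb    " in-built spell checking
set number relativenumber    " hybrid line numbers
set foldmethod=syntax        " fold on syntax elements
set conceallevel=2           " conceal LaTeX commands as symbols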

Vim plug-ins

There are a lot of Vim plug-ins that extend some functionality or the other. The simplest way to install plug-ins is to use Vundle. Here are some plug-ins that I use. They're not all specific to LaTeX.

  • Fastfold makes folding faster.
  • vim-polyglot provides better syntax highlighting for many languages.
  • vim-airline provides an excellent, informative status line.
  • tagbar lists sections (tags in general) in a different pane.
  • vim-colors-solarized provides the solarized themes for Vim.
  • vimtex provides commands to quickly compile LaTeX files, complete references, citations, navigate quicker, view the generated files, and so on.
  • ultisnips provides lots of snippets for many languages, including LaTeX. Get the snippets from the vim-snippets plug-in.
  • YouCompleteMe is a completion engine that supports many languages. Remember that this one needs to be compiled!
  • Syntastic provides syntax checkers for many languages, including LaTeX.

I've also used vim-latex in the past and it's very good. However, since I have other plug-ins that provide the various functionality it brings together, for many other languages too, I'm no longer using it. Worth a go, though.
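For completeness, here is a minimal sketch of declaring a few of these plug-ins with Vundle in ~/.vimrc, assuming Vundle itself is already installed in ~/.vim/bundle (the GitHub paths are the plug-ins' usual upstream locations):

set nocompatible
filetype off
set rtp+=~/.vim/bundle/Vundle.vim
call vundle#begin()
Plugin 'VundleVim/Vundle.vim'
Plugin 'lervag/vimtex'
Plugin 'SirVer/ultisnips'
Plugin 'honza/vim-snippets'
Plugin 'vim-syntastic/syntastic'
call vundle#end()
filetype plugin indent on

Run :PluginInstall inside Vim afterwards to fetch them.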

An example document

The image below shows a LaTeX file open in Vim with different plug-ins in action:

Screenshot of Vim with a LaTeX file open showing various features.
  • On top, one can see the open buffer. Only one buffer is open at the moment.
  • In the left hand side margin, one can see the fold indicators.
  • The S> bit is an indicator from the linter that Syntastic uses, showing a warning or an error.
  • The line numbers are also visible in the left margin. Since I am in insert mode, they're just plain line numbers. Once one leaves insert mode, they change to relative.
  • On line 171, the conceal feature shows Greek symbols instead of their LaTeX commands.
  • Syntax highlighting is clearly visible. The commands have different colours. This is the solarized dark theme, of course.
  • The "pop-up" shows Ultisnips at work. Here, I'm looking at adding a new equation environment.
  • Underneath the pop up, the dashed line is a folded section. The + symbol in the left margin implies that it is folded.
  • In the status line, one can see that spell check is enabled, and that I'm using the en_gb language.
  • Next, the git status, and the git branch I'm in. That's the vim-fugitive plug-in at work.
  • Then, the filetype, the encoding, the number of words and so on provided by the airline plug-in.

Neat, huh? There is a lot more there that isn't easy to show in a screen-shot. For example, \ll will compile the LaTeX file; \lv opens the generated PDF file in a PDF viewer, Evince in my case; \lc will clean the directory of any temporary files that were generated while compiling the document.

I keep all my vimfiles on Github. Feel free to take a look and derive your own. I tweak my configuration each time I find something new, though, so it may change rather frequently. Remember to read the documentation for whatever plug-ins you use. They provide a lot of options, lots of shortcuts, lots of other commands, and sometimes setting them up incorrectly can cause Vim to behave in unexpected ways.

TL;DR: Use Vim, and use LaTeX!!

AnsibleFest SF 2017

Posted by Adam Miller on September 18, 2017 10:19 PM

AnsibleFest was amazing; it always is. This was my third one, and it's an event I always look forward to attending. The Ansible Events Team does an absolutely stellar job of putting things together, and I'm extremely happy I was not only able to attend but was also accepted as a speaker.

Kick Off and Product Announcements

The event kicked off with some really great product announcements, some interesting bits about Ansible Tower and the newly announced Ansible Engine.

Ansible AWX

Ansible AWX Logo

As an avid fan of Open Source Software, the announcement and immediate release of Ansible AWX was the headliner of the event for me. This is the open source upstream to Ansible Tower that Red Hat made the commitment to release once Ansible was acquired in accordance with their continued commitment to Open Source. If you live in Ansible user or contributor land, you know this is something that's been a hot topic for quite some time and I'm so glad it's been launched officially. I've been learning Django over the last week so I can start contributing. Looking forward to it.

Ansible Community Leader and Red Hat CEO Fireside Chat

Fireside Chat with Robyn and Jim

Immediately following the Ansible AWX announcement was a fireside chat with Ansible Community Leader Robyn Bergeron (previously the Fedora Project Leader) and Red Hat CEO Jim Whitehurst, discussing various market trends in the realm of infrastructure automation, the ability to deliver faster and more rapidly, and the challenges businesses are having with the concept of "Digital Transformation." It was really cool to get both an open source community perspective and that of a business-minded individual, and to see where those two perspectives met in the middle and/or overlapped.

Ansible Community Days

The days before and after the main AnsibleFest event were the Community Days. The day before AnsibleFest focused entirely on topics around Ansible Core and the greater Ansible Community. The morning of the day after focused on Ansible AWX, explaining the architecture and various technical implementation details to give some exposure to those of us in the room who weren't previously privy to that information. The afternoon of the second day involved the "Ansible Ambassadors" community (I'm not sure if that is an official term).

Ansible All The Things

I gave a presentation that I like to call "Ansible All The Things" or "Ansible Everything" (depending on who my audience is and how receptive they are to meme jokes). The basic idea is to look at Ansible not as a configuration management tool, which is how much of the "Tech Media" (for lack of a better term) has classified it and therefore how it is often known to a broader audience, but instead to think of it as a task automation utility. This particular task automation utility also comes with a nice Python API and a way to interact with anything that can "speak JSON." This has some advantages if you step back and think about the abstract concept of a tool with a programming interface that is ultimately as generic as passing JSON around (with added convenience for Python programmers). Effectively you have a method of running a task, or series of tasks, on one or many systems in your infrastructure. This is powerful enough to be used for all sorts of things: configuration management (yes, Ansible can perform configuration management tasks, but it's also so much more than that), provisioning, deployment, orchestration, command line tooling, builds, event-based execution, workflow automation, continuous integration, and containers.
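To make the "task automation, not just configuration management" point concrete, here is a minimal sketch of Ansible's ad-hoc mode; the inventory file name and the webservers group are assumptions for the example:

$ ansible all -i hosts -m ping                       # verify connectivity to every host
$ ansible webservers -i hosts -m shell -a 'uptime'   # run an arbitrary command on one group

The same module-and-JSON machinery underneath is what playbooks, the Python API, and tools built on top of Ansible all drive.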

For those who would like to check out my slides, they are here.

Infrastructure Testing with Molecule

I had the opportunity to attend a presentation about Molecule, which I was really excited about because it's a toolchain I've wanted to dig into for a while. The goal is effectively this: Infrastructure as Code, with TDD/CI on your code, and transitively on your infrastructure. What a time to be alive.

Anyway, the talk itself was absolutely fantastic. Elana Hashman is a spectacular speaker, and the amount of research she put into the talk was apparent. The room was captivated, and the questions and conversations were enthusiastic; this was clearly a topic space people were interested in. I also have to tip my hat to the live demo, which went off flawlessly. I've never personally pulled off a live demo with that much live editing of code without at least one goof. Kudos.

For those who are interested in the presentation materials, check them out here. (Do it, it's really good.)

Closing Time

The event was wonderful and I hope to have the opportunity to go to next year's North America AnsibleFest (they also do one in the EU/UK, but it's not often I can pull together the funding for that trip).

Bodhi 2.11.0 released

Posted by Bodhi on September 18, 2017 08:05 PM

Features

  • Bodhi now batches non-urgent updates together for less frequent churn. There is a new
    bodhi-dequeue-stable CLI, intended to be added to cron, that looks for batched updates and
    moves them to stable
    (#1157). A sample cron entry is sketched below.
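As a sketch, wiring that up might look like the following crontab entry; the schedule is an assumption, not something the release notes specify:

# look for batched updates and move them to stable, nightly at 03:00
0 3 * * * bodhi-dequeue-stable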

Bugs

  • Improved bugtracker linking in markdown input
    (#1406).
  • Don't disable autopush when the update is already requested for stable
    (#1570).
  • There is now a timeout on fetching results from ResultsDB in the backend
    (#1597).
  • Critical path updates now have positive days_to_stable and will only comment about pushing to
    stable when appropriate
    (#1708).

Development improvements

  • More docblocks have been written.

Release contributors

The following developers contributed to Bodhi 2.11.0:

  • Caleigh Runge-Hottman
  • Ryan Lerch
  • Rimsha Khan
  • Randy Barlow

Arch Arch and away! What's with the Arch warriors?

Posted by Sachin Kamath on September 18, 2017 06:36 PM

Foo : Hey, I just installed Arch and I can't connect to the internet.

Bar : Hey, my DE won't boot in Arch. Help, please.

FooBar : Man, I installed Arch and I can't find sudo. Will I die?

I wanted to put this up on Devrant, but thought I'd write a bit to enlighten the wannabe Arch-ers. I have recently been getting a lot of messages asking me to help debug issues on Arch. As a fellow Linux and Arch user, I've always responded to most of the messages. But hey, there's a limit to it.

If you choose to begin your Linux adventures with Arch Linux after trying Ubuntu for a month, you're probably doing it wrong. If there's a solid reason why you think Arch is for you; awesome! Do it. You will learn new things. A lot of new things. But hey, what's the point in learning what arch-chroot does if you can't figure out what sudo is or what wpa_supplicant does?

Remember, when you decided to install Arch, you signed up for it. If you really want the feel of using Arch without doing all the hard work, try Antergos or Manjaro Linux. They are built on top of Arch Linux and they're kickass distros. You'll love them. Antergos is Arch hiding behind a nice GUI installer ;)

Or better yet, start with a distro that works out of the box and get the feel of Linux before attempting the move to Arch. I've seen a lot of people switch back to Windows after trying to install Arch because some online guide said "Arch was awesome" but never said it's not for beginners. I'd recommend Fedora any day because:

  • It's awesome
  • It doesn't have any third-party packages with weird licenses
  • Freedom. Friends. Features. First.
  • It has tons of spins and custom bundles to choose from. (Psst... have you tried Security Lab? You might ditch Kali)

Also, please learn to use a search engine whenever you are stuck. It doesn't hurt to use one, does it? So stop asking, and start googling! Or duck-ing! When you're confident that you've got your own back, go ahead: fire up that live USB and arch-chroot.

Flock to Fedora 2017

Posted by Adam Miller on September 18, 2017 04:48 PM

Flock to Fedora 2017

Every year, the Fedora user and developer community puts on a conference entitled "Flock to Fedora", or "Flock" for short. This year was no different, and the event was hosted in lovely Cape Cod, MA.

This year's Flock had a slightly different focus than previous years': the event organizers' goal appeared to be "doing" as opposed to "watching presentations", which worked out great. As a user and contributor conference, almost everyone there was already a current user or contributor, so workshops to enhance people's knowledge, have them contribute to an aspect of the project, or introduce them to a new area of the Fedora Project in a more hands-on way were met with enthusiastic participation. There were definitely still "speaking tracks", but there were more "participation tracks" than in years past and it turned out to be a lot of fun.

Note

At the time of this writing, the videos had not yet been posted but it was reported that they will be found at the link below.

All the sessions were being recorded and I highly recommend anyone interested to check them out here.

I will recap my experience and takeaways from the sessions I attended and participated in, as well as post slides and/or talk materials that I know of.

Flock Day 1

Keynote: Fedora State of the Union

The Fedora Project Leader, Matt Miller, took the stage for the morning keynote following a brief logistics/intro statement by the event organizers. Matt discussed the current state of Fedora: where we are, where we're going, ongoing work, and current notable Changes under way.

The big takeaway was that Fedora Modularity and Fedora CI are major initiatives aiming to bring more content to our users, in newly consumable ways, faster than ever before, without compromising quality (and hopefully improving it).

Flock 2017 Keynote State of Fedora slides

Factory 2.0, Fedora, and the Future

One of the big pain points from the Fedora contributor's standpoint is how long it takes to compose the entire distro into a usable thing. Right now, once contributors have pushed source code and built RPMs out of it, you have to take this giant pile of RPMs, create a repository, and then start to build things out of it that are stand-alone useful for users: install media, live images, cloud and virt images, container images, etc.

Factory 2.0 aims to streamline these processes, make them faster, more intelligent based on tracking metadata about release artifacts and taking action upon those artifacts only when necessary, and make everything "change driven" such that we won't re-spin things for the sake of re-spinning or because some time period has elapsed, but instead will take action conditionally on a change occurring to one of the sources feeding into an artifact.

For those who remember last Flock, there was discussion of the concept of the Eternal September; this was a progress report on the work being done to handle that, as well as to clean up the piles of technical debt that have accrued over the last 10+ years.

Multi-Arch Container Layered Image Build System

In the next time slot I gave my presentation on the new plans to provide a multi-architecture implementation of the Fedora Layered Image Build Service. The goal here is to provide a single entry point for Fedora container maintainers to contribute containerized content and submit it to the build system, and then have builds for multiple architectures come out as a result. This is similar to how the build system operates for RPMs today, and we aim to provide a consistent experience for all contributors.

This is something that's still being actively implemented with various upstream components that make up the build service, but will land in the coming months. It was my original hope to be able to provide a live demo, but it unfortunately didn't work out.

Multi-Arch Fedora Layered Image Build Service slides

Become a Container Maintainer

Josh Berkus put together a workshop, which I helped with, to introduce people who'd never created a container within the Fedora Layered Image Build Service to our best practices and guidelines. Josh took everyone through an exercise of looking at a Dockerfile that was not in compliance with the guidelines and then, interactively with the audience, bringing it into compliance.

After the example was completed, Josh put up a list of packages or projects that would be good candidates for becoming containerized and shipped to the Fedora User base. Everyone split up into teams of two (we got lucky, there was an even number of people in the room), and they worked together to containerize something off the list. He and I spent a period of the time going around and helping workshop attendees and then with about 10 minutes left the teams traded their containerized app or service with someone else and performed a container review in order to give them an idea of what that side of the process is like.

Hopefully we've gained some new long term container maintainers!

Fedora Environment on Windows Subsystem for Linux

This session is one that I think many were surprised would ever happen, most notably because those of us who've been in the Linux landscape long enough to remember Microsoft's top brass calling Linux a cancer never would have predicted Windows Subsystem for Linux existing. However, time goes on, management changes, and innovation wins. Now we have this magical thing called "Windows Subsystem for Linux" that doesn't actually run Linux at all, but instead runs programs meant to run on Linux without modification or recompilation.

The session went through how this works, how the Windows kernel accomplishes the feats of magic that it does, and the work that Seth Jennings (the session's presenter) put in to get Fedora working as a Linux distribution running on top of Windows Subsystem for Linux. It's certainly very cool, a wild time to be alive, and something I think will ultimately be great for Fedora as an avenue to attract new users without having to shove them into the deep end right away.

Fedora Environment on Windows Subsystem for Linux slides

Day 2

Freshmaker

Going along with the theme of continuing to deliver things faster to our users, this session discussed a new service being rolled out in Fedora Infrastructure that will address the need to "keep things fresh" in Fedora: introducing Freshmaker.

As it stands today, we don't have a good mechanism by which to track the "freshness" of various pieces of software. There have been some attempts at this in the past, and they weren't necessarily incorrect or flawed, but they never came to fruition for one reason or another. The good news is that Freshmaker is a real thing: it's a component of Factory 2.0 tasked with making sure that software in the build pipeline is fully up to date with the latest input sources, for ease of maintaining updated release artifacts for end users to download.

Gating on Automated Tests in Fedora - Greenwave

Greenwave is another component of Factory 2.0, with the goal of automatically blocking or releasing software based on automated testing, such that the tests are authoritative. This session discussed the motivations and the design, as well as how to override Greenwave via WaiverDB.

Discussing Kubernetes and Origin Deployment Options

This session was mostly about kubernetes, OpenShift, and how to deploy them on Fedora in different ways. There was a brief presentation and then discussions about preferred methods of deployment, what we as a community would like to and/or should pursue as the recommended method by which we direct new users to install these technologies.

Fedora ARM Status Update

Fedora's ARM champion, Peter Robinson, gave an update of where things are in ARM land, discussing the various development boards available and what Fedora contributors and community members can expect in the next couple Fedora releases.

On OpenShift in Fedora Infrastructure

This session was a working/discussion session that revolved around how the Fedora Infrastructure Team plans to utilize OpenShift in the future for Fedora services in order to achieve higher utilization of the hardware we currently have available and to allow for applications to be developed and deployed in a more flexible way. The current plans are still being discussed and reviewed, which is part of what this session was for, but stay tuned for more in the coming weeks.

The Future of fedmsg?

Currently, fedmsg is Fedora's unified message bus. All information about activities within the Fedora Infrastructure is sent there, and that's not slated to change anytime soon. However, there are new use cases for the messages that go out on the bus, and reliable message delivery is becoming a harder requirement. This presentation proposed adding new transports for messages alongside the one that already exists, allowing services that need to listen for fedmsgs to subscribe to the protocol endpoint that makes the most sense for their purpose. The session opened a discussion on satisfying the newer needs while leaving the current infrastructure in place, by taking advantage of some of the features of ZeroMQ.

Day 3

What does Red Hat want?

This was a very candid and honest presentation by our former long-standing Fedora Infrastructure lead, Mike McGrath, who spoke on behalf of Red Hat, as the primary corporate sponsor of Fedora, about what specifically Red Hat hopes to gain from the ongoing collaboration with the Fedora community and the innovations it hopes to help foster moving forward. I unfortunately did not take good notes, so I don't have much to offer in the way of specifics; we'll have to wait for the videos to become available for those interested in this material.

Fedora Infrastructure: To infinity and beyond

The Fedora Infrastructure lead, Kevin Fenzi, stood in front of a whiteboard and kicked off a workshop where interested parties and contributors to the Fedora Infrastructure outlined and planned major initiatives for the next year. The headline, by general consensus, is that OpenShift will definitely be leveraged more heavily, but it will require some well-defined policy around development and deployment, for the sake of sanitizing where code libraries come from for security, auditing, and compliance purposes. The other main topic of discussion was metrics reporting; various options will be evaluated, with the front runners being the Elastic Stack, Hawkular, and Prometheus.

Modularity - the future, building, and packaging

This session was a great introduction to how things are going to fit together; we dove pretty far into the weeds on some of the tech behind Fedora Modularity. Ultimately, if anyone is interested in digging in, the official docs really are quite good. I recommend anyone interested in learning the technical details of Modularity give them a look.

Let's Create a Module

In this workshop, put on by Tomas Tomecek, we learned how to create a module and feed it into the Fedora Module Build System (MBS). This was an interesting exercise to go through because it helped define the relationships between RPMs, modules, non-RPM content, and the metadata that ties all of this together, with disjoint modules creating variable lifecycles between different sets of software that come together as a module. I was unable to find the slides from the talk, but our presenter recently tweeted that a colleague of his wrote a blog post he thinks is even better than the workshop, so maybe give that a go. :)

Continuous Integration and Delivery of our Operating System

The topic of Continuous Integration (CI) is extremely common in software development teams and is not a new concept. But what if we took that concept and applied it to the entire Fedora distribution? Now that might be something special, and it could really pay off for the user and contributor base. This is exactly what the Fedora CI initiative aims to do.

What was most interesting to me about this presentation was that it went through an exercise of thought and then showed specifically how a small team was able to accomplish more work than almost anyone thought they could, because they treat the bot they've written to integrate their CI pipeline with various other services as a member of the team. They taught themselves to think of it not as a system but as a team member they could offload work to: the work that nobody else wanted to do.

I look forward to seeing a lot of this work come to fruition.

Day 4

On the last day of the conference we had a "Show and Tell" where various members from different parts of the project got together and worked on things. The rest of the day was a hackathon for those who were still in town and not traveling back home mid-day.

As always, Flock was a blast and I can't wait for Flock 2018!

Until next time...

Slice of Cake #19

Posted by Brian "bex" Exelbierd on September 18, 2017 01:10 PM

A slice of cake

In the last bunch of weeks as FCAIC I[1]:

  • Failed to post my slice of cake on 21 August 2017. If I had, it would have mentioned that I:
    • fought with a printer … and mostly lost :(
    • worked hard on the final Flock tasks including booklet printing, last minute supplies and badge labels (see printer fight above).
    • was excited and surprised by the rapid launch of the all new docs.fedoraproject.org. A blog post explaining the changes is coming soon[2].
  • Attended Flock. Overall I think the conference was a success and I am excited by all of the session and general feedback people have been posting. I know that the council has a lot to think about as we work on next year.
  • Attended a bunch of meetings in Raleigh in Red Hat Tower.

À la mode

  • Was actually in the US on Labor Day for the first time in a long time. It’s still weird to work US holidays.
  • Was the emcee at Write The Docs in Prague, Czech Republic. Two days of talks and all of them introduced by me :).

Cake Around the World

I’ll be traveling some and hope you’ll ping me for coffee if you’re nearby.

  • Personal travel with limited Internet 29 September - 14 October
  • Latinoware, Foz de Iguacu, Brazil, 18 - 20 October
  • BuildStuff.lt, Vilnius, Lithuania, 15 - 17 November
[1] Holy moly, it has been a while since I served up some cake!

[2] For some definitions of soon.

Flock 2017: How to make your application into a Flatpak?

Posted by Pravin Satpute on September 18, 2017 10:43 AM

"How to make your application into a Flatpak?" was on the first day and delivered by Owen Taylor.

We have been watching the development of Flatpak for around a year and a half now, and I am sure it is going to be one of the breakthrough technologies for package distribution in the coming years. I attended this session to get a better idea of what is happening and what is planned for the future.

The session was very informative and mostly gave an architectural overview of Flatpak.

I will update my blog with the recording once it becomes available. Meanwhile, in this post I am only going to cover the Q&A part of the session.

Question: If I install a normal RPM and a Flatpak for the same application, how will the system differentiate between them?
Answer: On the command line, the application ID will be different for the RPM one and the Flatpak one. Both will appear, and one can choose.

Question: A Flatpak is a bundle of libraries. If a platform like Fedora provides one Flatpak for an application and, at the same time, upstream also provides a Flatpak, will one get replaced with the other?
Answer: We can't replace one with the other.

Question: I created a Flatpak on F25 and it failed on Wayland; some permission was missing.
Answer: If it is built for X11, it should work on Wayland as well.

Question: Can we test Flatpak on F26?
Answer: Yes, flatpak.org is there; we can download it and start testing. F26 is very much up to date.

Question: Will we release any application as a Flatpak only in Fedora in the future?
Answer: Let the packagers decide, if it's working well. At least we are not doing this for F27 or F28. Fedora 29 packagers may be able to do it.

Question: When will we have Firefox and LibreOffice as Flatpaks?
Answer: Low-hanging fruit first; gradually we can think about it or ask people for it. First, let's get the infrastructure ready.

Question: Is there any dependency on the kernel?
Answer: Generally there is very minimal dependency on the kernel, mostly for the graphics drivers. There is no strong dependency between the kernel and the runtime.

Question: How does Flatpak compare with similar technologies on Android and elsewhere?
Answer: The idea of using a specific file system is particular to Flatpak and Docker/containers. Flatpak has a more secure communication model.
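For anyone who wants to start experimenting right away, here is a minimal sketch of installing and running a Flatpak on Fedora; the Flathub remote and the gedit app are just examples I picked, not something from the session:

$ sudo dnf install flatpak
$ flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
$ flatpak install flathub org.gnome.gedit
$ flatpak run org.gnome.gedit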


I hope I was able to capture all the Q&A correctly. If anyone has anything to update, feel free to send me an email or just note it in the comments section.

Running the Fedora kernel regression tests

Posted by Fedora Magazine on September 18, 2017 10:10 AM

When a new kernel is released, users often want to know if it’s usable. The kernel has a set of test cases that can be run to help validate it. These tests are run automatically on every successful build. They are designed to validate a basic set of functionality and some Fedora specific features, such as secure boot signing. Here’s how you can run them.

This wiki page provided by the Fedora QA team describes the process to run the regression tests on your local machine. To run these tests, you need the gcc, git, and python-fedora packages installed on your system. Use this sudo command if needed:

sudo dnf install gcc git python-fedora

Getting and running the tests

First, clone the kernel-tests repository and move into the directory:

git clone https://pagure.io/kernel-tests.git
cd kernel-tests

Next, set some configuration options. The easiest way to get started is to copy the config.example file:

cp config.example .config

The most important settings are the ones that control result submission. By default, tests do not submit results to the server. To submit results anonymously, use the setting submit=anonymous. To submit results linked to your FAS username, set submit=authenticated and username=<your FAS login> in .config. If you link your submission to your FAS username, you'll also receive a Fedora badge.
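For instance, an authenticated .config might contain lines like these (the username is a placeholder):

submit=authenticated
username=jdoe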

To run the basic set of tests, use this command:

$ sudo ./runtests.sh

To run the performance test suites, use this command:

$ sudo ./runtests.sh -t performance

The expected result is that the tests pass. However, some tests may fail occasionally due to system load. If a test fails repeatedly, though, consider helping by reporting the failure on Bugzilla.

Running these regression tests helps validate the kernel. Look for more tests added in the near future to help make the kernel better.

Building Modules for Fedora 27

Posted by Adam Samalik on September 18, 2017 08:34 AM

Let me start with a wrong presumption: that you have everything set up. You are a packager who knows what they want to achieve, you have a dist-git repository created, and you have all the tooling installed. And of course, you know what Modularity is, and how and why we use modulemd to define modular content. You know what the Host, Platform, and Bootstrap modules are and how to use them.

Why would I make wrong presumptions like that? First of all, they might not be wrong at all. Especially if you are a returning visitor, you don't want to read about the one-off things every time. I want to start with the stuff you will be doing on a regular basis. And instead of explaining the basics over and over again, I will just point you to the right places that will help you solve a specific problem, like not having a dist-git repository.

Updating a module

Let’s go through the whole process of updating a module. I will introduce some build failures on purpose, so you know how to deal with them should they happen to you.

Start with cloning the module dist-git repository to get the modulemd file defining the installer module.

$ fedpkg clone modules/installer
$ cd installer
$ ls
Makefile installer.yaml sources tests

I want to build an updated version of installer. I have generated a new, updated version of modulemd (I will be talking about generating modulemd files later in this post), and I am ready to build it.

$ git add installer.yaml
$ git commit -m "building a new version"
$ git push
$ mbs-build submit
...
asamalik's build #942 of installer-master has been submitted

Now I want to watch the build and see how it goes.

$ mbs-build watch 942
Failed:
 NetworkManager https://koji.fedoraproject.org/koji/taskinfo?taskID=21857852
 realmd https://koji.fedoraproject.org/koji/taskinfo?taskID=21857947
 python-urllib3 https://koji.fedoraproject.org/koji/taskinfo?taskID=21857955

Summary:
 40 components in the COMPLETE state
 3 components in the FAILED state
asamalik's build #942 of installer-master is in the "failed" state

Oh no, it failed! I reviewed the Koji logs using the links above, and I see that:

  • NetworkManager and python-urllib3 failed randomly on tests. That happens sometimes and just resubmitting them would fix it.
  • realmd however needs to be fixed before I can proceed.

After fixing realmd and updating the modulemd to reference the fix, I can submit a new build.

$ git add installer.yaml
$ git commit -m "use the right version of automake with realmd"
$ git push
$ mbs-build submit
...
asamalik's build #943 of installer-master has been submitted

To watch the build, I use the following command.

$ mbs-build watch 943
Failed:
 sssd https://koji.fedoraproject.org/koji/taskinfo?taskID=21859852

Summary:
 42 components in the COMPLETE state
 1 components in the FAILED state
asamalik's build #943 of installer-master is in the "failed" state

Good news is that realmd worked this time. However, sssd failed. I know it built before, and by investigating the logs I found out it’s one of the random test failures again. Resubmitting the build will fix it. In this case, I don’t need to create a new version, just resubmit the current one.

$ mbs-build submit
...
asamalik's build #943 of installer-master has been resubmitted

Watch the build:

$ mbs-build watch 943
Summary:
 43 components in the COMPLETE state
asamalik's build #943 of installer-master is in the "ready" state

Rebuilding against new base

The Platform module has been updated and there is a soname bump in rpmlib. I need to rebuild the module against the new platform. However, I’m not changing anything in the modulemd. I know that builds map 1:1 to git commits, so I need to create an empty commit and submit the build.

$ git commit --allow-empty -m "rebuild against new platform"
$ git push
$ mbs-build submit
...
asamalik's build #952 of installer-master has been submitted

Making sure you have everything

Now it's time to make sure you are not missing a step in your module-building journey! Did I miss something? Ask in the comments or tweet at me and I will try to update the post.

How do I get a dist-git repository?

To get a dist-git repository, you need to have your modulemd go through the Fedora review process for modules. Please make sure your modulemd complies with the Fedora packaging guidelines for modules. Completing the review process will result in a dist-git repository.

What packages go into a module?

Your module needs to run on top of the Platform module, which together with the Host and Shim modules forms the base operating system. You can see the definitions of the Platform, Host, and Shim modules on github.

You should include all the dependencies needed for your module to run on top of the Platform module, with a few exceptions:

  • If your module needs a common runtime (like Perl or Python) that is already modularized, you should use it as a dependency rather than bundling it into your module.
  • If your module needs a common package (like net-tools), you shouldn't bundle that either. Such packages should be split into individual modules.
  • To make sure your module doesn't conflict with other modules, it can't contain the same packages as other modules. Your module can either depend on those other modules, or such packages can be split into new modules.

To make the transition from a traditional release to a modular one easier, there is a set of useful scripts in the dependency-report-scripts repository, and the results are being pushed to the dependency-report repository. You are welcome to participate.

Adding your module to the dependency-report

The module definitions live in the modularity-modules organization, in a README.md format similar to the Platform and Host definitions. The scripts take these repositories as input, together with the Fedora Everything repository, to produce dependency reports and basic modulemd files.

How can I generate a modulemd file?

The dependency-report-scripts can produce a basic modulemd, stored in the dependency-report repository. The modulemd files stored there reference their components using the f27 branch for now. To produce a more reproducible and predictable result, I recommend using the generate_modulemd_with_hashes.sh script inside the dependency-report-scripts repository:

$ generate_modulemd_with_hashes.sh MODULE_NAME

The result will be stored in the same place as the previous modulemd that used branches as references.

$ ls output/modules/MODULE_NAME/MODULE_NAME.yaml

Where do I get the mbs-build tool?

Install the module-build-service package from Fedora repositories:

$ sudo dnf install module-build-service

What can I build to help?

See the F27 Content Tracking repository and help with the modules that are listed, or propose new ones. You can also ask on the #fedora-modularity IRC channel.

Where can I learn more about Modularity?

Please see the Fedora Modularity documentation website.