Fedora People

Hindi, Mayank Chhaya and Literate World

Posted by Rajesh Ranjan on January 06, 2018 09:44 PM
Today I am remembering Mayank ji a great deal - and Literate World along with him. Perhaps it is Hindi Diwas, and I keep seeing one update after another on social media; maybe that is why. I see how we have almost forgotten that back in early 2001, in this virtual world called the internet, a well-known (English-language) journalist and writer, a Gujarati man named Mayank Chhaya, made a splendid, stubborn resolve to present Indian language and literature in Hindi with full grandeur.

Those who have stepped into this virtual world only recently are probably not familiar with Literate World and Mayank ji. This was back in 2001. He lived in California then; these days he is in Chicago. He had founded a company for internet, cinema and publishing under the name Literate World. As part of it, a plan was made to run multilingual literary portals: alongside Spanish and English, the plan covered Hindi as well. I was at Jansatta at the time, and on the reference of Sanjay Singh ji, who also worked there, I happily said yes to working for Literate World. The portal had quite a grand and professional launch. I was associated with it from the beginning and took part in almost all of its Hindi-related work. A relationship of equals, a professional manner and good money - everything that was then nearly unheard of in Hindi journalism - started coming my way. It was a new experience. It was from Mayank ji that I first learned that an established editor and a new journalist who had begun his career only a few years earlier can talk as equals. A forty-hour work week - that too I learned here.

On this Hindi portal, material related to language, literature and Indian culture was updated every week in the form of a magazine; in other words, it was a weekly web magazine. Literate World kept representatives in all the states, and they were paid quite handsomely for their work - at a rate of roughly one rupee per word. Columns by three writers were published every week. A writer wrote for Literate World for three months, that is, twelve columns. I remember that up to two and a half thousand rupees were paid for a single column of about five hundred words. Many well-known writers wrote for Literate World: Nirmal Verma, Vishnu Khare, Geetanjali Shree, Maitreyi Pushpa, KP Saxena, Uday Prakash, Chitra Mudgal, Mridula Garg and several other names. Leaving aside the last few months before it shut down, everything went quite well. Literate World also tried to move into publishing. Talks were held with Krishna Sobti, with a hefty signing amount, but somewhere along the way - who did what, when and how - this purely good-faith effort was branded a 'capitalist' conspiracy. It was not that Mayank ji was unfamiliar with the Indian market. He had spent the greater part of his journalism career in India before moving to America, and he believed that writers in Indian languages are in no way inferior to writers of any other language, and that their writing therefore deserved the same remuneration. But perhaps the world of Hindi literature could not stomach this professional approach either. Working for a capitalist for little money - that 'capitalist' exploitation is acceptable to most; but if someone starts paying more for the same work, it becomes a 'capitalist' conspiracy! Anyway. Even today Hindi has no e-magazine of that kind, where nearly every published word is paid for quite well. There was no Unicode then; we used Susha. The web, too, had not yet become that powerful, and there were plenty of technical problems. Even so, Literate World became quite popular.

I feel that Mayank ji's name should always be remembered for bringing dignity to Hindi on the web in its earliest days. His passion for and thinking about Hindi and the Indian languages are remarkable. The multilingual Mayank ji writes exquisitely beautiful poems in Hindi; colors, too, are a language of his own. A man of many dimensions, he is one of those few people who can win anyone over in moments with their warmth, friendliness and talent. Whenever a history of Hindi on new technology is written, recording Mayank ji's name in it is essential, because without Mayank Chhaya and Literate World the history of Hindi on computers and the web cannot be written.

Takeaways from my foray into amateur radio

Posted by Major Hayden on January 06, 2018 07:26 PM

The Overland Expo in Asheville last year was a great event, and one of my favorite sessions covered the basics of radio communications while overlanding. The instructors shared their radios with us and taught us some tips and tricks for saving power and communicating effectively on the trail.

Back at the office, I was surprised to discover how many of my coworkers already had an FCC license. They gave me tips on getting started and on how to learn the material for the exam. I took some of my questions to Twitter, and plenty of help poured in quickly.

This post covers how I studied, what the exam was like, and what I’ve learned after getting on the air.

The basics

FCC licenses in the US for amateur radio operators have multiple levels. Everything starts at the Technician level, which grants the most basic access to radio frequencies. From there, you can upgrade (with another exam each time) to General, and then to Extra. Each license upgrade opens up more frequencies and privileges.

Studying

A coworker recommended the official ARRL book for the Technician exam and I picked up a paper copy. The content is extremely dry. It was difficult to remain focused for long periods.

The entire exam is available in the public domain, so you can actually go straight to the questions that you’ll see on the exam and study those. I flipped to the question section in the ARRL book and found the questions I could answer easily (mostly about circuits and electrical parts). For each one that was new or difficult, I flipped back in the ARRL book to the discussion in each chapter and learned the material.

I also used HamStudy.org to quickly practice and keep track of my progress. The site has some handy graphs that show you how many questions you’ve seen and what your knowledge level of different topics really is. I kept working through questions on the site until I was regularly getting 90% or higher on the practice tests.

Testing

Before you test, be sure to get an FCC Registration Number (commonly called an FRN). It is free to get, and it ensures that you receive your license (often called your ‘ticket’) as soon as possible. I was told that some examiners won’t offer you a test if you don’t have your FRN already.

The next step is to find an amateur radio exam in your area. Exams are available in the San Antonio area every weekend and they are held by different groups. I took mine with the Radio Operators of South Texas and the examiners were great! Some examiners require you to check in with them so they know you are coming to test, but it’s a good idea to do this anyway. Ask how they want to be paid (cash, check, etc), too.

Be sure to take a couple of pencils, a basic calculator, your government issued ID, your payment, and your FRN to the exam. I forgot the calculator but the examiners had a few extras. The examiners complete some paperwork before your exam, and you select one of the available test versions. Each test contains a randomly selected set of 35 questions from the pool of 350.

Go through the test, carefully read each question, and fill in the answer sheet. Three examiners will grade it when you turn it in, and they will fill out your Certificate of Successful Completion of Examination (CSCE). Hold onto this paper just in case something happens with your FCC paperwork.

The examiners will send your paperwork to the FCC and you should receive a license within two weeks. Mine took about 11-12 business days, but I took it just before Thanksgiving. The FCC will send you a generic email stating that there is a new license available and you can download it directly from the FCC’s website.

Lessons learned on the air

Once I passed the exam and keyed up for the first transmission, I feared a procedural misstep more than anything. What if I say my callsign incorrectly? What if I’m transmitting at a power level that is too high? What power level is too high? What am I doing?!

Everyone has to start somewhere and you’re going to make mistakes. Almost 99.9% of my radio contacts so far have been friendly, forgiving, and patient. I’ve learned a lot from listening to other people and from the feedback I get from radio contacts. Nobody will yell at you for using a repeater when simplex should work. Nobody will yell at you if you blast a repeater with 50 watts when 5 would be fine.

I’m on VHF most often and I’ve found many local repeaters on RepeaterBook. Most of the repeaters in the San Antonio area are busiest during commute times (morning and afternoon) as well as lunchtime. I’ve announced my callsign when the repeater has been quiet for a while and often another radio operator will call back. It’s a good idea to mention that you’re new to amateur radio since that will make it easier for others to accept your mistakes and provide feedback.

When I’m traveling long distances, I monitor the national simplex calling frequency (146.520). That’s the CB equivalent of channel 19, where you can announce yourself and have conversations. In busy urban areas, it’s best to work out another frequency with your contact to keep the calling frequency clear.

My equipment

My first purchase was a (cheap) BTECH UV-5X3. The price is fantastic, but the interface is rough to use. Editing saved channels is nearly impossible, and navigating the menus requires a good manual to decipher the options. The manual that comes with it is surprisingly brief, but there are helpful how-to guides from other radio operators on various blogs.

I picked up a Kenwood TM-D710G mobile radio from a coworker and mounted it in the car. I wired it up with Anderson Powerpole connectors and that makes things incredibly easy (and portable). The interface on the Kenwood is light years ahead of the BTECH, but the price is 10x more.

My car has the Comet SBB-5NMO antenna mounted with a Comet CP-5NMO lip mount. It fits well on the rear of the 4Runner.

Managing a lot of repeater frequencies is challenging with both radios (exponentially more so with the BTECH), but the open source CHIRP software works well. I installed it on my Fedora laptop and could manage both radios easily. The BTECH radio requires you to download the entire current configuration, edit it, and upload it to the radio. The Kenwood allows you to make adjustments to the radio in real time (which is excellent for testing).

More questions?

If you have more questions about any part of the process, let me know!

The post Takeaways from my foray into amateur radio appeared first on major.io.

Slice of Cake #21

Posted by Brian "bex" Exelbierd on January 05, 2018 04:08 PM

A slice of cake

Last week as the FCAIC I:

  • Lamented the fact that I hadn’t written an update since September 2017. Ok, I didn’t really do that last week, I’ve been feeling bad about it for much longer than that.
  • Did a bit of finance work including chasing down a missing airfare receipt. The world needs a better way of funneling receipts to systems that doesn’t involve my email and cut/paste.
  • Taking guidance from Matthew Miller, I went through all email older than 90 days that was still in an inbox. I deleted most of it. Hooray me?
  • I also spent a lot of time just powering through email. I got to less than 85 in three inboxes … now if I could just turn off the inbound flow :D

À la mode

  • According to Duolingo I am now 18% fluent in Polish. I never knew that 18% could also mean 0% :(. It is hard but surmountable.
  • In other Polish language news, look up the rules for declining nouns acting as direct objects in negated verb sentences. OMG!

Cake Around the World

I’ll be traveling some and hope you’ll ping me for coffee if you’re nearby.

  • DevConf.cz, Brno, Czech Republic - 27-28 January
  • Fedora CommOps FAD, Brno, Czech Republic - 29 January - 1 February
  • Grimoire/CHAOSS Con, Brussels, Belgium - 2 February
  • FOSDEM, Brussels, Belgium - 3-4 February

Note: This was posted early as I will have limited internet access from 6-9 January

How to write pylint checker plugins

Posted by Alexander Todorov on January 05, 2018 11:00 AM

In this post I will walk you through the process of learning how to write additional checkers for pylint!

Prerequisites

  1. Read Contributing to pylint to get basic knowledge of how to execute the test suite and how it is structured. Basically call tox -e py36. Verify that all tests PASS locally!

  2. Read pylint's How To Guides, in particular the section about writing a new checker. A plugin is usually a Python module that registers a new checker.

  3. Most of pylint's checkers are AST based, meaning they operate on the abstract syntax tree of the source code. You will have to familiarize yourself with the AST node reference for the astroid and ast modules. Pylint uses Astroid for parsing and augmenting the AST.

    NOTE: there is compact and excellent documentation provided by the Green Tree Snakes project. I would recommend the Meet the Nodes chapter.

    Astroid also provides exhaustive documentation and node API reference.

    WARNING: sometimes Astroid node class names don't match the ones from ast!

  4. Your interactive shell weapons are ast.dump(), ast.parse(), astroid.parse() and astroid.extract_node(). I use them inside an interactive Python shell to figure out how a piece of source code is parsed and which AST nodes it turns into, as in the short session below. You can also try this ast node pretty printer! I personally haven't used it.
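
For example, a quick astroid.extract_node() session looks like this (my own illustration, not from the original post; the #@ comment marker tells astroid which node to extract, and the output is abbreviated):

>>> import astroid
>>> # the statement marked with #@ is the one extract_node returns
>>> ret = astroid.extract_node('''
... def func():
...     return  #@
... ''')
>>> ret
<Return l.3 at 0x7f...>
>>> print(ret.value)
None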

How pylint processes the AST tree

Every checker class may include special methods with names visit_xxx(self, node) and leave_xxx(self, node) where xxx is the lowercase name of the node class (as defined by astroid). These methods are executed automatically when the parser iterates over nodes of the respective type.

All of the magic happens inside such methods. They are responsible for collecting information about the context of specific statements or patterns that you wish to detect. The hard part is figuring out how to collect all the information you need because sometimes it can be spread across nodes of several different types (e.g. more complex code patterns).

There is a special decorator called @utils.check_messages. You have to list all message ids that your visit_ or leave_ method will generate!
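
In skeleton form, a checker with such a method looks roughly like this (a minimal sketch of my own; the class name, message id and symbol are placeholders, not from pylint):

from pylint import checkers
from pylint import interfaces
from pylint.checkers import utils


class ExampleChecker(checkers.BaseChecker):
    __implements__ = interfaces.IAstroidChecker

    name = 'example-checker'
    msgs = {
        'W9901': ('Example message text', 'example-symbol', 'Longer help text'),
    }

    @utils.check_messages('example-symbol')
    def visit_functiondef(self, node):
        # called automatically for every FunctionDef node astroid produces
        self.add_message('example-symbol', node=node)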

How to select message codes and IDs

One of the most unclear things for me is message codes. The pylint docs say:

The message-id should be a 5-digit number, prefixed with a message category. There are multiple message categories, these being C, W, E, F, R, standing for Convention, Warning, Error, Fatal and Refactoring. The rest of the 5 digits should not conflict with existing checkers and they should be consistent across the checker. For instance, the first two digits should not be different across the checker.

I usually have trouble with the numbering part, so you will have to get creative or look at existing checker codes.
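
For example, two related warnings inside one checker would share their leading digits, like this (a made-up illustration, not an excerpt from any real checker):

msgs = {
    'W4711': ('First warning text', 'first-warning-symbol', 'Help text'),
    'W4712': ('Second warning text', 'second-warning-symbol', 'Help text'),
}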

Practical example

In Kiwi TCMS there's legacy code that looks like this:

def add_cases(run_ids, case_ids):
    trs = TestRun.objects.filter(run_id__in=pre_process_ids(run_ids))
    tcs = TestCase.objects.filter(case_id__in=pre_process_ids(case_ids))

    for tr in trs.iterator():
        for tc in tcs.iterator():
            tr.add_case_run(case=tc)

    return

Notice the dangling return statement at the end! It is useless: even if it is missing, the default return value of this function will still be None. So I've decided to create a plugin for that.

Armed with the knowledge above I first try the ast parser in the console:

Python 3.6.3 (default, Oct  5 2017, 20:27:50) 
[GCC 4.8.5 20150623 (Red Hat 4.8.5-11)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import ast
>>> import astroid
>>> ast.dump(ast.parse('def func():\n    return'))
"Module(body=[FunctionDef(name='func', args=arguments(args=[], vararg=None, kwonlyargs=[], kw_defaults=[], kwarg=None, defaults=[]), body=[Return(value=None)], decorator_list=[], returns=None)])"
>>> 
>>> 
>>> node = astroid.parse('def func():\n    return')
>>> node
<Module l.0 at 0x7f5b04621b38>
>>> node.body
[<FunctionDef.func l.1 at 0x7f5b046219e8>]
>>> node.body[0]
<FunctionDef.func l.1 at 0x7f5b046219e8>
>>> node.body[0].body
[<Return l.2 at 0x7f5b04621c18>]

As you can see there is a FunctionDef node representing the function and it has a body attribute which is a list of all statements inside the function. The last element is .body[-1] and it is of type Return! The Return node also has an attribute called .value which is the return value! The complete code will look like this:

uselessreturn.py
import astroid

from pylint import checkers
from pylint import interfaces
from pylint.checkers import utils


class UselessReturnChecker(checkers.BaseChecker):
    __implements__ = interfaces.IAstroidChecker

    name = 'useless-return'

    msgs = {
        'R2119': ("Useless return at end of function or method",
                  'useless-return',
                  'Emitted when a bare return statement is found at the end of '
                  'function or method definition'
                  ),
        }


    @utils.check_messages('useless-return')
    def visit_functiondef(self, node):
        """
            Checks for presence of return statement at the end of a function
            "return" or "return None" are useless because None is the default
            return type if they are missing
        """
        # if the function has empty body then return
        if not node.body:
            return

        last = node.body[-1]
        if isinstance(last, astroid.Return):
            # e.g. "return"
            if last.value is None:
                self.add_message('useless-return', node=node)
            # e.g. "return None"
            elif isinstance(last.value, astroid.Const) and (last.value.value is None):
                self.add_message('useless-return', node=node)


def register(linter):
    """required method to auto register this checker"""
    linter.register_checker(UselessReturnChecker(linter))

Here's how to execute the new plugin:

$ PYTHONPATH=./myplugins pylint --load-plugins=uselessreturn tcms/xmlrpc/api/testrun.py | grep useless-return
W: 40, 0: Useless return at end of function or method (useless-return)
W:117, 0: Useless return at end of function or method (useless-return)
W:242, 0: Useless return at end of function or method (useless-return)
W:495, 0: Useless return at end of function or method (useless-return)

NOTES:

  • If you contribute this code upstream and pylint releases it, you will get a traceback:

    pylint.exceptions.InvalidMessageError: Message symbol 'useless-return' is already defined
    

    this means your checker has been released in the latest version and you can drop the custom plugin!

  • This example is fairly simple because the AST provides the information we need in a very handy way. Take a look at some of my other checkers to get a feeling for what a more complex checker looks like!

  • Write and run tests for your new checkers, especially if contributing upstream. Keep in mind that the new checker will be executed against existing code and in combination with other checkers, which could lead to some interesting results. I will leave the testing to you; everything is covered in the documentation.

I contributed this particular example upstream as PR #1821, which happened to contradict an existing checker. The update, which raises warnings only when there's a single return statement in the function body, is PR #1823.

Workshop around the corner

I will be working together with HackSoft on an in-house workshop/training for writing pylint plugins. I'm also looking at reviving pylint-django so we can write more plugins specifically for Django based projects.

If you are interested in a workshop or training on the topic, let me know!

Thanks for reading and happy testing!

Using split ssh in QubesOS 4.0

Posted by Kushal Das on January 05, 2018 09:59 AM

The idea behind Qubes OS is known as security by compartmentalization. You create different Qubes (VMs or domains) to compartmentalize your digital data, so that even if one of the VMs is compromised, the attacker will not be able to access data stored in the other VMs.

For a typical GNU/Linux user, ssh is an everyday tool. We log in to various systems and access files over ssh. But if you keep your ssh keys in the same place where you also run a browser, there is a chance that someone will try to access the keys by attacking through the browser. Just yesterday we all read about the many things that can be done by attacking through browsers (Yay! SECURITY!!!).

In this tutorial, we will learn about split-ssh and how we can keep the actual ssh keys safe in QubesOS. At the time of writing this article (2018-01-05), the commit in the master branch is 1b1786f5bac9d06af704b5fb3dd2c59f988767cb.

Modify the template VM

Because we will be adding things to the /etc directory of our VMs, we have to do this in the template VM: in normal VMs, the /etc directory is restored to a fresh copy every time the VM restarts. I modified fedora-26 as that is my default template.

First, add the following code in the /etc/qubes-rpc/qubes.SshAgent file in the template VM and then shut it down.

#!/bin/sh
notify-send "[`qubesdb-read /name`] SSH agent access from: $QREXEC_REMOTE_DOMAIN"
ncat -U $SSH_AUTH_SOCK

Creating the actual ssh-vault VM

The next task is to create a new VM; I named it ssh-vault. Remember the name, as the configuration will access the ssh keys based on the vault VM's name. You can have as many ssh vaults as you want. Remember to open the settings after creating the VM and set its networking to None.

Start the vault VM, and either create a new ssh key pair or copy your existing key into it. Remember to use the qvm-copy command to copy the files, since no network is available.

[Desktop Entry]
Name=ssh-add
Exec=ssh-add
Type=Application

Then add the above content to the ~/.config/autostart/ssh-add.desktop file. You may have to create the autostart directory.

$ mkdir -p .config/autostart
# vim ~/.config/autostart/ssh-add.desktop

Configuring the client VM

The client VM is the VM in which you use the ssh key. Add the following to the /rw/config/rc.local file, and then make the file executable (remember to use sudo for this).

SSH_VAULT_VM="ssh-vault"

if [ "$SSH_VAULT_VM" != "" ]; then
	export SSH_SOCK=~user/.SSH_AGENT_$SSH_VAULT_VM
	rm -f "$SSH_SOCK"
	sudo -u user /bin/sh -c "umask 177 && ncat -k -l -U '$SSH_SOCK' -c 'qrexec-client-vm $SSH_VAULT_VM qubes.SshAgent' &"
fi

If you look carefully at the shell script above, you will find that we are setting the vault VM's name using a variable called SSH_VAULT_VM. Change this name to whatever VM you want to use as the vault.

$ sudo vim /rw/config/rc.local
$ sudo chmod +x /rw/config/rc.local

Next, we will add the following to the ~/.bashrc file, so that ssh can find the right socket file.

# Set next line to the ssh key vault you want to use
SSH_VAULT_VM="ssh-vault"

if [ "$SSH_VAULT_VM" != "" ]; then
	export SSH_AUTH_SOCK=~user/.SSH_AGENT_$SSH_VAULT_VM
fi

Then I restarted the vault and client VMs. Because my ssh key also has a passphrase, I entered it using the ssh-add command in the ssh-vault VM.

Configuring the policy in dom0

In QubesOS you have to define a policy in dom0; based on it, the VMs can talk to each other (using QubesOS' internal qrexec mechanism). In my case I want only the emails VM to be able to ask for access to the ssh keys. So, I added the following to the /etc/qubes-rpc/policy/qubes.SshAgent file.

emails ssh-vault ask

The above policy rule says that when the emails VM tries to contact ssh-vault VM, it has to ask for permission to do so from the user.

Using ssh (finally!)

At this moment you can safely start the client VM and try to ssh into anywhere. It will open up an authorization dialog; you will have to click the Okay button to give access to the ssh keys. You will also see a notification in the top notification area.

There is an active IRC channel, #qubes, on the Freenode server. Join it and ask any questions you have.

NOTICE: BLOG HAS MOVED

Posted by Fraser Tweedale on January 05, 2018 03:18 AM

Due to imminent shutdown of OpenShift Online v2 hosting
environment, this blog has MOVED to
https://frasertweedale.github.io/blog-redhat/

KPTI — the new kernel feature to mitigate “meltdown”

Posted by Fedora Magazine on January 05, 2018 02:12 AM

A new set of vulnerabilities was disclosed recently. As part of mitigating “meltdown”, the kernel introduced a new feature called Kernel Page Table Isolation (KPTI). This was a big change to come in late in the typical kernel development cycle, but it provides important protection, with some performance penalty. Updated kernels for supported versions of Fedora contain the KPTI patches. This article provides a high-level overview of how KPTI works.

Modern processors for desktop computers offer different security levels to run code. The kernel runs at the most privileged level (“kernel space”) since it needs to access all parts of the hardware. Almost all other applications run at a lower privilege level (“user space”) and make system calls into the kernel to access privileged data.

Modern processors for desktop computers also support virtual memory. The kernel sets up page tables to control the mapping between a virtual address and physical address. Access between kernel and userspace is also controlled by page tables. A page that is mapped for the kernel is not accessible by userspace although the kernel can typically access user space.

Translating these mappings can be expensive so the hardware has a translation lookaside buffer (TLB) to store mappings. Sometimes it’s necessary to remove the old mappings (“flush the TLB”) but doing so is costly and code is written to minimize such calls. One trick here is to always have the kernel page tables mapped when user processes are running. The page table permissions prevent userspace from accessing the kernel mappings but the kernel can access the mappings immediately when a system call is made.

Meltdown exploit and how KPTI mitigates it

The meltdown exploit demonstrated that having the kernel mapping available in userspace can be risky. Modern processors prefetch data from all mappings to run as fast as possible. What data gets prefetched depends on the CPU implementation. When a running userspace program accesses a kernel mapping, it will take a fault and typically crash the program. The CPU, however, may prefetch kernel data without causing any change to the running program. Prefetching is not usually a security risk because there are still permission checks on the addresses, so userspace programs cannot access kernel data. What the meltdown researchers discovered was that it is possible to measure how long accesses to prefetched data take, and thereby gain information about the system. This is what’s referred to as a side-channel attack. The KPTI patches reworked how page tables are set up so that the kernel is no longer mapped in userspace. This means that userspace cannot prefetch any kernel data, and thus the exploit is mitigated.

Actually writing an attack to collect useful data from this exploit can take weeks or months to develop for a single system. Still, in the interests of security the KPTI patches were merged to close this hole. The side effect of not having the kernel page tables always mapped means that the TLBs must be flushed more often, causing a performance penalty. By default, Fedora has KPTI enabled to provide security for users. KPTI can be turned off by passing “nopti” on the command line if users don’t want the performance hit.


Proof for Disqus

Posted by Matěj Cepl on January 04, 2018 11:00 PM

This is to prove for Disqus case ID: 524317 that I control this website.

PHP version 5.6.33, 7.0.27, 7.1.13 and 7.2.1

Posted by Remi Collet on January 04, 2018 09:26 PM

RPMs of PHP version 7.2.1 are available in the remi-php72 repository for Fedora 25-27 and Enterprise Linux 6 (RHEL, CentOS), and as Software Collections in the remi-safe repository.

RPMs of PHP version 7.1.13 are available in the remi repository for Fedora 26-27, and in the remi-php71 repository for Fedora 24-25 and Enterprise Linux (RHEL, CentOS).

RPMs of PHP version 7.0.27 are available in the remi repository for Fedora 25, and in the remi-php70 repository for Fedora 24 and Enterprise Linux (RHEL, CentOS).

RPMs of PHP version 5.6.33 are available in the remi repository for Fedora 24, and in the remi-php56 repository for Enterprise Linux.

PHP version 5.5 has reached its end of life and is no longer maintained by the project.

These versions are also available as Software Collections.

These versions fix some security bugs, so updating is strongly recommended.

Version announcements:

Installation: use the Configuration Wizard and choose your version and installation mode.

Replacement of default PHP by version 7.2 installation (simplest):

yum-config-manager --enable remi-php72
yum update php\*

Parallel installation of version 7.2 as Software Collection (x86_64 only):

yum install php72

Replacement of default PHP by version 7.1 installation (simplest):

yum-config-manager --enable remi-php71
yum update

Parallel installation of version 7.1 as Software Collection (x86_64 only):

yum install php71

Replacement of default PHP by version 7.0 installation (simplest):

yum-config-manager --enable remi-php70
yum update

Parallel installation of version 7.0 as Software Collection (x86_64 only):

yum install php70

Replacement of default PHP by version 5.6 installation (simplest):

yum-config-manager --enable remi-php56
yum update

Parallel installation of version 5.6 as Software Collection (x86_64 only):

yum install php56

And soon in the official updates:

To be noted:

  • EL7 rpms are built using RHEL-7.4
  • EL6 rpms are built using RHEL-6.9
  • a lot of new extensions are also available, see the PECL extension RPM status page

Information, read:

Base packages (php)

Software Collections (php56 / php70 / php71 / php72)

[Howto] Run programs as non-root user on privileged ports via Systemd

Posted by Roland Wolters on January 04, 2018 03:16 PM

Running programs as a non-root user is a must in security-sensitive environments. However, these programs sometimes need to publish their service on privileged ports like port 80, which cannot be used by local users. Systemd offers a simple way to solve this problem.

Background

The reason for running services as non-root users is quite obvious: if a service is attacked and a malicious user gets control of it, the rest of the system should still be secure in terms of access rights.

At the same time, plenty of programs need to publish their service on ports like 80 or 443, since these are the default ports for HTTP communication and thus for interfaces like REST. But these ports are not available to non-root users.

The problem, shown with the example of gitea

To show how to solve this with systemd, we take the self-hosted git service gitea as an example. Currently there are hardly any packages available, so most people end up installing it from source, for example as the user git. A proper systemd unit file for such an installation in a local path, running the service as a local user, is:

$ cat /etc/systemd/system/gitea.service
[Unit]
Description=Gitea (Git with a cup of tea)
After=syslog.target
After=network.target
After=postgresql.service

[Service]
RestartSec=2s
Type=simple
User=git
Group=git
WorkingDirectory=/home/git/go/src/code.gitea.io/gitea
ExecStart=/home/git/go/src/code.gitea.io/gitea/gitea web
Restart=always
Environment=USER=git HOME=/home/git

[Install]
WantedBy=multi-user.target

If this service is started, and the application configuration is set to port 80, it fails during startup with a bind error:

Jan 04 09:12:47 gitea.qxyz.de gitea[8216]: 2018/01/04 09:12:47 [I] Listen: http://0.0.0.0:80
Jan 04 09:12:47 gitea.qxyz.de gitea[8216]: 2018/01/04 09:12:47 [....io/gitea/cmd/web.go:179 runWeb()] [E] Failed to start server: listen tcp 0.0.0.0:80: bind: permission denied

Solution

One way to tackle this would be a reverse proxy, running on port 80 and forwarding traffic to a non-privileged port like 8080. However, it is much simpler to add an additional systemd socket which listens on port 80:

$ cat /etc/systemd/system/gitea.socket
[Unit]
Description=Gitea socket

[Socket]
ListenStream=80
NoDelay=true

As shown above, the definition of a socket is straightforward and hardly needs any special configuration. We use NoDelay here since this is a default for Go on sockets it opens, and we want to imitate that.

Given this socket definition, we add the socket as requirement to the service definition:

[Unit]
Description=Gitea (Git with a cup of tea)
Requires=gitea.socket
After=syslog.target
After=network.target
After=postgresql.service

[Service]
RestartSec=2s
Type=simple
User=git
Group=git
WorkingDirectory=/home/git/go/src/code.gitea.io/gitea
ExecStart=/home/git/go/src/code.gitea.io/gitea/gitea web
Restart=always
Environment=USER=git HOME=/home/git
NonBlocking=true

[Install]
WantedBy=multi-user.target

As seen above, the unit definition hardly changes, only the requirement for the socket is added – and NonBlocking as well, to imitate Go behavior.
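
Under the hood, systemd itself binds port 80 and hands the already-listening socket to the service via the LISTEN_FDS protocol, with inherited file descriptors starting at 3. Gitea's Go runtime picks this up natively; purely as an illustration (not part of the original gitea setup), a minimal Python sketch of a service consuming such an activated socket could look like this:

import os
import socket

# systemd passes activated sockets starting at file descriptor 3 and
# sets the LISTEN_FDS environment variable to the number of sockets.
SD_LISTEN_FDS_START = 3

nfds = int(os.environ.get("LISTEN_FDS", "0"))
if nfds < 1:
    raise RuntimeError("expected a socket from systemd socket activation")

# Wrap the inherited file descriptor in a socket object and accept a client.
sock = socket.socket(fileno=SD_LISTEN_FDS_START)
conn, addr = sock.accept()
print("connection from", addr)
conn.close()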

That’s it! Now the service starts up properly and everything is fine:

[...]
Jan 04 09:21:02 gitea.qxyz.de gitea[8327]: 2018/01/04 09:21:02 Listening on init activated [::]:80
Jan 04 09:21:02 gitea.qxyz.de gitea[8327]: 2018/01/04 09:21:02 [I] Listen: http://0.0.0.0:80
Jan 04 09:21:08 gitea.qxyz.de gitea[8327]: [Macaron] 2018-01-04 09:21:08: Started GET / for 192.168.122.1
[...]

Sources, further reading


Filed under: Cloud, Debian & Ubuntu, Fedora & RHEL, HowTo, Linux, Security, Shell, SUSE, Technology

Using diceware to generate passwords

Posted by Kushal Das on January 04, 2018 09:11 AM

Choosing a new password is always an interesting topic. When I started using computers for the first time, my idea was to find some useful words I could remember, maybe 2-3 of those words together. With time I found that websites have different requirements when it comes to choosing a new password. But in the last few years we have also seen many examples where brute-forcing a password is a rather simple thing: modern, powerful computers enable anyone to find the right combination of characters in a reasonable time frame.

What is a diceware password?

Diceware passwords are normal passwords (a few words put together) generated from a list of words, either by rolling dice or by computer. You can read more on the original Diceware website.
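
As an illustration of the idea (this snippet is mine, not part of the diceware tool, and the word-list path is just a stand-in), picking words with a cryptographically secure random source looks like this:

import secrets

# Load a word list, one word per line. The real tool ships curated
# diceware lists; /usr/share/dict/words is only a placeholder here.
with open("/usr/share/dict/words") as f:
    words = [w.strip() for w in f if w.strip()]

# secrets uses the OS CSPRNG, which matters when generating secrets.
passphrase = "".join(secrets.choice(words).capitalize() for _ in range(6))
print(passphrase)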

Using diceware project to generate your passphrases

If you noticed, I wrote passphrase instead of password. This is because passphrases are not only easier to remember than a complex password, but they also provide better security against brute-force attacks. The following comic from XKCD explains it better than any words.

Installing diceware

diceware is a very simple command line tool written in Python. It can help you choose a diceware passphrase easily. It was already packaged for Debian; last week I packaged it for Fedora (thank you, Parag, for the review). Last night it was pushed to stable, so now you can install it using dnf.

$ sudo dnf install diceware

Using diceware

$ diceware 
MotorBolsterFountainThrowerPorridgeBattered

By default it creates passphrases with 6 words, but you can increase that with the -n command line argument. You should use at least 7 words in your passphrase. Read the story from Micah Lee to understand how this increases the strength of your passphrases many times over.

The diceware man page has more details about usage.

Start using a password manager

Now is a good time to start using a password manager. Save all your passwords/passphrases in one place, and secure it with a super long passphrase that you can remember. This article from Martin Shelton has many examples. The members of the Fedora engineering team use a command line tool called pass, which uses gpg to encrypt the passwords.

Link to FAF directly from Fedora Packages

Posted by ABRT team on January 04, 2018 09:10 AM

The latest release of Fedora Packages includes a new feature related to FAF.

When you look up some package, e.g., libreport:

(screenshot: fedora_packages)

Then you can notice the drop-down menu. When you click on it, you get this menu:

(screenshot: drop_down_menu)

And when you click on “FAF” you will be redirected to the FAF server, where you can see the most recent crashes related to this package.

(screenshot: faf_web)

Now try this with your package.

Freelancer.com – Let’s blow the whistle

Posted by Sirko Kemter on January 04, 2018 06:58 AM

As I am always in need of money, I look for freelance jobs, and so I am a member of freelancer.com. From time to time they send me “interesting” projects in my area in a kind of newsletter, and yesterday I got such a newsletter with a bid I should look at. As too often, it was the kind of project where you cannot see how much work it is, yet you are supposed to bid on how much you would do it for, so nothing for me. Around two hours later I got another email from freelancer.com, directly addressed to me and written by a person. I took the liberty of asking this person how in god's name he figured out that this job offer would fit me, as nothing was written about what was to be done. I got no answer, but this time I said to myself: OK, let's bid. And I did. So then I got a clearer description of the job, and it is this:

I took the liberty of asking the guy when he would pay, after he got the information. Quite funny to me: I was expected to work for nothing as well ^^. I was already planning to publish this and blow the whistle, but as a civilized person I gave freelancer.com the opportunity to answer whether they find this morally right and fitting for their projects. So I contacted support. After explaining what happened, I got a link to the grounds on which projects get rejected. So if I am seeing that right, freelancer.com has nothing against being used for such kinds of “jobs”. The first support employee wanted the link to the project so that the “admins” could have a look. She got it, and after several minutes she closed the chat, claiming she had not heard from me. OK, next attempt: after explaining again, the support employee contacted the legal team and the project got deleted, but for what reason?

Why did this take soooooooooo long? One and a half hours of communicating with freelancer.com's support, and the ending is interesting: it is morally totally OK, yet it got deleted for being wrongly described. The funny thing to me is that they deleted the project (indeed they did) but kept the “customer” (under observation, hahaa). Judge this all by yourself. For myself, I do not find it morally right that such platforms can be used for this, and that nothing clear is written in the project rules about this kind of stuff. I also find it funny that the guy was not deleted; that gives him the chance to communicate with bidders, so in the end oppositional groups and persons might get reported to him. Judge that by yourself. And the sad thing is just that I still need some work to do :D

All systems go

Posted by Fedora Infrastructure Status on January 04, 2018 05:42 AM
New status good: Everything seems to be working. for services: Ask Fedora, Ipsilon, Badges, Blockerbugs, Package Updates Manager, Fedora Infrastructure Cloud, COPR Build System, Darkserver, Documentation website, Fedora elections, Account System, Fedora Messaging Bus, Fedora Calendar, FedoraHosted.org Services, Fedora pastebin service, FreeMedia, The Koji Buildsystem, Koschei Continuous Integration, Kerberos, Mailing Lists, Mirror List, Mirror Manager, Fedora Packages App, Fedora People, Package Database, Package maintainers git repositories, Tagger, Fedora websites, Fedora Wiki, Zodbot IRC bot

There are scheduled downtimes in progress

Posted by Fedora Infrastructure Status on January 04, 2018 01:41 AM
New status scheduled: Scheduled updates in progress for services: Ask Fedora, Ipsilon, Badges, Blockerbugs, Package Updates Manager, Fedora Infrastructure Cloud, COPR Build System, Darkserver, Documentation website, Fedora elections, Account System, Fedora Messaging Bus, Fedora Calendar, FedoraHosted.org Services, Fedora pastebin service, FreeMedia, The Koji Buildsystem, Koschei Continuous Integration, Kerberos, Mailing Lists, Mirror List, Mirror Manager, Fedora Packages App, Fedora People, Package Database, Package maintainers git repositories, Tagger, Fedora websites, Fedora Wiki, Zodbot IRC bot

More fun with fonts

Posted by Matthias Clasen on January 04, 2018 01:04 AM

Just before Christmas, I spent some time in New York to continue font work with Behdad that we had begun earlier this year.

As you may remember from my last post on fonts, our goal was to support OpenType font variations. The Linux text rendering stack has multiple components: freetype, fontconfig, harfbuzz, cairo, pango. Achieving our goal required a number of features and fixes in all these components.

Getting all the required changes in place is a bit time-consuming, but the results are finally starting to come together. If you use the master branches of freetype, fontconfig, harfbuzz, cairo, pango and GTK+, you can try this out today.

Warm-up

But beyond variations, we want to improve font support in general. To start off, we fixed a few bugs in the color Emoji support in cairo and GTK+.

Polish

Next came small improvements to the font chooser, such as a cleaner look for the font list, type-to-search and maintaining the sensitivity of the select button:

Video: https://blogs.gnome.org/mclasen/files/2018/01/Screencast-from-01-03-2018-035205-PM.webm

Features

I also spent some time on OpenType features, and making them accessible to users.  When I first added feature support in Pango, I wrote a GTK+ demo that shows them in action, but without a ready-made GTK+ dialog, basically no applications have picked this up.

Time to change this! After some experimentation, I came up with what I think is an acceptable UI for customizing features of a font:

Video: https://blogs.gnome.org/mclasen/files/2018/01/Screencast-from-01-03-2018-063819-PM-3.webm

It is still somewhat limited since we only show features that are supported by the selected font and make sense for entire documents or paragraphs of text.  Many OpenType features can really only be selected for smaller ranges of text, such as fractions or subscripts. Support for those may come at a later time.

Part of the necessary plumbing for making this work nicely was to implement the font-feature-settings CSS property, which brings GTK+ closer to full support for level 3 of the CSS font module. For theme authors, this means that all OpenType font features are accessible from CSS.

One thing to point out here is that font feature settings are not part of the PangoFont  object, but get specified via attributes (or markup, if you like). For the font chooser, this means that we’ve had to add new API to return the selected features: pango_font_chooser_get_font_features(). Applications need to apply the returned features to their text by wrapping them in a PangoAttribute.

Variations

Once we had this ‘tweak page’ added to the font chooser, it was the natural place to expose variations as well, so this is what we did next. Remember that variations define a number of ‘axes’ for the font, along which the characteristics of the font can be continuously changed. In UI terms, this means that we add sliders similar to the one we already have for the font size:

Video: https://blogs.gnome.org/mclasen/files/2018/01/Screencast-from-01-03-2018-065307-PM.webm

Again, fully supporting variations meant implementing the corresponding  font-variation-settings CSS property (yes, there is a level 4 of the CSS fonts module). This will enable some fun experiments, such as animating font changes:

Video: https://blogs.gnome.org/mclasen/files/2018/01/Screencast-from-01-03-2018-070645-PM.webm

All of this work would be hard to do without some debugging and exploration tools. gtk-demo already contained the Font Features example. During the week in New York, I’ve made it handle variations as well, and polished it in various ways.

To reflect that it is no longer just about font features, it is now called Font Explorer. One fun thing I added is a combined weight-width plane, so you can now explore your fonts in 2 dimensions:

Video: https://blogs.gnome.org/mclasen/files/2018/01/Screencast-from-01-03-2018-074638-PM.webm

What’s next

As always, there is more work to do. Here is an unsorted list of ideas for next steps:

  • Backport the font chooser improvements to GTK+ 3. Some new API is involved, so we’ll have to see about it.
  • Add pango support for variable families. The current font chooser code uses freetype and harfbuzz APIs to find out about OpenType features and variations. It would be nice to have some API in pango for this.
  • Improve font filtering. It would be nice to support filtering by language or script in the font chooser. I have code for this, but it needs some more pango API to perform acceptably.
  • Better visualization for features. It would be nice to highlight the parts of a string that are affected by certain features. harfbuzz does not currently provide this information though.
  • More elaborate feature support. For example, it would be nice to have a way to enable character-level features such as fractions or superscripts.
  • Support for glyph selection. Several OpenType features provide (possibly multiple) alternative glyphs,  with the expectation that the user will be presented with a choice. harfbuzz does not have convenient API for implementing this.
  • Add useful font metadata to fontconfig, such as ‘Is this a serif, sans-serif or handwriting font ?’ and use it to offer better filtering
  • Implement @font-face rules in CSS and use them to make customized fonts first-class objects.

Help with any of this is more than welcome!

Protect your Fedora system against Meltdown

Posted by Fedora Magazine on January 04, 2018 12:06 AM

You may have heard about Meltdown, an exploit that can be used against modern processors (CPUs) to maliciously gain access to sensitive data in memory. This vulnerability is serious, and can expose your secret data such as passwords. Here’s how to protect your Fedora system against the attack.

Guarding against Meltdown

New kernel packages contain fixes for Fedora 26 and 27 (kernel version 4.14.11), as well as Rawhide (kernel 4.15 release candidate). The maintainers have submitted updates to the stable repos. They should show up within a day or so for most users.

To update your Fedora system, use this command once you configure sudo. Type your password at the prompt, if necessary.

sudo dnf --refresh update kernel

Fedora provides worldwide mirrors at many download sites to better serve users. Some sites refresh their mirrors at different rates. If you don’t get an update right away, wait until later in the day.

If your system is on Rawhide, run sudo dnf update to get the update.

Then reboot your system to use the latest kernel.

Fedora Atomic Host

The fixes for Fedora Atomic Host are in ostree version 27.47. To get the update, run this command:

atomic host upgrade

Then reboot your system. You can read more details on the Project Atomic blog.

A note on Spectre

Spectre is the common name for another serious vulnerability that exploits both processor and software design to maliciously expose secret data. Work is ongoing by upstream developers, vendors, and researchers to mitigate this vulnerability. The Fedora team will continue to monitor their progress and notify the public about updates as they become available.

Fedora 27 : Fix your distro with package-cleanup command.

Posted by mythcat on January 03, 2018 02:24 PM
Happy New Year 2018 !

A new beginning for us Fedora users. I prefer to write about the things we all use in Fedora that may be less well known to new readers.

Over time, as Fedora develops, old kernels pile up alongside the newly installed ones.
The usual reasons for removing kernels are limited disk space, fixing problems, and finding out what is wrong with your Fedora install.
The first step is to see which kernels are installed; use this command:
#rpm -q kernel
Install the tool package named dnf-utils (a collection of add-on tools for dnf):
#dnf install dnf-utils
Let's start with this command, which cleans up packages that are seemingly installed more than once:
#package-cleanup --cleandupes
If there’s any remaining trouble with the yum database, you can see it with this command:
#package-cleanup --problems
To remove installed kernels left over from older Fedora releases, use this command:
#package-cleanup --oldkernels --count=2
... on Fedora 27, use this command:
#package-cleanup --oldkernels 2
To obtain a list of orphaned packages currently residing on the system:
#package-cleanup --leaves

Best of 2017: articles for System Administrators

Posted by Fedora Magazine on January 03, 2018 01:00 PM

It has been a full year here at the Fedora Magazine, and now that it is over, we are looking back at some of the awesome content from our contributors. Here are some of the best articles from our contributors that are useful for system administrators.

Installing WordPress on Fedora

Looking for a step-by-step tutorial on setting up WordPress on your Fedora server? Look no further:

How to install WordPress on Fedora


Configuring Sudo

This awesome quick tip walks you through setting up sudo, either at install time or after the fact:

Configure your Fedora system to use sudo


Installing Apache

Apache is available in the official Fedora repos. This article shows you how easy it is to set up a web server on Fedora.

How to install Apache web server on Fedora


How to boot to an earlier Kernel

On rare occasions, though, a new kernel can bring an issue with it. You might need to revert to an older one to keep things working until the bug is fixed. This article shows you how.

How to boot an earlier kernel on Fedora


Changing Kernel options

Learn how kernel configurations are generated and how to best make changes for a custom kernel.

Changing Fedora kernel configuration options



The header image for this post contains Laurel Wreath created by Gabriella Fono from the Noun Project & the server icon created by Aybige from the Noun Project.

Using Haven app to secure your belongings

Posted by Kushal Das on January 03, 2018 04:38 AM

On 22nd December, Edward Snowden (President, board of Freedom of the Press Foundation) announced a new project called Haven, which is built in collaboration between The Guardian Project and Freedom of the Press Foundation. Haven is an Android app which will turn any Android phone into a monitoring system to watch over your laptop, or your house.

The problem Haven is trying to solve is an old one. How do you make sure that no one is tampering with your hardware (or secretly searching your house) while you are away? There is no easy and 100% secure solution, but Haven enables us to see and record what is happening. It uses all the available sensors, including the microphones (generally there are 3 of them), the accelerometer, and the camera.

How to install Haven on your phone?

I’ve been wanting to try this app for some time, but I didn’t have any old Android phones. So yesterday, as part of the new year celebration, I went and bought a new Android phone (around $100) to install Haven. But remember that Haven can be installed on cheap $50 burner Android phones too (and this is one of the goals of the project). So, feel free to use whatever is available to you.

The project is still in Beta state, and it is available on Google Play Store, and F-Droid store (nightly beta builds). Remember that now there are fake Haven apps in the Google Play Store, so check twice before you install. The original app is published by The Guardian Project.

If you want to use F-Droid like me, add a new repository with the following URL. You can do this from the F-Droid settings, in the repositories section.

https://guardianproject.github.io/haven-nightly/fdroid/repo/

After adding the repository, refresh all the repositories by clicking the refresh button, then you can install the latest Haven. I have installed the version mentioned in the following screenshot. Remember that Haven can use another app called Orbot to provide remote access to the logs over Tor, but the Orbot from the Play store kept crashing for me, so I installed the latest Orbot (15.5.1-RC-2-multi-SDK23) from the F-Droid store. I am using the 0.1.0-beta-7 version of Haven.

Configuring Haven

When you start Haven, a greeter window will welcome you. Swipe left to move through the windows of the configuration wizard.

In the first configuration window, you will have to set the noise level that should fire an alert. This totally depends on where you want to keep the phone (on watch). You can start with the default value and then tweak it from there if you’re not getting the alerts you want.

Then you will have to set the motion level. This detects whether someone moves the phone. For example, if you keep the phone on top of your laptop or a document file, there is no easy way to access the laptop or document without moving the phone first.

Next, you can provide a phone number where you may want to receive notifications, either over SMS or Signal messenger.

After the initial configuration wizard, you can click on the settings button in the application. The first thing to do here is to set which number Haven should use to send Signal notifications.

You will need two phone numbers with Signal enabled. One is your primary number, where you will receive the notifications. You will put this number in the Notification Number (Remote). The second number is which Haven will use to send notifications. Put this number to the Signal Number (Local). Best way is to put the second SIM into the same phone of Haven.

Next, click on the REGISTER button. The Signal app on that number will receive a verification code over SMS, you will have to enter that after clicking the VERIFY button.

You can also enable remote access over Tor, just click on the checkbox. This will open the Orbot app, and then come back to the settings screen after Orbot connects to the Tor network.

Remember, you can always come back to the settings and change the values as required. Soon you will find that you will have to do that so that app can adjust to various environmental noises etc.

How to use the app?

By default the app has a 30 second timer, so you can make sure that the phone is in a stable place before monitoring starts; then click on the START NOW button. When the timer runs out, the app will start monitoring for any noise, light, movement, or vibration to trigger the alarm.

I kept trying to open the door of my office room without any noise, but the motion detector always caught me entering the room. I kept Haven activated and went to sleep in the afternoon. But first a very loud helicopter, then a few superbikes, and finally some dogs made sure that the system triggered on noise every other minute, so I had to increase the noise level in the settings. Though it was fun to hear the recordings on my iPhone, which Haven sent to me over Signal.

The next time you start the app, you will find the log entries, and you can click on the play button at the bottom-right corner to start it again. Below is a photo taken by the app while I tried to enter the office room.

Can Haven solve all of my physical security issues?

No, but it will record whatever it sees or hears. There are ways to block radio signals (to make sure that Haven cannot send out any notification), but that is an expensive step for an attacker to take. You can keep the phone inside your hotel locker to record if anyone opens the locker, or make it watch the hallway of your house. Government agencies love to see what is inside our computers and houses, but they don’t like getting recorded while doing so.

How can I help?

Haven is an open source application; the source code is hosted on GitHub. Feel free to submit issues, write blog posts, and make people aware of the application. If you can write Android code, you are most welcome to submit patches to the project. Every form of contribution counts, so don’t hesitate.

You can read more about the project in this post from Micah Lee.

  • Update 2018/01/03: Screenshot of configuration window updated for beta7 release

/happy new year

Posted by Paul Mellors [MooDoo] on January 02, 2018 06:37 AM

Hello, welcome, how are you?

It’s a day or so late, but I wanted to wish everyone a happy new year. 2017 was a very busy year and I suspect 2018 will be equally busy, but I wanted to extend a hand and say hi, how are you?

Feel free to come say hello. Nothing very exciting here, but a good ear is sometimes all that’s required.

Enjoy 2018, I hope to see some of you real soon.


2017 blog review

Posted by Kushal Das on January 01, 2018 05:42 PM

Around December 2016, I decided to work more on my writing. After asking a few friends, and also reading the suggestions from the masters, it boiled down to one thing: one has to write more. There is no shortcut.

So I tried to do that throughout 2017. I found early morning was the easiest time for me to read and write, as there is not much noise, and most importantly, Py still sleeps :).

The biggest question at the start was what to write about. I tried to write about whatever I found useful to me, or things I was excited about. I wrote using FocusWriter (most of the time), and saved the documents as plain text files (as I use the Markdown format in my blog). I also received help from many of my friends, who were kind enough to review my writing. Having a second pair of eyes on your writing is really important, as they can not only help find the errors, but also show you better ways to express yourself.

One of my weak points (from childhood) is a small stock of words to express myself. But that also means my sentences do not have any words whose meaning one has to look up.

If I just look at the numbers, I wrote 60 blog posts in 2017, which is only 7 more than in 2016. But the number of views of the HTML pages has more than doubled.

Did your writing skills improve a lot?

The answer is no. But writing is now much easier than ever. I can sit down with any of my mechanical keyboards and just type out the things on my mind.

If you ask me about one single thing to read on this topic, I will suggest On Writing by Stephen King.

One thing I still cannot do on time is reply to emails. I am kind of drowned in too many emails. I am trying to slowly unsubscribe from the various lists I have joined over the years. I hope you will find the future blog posts useful in different ways.

F27-20171226 updated lives released.

Posted by Ben Williams on January 01, 2018 05:33 PM

The Fedora Respins SIG is pleased to announce the latest release of updated Fedora 27 Live ISOs, carrying the 4.14.8-300 kernel.

This set of updated ISOs will save about 800 MB of updates after a new install.

These can be found at http://tinyurl.com/live-respins. We would also like to thank the following IRC nicks for helping test these ISOs: Bob, dowdle, short-bike, Southern_Gentlem.


J.R.R. Tolkien: The Silmarillion

Posted by Ingvar Hagelund on December 29, 2017 04:27 PM

I read Tolkien’s “canon”, that is, The Hobbit, The Lord of the Rings, and The Silmarillion, around Christmas every year. So also this year.

One of the most fascinating stories in The Silmarillion is of course the story of Túrin Turambar. He is regarded as one of the major heroes of his age. At the Council of Elrond, Elrond himself lists the great men and elf-friends of old: Hador and Húrin and Túrin and Beren. But reading through The Silmarillion, there are few among mortal men who also brought so much pain and disaster to the elves. While a great war hero, Túrin was also responsible for the slaying of the greatest hunter of the elves, Beleg Cúthalion, the strong bow. Being the war hero, he turned the people of Nargothrond away from the wisdom of their history, and even their king, and revealed the hidden kingdom to the enemy. How many elves were cruelly slain or taken into captivity in Angband because of Túrin’s pride? Thousands! Perhaps even tens of thousands? So how come the elves, ages later, still reckoned Túrin son of Húrin one of the great elf-friends?

In a Nordic saga style stunt, Túrin finally slew his greatest enemy, Glaurung the great fire-breathing dragon. Glaurung had been a continuous danger to all peoples of Middle-earth, and the end of that worm was of course a great relief to all the elves, even Elrond’s ancestors, the kings of Doriath and Gondolin. Also, we must remember that the lives of the elves are different from those of men. When the elves’ bodies die, their spirits go to Mandos, where they sit in the shadow of their thought, and from where they may even return, like Glorfindel of both Gondolin and Rivendell. But when men die, they go somewhere else, and are not bound to the world. It seems that elves are more willing to forgive and let grief rest for wisdom over time than is men’s wont. Even the Noldor who survived the passing of the Helcaraxë forgave and reunited with the Noldor of Fëanor’s people who had left them at the burning of the ships at Losgar.

Perhaps that is one of the lessons of the tragic story of Túrin: out of all his unhappy life, good things happened, and afterwards, the elves forgave and even mourned him and his family.

Best of 2017: command line & terminal articles

Posted by Fedora Magazine on December 29, 2017 08:00 AM

It has been a full year here at Fedora Magazine, and now that it is nearing its end, we are looking back at some of the awesome content from our contributors. We feature many articles throughout the year that are focused on helping people get the most out of the command line. Here are some of the best articles from our contributors that cover command line tips and tricks.

Taskwarrior

Taskwarrior is a flexible command-line task management program, allowing you to manage your TODO list from the command line. This article covers the basic commands, a handful of more advanced commands, as well as some basic configuration settings.

 

Getting Started with Taskwarrior

https://fedoramagazine.org/getting-started-taskwarrior/
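If you have never used Taskwarrior, the flavour of those basic commands is roughly this (the task below is a made-up example, not one from the article):

task add "Write year-end review" due:friday   # create a task with a due date
task list                                     # show pending tasks
task 1 start                                  # mark task 1 as started
task 1 done                                   # complete task 1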

Argbash

If you write or maintain non-trivial bash scripts, you might want to check out this article. Argbash is a utility to make bash scripts accept command-line arguments in a standard and robust way.

Improve your Bash scripts with Argbash

https://fedoramagazine.org/improve-bash-scripts-argbash/

Voice synth from the command line

Add speech to your Fedora system

https://fedoramagazine.org/add-speech-fedora-system/

GNU Nano

Learn the basics of Nano, a minimalist console text editor.

GNU nano: a minimalist console editor

https://fedoramagazine.org/gnu-nano-minimalist-console-editor/

Try the Tilix terminal emulator

If you’ve been in a terminal for a while and want to try something new, why not look at Tilix? Tilix is a tiling terminal emulator that lets you split your terminal window in different ways at once. It also follows the GNOME Human Interface Guidelines to be as user-friendly as possible. Learn how to get started with Tilix in Fedora 26 in this article.

Try Tilix — a new terminal emulator in Fedora

https://fedoramagazine.org/try-tilix-new-terminal-emulator-fedora/


The header image for this post contains Laurel Wreath created by Gabriella Fono from the Noun Project & the command prompt icon created by Arthur Shlain from the Noun Project.

URL shorteners

Posted by Casper on December 29, 2017 05:32 AM

Ah, URL shorteners, the TinyURLs and the BitLys, what a fine invention. Originally invented so that a URL could be quickly retyped by hand, they can also be used to hide URLs that you only discover after having clicked on them.

I'm not criticizing good-faith uses; I use them myself when I have to go into troubleshooting mode, when everything has collapsed and there is neither mouse nor clipboard left for copy-pasting. It may seem like a strange habit, but those who give or receive help over an IRC channel will understand.

And then, beyond that, there are the excessive uses, where I frankly get the impression that people are trying to save the clipboard's energy by making it copy the shortest URLs possible. Well, sure, copying a long URL uses more energy, right? So much for laziness.

But if I'm motivated enough to write you an article, dear reader, you can guess that something serious has happened. There has been an incident.

Beyond the ill-intentioned use of collecting clicks by hiding the final address, there are the fraudulent uses: hiding a fake domain name that points to a phishing website. And now I am not happy. Not because it's uncool; plenty of people don't click on TinyURLs, and they are right. Not because I received a phishing email (whale phishing, mind you); I get tons of them. But because the address of the fraudulent site was hidden behind a shortened URL.

And there is no way to investigate if the domain name is masked. Well, there is: tracing the origin of the email, but that is not the subject...

So if one day, for whatever reason, you need to unmask an address hiding behind a TinyURL, here is the trick. It's such a trivial trick that I didn't even need to search to find it:

curl -I <shortened_url> | grep Location

And bam! You see everything.

The technique behind this mechanism is simple: URL shortener sites merely record the final address (the domain name one wants to hide), then provide a unique address that answers with HTTP status code 301, which in the language of the HTTP protocol means that the address has been permanently redirected.

The 301 status and all the details are sent in the headers of the HTTP response. Among the headers of the 301 response there is, of course, the final address (in the Location header), and that is how the redirection works.

So it's enough to display the HTTP headers of the shortened address, and the trick is done.
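If the shortened address chains through several redirects in a row, curl can follow the whole chain by itself; a small variation on the trick above, using curl's standard -L (follow redirects) and -s (silent) flags:

curl -sIL <shortened_url> | grep -i '^location'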

On the other hand, dear reader, I really did try to shorten this post, but that's a fail... :D

Happy holidays ;)

radv and vega conformance test status

Posted by Dave Airlie on December 29, 2017 03:24 AM
We've been passing the Vulkan conformance test suite 1.0.2 mustpass list on radv for quite a while now on the CIK/VI/Polaris cards. However, Vega hadn't achieved the same pass rate.





With a bunch of fixes I pushed this morning and one fix for all GPUs, we now have the same pass rate on all GPUs and 0 fails.

This means Vega on radv can now be submitted for conformance under Vulkan 1.0. I'm not sure when I'll get time to do the paperwork; maybe early next year sometime.


My journey to Rust

Posted by Zeeshan Ali on December 27, 2017 01:37 PM
As most folks who know me already know, I've been in love with the Rust language for a few years now, and in the last year I've been actively coding in Rust. I wanted to document my journey to how I came to love this programming language, in the hope that it will help people see the value Rust brings to the world of software; and if not, it will still be nice to have my reasons documented for my own sake.

When I started my professional career as a programmer 16 years ago, I knew some C, C++, Java and a bit of x86 assembly, but it didn't take long before I completely forgot most of what I knew of C++ and Java, and focused completely on C. There were a few different reasons that contributed to that:
  • Working with very limited embedded systems (I'm talking 8051) at that time, I quickly became obsessed with performance and C was my best bet if I didn't want to write all my code in assembly.
  • Shortly before I graduated, I got involved in GStreamer project and became a fan of GNOME and Gtk+, all of which at that time was in C. Talking to developers of these projects (who at that time seemed like gods), I learnt how C++ is a bad language and why they write everything in C instead.
  • A year after graduation, I developed a network traffic shaping solution, and the core of it was a Linux kernel module, which as you know is almost always done in C. Some years later, I also wrote some device drivers for Linux.
The more C code I wrote over the years, the more I developed this love/hate relationship with it. I just loved the control C gave me but hated the huge responsibility that came with it. With experience, I became good at avoiding common mistakes in C, but nobody is perfect, and if you can make a mistake, you eventually will. Another reason C didn't seem perfect to me was the lack of high-level constructs in the language itself. Copy-pasting boilerplate to write simple GObjects is not something most people enjoy. You end up avoiding organising your code in the best way, just to spare yourself the trouble of having to write GObjects.

So I've been passively, and sometimes actively, seeking a better programming language for more than a decade now. I got excited about a few different ones over the years, but there was always something very essential missing. The first language I got into was Lisp/Scheme (more specifically Guile), but the lack of type declarations soon started to annoy me a lot. I felt the same afterwards about all scripting languages, e.g. Python. Don't get me wrong, Python is a pretty awesome language for specific uses (e.g. writing tests, simple apps, quick prototyping, etc.), but with the lack of type declarations, any project larger than 1000 LOC can quickly become hard to maintain (at least it did for me).

Because of my love for strong typing, C# and Java did attract me briefly too. Not having to care about memory management in most cases not only helps developers focus on the actual problems they are solving, it indirectly allows them to avoid making very expensive memory management mistakes. However, if the developer is not managing the memory, the machine is doing it, and in the case of these languages, it does that at run time, not compile time. As a C developer and a big hater of waste in general, that is very hard to accept as a good idea.

There was another big problem with all these high-level languages: you can't nicely glue them with the C world. Sure, you can use libraries written in C from them, but the other way around is not a viable option (even if possible). That's why you'll find GNOME apps written in all these languages, but you will not find any libraries written in them.

Along came Vala


So along came Vala, which offered features that at that time (2007-2008) were the most important to me:
  • It is a strongly-typed language.
  • It manages the memory for you in most cases but without any run time costs.
  • It's high-level language so you avoid a lot of boilerplate.
  • GNOME was and still is the main target platform of Vala.
  • It compiles to C, so you can write libraries in it and use them from C code as if they were written in C. Because of GObject-introspection, this also means you can use them from other languages too.
Those who know me will agree that I was a die-hard (I'm writing this on Christmas Day, so that reference was mandatory I guess) fan of Vala for a while. I wrote two projects in Vala, and given what I knew then, I think it was the right decision. Some people will be quick to point out specific technical issues with Vala, but I think those could have been helped. There were two other reasons I ultimately gave up on Vala. The first one was that general interest in it started to decline after Nokia stopped funding projects using Vala, and so did its development.

 

Hello Rust


But the main reason for giving up was that I saw something better finally becoming a viable option (1.0 release) and gaining adoption in many communities, including GNOME. While Vala had many good qualities I mentioned above, Rust offered even more:
  • Firstly, the success of Rust is not entirely dependent on one very specific project or a tiny group of people, even if until now most of the development has come from one company. Every month, you hear of more communities and companies starting to depend on Rust, and that ensures its success even if Mozilla were to go down (not that I think that's likely) or stopped working on it, i.e. "it's too big to fail". If we compare to Vala, the community is a lot bigger. There are conferences and events happening around the world that are entirely focused on Rust, and there are books written on Rust. Vala never came anywhere remotely close to that level of popularity.

    When I would mention Vala in job interviews, interviewers would typically have no idea what I was talking about, but when I mention Rust, the typical response is "Oh yes, we are interested in trying that out in our company".
  • While Vala is already a safer language than C & C++, you still have null-pointer dereferencing and some other unsafe possibilities. With safety being one of the main focuses of the language design, Rust will not allow you to build unsafe code unless you mark it as such, and even then your possibilities for messing up are limited. Marking unsafe code as such makes it much easier to find the source of any issues you might have. Moreover, you usually only write unsafe code to interface with the unsafe C world.

    This is a very important point in my opinion. I really do not want to live in a world where simple human errors are allowed to cause disasters.
Admittedly, there are some benefits of Vala over Rust:
  • Ability to easily write GObjects.
  • Creating shared libraries.
However, some people have been working on the former, and the latter is already possible with some compromises and tricks.

 

Should we stop writing C/C++ code?


Ideally? Yes! Most definitely, yes. Practically speaking, that is not an option for most existing C/C++ projects out there. I can only imagine the huge amount of resources needed for porting large projects, let alone training existing developers in Rust. Having said that, I'd urge every developer to at least seriously consider writing all new code in Rust rather than C/C++.

Especially if you are writing safety-critical software (people implementing self-driving cars, militaries and space agencies, I'm looking at you!), laziness and mental inertia are not valid reasons to continue writing potentially unsafe code, no matter how smart and good at C/C++ you think you are. You will eventually make a mistake, and when you do, lives will be at stake. Please think about that.

 

Conclusion


I am excited about Rust and I'm hopeful the future is much safer thanks to people behind it. Happy holidays and have a fun and safe 2018!

Best of 2017: articles for desktop users

Posted by Fedora Magazine on December 27, 2017 08:00 AM

It has been a full year here at Fedora Magazine, and now that it is nearing its end, we are looking back at some of the awesome content from our contributors. We feature many articles throughout the year that are focused on people who use Fedora as a desktop OS. Here are some of the best articles from our contributors that are useful for Fedora Workstation users.

GNOME Photos

GNOME Photos is a photo library application that also — in recent years — gained basic editing features. This article walks you through the basics of editing images with GNOME Photos.

Enhancing photos with GNOME Photos

https://fedoramagazine.org/enhancing-photos-gnome-photos/

Using Nautilus Scripts

Scripts in Nautilus are not a new feature, but they are still super useful for automating quick tasks in the file browser.

Integrating scripts in Nautilus to perform useful tasks

https://fedoramagazine.org/integrating-scripts-nautilus/
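To give a taste of the mechanism (a hypothetical script, not one from the article): Nautilus runs executables placed in ~/.local/share/nautilus/scripts/ and hands them the selection through environment variables, so a minimal script can look like this:

#!/bin/bash
# Hypothetical example: save as ~/.local/share/nautilus/scripts/count-lines
# and mark it executable. Nautilus exports the selected paths, one per line.
# Script output is not displayed, so results are written to a file.
echo "$NAUTILUS_SCRIPT_SELECTED_FILE_PATHS" | while read -r f; do
    [ -f "$f" ] && wc -l "$f"
done > /tmp/line-counts.txt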

Backing up with Déjà Dup

We all know backing up your system is important. This article — part of a series about backing up your Fedora system — walks you through setting up backups with the Déjà Dup utility.

Easy backups with Déjà Dup

https://fedoramagazine.org/easy-backups-with-deja-dup/

Calculator in the Overview

The overview in Fedora Workstation is equipped with search providers that allow apps to show results when searching. The Calculator app uses this feature to let you do simple calculations quickly via the search.

Quick tip: calculator in the Fedora Workstation overview

https://fedoramagazine.org/calculator-overview/

Flatpak

Flatpak can roughly be described as a modern replacement for RPMs, but its impact is far more significant than simply offering a new packaging format. This article covers what Flatpak is, and how to install Flatpaks on your Fedora system.

Getting Started with Flatpak

https://fedoramagazine.org/getting-started-flatpak/
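For a flavour of what that looks like in practice (the Flathub remote and the app ID below are illustrative examples, not necessarily the ones the article uses):

flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
flatpak install flathub org.gnome.Calculator
flatpak run org.gnome.Calculator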

Installing more Wallpapers

The Fedora repos contain a treasure trove of wallpapers created for Fedora releases. This article shows you the wallpapers available from previous releases — going back to Fedora 8 — and what packages to install to get them on your current Fedora install.

How to install more wallpaper packs on Fedora Workstation

https://fedoramagazine.org/installing-extra-wallpaper-packs-to-fedora-workstation/


The header image for this post contains Laurel Wreath created by Gabriella Fono from the Noun Project & the laptop icon created by Saeful Muslim from the Noun Project.

Adding tags to my jekyll website

Posted by Ismael Olea on December 26, 2017 11:00 PM

This iteration of the olea.org website uses the Jekyll static website generator. From time to time I add some features to the configuration. This time I wanted to add tag support to my posts. After a quick search I found jekyll-tagging. Getting it working was relatively easy, although if you are not into Ruby you can misconfigure the gem dependencies, as I did. To add some value to this post, I’m sharing some tips I applied that are not written in the project readme file.

First: I added a /tag/ page with the cloud of used tags, in the form of a tag/index.html file with this content:

---
layout: page
permalink: /tag/
---

<div class="tag-cloud" id="tag-cloud">
  <a href="/tag/%40firma/" class="set-1">@firma</a> <a href="/tag/akademy/" class="set-1">Akademy</a> <a href="/tag/alepo/" class="set-1">Alepo</a> <a href="/tag/almeria/" class="set-2">Almería</a> <a href="/tag/android/" class="set-1">Android</a> <a href="/tag/barcelona/" class="set-1">Barcelona</a> <a href="/tag/bolivia/" class="set-1">Bolivia</a> <a href="/tag/cacert/" class="set-1">CAcert</a> <a href="/tag/canarias/" class="set-1">Canarias</a> <a href="/tag/centos/" class="set-1">CentOS</a> <a href="/tag/ceres/" class="set-1">Ceres</a> <a href="/tag/chronojump/" class="set-1">ChronoJump</a> <a href="/tag/cuba/" class="set-1">Cuba</a> <a href="/tag/cubaconf/" class="set-1">CubaConf</a> <a href="/tag/epf/" class="set-1">EPF</a> <a href="/tag/fnmt/" class="set-1">FNMT</a> <a href="/tag/fosdem/" class="set-1">FOSDEM</a> <a href="/tag/fudcon/" class="set-1">FUDCon</a> <a href="/tag/factura-e/" class="set-1">Factura-e</a> <a href="/tag/fedora/" class="set-3">Fedora</a> <a href="/tag/flock/" class="set-1">Flock</a> <a href="/tag/fuerteventura/" class="set-1">Fuerteventura</a> <a href="/tag/gdg/" class="set-1">GDG</a> <a href="/tag/gnome/" class="set-2">GNOME</a> <a href="/tag/gnome-hispano/" class="set-1">GNOME-Hispano</a> <a href="/tag/guadec/" class="set-1">GUADEC</a> <a href="/tag/galicia/" class="set-1">Galicia</a> <a href="/tag/geocamp/" class="set-1">GeoCamp</a> <a href="/tag/google/" class="set-1">Google</a> <a href="/tag/guademy/" class="set-1">Guademy</a> <a href="/tag/hacklab_almeria/" class="set-1">HackLab_Almería</a> <a href="/tag/hispalinux/" class="set-1">Hispalinux</a> <a href="/tag/ia/" class="set-1">IA</a> <a href="/tag/ibm/" class="set-1">IBM</a> <a href="/tag/kde/" class="set-1">KDE</a> <a href="/tag/kompozer/" class="set-1">Kompozer</a> <a href="/tag/l10n/" class="set-1">L10N</a> <a href="/tag/la_coruna/" class="set-1">La_Coruña</a> <a href="/tag/la_paz/" class="set-1">La_Paz</a> <a href="/tag/la_rioja/" class="set-1">La_Rioja</a> <a href="/tag/linuxtag/" class="set-1">LinuxTag</a> <a href="/tag/lucas/" class="set-1">LuCAS</a> <a href="/tag/lugo/" class="set-1">Lugo</a> <a href="/tag/mdd/" class="set-1">MDD</a> <a href="/tag/madrid/" class="set-1">Madrid</a> <a href="/tag/microsoft/" class="set-1">Microsoft</a> <a href="/tag/mono/" class="set-1">Mono</a> <a href="/tag/mexico/" class="set-1">México</a> <a href="/tag/nueva_york/" class="set-1">Nueva_York</a> <a href="/tag/ocsp/" class="set-1">OCSP</a> <a href="/tag/odf/" class="set-1">ODF</a> <a href="/tag/osl_unia/" class="set-1">OSL_UNIA</a> <a href="/tag/osor.eu/" class="set-1">OSOR.eu</a> <a href="/tag/oswc/" class="set-1">OSWC</a> <a href="/tag/omegat/" class="set-1">OmegaT</a> <a href="/tag/openid/" class="set-1">OpenID</a> <a href="/tag/openmind/" class="set-1">Openmind</a> <a href="/tag/pycones/" class="set-1">PyConES</a> <a href="/tag/renfe/" class="set-1">Renfe</a> <a href="/tag/scfloss/" class="set-1">SCFLOSS</a> <a href="/tag/soos/" class="set-3">SOOS</a> <a href="/tag/ssl/" class="set-1">SSL</a> <a href="/tag/sonic_pi/" class="set-1">Sonic_Pi</a> <a href="/tag/supersec/" class="set-1">SuperSEC</a> <a href="/tag/superlopez/" class="set-1">Superlópez</a> <a href="/tag/tldp-es/" class="set-1">TLDP-ES</a> <a href="/tag/ue/" class="set-1">UE</a> <a href="/tag/vpn/" class="set-1">VPN</a> <a href="/tag/valencia/" class="set-1">Valencia</a> <a href="/tag/x509/" class="set-1">X509</a> <a href="/tag/yorokobu/" class="set-1">Yorokobu</a> <a href="/tag/zaragoza/" class="set-1">Zaragoza</a> <a href="/tag/anotaciones/" 
class="set-1">anotaciones</a> <a href="/tag/calidad/" class="set-1">calidad</a> <a href="/tag/ciencia_abierta/" class="set-1">ciencia_abierta</a> <a href="/tag/conferencia/" class="set-3">conferencia</a> <a href="/tag/congreso/" class="set-2">congreso</a> <a href="/tag/correo-e/" class="set-1">correo-e</a> <a href="/tag/cultura/" class="set-1">cultura</a> <a href="/tag/docker/" class="set-1">docker</a> <a href="/tag/ensayo/" class="set-1">ensayo</a> <a href="/tag/entrevista/" class="set-1">entrevista</a> <a href="/tag/filosofia/" class="set-1">filosofía</a> <a href="/tag/flatpak/" class="set-1">flatpak</a> <a href="/tag/fpga_wars/" class="set-1">fpga_wars</a> <a href="/tag/git/" class="set-1">git</a> <a href="/tag/gvsig/" class="set-1">gvSIG</a> <a href="/tag/hardware/" class="set-1">hardware</a> <a href="/tag/historia/" class="set-1">historia</a> <a href="/tag/innovacion/" class="set-1">innovación</a> <a href="/tag/interoperabilidad/" class="set-1">interoperabilidad</a> <a href="/tag/jekyll/" class="set-1">jekyll</a> <a href="/tag/laptop/" class="set-1">laptop</a> <a href="/tag/linux/" class="set-1">linux</a> <a href="/tag/micro-educacion/" class="set-1">micro-educación</a> <a href="/tag/migas/" class="set-1">migas</a> <a href="/tag/node.js/" class="set-1">node.js</a> <a href="/tag/opensource/" class="set-5">opensource</a> <a href="/tag/p2p/" class="set-1">p2p</a> <a href="/tag/politica/" class="set-1">política</a> <a href="/tag/procomunes/" class="set-1">procomunes</a> <a href="/tag/propiedad_intelectual/" class="set-1">propiedad_intelectual</a> <a href="/tag/publciacion/" class="set-1">publciación</a> <a href="/tag/publicacion/" class="set-2">publicación</a> <a href="/tag/revolucion_digital/" class="set-2">revolución_digital</a> <a href="/tag/seguridad/" class="set-1">seguridad</a> <a href="/tag/servicios/" class="set-1">servicios</a> <a href="/tag/software/" class="set-3">software</a> <a href="/tag/sofware/" class="set-1">sofware</a> <a href="/tag/sostenibilidad/" class="set-1">sostenibilidad</a> <a href="/tag/video/" class="set-1">vídeo</a> <a href="/tag/web/" class="set-1">web</a> <a href="/tag/web-semantica/" class="set-1">web-semántica</a> <a href="/tag/etica/" class="set-1">ética</a>
</div>

Compared to the jekyll-tagging examples, I only use the tag cloud on that /tag/ page and not on the individual tag pages, because it’s a bit annoying when there are too many tag words.

And second, probably more interesting: showing the post tags on the HTML page:

<p class="post-meta">  tags:  <a href="/tag/jekyll/" rel="tag">jekyll</a> </p>

This is relevant because the jekyll-tagging readme example uses {{ post | tags }}, but for it to work inside the post page you should use {{ page | tags }}.
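So the layout snippet that produces the HTML above looks roughly like this (a minimal sketch, assuming the jekyll-tagging tags filter is loaded):

<p class="post-meta">tags: {{ page | tags }}</p>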

Yeah, this is not a great post, but maybe it can save you some time if you’re adding jekyll-tagging to your website.

Fedora November Worklog

Posted by Athos Ribeiro on December 26, 2017 03:11 PM
This is a list of the activities I have performed as a Fedora Project contributor in November, 2017. This was a slow month for my Fedora contributions (as December will be), since I am quite busy working on my master’s degree dissertation.

Packaging Activities

Updates:
  • Update rubygem-cri to 2.10.1
  • Update python-ufolib to 2.1.1
  • Update python-mutatormath to 2.1.0
  • Update flawfinder to 2.0.5
  • Update go-i18n to 1.10.0
  • Updated hugo to 0.

On Pytest-django and LiveServerTestCase with initial data

Posted by Alexander Todorov on December 26, 2017 09:20 AM

While working on Kiwi TCMS I've had the opportunity to learn in depth how the standard test case classes in Django work. This is a quick post about creating initial data and the order of execution!

Initial test data for TransactionTestCase or LiveServerTestCase

class LiveServerTestCase(TransactionTestCase), as the name suggests, provides a running Django instance during testing. We use that for Kiwi's XML-RPC API tests, issuing HTTP requests against the live server instance and examining the responses! For testing to work we also need some initial data. There are a few key items that need to be taken into account to accomplish that:

  • self._fixture_teardown() - performs ./manage.py flush which deletes all records from the database, including the ones created during initial migrations;
  • self.serialized_rollback - when set to True will serialize initial records from the database into a string and then load this back. Required if subsequent tests need to have access to the records created during migrations!
  • cls.setUpTestData is an attribute of class TestCase(TransactionTestCase) and hence can't be used to create records before any transaction based test case is executed.
  • self._fixture_setup() is where the serialized rollback happens, thus it can be used to create initial data for your tests!

In Kiwi TCMS all XML-RPC test classes have serialized_rollback = True and implement a _fixture_setup() method instead of setUpTestData() to create the necessary records before testing!
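As a minimal sketch of that pattern (illustrative code, not Kiwi TCMS's actual test class; the User record is just an example):

from django.contrib.auth.models import User
from django.test import LiveServerTestCase


class XmlRpcApiTestCase(LiveServerTestCase):
    # Serialize migration-created records before the flush and restore
    # them afterwards, so tests can still rely on them.
    serialized_rollback = True

    def _fixture_setup(self):
        # Let Django perform the serialized rollback first ...
        super(XmlRpcApiTestCase, self)._fixture_setup()
        # ... then create the initial records this test class needs.
        self.tester = User.objects.create_user('tester', 'tester@example.com')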

NOTE: you can also use fixtures in the above scenario but I don't like using them and we've deleted all fixtures from Kiwi TCMS a long time ago so I didn't feel like going back to that!

Order of test execution

From Django's docs:

In order to guarantee that all TestCase code starts with a clean database, the Django test runner reorders tests in the following way:

  • All TestCase subclasses are run first.
  • Then, all other Django-based tests (test cases based on SimpleTestCase, including TransactionTestCase) are run with no particular ordering guaranteed nor enforced among them.
  • Then any other unittest.TestCase tests (including doctests) that may alter the database without restoring it to its original state are run.

This is not of much concern most of the time, but it becomes important when you decide to mix and match transaction and non-transaction based tests into one test suite. As seen in Job #471.1, the tcms/xmlrpc/tests/test_serializer.py tests errored out! If you execute these tests standalone they all pass! The root cause is that these serializer tests are based on Django's test.TestCase class and are executed after a test.LiveServerTestCase class!

The tests in tcms/xmlrpc/tests/test_product.py will flush the database, removing all records, including the ones from the initial migrations. Then, when test_serializer.py is executed, it calls its factories, which in turn rely on the initial records being available, and produces an error because these records have been deleted!

The reason for this is that pytest doesn't respect the order of execution for Django tests! As seen in the build log above, tests are executed in the order in which they were discovered! My solution was to not use pytest (I don't need it for anything else)!

At the moment I'm dealing with strange errors/segmentation faults when running Kiwi's tests under Django 2.0. It looks like the http response has been closed before the client side tries to read it. Why this happens I have not been able to figure out yet. Expect another blog post when I do.

Thanks for reading and happy testing!

Rawhide notes from the trail, the holiday edition

Posted by Kevin Fenzi on December 25, 2017 09:44 PM

Happy holidays everyone! A few notes for those riding the rawhide trail or just joining those of us who are.

  • NetworkManager has finally dropped libnm-glib (deprecated 3 years ago now). There are proposed gnome-shell changes being worked on to switch to libnm, but they have not yet landed. This means that if you connect to VPNs via gnome-shell, things aren’t going to work. As a workaround for now, use nmcli: nmcli c up ‘your vpn name here’ --ask. More info at https://lists.fedoraproject.org/archives/list/devel@lists.fedoraproject.org/message/4YJIWJXDMND2VL7KGG2C6UNE7RJMHJEI/ and https://bugzilla.gnome.org/show_bug.cgi?id=789811
  • Better SATA power management is being enabled. See https://lists.fedoraproject.org/archives/list/devel@lists.fedoraproject.org/message/T2STBDPXKZ7DHC7GS6VLM3ESYI6RHVDM/ There is a small chance of disk corruption, so this is a great time to check your backups. If you have a laptop with a SATA SSD you should expect some nice power savings from the change.
  • Additionally, Bluetooth power saving was enabled a while back. If you find your bluetooth no longer working you can boot with btusb.enable_autosuspend=n. (My bluetooth on my Yoga 920 currently needs this change reverted. Tracked in https://bugzilla.redhat.com/show_bug.cgi?id=1514836 )
  • I reinstalled the armv7 compose builders a few days ago with Fedora 27, and since then they have done great for rawhide composes. With Fedora 26 they were sometimes hanging, causing rawhide composes to hang until they were manually rebooted. If they keep working well we can hopefully close https://bugzilla.redhat.com/show_bug.cgi?id=1504264 soon, but I would like to see it work for a few more days.

Here’s to a happy rawhide holiday for everyone!

J.R.R. Tolkien: The Lord of the Rings

Posted by Ingvar Hagelund on December 25, 2017 07:00 AM

I read Tolkien’s “canon”, that is, The Hobbit, The Lord of the Rings, and The Silmarillion, around Christmas every year. So also this year.

2017 was a great year for Tolkien fans. It was the 125th anniversary of the Professor’s birth, and the 80th anniversary of The Hobbit. We also got the magnificent news that Amazon will produce a TV series based on “previously unexplored stories based on J.R.R. Tolkien’s original writings“. So what storylines would those be? A reboot of the 2001-03 trilogy is out of the question, as Peter Jackson explored and extended more than enough already. So, what do we have left? A lot! Let’s have a look.

The Lord of the Rings and its appendices tell stories in several different timelines: long before the main story (as in hundreds, even thousands of years), just before the main story (a few decades), parallel to the main story, and after it.

One storyline could follow the ancient history of Gondor and Arnor. There are lots and lots of substories there. If I should pick one I would like to see, it would be the story of the kings Arvedui of Arnor and Eärnil II of Gondor, perhaps starting with the Firiel incident. There are lots of exciting points to pick up there: Gondor throne inheritance politics, the war against the Witch King and the prediction of his downfall, the flight to Forochel with the disastrous shipwreck in the ice, and the loss of the palantíri.

For the “near history” before the War of the Ring, the obvious choice would be a “young Aragorn” series, where we could follow Aragorn in his many guises: riding with the Rohirrim, going on raids with Gondor against Harad, and in constant conflict with Denethor. And his love life, of course, with his meeting and very long-term relationship with Arwen. And speaking of Arwen, her family story is a good storyline, with the love of Celebrían and Elrond, the travels between Lorien and Rivendell, her abduction, and Elladan and Elrohir’s rescue of her from the orcs. Parallel to that, the story I would most love to see would be the story of Denethor. His tragic life is worth a season alone. Another storyline from the years just before the War of the Ring could be Balin’s attempt to retake Moria and build a colony of dwarves. Lots of gore and killing of goblins to depict!

Parallel to the War of the Ring, there is a lot going on that is merely mentioned in the book and completely forgotten in the movies: the fight in Dale, the Ents’ war against the orcs after the capture of Isengard, the loss of Osgiliath and Cair Andros, to name just a few.

And of course, even after the War of the Ring and the Return of the King, there are stories to follow up: Aragorn’s “negotiations” for peace with his neighbouring peoples, with armed battle as the alternative, supported by Eomer of Rohan; the sweet but bitter death of Aragorn and Arwen; the reign of King Eldarion.

I’m optimistic! This is going to be great!

Managing tasks, time, and making sure one takes a break: Integrating Taskwarrior, Timewarrior, and Gnome Pomodoro

Posted by Ankur Sinha "FranciscoD" on December 25, 2017 01:16 AM

With the new year come resolutions. On many a list there will be a determination to do better in the coming year: to be more organised, more efficient, more productive.

I'm quite organised myself. I have lists, calendars, reminders, budgets, and all of that. Being a FOSS person, my first thought, inevitably, is to see if there's a piece of software that would aid me.

This post documents how one can get Taskwarrior, Timewarrior, and Gnome Pomodoro to work together to manage tasks, track them, and break those long hours into smaller bits with regular breaks.

Taskwarrior helps manage tasks

For managing tasks, there's the rather excellent Taskwarrior. It's command line, and various user interfaces have been developed for it too. (Vit is one that provides a terminal interface with Vim-like keybindings, and there's a Vim plugin too.) One can even set up a Taskwarrior server to sync the data between different machines. There are a few hosted services that give out free Taskwarrior server accounts too. Perhaps the best bit is the excellent documentation. Taskwarrior really does make it easy to get things done.

Timewarrior tracks time spent on tasks

Taskwarrior is not meant to be a time tracker, and upstream says so quite plainly. In fact, upstream went ahead and wrote Timewarrior for that purpose entirely. Like Taskwarrior, Timewarrior is also a command line tool.

Integrating the two is quite easy, using a Taskwarrior hook, as documented here. Each time a task is started, or stopped in Taskwarrior, the hook calls Timewarrior to start or stop tracking the task too.

Note: to ensure that this hook is run before the Gnome Pomodoro hook that we set up in the next section, please save the hook file as ~/.task/hooks/on-modify.00-timewarrior
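For reference, here is a minimal sketch of what such an on-modify hook can look like (the upstream hook linked above does more, for example passing the task's tags and annotations along to Timewarrior; it follows the same pattern as the Gnome Pomodoro hook shown further below):

#!/usr/bin/env python2
# Minimal sketch of ~/.task/hooks/on-modify.00-timewarrior
import json
import os
import sys

# Make no changes to the task, simply observe.
old = json.loads(sys.stdin.readline())
new = json.loads(sys.stdin.readline())
print(json.dumps(new))

# Track the task's description as a Timewarrior tag.
if 'start' in new and 'start' not in old:
    os.system('timew start "%s"' % new['description'])
elif 'start' not in new and 'start' in old:
    os.system('timew stop')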

Gnome Pomodoro reminds us to take regular breaks

So, one can manage tasks and track the time spent working on them, and that's great. It was sufficient for me for quite a while, until I realised that I was spending too much time at my desk. What made it worse was the realisation that for us white-collar professionals, a majority of our lives will be spent at a desk, typing away on a computer. There's enough research to show that spending all those long hours working in a seated position is bad for one's health.

So, I went looking for the changes I should make to my work regime, and ran into the Pomodoro technique. The idea is to take short breaks at regular intervals. One can use these to get up and walk around a bit, and get those 10,000 steps in! There are plenty of tools that implement the Pomodoro technique. A simple timer works too. The one I settled on is Gnome Pomodoro, which integrates really well with Gnome Shell. Every 25 minutes, it'll remind the user to take a 5 minute break.

Now, let us integrate Gnome Pomodoro with both Taskwarrior and Timewarrior:

  • When a task is started using task <filter> start, Taskwarrior already begins to track it using the hook, and a Pomodoro should also be started.
  • When a Pomodoro is over and Gnome Pomodoro notifies of a break, Timewarrior should be paused too.
  • When the break is over, and another Pomodoro starts, Timewarrior should resume tracking the task.
  • When a task is stopped, Taskwarrior will stop tracking it via the hook already, and the Pomodoro should be stopped as well.

This is a very simple set up. A task must be started using Taskwarrior here, and each time Gnome Pomodoro pauses and resumes from breaks, the same task will be resumed unless it was stopped and another started.

It turned out to be quite easy because of how well these three tools have been designed. Here's a Taskwarrior hook for Gnome Pomodoro similar to the one for Timewarrior:

#!/usr/bin/env python2
# API is here: https://taskwarrior.org/docs/hooks.html
# To be saved at ~/.task/hooks/on-modify.01-gnome-pomodoro to ensure it is
# run after the timewarrior hook, which should be saved as
# ~/.task/hooks/on-modify.00-timewarrior
# Otherwise, this hook runs first, the Gnome Pomodoro actions fire before
# Timewarrior has been updated, and things get quite messy!
import json
import os
import sys

# Make no changes to the task, simply observe.
old = json.loads(sys.stdin.readline())
new = json.loads(sys.stdin.readline())
print(json.dumps(new))

# Start pomodoro when task is started
if 'start' in new and 'start' not in old:
    os.system('gnome-pomodoro --start')
# Stop pomodoro when a task is stopped
elif 'start' not in new and 'start' in old:
    os.system('gnome-pomodoro --stop')

It's called when a task is modified. It checks the old and new states. If a task is started, it starts gnome-pomodoro, and when it's stopped, it stops it. This is one direction.

The other direction requires some tinkering with Gnome Pomodoro to set up custom scripts. In the preferences, one must enable the "Custom actions" plugin:

A screenshot showing the plugin preferences in Gnome Pomodoro.

Then, a "Custom Actions" entry will be added to the preferences. We need to add two of them. The first, resumes Timewarrior tracking when the Pomodoro resumes:

A screenshot showing custom action that will resume timew after a break.

Similarly, the second stops Timewarrior when a break begins, or the user pauses the Pomodoro:

A screenshot showing custom action that will stop timew at the start of a break.

(If no tasks are active, Timewarrior doesn't do anything, so that case does not need to be handled separately.)

There are certain limitations to what commands can go in there, so I've used a shell script to implement the required logic:

#!/bin/bash
# save as ~/bin/track-timew.sh
# note that ~/bin/ must be in PATH

resume ()
{
    timew || timew continue
}

pause ()
{
    timew && timew stop
}

clean ()
{
    # sed only does greedy regex so it's slightly complicated
    # could use perl to make this a lot simpler because perl does non
    # greedy too.
    for entry in $(timew summary :ids | grep -o '@.*' | sed -E 's/(^@[[:digit:]]+[[:space:]]+)/\1 |/' | sed -E 's/([[:digit:]]+:[[:digit:]]+:[[:digit:]]+ )/| \1/' | sed 's/|.*|//' | sed -E 's/[[:space:]]{2,}/ /' | cut -d ' ' -f 1,4 | grep -E '0:0[01]:..' | cut -d ' ' -f 1 | tr '\n' ' '); do timew delete "$entry"; done
}

usage ()
{
    echo "$0: wrapper script around timewarrior to carry out common tasks"
    echo "For use with Gnome-Pomodoro's action plugin"
    echo
    echo "Usage: $0 <option>"
    echo
    echo "OPTIONS:"
    echo "-r    resume tracking of most recently tracked task"
    echo "-p    pause tracking"
    echo "-c    clean up short tasks (less than 2 minutes long)"
}

# check for options
if [ "$#" -eq 0 ]; then
    usage
    exit 1
fi

# parse options
while getopts "rpch" OPTION
do
    case $OPTION in
        r)
            resume
            exit 0
            ;;
        p)
            pause
            exit 0
            ;;
        c)
            clean
            exit 0
            ;;
        h)
            usage
            exit 1
            ;;
        ?)
            usage
            exit 1
            ;;
    esac
done

The script is quite simple, and I hope, self-explanatory too. I'll leave interpretation of the clean function to the reader ;)

That's all there is to it. There must be other ways of doing the same thing, possibly with different tools too, but this system required the least changes to my current workflow. Do remember that these tools can only aid us; it is we who need to show that bit of discipline to follow the plan through. I hope some will find it helpful, and may the new year be healthier and more productive for us all! :)

J.R.R. Tolkien: The Hobbit

Posted by Ingvar Hagelund on December 24, 2017 07:00 AM

I read Tolkien’s “Canon”, that is, The Hobbit, The Lord of the Rings, and The Silmarillion, every year around Christmas. This year is even The Hobbit’s 80th anniversary, and to celebrate, I have of course read through The Hobbit again.

So many have said so much about this book, so I’d rather show off my newest addition to my Tolkien bookshelf. This is the Swedish 1962 edition of The Hobbit, Bilbo, En Hobbits Äventyr (Bilbo, A Hobbit’s Adventure), and it has quite an interesting history.

In the 50s and 60s, Astrid Lindgren, maybe most famous for her children’s books about Pippi Longstocking, worked as an editor at the department for children’s literature at Rabén & Sjögren, who published Tolkien’s works in Sweden. Lindgren was very interested in Tolkien’s work, and while she later denied Tolkien as an inspiration, she published the quite Lord of the Rings-reminiscent Mio, my Son in 1954, and later the world-beloved classic children’s fantasy novels The Brothers Lionheart and Ronia, the Robber’s Daughter.

In the early 60s, Lindgren was not content* with the current Swedish translation of The Hobbit, Hompen (translation by Tore Zetterholm, 1947), and wanted to better it. So she opted for a new translation and got hold of Britt G. Hallqvist for the job. For illustrations, she contacted her friend Tove Jansson, now world famous for her Moomin Valley universe. Jansson had already had success with her Moomintrolls, and had previously made illustrations for a Swedish edition of Lewis Carroll’s classic poem Snarkjakten (The Hunting of the Snark, 1959), so a successful publication seemed likely.

Hallqvist translated, Jansson drew, Lindgren published it, and it flopped! Tolkien fans didn’t enjoy Jansson’s drawings much, and the illustrations were not used** again before 1994. By then, the 1962 version was cherished by Tove Jansson fans and Tolkien collectors all over the world, and it had become quite hard to find. The 1994 edition sold out in a jiffy. The illustrations were finally “blessed” by the Tolkien Estate when they were used for the 2016 Tolkien Calendar.

That 2016 Tolkien calendar, I’m afraid to say, I have not acquired (yet).

I was lucky and found a decent copy of the 1962 edition in a Japanese(!) bookstore on the Net. Now I LOVE this book. Its illustrations are absolutely gorgeous.


The destruction of Lake Town and the death of Smaug are my personal favourites.

This book makes a great addition to my ever-growing list of Hobbits.

It would be a pity to let this book stand alone without decent Janssonic company, so I searched for a few weeks, was lucky again, and found a nice copy of the mentioned Snarkjakten by Lewis Carroll, and an almost mint copy of the absolutely fantastic (in all meanings of that word) Swedish 1966 edition of Alice i underlandet (Alice in Wonderland). If you enjoy Alice, you will love Jansson’s illustrations, which even outshine her work on The Hobbit.

Jansson’s illustrations of Alice were later used in a lot of editions, among them Finnish, American, British, and Norwegian ones.

For an intensely interesting read about Jansson’s artistic work on these classics, read Olga Holownia’s essay at barnboken.net.

That’s it. Merry Christmas and happy Yuletide, everybody!

*) Neither was Tolkien himself. He especially disliked the translation of Elvish names into Swedish, like Esgaroth -> Snigelby (i.e. Snail Town!!!). Also interesting: Svensson, Louise, Lost in Translation? – A Comparative Study of Three Swedish Translations of J.R.R. Tolkien’s ‘The Hobbit’, Lund University 2016

**) Actually, there were other versions with Jansson’s illustrations: the Finnish Hobbit Lohikäärmevuori (The Dragon Mountain) from 1973, and the updated Finnish translation in 2003. The illustrations were also used in this year’s Finnish 80th anniversary edition of The Hobbit.

LinuXatUNI held last meeting of the year

Posted by Julita Inca Chiroque on December 23, 2017 10:01 PM

The local Linux community in Lima, Peru held its last meeting of the year today, sharing a breakfast. Peruvians usually have “chocolatada” (made with chocolate and milk) with panetón for the Christmas holidays, and we are not the exception. Thanks to the Linux Foundation we have new jackets, scarves and vests branded with the Linux Foundation logo.

After having our breakfast, instead of hacking, we ran “Linux” group dynamics to strengthen the relationships among the participants. They are students from different universities: PUCP, UNMSM, UNI, UTP and UNAC, in the picture! 🙂

The table games were followed by a game of physical contact and coordination as a group. We needed a big space for it, so we had no choice but the street. Thanks so much to all the students that have participated in LinuXatUNI during this year, and in previous rounds. Special thanks to the students from UNMSM, Martin Vuelta and Fiorella Effio, for their support during this year, as well as Toto, Solanch, and Leyla Marcelo for her work as a designer. Another thanks to the PUCP students who have been helping us for four years in a row: Giohanny Falla and Fabian Orccon 😀 I am extremely grateful for the support of the Linux Foundation, GNOME, Fedora, BacktrackAcademy and the LinuXatUNI work members for reaching out to Linux newcomers.

 You can see more pictures here!


Using b43 firmware on Fedora Atomic Workstation

Posted by William Brown on December 22, 2017 02:00 PM

Using b43 firmware on Fedora Atomic Workstation

My Macbook Pro has a Broadcom b43 wireless chipset. This is notorious for being one of the most annoying wireless adapters on Linux. When you first install Fedora you don’t even see “wifi” as an option, and unless you poke around in dmesg, you won’t find out how to enable b43 to work on your platform.

b43

The b43 driver requires proprietary firmware to be loaded, else the wifi chip will not run. There are a number of steps for this process found on the linux wireless page. You’ll note that one of the steps is:

export FIRMWARE_INSTALL_DIR="/lib/firmware"
...
sudo b43-fwcutter -w "$FIRMWARE_INSTALL_DIR" broadcom-wl-5.100.138/linux/wl_apsta.o

So we need to be able to write and extract our firmware to /usr/lib/firmware, and then reboot and our wifi works.

Fedora Atomic Workstation

Atomic WS is similar to Atomic Server, in that it’s a read-only ostree based deployment of Fedora. This comes with a number of unique challenges and quirks, but for this issue:

sudo touch /usr/lib/firmware/test
/bin/touch: cannot touch '/usr/lib/firmware/test': Read-only file system

So we can’t extract our firmware!

Normally Linux also supports reading from /usr/local/lib/firmware (which on Atomic IS writeable ...), but for some reason Fedora doesn’t allow this path.
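As an aside, the extra firmware search path the kernel honours can be inspected through sysfs; an empty value means only the built-in locations are used. A quick check (assuming the firmware loader is compiled into your kernel, which it normally is on Fedora):

cat /sys/module/firmware_class/parameters/path
# empty output: no custom firmware search path is configured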

Solution: Layered RPMs

Atomic has support for “rpm layering”. On top of the ostree image (which is composed of rpms) you can supply a supplemental list of packages that are “installed” at rpm-ostree update time.

This way you still have an atomic base platform, with read-only behaviours, but you gain the ability to customise your system. To achieve this, it must be possible to write to locations in /usr during rpm install.

This means our problem has a simple solution: create a b43 rpm package. Note that you can make this for yourself privately, but you can’t distribute it for legal reasons.

Get set up on Atomic to build the packages:

rpm-ostree install rpm-build createrepo
reboot

RPM specfile:

%define debug_package %{nil}
Summary: Allow b43 fw to install on ostree installs due to bz1512452
Name: b43-fw
Version: 1.0.0
Release: 1
License: Proprietary, DO NOT DISTRIBUTE BINARY FORMS
URL: http://linuxwireless.sipsolutions.net/en/users/Drivers/b43/
Group: System Environment/Kernel

BuildRequires: b43-fwcutter

Source0: http://www.lwfinger.com/b43-firmware/broadcom-wl-5.100.138.tar.bz2

%description
Broadcom firmware for b43 chips.

%prep
%setup -q -n broadcom-wl-5.100.138

%build
true

%install
pwd
mkdir -p %{buildroot}/usr/lib/firmware
b43-fwcutter -w %{buildroot}/usr/lib/firmware linux/wl_apsta.o

%files
%defattr(-,root,root,-)
%dir %{_prefix}/lib/firmware/b43
%{_prefix}/lib/firmware/b43/*

%changelog
* Fri Dec 22 2017 William Brown <william at blackhats.net.au> - 1.0.0
- Initial version

Now you can put this into a folder like so:

mkdir -p ~/rpmbuild/{SPECS,SOURCES}
<editor> ~/rpmbuild/SPECS/b43-fw.spec
wget -O ~/rpmbuild/SOURCES/broadcom-wl-5.100.138.tar.bz2 http://www.lwfinger.com/b43-firmware/broadcom-wl-5.100.138.tar.bz2

We are now ready to build!

rpmbuild -bb ~/rpmbuild/SPECS/b43-fw.spec
createrepo ~/rpmbuild/RPMS/x86_64/

Finally, we can install this. Create a yum repos file (under /etc/yum.repos.d/):

[local-rpms]
name=local-rpms
baseurl=file:///home/<YOUR USERNAME HERE>/rpmbuild/RPMS/x86_64
enabled=1
gpgcheck=0
type=rpm
Then layer the package:

rpm-ostree install b43-fw

Now reboot and enjoy wifi on your Fedora Atomic Macbook Pro!
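To verify that the layered firmware actually loaded after the reboot, something like the following should do; the exact dmesg text varies by kernel and firmware version:

dmesg | grep b43
# look for a line similar to: b43-phy0: Loading firmware version 478.104
nmcli device
# the wifi device should now be listed and manageable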

Improve your code searching skills with pss

Posted by Fedora Magazine on December 22, 2017 08:00 AM

Searching a code base is part of everyday developer activities. From fixing a bug, to learning a new code base, or checking how to call an API, being able to quickly navigate your way around a code base is a great skill to have. Luckily, we have dedicated tools to search code. Let’s see how to install and use one of them – pss.

What is pss?

pss is a command line tool that helps searching inside source code files. pss searches recursively within a directory tree, knows which extensions and file names to search and which to ignore, automatically skips directories you wouldn’t want to search in (for example .svn or .git), colors its output in a helpful way, and much more.

Installing pss

Install pss on Fedora with the following command:

 $ dnf install pss

Once the installation is complete, you can call pss in your terminal:

 $ pss

Calling pss without any argument or with the -h flag will print a detailed usage message.

Usage examples

Now that you have installed pss, let’s go through some usage examples:

 $ pss foo

This command simply looks for foo. You can be more restrictive and ask pss to look for foo only in python files:

 $ pss foo --py

and for bar in all other files:

 $ pss bar --nopy

Additionally, pss supports most of the well-known source file types; to get the full list, execute:

$ pss --help-types

You can also ignore some directories. Note that by default, pss will ignore directories like .git, __pycache__, .metadata and more.

$ pss foo --py --ignore-dir=dist

Furthermore, pss also gives you the possibility to get more context from your search using the following:

$ pss -A 5 foo

will display 5 lines of context after the matching word

$ pss -B 5 foo

will display 5 lines of context before the matching word

$ pss -C 5 foo

will display 5 lines of context before & after the matching word

If you would like to learn how to use pss with regular expressions and other options, more examples are available here.
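One last example: pss patterns are regular expressions, so you can combine a regex with the file type filters shown above. The pattern below is purely illustrative, matching Python function definitions whose names end in _cb:

 $ pss 'def\s+\w+_cb\(' --py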

And then kdelibs 2 is ready for base consumption…

Posted by Helio Chissini de Castro on December 21, 2017 09:58 PM

So, again in time for Xmas, I have basically done the base kdelibs 2.2.2 port. It is far from perfect, as stated in my README.md, but it can be perfected now, as the port of kdebase starts.

If someone asks why I’m doing this (allegedly) useless work, it is because I really want to restore KDE 2, and to improve my porting skills, which I think are valuable for any programmer.

I think that in the future, companies and organizations will need to port or maintain legacy C/C++ software, as already happened with COBOL software, and we need to be ready for this.

And I really love doing this; it is part of my KDE history…

So, for kdelibs, most of the tests work, dcopserver works perfectly, graphics work, so it is a little beyond a proof of concept.

Autotools proved to be a worthy adversary, but I found my way around it, so CMake it is.

The super repo is still on GitHub, but when I decide at some point that kdebase is done, I will request a proper place in our home base, the KDE Git repository.

KDE 2 Super Repo

A copy of the README.md follows, for the lazy ones:

Merry Xmas and a Happy New Year

 

KDE Restoration Project – KDE 2.2.2

This is a Software Engineering Archeology work. The intention is to keep the original KDE 2 working as long as possible on modern architectures (Unix and Linux only for now). A few premises guide this port:

  • Keep the original code as original as possible
  • Replace the current build system with a modern one. The actual choice was CMake, since I know it better and current KDE uses it

The current status:

Qt2 is done with some remarks:

  • There’s an issue, as Qt2 didn’t recognize ARGB visuals in those times (of course!). Thanks to Gustavo Boiko, who found the issue. So, if you do intend to run software like Qt Designer, export this on the command line: XLIB_SKIP_ARGB_VISUALS=1
  • Compilation depends on byacc. One of the sources is not ported to modern bison/flex. Thanks to @EXL for pointing this out

kdelibs is done with some remarks:

  • arts is not compiled yet. It has been my nemesis since I worked at Conectiva several years ago and is still a pain. Help welcome
  • Documentation is not generated. This is secondary and will be dealt with after kdebase
  • The install part is done, but it is not 100% proven to be done properly
  • libtool porting “should” work, but it is not properly tested.
  • Most software can be compiled directly from the super repo, but to test it, unfortunately, we need to run dcop and have the install directory properly set. This will be properly tested when the kdebase port starts (soon, I hope, crossing my fingers).

Building:

  • Clone the super repo: git clone --recursive git@github.com:heliocastro/kde2.git
  • Enter the directory: cd kde2
  • Create an out-of-source build dir (I usually use build): mkdir build
  • Enter the build dir and run cmake: cd build && cmake .. (see the combined session below)
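Put together, a build session might look like the following; note that the final make step is my addition, since the README itself stops at the cmake invocation:

git clone --recursive git@github.com:heliocastro/kde2.git
cd kde2
mkdir build
cd build
cmake ..
make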

My default compiler is Clang on Fedora Linux 27. At this moment I can’t remember all the required libraries, so for now you need to run cmake and see what is missing on your side.

I will be thankful for any help, being clear that this is probably a useless project, but it has some meaning for me, at least.

Again thanks to:

Gustavo Boiko – @boiko

@EXL


Fedora 27 : Firefox and selinux : sepolgen tool .

Posted by mythcat on December 21, 2017 09:00 PM
To write an actual SELinux policy for an application, you can get many of the permissions your application needs by running the sepolgen tool.
First, test whether it is installed in your Fedora distro.
I used Fedora 27 with SELinux set to Enforcing.
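If the sepolgen command is not found, dnf can tell you which package provides it; on Fedora 27 that should be policycoreutils-devel (worth double-checking with the provides query):

dnf provides '*/sepolgen'
sudo dnf install policycoreutils-devel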
If your application is named my_app, then use this command:
sepolgen --init  /path/to/my_app
The result of this command will be this:
my_app.fc
my_app.sh
my_app.if
my_app_selinux.spec
my_app.te
If your application will be an rpm package, you can delete my_app_selinux.spec and my_app.sh.
The file with extension .te is a Type Enforcement file.

About these five files, the Linux help tells us:

Type Enforcing File NAME.te 
This file can be used to define all the types rules for a particular domain.

Note: Policy generated by sepolicy generate will automatically add a permissive DOMAIN
to your te file. When you are satisfied that your policy works, you need to remove
the permissive line from the te file to run your domain in enforcing mode.

Interface File NAME.if
This file defines the interfaces for the types generated in the te file, which can
be used by other policy domains.

File Context NAME.fc
This file defines the default file context for the system, it takes the file types
created in the te file and associates file paths to the types. Tools like restorecon
and RPM will use these paths to put down labels.

RPM Spec File NAME_selinux.spec
This file is an RPM SPEC file that can be used to install the SELinux policy on to
machines and setup the labeling. The spec file also installs the interface file and
a man page describing the policy. You can use sepolicy manpage -d NAME to generate
the man page.

Shell File NAME.sh
This is a helper shell script to compile, install and fix the labeling on your test
system. It will also generate a man page based on the installed policy, and compile
and build an RPM suitable to be installed on other machines
Opening the my_app.te file, you will see something like this:
policy_module(my_app, 1.0.0)

########################################
#
# Declarations
#

type my_app_t;
type my_app_exec_t;
init_daemon_domain(my_app_t, my_app_exec_t)

# Please remove this once your policy works as expected.
permissive my_app_t;

########################################
#
# my_app local policy
#
allow my_app_t self:fifo_file rw_fifo_file_perms;
allow my_app_t self:unix_stream_socket create_stream_socket_perms;

domain_use_interactive_fds(my_app_t)
files_read_etc_files(my_app_t)
auth_use_nsswitch(my_app_t)
miscfiles_read_localization(my_app_t)
sysnet_dns_name_resolve(my_app_t)

The first line uses the name of the binary, which will also be the name of the policy, and a version:
policy_module(my_app, 1.0.0)
The next rows come with this:

type my_app_t;
type my_app_exec_t;
init_daemon_domain(my_app_t, my_app_exec_t)
- the unique type to describe this application is: my_app_t.
- SELinux tells us we’ll be executing this file with: my_app_exec_t.
- this program will run as a service: init_daemon_domain(my_app_t, my_app_exec_t).

The next row is about logging permission errors (while letting the application continue to run):
permissive my_app_t;

The next rows with allow are about how the application uses file permissions and whether the application will use unix stream sockets.
Don’t change them; a quick Google search will show more examples with this type of allow rule.
allow my_app_t self:fifo_file rw_fifo_file_perms;
allow my_app_t self:unix_stream_socket create_stream_socket_perms;

About these rows:
domain_use_interactive_fds(my_app_t)
files_read_etc_files(my_app_t)
auth_use_nsswitch(my_app_t)
miscfiles_read_localization(my_app_t)
sysnet_dns_name_resolve(my_app_t)

The domain_use_interactive_fds and term_use_all_terms interfaces support operations where SSH allocates a tty for the user (for example, the fifo_file allow rule supports the opposite).
my_app wants to read from the /etc folder, hence files_read_etc_files.
The auth_use_nsswitch interface also adds rules allowing access to NIS/YPBIND ports.
The miscfiles_read_localization interface is about localization code.

To better understand this tutorial, you can create a folder in your home directory and then test the tool on a different application from Fedora 27.
One good example: sepolgen --init /opt/firefox.
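Once the files are generated, the usual next step is to compile and load the module. This is the standard SELinux development workflow, which the generated my_app.sh script essentially wraps; a sketch:

# compile the policy module (requires the selinux-policy-devel package)
make -f /usr/share/selinux/devel/Makefile my_app.pp
# load it into the running policy
sudo semodule -i my_app.pp
# relabel the application according to the new file contexts
sudo restorecon -Rv /path/to/my_app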

Year in Review - Red Hat training activity

Posted by Susan Lauber on December 21, 2017 05:52 PM
I have been a certified instructor since the beginning of the program in 1999. I have contracts with delivery partners and am required to keep up on my skills and other information around the program. This is a roundup of information from 2017 which is relevant to this part of my world.


Red Hat Certification updated certification titles at the end of 2017.



Certification Magazine also did a "deep dive" but only managed to get male respondents. There are a lot of talented women out there with RHCE certification. They really missed a market.


Most of my deliveries for the year involved keeping current with the updates to DO280 (OpenShift Administration), RH403 (Satellite 6), and DO407 (Ansible). Those were mostly point release refreshes.

Red Hat Training expanded the OpenShift curriculum to include an introductory course (which is an absolutely required prerequisite!) common to both the administration and developer tracks. The administration course became an Admin I class and a new Admin II course was released in December. The updates to the developer track are in progress. Interestingly, the OpenShift resources page has not yet been updated to reflect this new course outline.

Ansible by Red Hat updated their offerings and also open-sourced the Tower project this year.

The Ansible team has been on the road offering short training sessions.

Red Hat Training also updated the DO407 course and expanded offerings with a DO409 Tower course.

The video courses that are a part of the Red Hat Learning Subscription - and which I have had the privilege to be involved with - were recognized for "Best Use of Video" by Brandon Hall Group Excellence in Learning.

Red Hat Training also continues to release introductory videos on their YouTube channel and courses in partnership with EdX.

I met some of the North America Red Hat Academy instructors at the annual instructor conference. I "herded cats" to get most of the instructors together for a pre-conference gathering, and several sales partners came to meet us too. It was supposed to be a chance for newer instructors and sales partners to "Ask an RHCA" about upper level training paths. Mostly we just sat around and talked about all sorts of things. It was a great evening! The EMEA team posted their training partner recognition in this blog posting. If the NA team announced the recognition from our conference, I missed it.


I also had one delivery of RH236 (Gluster) and one delivery of DO405 (Puppet with Satellite 6.1). These had not been updated since I last taught them, which creates its own preparation issue: comparing the content in the course to the currently released products.

Near the end of the year I was also "honored" with the responsibility of delivering the pilot of the updated RH318 (RHV 4.1) course. That course is available as of this week. Pilots are always more work, but the challenge does lead to finding all the cool new features of a product. I really like some of the new disk image management processes, and I am looking forward to the next update with some more self-hosted manager improvements. I think the WebUI is also still due to change.

On the agenda for the coming week is to finish my updated course reviews so I am ready for:
  • DO180, DO280, and DO380 - DO280 has had a minor update since I last taught it, and DO380 is a new course for me.
  • RH318 - A quick review of the final content plus a look ahead at what is coming from upstream for RHV4.2
  • DO409 - I still need to review the course materials for the new Ansible Tower class.
  • RH403 - Satellite 6.3 beta is now available. Time to check out the new features.
I also want to get involved in the Ansible Lightbulb project.

In many ways it was a light year for me in Red Hat training courses. I didn't even add a new certification! (I'll have to get on that for 2018). I was working on a curriculum project with another client much of the year and juggling some personal issues.

I still love teaching Red Hat Certification courses. They are some of the best out there, and when the students come in with the prerequisites we all learn a lot and have a good time. Work hard. Play hard.

Go learn stuff!

-SML

KubeVirt Status 0x7E1 (2017)

Posted by Fabian Deutsch on December 21, 2017 12:30 PM

Where do we stand with KubeVirt - this virtual machine management add-on for Kubernetes - at the end of the year?

We gained some speed again, but before going into what we are doing right now, let’s take a look at the past year.

Retro

The year started with publishing KubeVirt. The first few months flew by, and we were mainly driven by demoing and PoCing parts of KubeVirt. Quickly we were able to launch machines, and even to live migrate them.

We had the chance to speak about KubeVirt at a couple of conferences: FOSDEM 2017, devconf.cz 2017, KubeCon EU 2017, FrOSCon 2017. And our work was mainly around sorting out technical details and understanding how things could work - in many areas, with many open ends.

At that time we had settled on a CRD based solution, but were aiming at a user API server (to be used with API server aggregation), a single libvirtd in a pod including extensions to move the qemu processes into VM pods, and storage based on PVCs and iSCSI leveraging qemu’s built-in drivers. And we were also struggling with a nice deployment. To name just a few of the big items we were looking at.

Low-level changes and storage

Around the middle of the year we were seeing all the features which we would like to consume - raw block storage for volumes, device manager, resource classes (now delayed) - land in Kubernetes 1.9. But at the same time we started to notice that our current implementation was not suited to consume those features. The problem we were facing was that the VM processes lived in a single libvirtd pod, and were only placed in the cgroups of the VM pods in order to allow Kubernetes to see the VM resource consumption. But because the VMs were in a single libvirtd pod from a namespace perspective, we weren’t able to use any feature which was namespace related. I.e. raw block storage for volumes is a pretty big thing for us, as it will allow us to directly attach a raw block device to a VM. However, the block devices are exposed to pods - the VM pods (the pods we spawn for every VM) - and up to now our VMs did not see the mount namespace of a pod, thus it’s not possible to consume raw block devices. It’s similar for the device manager work. It will allow us to bring devices to pods, but we are not able to consume them, because our VM processes live in the libvirtd pod.

And it was even worse: we spent a significant amount of time on finding workarounds to allow VM processes to see the pods’ namespaces in order to consume those features.

All of this was due to the fact that we wanted to use libvirtd as intended: a single instance per host with a host-wide view.

Why did we do this initially - using libvirtd? Well, there were different opinions about this within the team. In the end we stuck with it, because there are some benefits over using qemu directly, mainly API stability and support when it comes to things like live migrations. Furthermore, if we did use qemu directly, we would probably come up with something similar to libvirt - and that is not where we want to focus (the node level virtualization).

We engaged internally and publicly with the libvirt team and tried to understand how the future could look. The bottom line is that the libvirt team - or parts of it - acknowledged the fact that the landscape is evolving, and that libvirt wants to be part of this change - also in its own interest, to stay relevant. This happened pretty recently, but it’s an important change which will allow us to consume Kubernetes features much better. And it should free up time, because we will spend much less of it on finding workarounds.

API layer

The API layer was also exciting.

We spent a vast amount of time writing our own user API server to be used with API server aggregation. We went with this because it would have provided us with full control over our entity types and http endpoints, which was relevant in order to provide access to the serial and graphical consoles of a VM.

We worked on this based on the assumption that Kubernetes would provide everything needed to make it easy to integrate and run those custom API servers.

But in the end these assumptions were not met. The biggest blocker was that Kubernetes does not provide a convenient way to store a custom API’s data. Instead it is left to the API server to provide its own mechanism to store its data (state). This sounds small, but is annoying. This gap increases the burden on the operator to decide where to store data; it could eventually add a dependency on a custom etcd instance, or on a PV in the cluster. In the end all of this might be there, but it was a goal of KubeVirt to be an easy to deploy add-on to Kubernetes. Taking care of these pretty cluster specific data storage problems took us too much off track.

Right now we are re-focusing on CRDs. We are looking at using JSON Schema for validation, which landed pretty recently. And we are trying to get rid of our subresources to remove the need for a custom API server.

Network

Networking was also an issue we spent a lot of time on. It still is a difficult topic.

Traditionally (this word is used intentionally) VMs are connected in a layer 2 or layer 3 fashion. However, as we (will really) run VMs in a pod, there are several ways to connect a VM to a Kubernetes network:

  • Layer 2 - Is not in Kubernetes
  • Layer 3 - We’d hijack the pod’s IP and leave it to the VM
  • Layer 4 - This is how applications behave within a pod - If we make VMs work this way, then we are most compatible, but restrict the functionality of a VM

Besides this, VMs often have multiple interfaces - and Kubernetes does not have a concept for this.

Thus it’s a mix of technical and conceptual problems in itself. It becomes even more complex if you consider that traditional layer 2 and 3 connectivity helped to solve problems, but in the cloud native world the same problems might be solved in a different way and in a different place. Thus the challenge here is to understand what problems were solved by that connectivity, and how they could be solved today, in order to understand if features like layer 2 connectivity are really needed.

Forward

As you see, or read, we’ve spent a lot of time on researching, PoCing, and failing. Today it looks different, and I’m positive about the future.

On a broader front:

This post got longer than expected and it is still missing a lot of details, but you see we are moving on, as we still see the need of giving users a good way to migrate the workloads from today to tomorrow.

There are probably also some mistakes; feel free to give me a ping, or ignore them in a friendly fashion.

FOSDEM 2018 IAM devroom

Posted by Alexander Bokovoy on December 21, 2017 10:35 AM

FOSDEM is one of the largest free software conferences in Europe. It is run by volunteers for volunteers, and since 2001 it has gathered together more than 8000 people every year. Sure, during the first years there were fewer visitors (I was lucky enough to actually present at the first FOSDEM and also ran a workshop there), but the atmosphere didn’t change and it still has the same classical hacker gathering feeling.

In 2018 FOSDEM will run on the weekend of February 3rd and 4th. Since the event has grown significantly, there are multiple development rooms in addition to the main tracks. Each development room is given a room for Saturday or Sunday (or both). Each development room issues its own call for proposals (CfP), chooses talks for the schedule, and runs the event. The FOSDEM crew films and streams all devrooms online for those who couldn’t attend them in real time, but the teams behind the actual devrooms are what powers the event.

In 2018 there will be 42 devrooms in addition to the main track. Think about it as 43 different conferences happening at the same time; that’s the scale and power of FOSDEM. I’m still impressed by the power of the volunteers who contribute to FOSDEM’s success, even long after the original crew of sysadmins of the Free University of Brussels decided to stop working on FOSDEM.

Identity management related topics have always been part of FOSDEM. In 2016 I presented in the main track about our progress with GNOME desktop readiness for enterprise environments, integration with freeIPA and other topics, including a demo of freeIPA and Ipsilon powering authentication for Owncloud and Google Apps. Some of my colleagues ran freeIPA presentations well before that too.

We wanted to have a bit more focused storytelling too. Radovan Semancik tried to organize a devroom in 2016, but it wasn’t accepted. Michael Ströder tried the same in 2017. Getting a devroom proposal accepted always comes with a fair amount of luck, but we finally succeeded for FOSDEM 2018. I’d like to thank my colleague Fraser Tweedale, who wrote the original proposal draft out of which the Identity and Access Management devroom effort grew.

We tried to keep a balance between the number of talks and the variety of topics presented. We only have 8.5 hours of schedule allocated. With 5-minute intervals between the talks, we were able to accommodate 14 talks out of 25 proposals.

The talks are structured in roughly five categories:

  • Identity and access management for operating systems
  • Application level identity and access management
  • Interoperability issues between POSIX and Active Directory environments
  • Deployment reports for open source identity management solutions
  • Security and cryptography on a system and application level

Admittedly, we’ve got one of the smallest rooms (50 people) allocated, but this is a start. On Saturday, February 3rd, 2018, please come to room UD2.119. And if you can’t be at FOSDEM in person, streaming will be available too.

See you in Brussels!

When should behaviour outside a community have consequences inside it?

Posted by Matthew Garrett on December 21, 2017 10:09 AM
Free software communities don't exist in a vacuum. They're made up of people who are also members of other communities, people who have other interests and engage in other activities. Sometimes these people engage in behaviour outside the community that may be perceived as negatively impacting communities that they're a part of, but most communities have no guidelines for determining whether behaviour outside the community should have any consequences within the community. This post isn't an attempt to provide those guidelines, but aims to provide some things that community leaders should think about when the issue is raised.

Some things to consider

Did the behaviour violate the law?

This seems like an obvious bar, but it turns out to be a pretty bad one. For a start, many things that are commonly accepted behaviour in various communities may be illegal (eg, reverse engineering work may contravene a strict reading of US copyright law), and taking this to an extreme would result in expelling anyone who's ever broken a speed limit. On the flipside, refusing to act unless someone broke the law is also a bad threshold - much behaviour that communities consider unacceptable may be entirely legal.

There's also the problem of determining whether a law was actually broken. The criminal justice system is (correctly) biased to an extent in favour of the defendant - removing someone's rights in society should require meeting a high burden of proof. However, this is not the threshold that most communities hold themselves to in determining whether to continue permitting an individual to associate with them. An incident that does not result in a finding of criminal guilt (either through an explicit finding or a failure to prosecute the case in the first place) should not be ignored by communities for that reason.

Did the behaviour violate your community norms?

There's plenty of behaviour that may be acceptable within other segments of society but unacceptable within your community (eg, lobbying for the use of proprietary software is considered entirely reasonable in most places, but rather less so at an FSF event). If someone can be trusted to segregate their behaviour appropriately then this may not be a problem, but that's probably not sufficient in all cases. For instance, if someone acts entirely reasonably within your community but engages in lengthy anti-semitic screeds on 4chan, it's legitimate to question whether permitting them to continue being part of your community serves your community's best interests.

Did the behaviour violate the norms of the community in which it occurred?

Of course, the converse is also true - there's behaviour that may be acceptable within your community but unacceptable in another community. It's easy to write off someone acting in a way that contravenes the standards of another community but wouldn't violate your expected behavioural standards - after all, if it wouldn't breach your standards, what grounds do you have for taking action?

But you need to consider that if someone consciously contravenes the behavioural standards of a community they've chosen to participate in, they may be willing to do the same in your community. If pushing boundaries is a frequent trait then it may not be too long until you discover that they're also pushing your boundaries.

Why do you care?

A community's code of conduct can be looked at in two ways - as a list of behaviours that will be punished if they occur, or as a list of behaviours that are unlikely to occur within that community. The former is probably the primary consideration when a community adopts a CoC, but the latter is how many people considering joining a community will think about it.

If your community includes individuals that are known to have engaged in behaviour that would violate your community standards, potential members or contributors may not trust that your CoC will function as adequate protection. A community that contains people known to have engaged in sexual harassment in other settings is unlikely to be seen as hugely welcoming, even if they haven't (as far as you know!) done so within your community. The way your members behave outside your community is going to be seen as saying something about your community, and that needs to be taken into account.

A second (and perhaps less obvious) aspect is that membership of some higher profile communities may be seen as lending general legitimacy to someone, and they may play off that to legitimise behaviour or views that would be seen as abhorrent by the community as a whole. If someone's anti-semitic views (for example) are seen as having more relevance because of their membership of your community, it's reasonable to think about whether keeping them in your community serves the best interests of your community.

Conclusion

I've said things like "considered" or "taken into account" a bunch here, and that's for a good reason - I don't know what the thresholds should be for any of these things, and there doesn't seem to be even a rough consensus in the wider community. We've seen cases in which communities have acted based on behaviour outside their community (eg, Debian removing Jacob Appelbaum after it was revealed that he'd sexually assaulted multiple people), but there's been no real effort to build a meaningful decision making framework around that.

As a result, communities struggle to make consistent decisions. It's unreasonable to expect individual communities to solve these problems on their own, but that doesn't mean we can ignore them. It's time to start coming up with a real set of best practices.


A week in review: PHX2 Colo Move

Posted by Ricky Elrod on December 21, 2017 05:11 AM

A few weeks ago (4 December 2017 - 9 December 2017), I had the opportunity to fly to Phoenix to help out with a move of Fedora servers in our main colo site (phx2). This post outlines some of my experiences.

I flew in Sunday (3 December 2017) and got to the hotel. It was around 1pm and I had nothing else to do that day, so I took an uber to the Musical Instrument Museum, which was amazing, but I'll save that for another post.

On Monday, I was waiting for my coworker to arrive, so I called in some hard-drive replacements from the comfort of my hotel room, and ended up heading over to the datacenter and having some of the on-site techs let me in and show me around a bit. This was my first time in a production datacenter that wasn't part of a university, so it was neat to see.

I had to confirm that I was who I said I was, but after a while, I was given a visitor badge, which allowed me access throughout the building and into the colo cage.

There was a drive that had already been called in and needed to be replaced, so I took care of that while I was there. Other than that, I just hung out, talked with the on-site techs a bit, and looked around.

Tuesday through Friday is a bit of a blur because of how many things were happening. Everyone was focusing on their own projects and tasks, so I'll talk about some of mine:

  • Playing with console servers (OpenGear)
    • Updating firmware
    • Labeling which ports were going where
    • Testing to make sure the console cables all worked
  • Taking inventory
    • Making a spreadsheet of each rack, which server was where, which PDU ports it belonged to, etc.
    • Crash-carting servers to see what they were, when nobody knew.
    • Physically labeling servers which were not already labeled.
    • Checking wiring, making sure dual-PSU boxes weren't both going into the same PDU, and so on.
  • Updating aarch64 boxes like crazy
  • Replacing a bunch of HDDs with SSDs in one of our ARM servers.
  • Various other things that came up (playing with networking wires for various PPC boxes, etc.)

Most of that is pretty straightforward, but takes a long time. The console server stuff involved finding where to download the firmware, tracing console wires, and whining at people for changing console wires (or adding new ones) without telling me so I could update my documents.

The inventory-taking took much longer and involved a lot of crash-carting, playing with label machines, and walking around with a tablet, updating a spreadsheet of what was where.

I did come up with a clever idea to make inventory easier going forward: using Ansible variables to automatically generate the documents we were coming up with.

For example, we keep a document that is an ASCII representation of our racks, the servers in them, and how many units they each take up. My idea is to put all of that information in Ansible variables and generate that document automatically...

The result would be that our physical boxes have something like:

rack_units: 2
rack_location: 10
rack_number: 147

... in their Ansible host-variables files, and there would be a script to generate a pictorial representation of our server racks, based on this. We could do similar for PDU ports, console ports, etc.
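As a rough, purely hypothetical sketch of where such a generator could start (the real script is still to be written), a few lines of shell can already turn those host variables into a sorted per-rack listing:

#!/bin/bash
# Hypothetical sketch: list servers per rack from Ansible host_vars files.
# Assumes each host_vars/<hostname> file carries rack_number, rack_location,
# and rack_units as plain "key: value" lines, as in the example above.
for f in host_vars/*; do
    host=$(basename "$f")
    rack=$(awk '/^rack_number:/ {print $2}' "$f")
    loc=$(awk '/^rack_location:/ {print $2}' "$f")
    units=$(awk '/^rack_units:/ {print $2}' "$f")
    [ -n "$rack" ] && printf '%s\t%s\tU%s\t%sU\n' "$rack" "$host" "$loc" "$units"
done | sort -k1,1n -k3.2,3n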

The idea is that we already use Ansible for storing information about our inventory anyway, so why not make it smarter and put everything in one place? I will write another post about that, showing off the script, once I actually add those variables and write the script.

Anyway, this isn't meant to be a status report on the trip, but to show some of what I've been busy with.

Debugging with GNOME Builder

Posted by Fedora Magazine on December 20, 2017 11:02 AM

GNOME Builder 3.26 ships a number of new features, including improved symbol searching, inline documentation and integrated debugging. Prior to this, developers could code and compile their program in Builder, but would need to switch to a separate debugger (either gdb or a graphical debugger like Nemiver). Now that Builder integrates a debugger, a developer can write, compile and run their program with an attached debugger right inside Builder!

Install GNOME Builder

Builder is available in the main Fedora repositories. Install it with dnf or GNOME Software. If you haven’t yet upgraded to Fedora 27, Builder 3.26 is also available as a Flatpak hosted on Flathub, and can be installed on any Linux distribution that Flatpak supports.
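Concretely, either route boils down to a single command; the Flatpak application ID below is the upstream one on Flathub:

sudo dnf install gnome-builder
# or, on any distribution with Flatpak support:
flatpak install flathub org.gnome.Builder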

Clone the project

Clone the stop-watch project in Builder by selecting Open Project from the application menu. Click the Clone… button and enter https://pagure.io/fedora-magazine/stop-watch.git into the Repository URL field. Click the Clone button to clone and open the project.

Debugging the project

Run the Stop Watch app by clicking the Run button (or press Control + F5). After the project compiles, the Stop Watch window opens.

Note that the Run button may be disabled at first. This is because Builder is downloading the necessary runtimes needed to develop the application. Check the transfers pop-up at the right of the menubar for progress on these downloads. Once the downloads have completed, the Run button will be enabled.

Now try starting the stop watch by clicking Start. Uh-oh. It’s not counting properly. It appears the timer never gets past 1 second.

Open src/stop-watch-window.c and set a breakpoint in the start_button_clicked_cb function by clicking on the line number (line 58 in this case). This will allow you to pause the program once the clicked signal handler is invoked on the Start button.

Now Run with Debugger by clicking the expand arrow next to the Run button, or press F5. Builder automatically sets a breakpoint at the entrypoint to your program (the entry to the main function). Below the editor view, there is a control panel showing the debug controls. Click the Continue button to continue the program execution, presenting the Stop Watch window. Now click the Start button again. This time the breakpoint inside start_button_clicked_cb catches and the program pauses again.

Once again, the debugging control panel will appear. Step over any function calls to maintain execution within this signal handler. As you step, the current line will advance. Since self->timer_started is FALSE, control jumps into the else block. Continue stepping, watching the flow of control through the program until you reach the last line of the signal handler. Look closely at line 76. Looks like the programmer forgot to toggle the timer_started variable! Change line 76 to:

self->timer_started = !self->timer_started;

Now run the program again. Click Start and the timer starts counting seconds properly.

Get Involved

GNOME Builder can debug more than just C programs; it supports Python, JavaScript and Vala too. The GNOME project has a collection of projects aimed at new contributors. For example, take a look at Music for a Python project. If you’re most comfortable with JavaScript, take a look at Maps or Polari. Happy hacking!

12 days of Varnish

Posted by Ingvar Hagelund on December 19, 2017 10:24 PM

While Varnish is most famous for its speedy caching capabilities, it is also a general swiss army knife of web serving. In the spirit of Christmas, here’s Twelve Days of Varnish Cache, or at least, twelve Varnish use cases. Read the rest of this post on Redpill Linpro’s Sysadvent calendar.

Episode 75 - Security Planner review

Posted by Open Source Security Podcast on December 19, 2017 07:49 PM
Josh and Kurt talk about the Security Planner website. It's pretty good all things considered.


Show Notes