April 28, 2016

News: About Fedora 24 Alpha.

I will talk about Fedora 24 Alpha because it is a great operating system.
First I will share some links with more information about Fedora 24 Alpha.

If you want to read about the release schedule for Fedora 24, then take a look at: Fedora 24 Release Schedule.
What is an Alpha release?
A release contains all the features of Fedora editions in a form that anyone can help test.
Where can you get it?
You can read about and download this release from here.
...now about testing the new Fedora 24.
You can make a Live DVD/CD or just use virtual machine software like VirtualBox.
VirtualBox from Oracle is an x86 virtualization product that runs not just Linux distros but any operating system. You can find many tutorials about how to do that.

So, I installed the new Fedora 24 Alpha and tested it.
First, the team did a great job with these features:
- not many packages;
- the speed of my VirtualBox guest is good: with the Hyper-V paravirtualization interface, 3D acceleration enabled and 64 MB of video memory, it took under a minute to see the first screen and almost two minutes to start the GUI installer on a hard disk with my old Intel(R) Pentium(R) CPU 2117U at 1.80GHz;
- the design starts with a beautiful wallpaper;
- the installation GUI is simple;
- the disk space used after the first update was 15 GB, which is too much for me;
- the initial installation comes in at about 6 GB;
- the latest VirtualBox, version 5.0.18 r106667, comes with full network settings, so you can use the internet to update Fedora packages and use a browser.
I didn't use this distro from a Live DVD or install it on my HDD, but you can do that...
I wanted to make this article more detailed, but this is enough for now...
 
Regards. 

 

Linux Fest North West Day 1
The booth was set up on Friday before Fedora Game Night, but the large TV didn't arrive until Saturday morning, so we had a little work to do before opening. We were showing games in Fedora, featuring the Fedora Labs Games Spin.

On the OLPC we showed two games, Implode and Maze, always a draw. To demonstrate "convert your XP computer to a Fedora Workstation and still play your games", we ran some of the games from the Games Spin on my little Atom processor computer with WineHQ. To appeal to the serious gamer we showed STEAM; the client is part of the Fedora repository. The large TV was for the SuperTuxKart tournament.

Guests had questions about all of these systems: the OLPC, XP replacement, STEAM gaming, and game-pad interfaces. Many of them tried out the SuperTuxKart game, receiving an entry card for the elimination tournament in the afternoon. The elimination races were close, but Roko Spain (middle) eliminated Shivek Bnonote and Tyson Van Dyke for the gold medal.

The Saturday exhibits ended with the world famous auction, followed by the after party featuring the Alpha Geek Trivia Contest.
FLISOL Panama 2016 Report

Festival Latinoamericano de Instalación de Software Libre (FLISOL) is one of the most important events in Latin America where we share knowledge and experience related to free software simultaneously in the participant countries (Panama included).


Fedora Panama had a good participation on the event. José Reyes organized the event. Alejandro Pérez gave an introduction talk and Gonzalo Nina (from Fedora Bolivia) spoke about security. I gave a short workshop about SELinux. The venue was Universidad Interamericana de Panamá.



It is important to mention that we had FLISOL David on April 9th at Universidad del Istmo. Kiara Navarro and Alejandro Pérez gave talks there.

In both venues attendance was low, but with very high interest in participating in open source projects, especially within Fedora:
  • From David, we got a new potential contributor, Julian Vega, who knows a lot about Blender and 3D design.
  • From Panama, students from Universidad Interamericana de Panamá formed a LUG and are aiming to participate in Fedora and organize a Linux Day.

I would like to thank Universidad Interamericana de Panamá and Universidad del Istmo, the sponsors, the speakers, the attendees, and everyone involved in the success of FLISOL.

See you next year!
pcp+grafana scripted dashboards

Our previous work gluing Performance Co-Pilot and Grafana together has made it possible to look at a networkful of systems' performance stats and histories with just a few clicks on a web browser, and no auxiliary software (databases, web servers, etc.) other than PCP itself.

Many people probably stopped at the most basic use of the technology: the provided grafana dashboards.

grafana-dashboard-1.png

Each graph draws one or more time series specified by a graphite-syntax series, as outlined in the pmwebapi man page, as applied to the graphite specification. The gist is that each series is a dot-separated name consisting of a host/archive identifier followed by components to identify PCP metrics and their instances. Each component can be wildcarded.

grafana-dashboard-edit-1.png

In the grafana chart-editor window for the top default-dashboard chart, one can see that the series it draws is *.kernel.all.load.1 minute. This means that this series will expand to the kernel.all.load metric's 1 minute instance, within all archives for all hosts that cover the selected time interval. In plainer language, all machines' one-minute load averages will be overlaid on this chart. Other charts on that dashboard are similarly wildcarded: one or more related metrics across all machines.

What if one's interested in just one machine? While one can hunt & peck amongst the overlapping curves to find a particular machine of interest, this is obviously clumsy. One could pop open that dashboard editor in the web browser (by clicking on each chart title) and change the metrics to replace the first position (host/archive) from the general wildcard * with a more specific one such as *HOSTNAME*. (The * on both sides are still needed, because the host's data may be split across several physical archives, and may be stored in different physical parent directories.) While at it, one can change or add other PCP metrics. Then one can use the "save/download JSON" button to get a modified dashboard JSON file, and reload it later. That reloading can be done via the interactive "load/import" button, or by depositing the JSON file under /usr/share/pcp/webapps/grafana/app/dashboards/FOO.json and directing one's browser to http://server:44323/grafana/index.html#/dashboard/file/FOO.json.

This works, and is a general and powerful technique, but is pretty clunky. Sorry about that.

Coming along with the next version of PCP is some new automation to help. It uses the grafana scripted dashboard mechanism. (Thanks to my colleague Zack Cerza for the pointer.) This is a way of having a javascript program generate a grafana dashboard on the fly. So, instead of hard-coded *.something, a program running in one's browser can synthesize something special. Perhaps interactively!

Presenting #/dashboard/script/multichart.js. This scripted dashboard populates one grafana dashboard, exclusively by reference to its invoking URL query string. It builds one chart per incoming &target=SOURCE parameter, with some optional formatting parameters. Since the dashboard is specified entirely by its query string, it can be bookmarked in the browser. It can be shared with others by just copying the URL. It's much easier than exporting JSON and re-importing it again.

Some examples:

  • http://server:44323/grafana/index.html#/dashboard/script/multichart.js?target=*.kernel.all.load.1%20minute
    approximately matches the top chart of the classic default dashboard.
  • http://server:44323/grafana/index.html#/dashboard/script/multichart.js?target=*HOSTNAME*.kernel.all.load.1%20minute
    the same - but restricted to HOSTNAME!
  • http://server:44323/grafana/index.html#/dashboard/script/multichart.js?target=*HOSTNAME*.kernel.all.load.1%20minute&target=*HOSTNAME*.proc.nprocs
    same, adding a chart of the proc.nprocs metric. (Note repetition of the &target= parameter.)
  • http://server:44323/grafana/index.html#/dashboard/script/multichart.js?target=*FIRSTHOST*.kernel.all.nusers,*SECONDHOST*.kernel.all.nusers
    two overlapping timelines of the same metric from two separate hosts. (Note the comma-separated series in the same &target= parameter.)
You can mix and match. One can add something like &from=now-7d&to=now to set the time range of interest (the past seven days in this example).
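Since the dashboard is driven entirely by its query string, these URLs are also easy to generate programmatically. A minimal Python sketch (the server, host, and metric names here are illustrative placeholders, not anything shipped with PCP):

```python
from urllib.parse import quote

def multichart_url(server, targets, time_range=None):
    """Build a multichart.js scripted-dashboard URL from a list of
    graphite-syntax series (one &target= parameter per chart)."""
    base = "http://%s:44323/grafana/index.html#/dashboard/script/multichart.js" % server
    # keep * wildcards and commas literal, percent-encode everything else
    params = ["target=" + quote(t, safe="*,") for t in targets]
    if time_range:  # e.g. ("now-7d", "now")
        params += ["from=" + time_range[0], "to=" + time_range[1]]
    return base + "?" + "&".join(params)

url = multichart_url("server",
                     ["*HOSTNAME*.kernel.all.load.1 minute",
                      "*HOSTNAME*.proc.nprocs"],
                     time_range=("now-7d", "now"))
print(url)
```

Keeping the * wildcards unescaped matches the hand-written examples above; the space in the instance name becomes %20.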

Presenting #/dashboard/script/hostselect.js. This scripted dashboard populates one grafana dashboard, but a special one that doesn't actually include any charts. Why is that? Simple. It's a host selector! It uses an internal query to pmwebd to generate a list of hosts for which it has archives. Then a user is shown a menu of them, and can click on any to take them to a multichart.js scripted dashboard tuned to that host. If you run pmmgr with its subtarget-containers flag turned on, the dashboard will list all per-container archives too.

Which metrics to show for the host? Those are specified by &metric=PCP.MET.RIC parameters to the hostselect.js dashboard; if none were given, the script gives a default set. It looks like this.

grafana-dashboard-hostsel.png

Each of the host names in the list at the bottom is a link one can click on. Well, I can, not you - that's just a screen shot after all. But once you install your copy of this fresh pcp-webjs stuff, you will get your own 100% PCP-powered, dynamically-generated host selector and URL-controlled multi-chart grafana dashboards. You can wait until the next PCP release, or can download all the PCP web applications straight onto your machine right now:
# >>> uninstall pcp-webjs* if already present
# cd /usr/share/pcp
# mkdir webapps
# cd webapps
# git clone --depth 1 -b webjs git://sourceware.org/git/pcpfans.git .
# >>> restart pmwebd service

% firefox http://localhost:44323/

April 27, 2016

hitch-1.2.0 for fedora and epel

Hitch is a libev-based high performance SSL/TLS proxy. It is developed by Varnish Software, and may be used for adding https to Varnish cache.

hitch-1.2.0 was recently released. Among the new features in 1.2.0 is more granular per-site configuration. Packages for Fedora and EPEL6/7 were submitted for testing today. Please test and report feedback.
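For illustration only, a minimal hitch configuration fronting a local Varnish instance might look like the sketch below (the paths and the backend port are assumptions; see the hitch documentation for the exact per-site syntax added in 1.2.0):

```
frontend = "[*]:443"
backend  = "[127.0.0.1]:6086"    # local Varnish listener
pem-file = "/etc/pki/tls/private/example.com.pem"
workers  = 4
```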

Redpill Linpro is the market leader for professional Open Source and Free Software solutions in the Nordics, though we have customers from all over. For professional managed services, all the way from small web apps, to massive IPv4/IPv6 multi data center media hosting, and everything through container solutions, in-house, cloud, and data center, contact us at www.redpill-linpro.com.

FLISoL 2016 Panamá
FLISoL is the largest free software outreach event in Latin America. Read more
Publish Docker image with Automated build

docker1

Docker is an open source tool that helps us develop, ship, and run applications. Each application needs a Dockerfile: instructions written in a layered, step-by-step structure describing how to build and run the application. The Dockerfile is then built to create a Docker image, which we can push to Docker Hub or keep on our workstation. Finally, to run the application, we run a container from the image; a container can be thought of as an instance of the image, since we can have more than one container of a single image. If the Docker image is not on our local workstation, we first have to pull it from Docker Hub and then run a container from it. But when we need to change the Dockerfile (as happens whenever the application changes), we have to build a new Docker image from it. Alternatively, we can make the change in a running container and commit it, but that does not update the Dockerfile, which should contain all the instructions to run the application, whether for development or production. Building the image locally from the Dockerfile also forces us to do things like build and push manually. Automating the build of the Docker image from a Dockerfile hosted on GitHub or Bitbucket is the solution for this :) .

In short: we first create a Dockerfile and push it to GitHub or Bitbucket. After authenticating our GitHub account with Docker Hub, we choose the GitHub repository on Docker Hub that contains the Dockerfile. After that, changes to the Dockerfile trigger a build, which creates a Docker image ready to pull.

I will share an example of making a CentOS image having Apache httpd server pre-installed.

First we need to create a Dockerfile which can be viewed here.

FROM centos:centos6
MAINTAINER trishnag <trishnaguha17@gmail.com>
RUN yum -y update; yum clean all
RUN yum -y install httpd
RUN echo "This is our new Apache Test Site" >> /var/www/html/index.html
EXPOSE 80
RUN echo "/sbin/service httpd start" >> /root/.bashrc 
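As an aside, starting httpd from /root/.bashrc only works because we run the container with /bin/bash. A hypothetical variant (not the Dockerfile used in this post) would run httpd in the foreground directly:

```dockerfile
FROM centos:centos6
MAINTAINER trishnag <trishnaguha17@gmail.com>
RUN yum -y update; yum clean all
RUN yum -y install httpd
RUN echo "This is our new Apache Test Site" >> /var/www/html/index.html
EXPOSE 80
# run httpd in the foreground so the container stays alive without a shell
CMD ["/usr/sbin/apachectl", "-D", "FOREGROUND"]
```

With this variant the container can be started without /bin/bash and stopped cleanly with docker stop.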

Then push the Dockerfile to Github. I have created a repository named CentOS6-Apache and pushed the Dockerfile to it. The repository can be found here.

After doing so

  • Go to DockerHub and Settings —> Linked Account —> Link your Github account.
  • Create —> Create Automated Build —> Create Auto-build Github.
  • Select the repository that contains Dockerfile of your application.
  • Create followed by the Automating build process.

After the build finishes you will see something like `docker pull <username>/centos6-apache`, which indicates that the image was built successfully. The image is live on Docker Hub: https://hub.docker.com/r/trishnag/centos6-apache.

Now we have to test the image to make sure the Apache httpd server is actually pre-installed.

docker pull trishnag/centos6-apache  #Pull the image from Docker Hub
docker run -t -i -d trishnag/centos6-apache /bin/bash #Run the container as a daemon
docker ps #Get the name of the container

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8bcd1199bb8f trishnag/centos6-apache "/bin/bash" 2 minutes ago Up 2 minutes 80/tcp jovial_cray

docker logs jovial_cray #See what is happening in the container's bash; gives httpd status
docker inspect jovial_cray | grep IPAddress #Show the IP address
curl 172.17.0.1 #curl the IP address we got
This is our new Apache Test Site

We got back the text that we echoed into index.html. Hence we can conclude that the Apache httpd server is indeed pre-installed in the image.

Now even when we commit changes to the Dockerfile on GitHub, we don't have to worry about the image: the build starts automatically once changes are committed to the Dockerfile. When we pull the newly built image and run a container, we will find those changes in place.

Automating build process really makes our life easier.

Resources:

 


TeX: Justification with no hyphenation

I’ve been reading a copy of 1984 by George Orwell, published by Fingerprint publishing — a beautifully typeset one. Already into half of Part II, but that’s when I noticed that the book is typeset with full justification sans any hyphenation. Incidentally I was typesetting something else, which probably is the reason I noticed it now. And I wanted to typeset my article the same way, with full justification but no hyphenation.

The biggest strength of TeX is its line, paragraph, and page breaking algorithms, where hyphenation plays a big part. Thus removing hyphenation means taking away a lot of those advantages. In any case, there are multiple ways to do it. From the TeX FAQ:

Use the hyphenat package with

\usepackage[none]{hyphenat}

or set \hyphenpenalty and \exhyphenpenalty to 10000. These avoided hyphenation, but justification was bad: there were long words extending beyond the right margin. So I moved on to the next solution:

\hyphenchar\font=-1
\sloppy

This one kept hyphenation to a minimum with justified text, but didn't fully avoid it. And it works only for text in the current font. \sloppy sets \tolerance and \emergencystretch to large values.

And the last one, which provided full justification with no hyphenation is:

\tolerance=1
\emergencystretch=\maxdimen

The \tolerance parameter sets how much badness is allowed, which influences paragraph/line breaking. \emergencystretch is the magical parameter which stretches text over multiple passes to balance the spacing.
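Putting it together, a minimal test document (combining the winning parameters with the penalty settings mentioned earlier from the TeX FAQ) looks like this:

```latex
\documentclass{article}
\begin{document}
% full justification, no hyphenation
\tolerance=1
\emergencystretch=\maxdimen
\hyphenpenalty=10000
\exhyphenpenalty=10000
This is a long enough paragraph of running text that would normally be
hyphenated at the line breaks, but here it is set fully justified with
the extra interword space absorbed by the emergency stretch.
\end{document}
```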


Tagged: TeX
Fedora Developer Portal updated

Since the Fedora Developer Portal started last year, it has been the go-to place for information on setting up and using Fedora as a platform for development. This week, the Fedora Developer Portal team announced the release of the refreshed and updated version of the portal.

The major new addition from a content perspective is the Start a Project section, which contains information on setting up development with a specific end-goal in mind, such as Creating a Desktop Application, or Hacking on Arduino on Fedora.

If you want to help the Fedora Developer Portal expand their content, they also have a new contributor page that shows how you can get involved.

developerportalscreenshot

3rd Party Fedora Repositories and AppStream

I was recently asked how to make 3rd party repositories add apps to GNOME Software. This is relevant if you run an internal private repo for employee tools, or are just kind enough to provide a 3rd party repo for Fedora or RHEL users for your free or non-free applications.

In most cases people are already running something like this to generate the repomd metadata files on a directory of RPM files:

createrepo_c --no-database --simple-md-filenames SRPMS/
createrepo_c --no-database --simple-md-filenames x86_64/

So, we need to actually generate the AppStream XML. This works by exploding any interesting .rpm files and merging together the .desktop file, the .appdata.xml file and preprocessing some icons. Only applications installing AppData files will be shown in GNOME Software, so you might need to fix your packages before you start.

appstream-builder			\
	--origin=yourcompanyname	\
	--basename=appstream		\
	--cache-dir=/tmp/asb-cache	\
	--enable-hidpi			\
	--max-threads=1			\
	--min-icon-size=32		\
	--output-dir=/tmp/asb-md	\
	--packages-dir=x86_64/		\
	--temp-dir=/tmp/asb-icons

This takes a second or two (or 40 minutes if you’re trying to process the entire Fedora archive…) and spits out some files to /tmp/asb-md — you probably want to change some things there to make more sense for your build server.

We then have to take the generated XML and the tarball of icons and add it to the repomd.xml master document so that GNOME Software (via PackageKit) automatically downloads the content for searching. This is as simple as doing:

modifyrepo_c				\
	--no-compress			\
	--simple-md-filenames		\
	/tmp/asb-md/appstream.xml.gz	\
	x86_64/repodata/
modifyrepo_c				\
	--no-compress			\
	--simple-md-filenames		\
	/tmp/asb-md/appstream-icons.tar.gz	\
	x86_64/repodata/

Any questions, please ask. If you’re using a COPR then all these steps are done for you automatically. If you’re using xdg-app already, then this is all magically done for you as well, and automatically downloaded by GNOME Software.

Summaries from Gnome Asia Summit 2016

Hats off for making it happen in India again. Most of the information regarding the conference is available on the web. Though I have been using GNOME for 10+ years, this was my first GNOME summit and also my first conference as a keynote speaker.

Day Zero

  1. Manav Rachana is a big university with promising upcoming talent in the form of bright students. It was worth holding GNOME Asia here.
  2. Excellent campus.
  3. The workshop on the first day was worthwhile. Since I arrived late, I attended only the last GStreamer workshop, which went into great depth. Excellent stuff from speakers Nirbheek Chauhan and Arun Raghavan.

Day One

  1. It was good to see GNOME contributors on stage. Quick introductions from most of the members. The university is led by excellent people. The talk from K.K. Aggarwal was really motivating. I loved some of his stories, specifically the frog one: it is good to be deaf in some situations.
  2. The Next Billion Users for GNOME talk was good. I too am sure that with sandboxing, applications will become platform independent and the best ones (most likely FOSS apps) will win users. Amen!! to the next billions :)
  3. Though I thought we were behind schedule, that was not the case, thanks to a good buffer at lunch time. It was nicely handled by the organizers.
  4. Then my talk started. As usual I tried to make it two-way rather than one-way communication. I covered the status of Indian languages and also how contributing can help people, specifically students.
    After my talk we had good lightning talks.

    After lunch we started parallel sessions. Ankit Prateek had a huge slide deck for "Privacy & Security in the Age of Surveillance". He covered very important topics; I am sure after his talk many attendees must have thought about their privacy in the Web 2.0 era.

    Later there was the talk "Functionality, Security, Usability: Choose any two. Or GNOME." The speaker invited developers to contribute.

    Thanks to PJP and Pranav for managing the Fedora booth during the first day. We had a good crowd around and good conversations the whole day.



    The day ended with lightning talks. In the evening I visited Tughlaqabad fort with Tobias, Daiki and Raju.

Day Two

    On the third day I arrived a bit early and had good conversations with faculty and students. Ekaterina gave a nice keynote and provided plenty of pointers for starting with GNOME contribution.


    After tea, I stayed at the Fedora booth. I had a good time resolving queries from new users. I think overall 6-7 students installed Fedora over the 2 days, mostly on VirtualBox. They were asking "What are the features of Fedora?" and I was like, "What do you want?" :)

    Manav Rachana is planning FOSS activities in August and was collecting contacts from speakers.
    After lunch, I attended talks from Ueno on "Contribute your first application to GNOME" and Tobias on "Five years of GNOME 3".

    Then we had more nice lightning talks and the closing ceremony.

Nice Swag

  • The university provided a memento and certificates to all speakers.
  • Speakers also received a Micro Charge&Data.
  • T-shirts and shoes from the conference.

    The funny part is that at FUDCon 2015 last year we provided socks to participants, and GNOME went ahead and provided shoes ;)

Fedora booth 2nd last day Pranav

Conclusion

    Overall, GNOME Asia 2016 was a good conference with good content at the right location. I hope we will get lots of contributors from Manav Rachana International University in the coming years.
    We had a good Fedora presence across all days. We missed having Fedora 23 DVDs.
Google Summer of Code, Fedora Class of 2016

This summer, I’m excited to say I will be trying on a new pair of socks for size.

Bad puns aside, I am actually enormously excited to announce that I am participating in this year’s Google Summer of Code program for the Fedora Project. If you are unfamiliar with Google Summer of Code (or often shortened to GSoC), Google describes it as the following.

Google Summer of Code is a global program focused on bringing more student developers into open source software development. Students work with an open source organization on a 3 month programming project during their break from school.

I will work with the Fedora Project over the summer on the CommOps slot. As part of my proposal, I will assist with migrating key points of communication in Fedora, like the Fedora Magazine and Community Blog, to Ansible-based installations. I have a few more things planned up my sleeve too.

Google Summer of Code proposal

My proposal summary is on the GSoC 2016 website. The full proposal is available on the Fedora wiki.

The What

The Community Blog is becoming an important part of the Fedora Project. This site is a shared responsibility between CommOps and the Infrastructure team. Unlike most applications in the Fedora infrastructure, the Community Blog is not based on Ansible playbooks. Ansible is an open-source configuration management suite designed to make automation easier. Fedora already uses Ansible extensively across its infrastructure.

My task would consist of migrating the Community Blog (and by extension, Fedora Magazine) to an Ansible-based setup and writing the documentation for any related SOPs.

The Why

Ansible is a useful tool to make automation and configuration easier. In their current setup, the Community Blog and Fedora Magazine are managed separately from each other, by a single member of the Infrastructure team. Moving them to Ansible-based installations and merging the WordPress bases together provides the following benefits:

  1. Makes it easier for other Infrastructure team members to fix, maintain, or apply updates to either site
  2. Prevents duplicate work by maintaining a single, Ansible-based WordPress install versus two independent WordPress sites
  3. Creates a standard operating procedure for hosting blog platforms within Fedora (can be used for other extensions in the future)
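For a flavor of what such a playbook might look like, here is a purely illustrative sketch; the real Fedora Infrastructure playbooks, roles, and host groups are more involved, and all names below are hypothetical:

```yaml
# Hypothetical play for a shared WordPress host (illustrative only)
- hosts: communityblog
  become: true
  tasks:
    - name: Install the web stack
      package:
        name: "{{ item }}"
        state: present
      with_items: [httpd, php, php-mysqlnd, mariadb-server]

    - name: Start and enable services
      service:
        name: "{{ item }}"
        state: started
        enabled: true
      with_items: [httpd, mariadb]
```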

Thanks to my mentors

I would like to issue a special thanks to my mentors, Patrick Uiterwijk and Remy DeCausemaker. Patrick will be my primary mentor for the slot, as a member of the Fedora Infrastructure team. I will be working closest with him in the context of my proposal. I will also be working with Remy on the “usual” CommOps tasks that we work on week by week.

Another thanks goes out to all of those in the Fedora community who have positively affected and influenced my contributions. Thanks to countless people, I am happy to consider Fedora my open source home for many years to come. There is so much to learn and the community is amazing.

Getting started

As of the time of publication, the Community Bonding period is currently happening. The official “coding” time hasn’t started yet. Without much delay, I will be meeting up with Patrick and Remy later today in a conference call to check in after the official announcement, make plans for what’s coming up in the near future, and become more acquainted with the Infrastructure community.

In addition to our conference call, I’m also planning on (formally) attending the next Fedora Infrastructure meeting on Thursday. Shortly afterwards, I hope to begin my journey as an Infrastructure apprentice and learn more about the workflow of the team.

Things are just getting started for the summer and I’m beyond excited that I will have a paid excuse to work on Fedora full-time. Expect more check-ins as the summer progresses!

The post Google Summer of Code, Fedora Class of 2016 appeared first on Justin W. Flory's Blog.

Announcing Fedora Google Summer of Code (GSoC) Class of 2016

On Friday, April 22nd, Google officially announced the participants for the 11th year of Google Summer of Code (GSoC) program. If you’re not familiar with Google Summer of Code, you can read more on the Community Blog. There were 1,205 accepted projects submitted for this year. Several open source organizations participated by offering projects for students to work on.

This year, Fedora was a participating organization. Alongside Fedora-specific projects, there were several other projects with Fedora, such as…

The applications were many and it was difficult to narrow them down. We are happy and confident with this year’s selection of participants.

Google Summer of Code Class of 2016

This year's GSoC Class of 2016 for the Fedora Project is as follows.

Anerist

Cockpit

Fedora CommOps

Fedora Hubs

Pagure

Project Atomic

Tinykdump

Thanks to all of the students that applied, and we’re looking forward to working with you over the summer!

The post Announcing Fedora Google Summer of Code (GSoC) Class of 2016 appeared first on Fedora Community Blog.

Recording a quick screencast on Fedora Workstation

Ever needed to make a quick screencast of your desktop to share what you are working on with someone else? Fedora Workstation ships with an easy way of capturing high-quality, short videos of your screen.

To create a screencast, use the key-combination Ctrl + Shift + Alt + R to start your recording, and a small orange dot will appear in the status icon area in the top right of your screen.

screencast indicator

The screencast will now record your entire screen for the next 30 seconds, or if you want to stop the recording early, simply press the Ctrl + Shift + Alt + R key-combination again. Once the recording is finished, your freshly-created video in the webm format will be waiting for you in the Videos folder in your home directory.

Here is a quick 10-second screencast created with the built-in screencasting feature in Fedora Workstation:

<video autoplay="1" class="wp-video-shortcode" controls="controls" height="423" id="video-12597-1" loop="1" preload="metadata" width="676"><source src="https://fedoramagazine.org/wp-content/uploads/2016/04/Screencast-from-21-04-16-213145.webm?_=1" type="video/webm">https://fedoramagazine.org/wp-content/uploads/2016/04/Screencast-from-21-04-16-213145.webm</video>

More control when making a screencast

You should note that this feature is designed for creating quick screen capture videos to share. There are a few limitations. The maximum video length is 30 seconds, there is no audio recording, and it only records your entire desktop — if you have two monitors it will record both.

If you want to have more control over screencasting, you should take a look at the awesome EasyScreenCast extension that works on Fedora Workstation. EasyScreenCast uses the same recording mechanism, but lets you record longer length screencasts, allows you to also record an audio stream, lets you specify which part of the screen to record, and can even overlay a video from your webcam.

runC and libcontainer on Fedora 23/24

In this post, I will share my notes on how I got runC working and then used libcontainer on Fedora. The first step is to install golang:

$ sudo dnf -y install golang
$ go version
go version go1.6 linux/amd64

We will set GOPATH=~/golang/ and then do the following:

$ mkdir -p ~/golang/src/github.com/opencontainers
$ cd ~/golang/src/github.com/opencontainers
$ git clone https://github.com/opencontainers/runc.git
$ cd runc

$ sudo dnf -y install libseccomp-devel
$ make
$ sudo make install

At this stage, runc should be installed and ready to use:

$ runc --version
runc version 0.0.9
commit: 89ab7f2ccc1e45ddf6485eaa802c35dcf321dfc8
spec: 0.5.0-dev

Now we need a rootfs that we will use for our container. We will use the "busybox" Docker image - pull it and export a tar archive:

$ sudo dnf -y install docker
$ sudo systemctl start docker
$ docker pull busybox
$ sudo docker export $(sudo docker create busybox) > busybox.tar
$ mkdir ~/rootfs
$ tar -C ~/rootfs -xf busybox.tar

Now that we have a rootfs, we have one final step - generate the spec for our container:

$ runc spec

This will generate a config.json file, and then we can start a container using the rootfs above. (runC expects to find config.json and rootfs in the same directory you are going to start the container from.)
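The generated file can be edited before starting the container. A trimmed, illustrative excerpt (the full file contains many more fields, and their exact layout varies by runC version):

```json
{
    "ociVersion": "0.5.0-dev",
    "process": {
        "terminal": true,
        "args": ["sh"],
        "cwd": "/"
    },
    "root": {
        "path": "rootfs",
        "readonly": true
    }
}
```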

# for some reason, i have to pass the absolute path to runc when using sudo
$ sudo /usr/local/bin/runc start test #  test is the "container-id"
/ # ps
PID   USER     TIME   COMMAND
    1 root       0:00 sh
    8 root       0:00 ps
/ # exit

Getting started with libcontainer

runC is built upon libcontainer. This means that we can write our own Go programs which start a container and do stuff in it. An example program is available here (thanks to the fine folks in #opencontainers on Freenode for helpful pointers). It starts a container using the above rootfs, runs ps in it, and exits.

Once you have saved it somewhere on your GOPATH (or go get github.com/amitsaha/libcontainer_examples), we will first need to get all the dependent packages:

$ cd ~/golang/src/github.com/amitsaha/libcontainer_examples
$ go get
$ sudo GOPATH=/home/asaha/golang go run example1.go /home/asaha/rootfs/
[sudo] password for asaha:
PID   USER     TIME   COMMAND
    1 root      0:00 ps
Flisol Panama 2016
Flisol is a big event in Latin America: many communities join to celebrate and to exchange free software and knowledge with each other and with the general public. A typical event includes free software installations, talks, and workshops.

This year Flisol Panama was organized by Jose Reyes, who took care of all the organization; many thanks for the job done.

Flisol David took place on April 9 this year, hosted by the Universidad del Itsmo (thanks for hosting the event in David). Kiara Navarro and I attended, presented talks, and shared with the local community, which brought some new members to Floss-pa, our Panamanian free software group, and some of them showed interest in becoming Fedora contributors.


Blender talk by JulianVega


On the official Flisol date, April 23, we celebrated at the Universidad Interamericana de Panama (thanks for hosting it). While we did not have the usual attendance for this event, we had a really interested crowd. The most interesting part was having two people interested in becoming Fedora packagers; both of them have started working to achieve that. Another interesting development was meeting a group of students who want to contribute software development and design to our local group Floss-pa, plus others who wanted to help with organization and learn more about free software.


Special thanks to Gonzalo Nina who soon will become a packager and full Fedora Panama contributor.

So it was a good event with many things to learn and do.

Thanks to the people who worked to make it possible; we hope to see new contributors taking on tasks in Fedora.
Plan to level up contributors with Fedora Hubs!

Fedora Hubs

What’s going on with Hubs?

So a little update for those not following closely to get you up to date:

  • We have a big milestone we’re working towards – a working version of Fedora Hubs in time for Flock. It won’t have all of the bells and whistles of the mockups that we’ve presented, but it will be usable and will hopefully demonstrate the potential of the app as well as enable more development.
  • We have a number of fantastic interns coming on board (including Devyani) who will be helping us work on Fedora Hubs this summer.
  • pingou is going to be leading development on fedora-hubs.
  • I’m clearly back from an extended leave this past winter and cranking back on mockups again. 🙂
  • ryanlerch has upgraded hubs to fedora-bootstrap so it has a fresh look and feel (which you’ll see reflected in mockups moving forward.)
  • Overall, we’ve gotten more momentum than ever before with a clear goal and timeline, so you’ll hopefully be seeing a lot more of these juicy updates more frequently!

(“Wait, what is Fedora Hubs?” you ask. This older blog post has a good summary.)

Okay, so let’s move on and talk about Hubs and Badges, particularly in light of some convos we’ve had in the regular weekly Fedora Hubs check-in meetings as well as an awesome hack session Remy D. and jflory7 pulled together last Thursday night.

Fedora Hubs + Badges – what’s old is new again

Behold, a mockup from 2011:

Fedora RPG mockup

In a sense, this is actually an early prototype/idea for Fedora Hubs + Badges integration. Remember that one of the two main goals of Fedora Hubs is to enable new Fedora users and make it easier for them to get bootstrapped into the project. Having tasks in the form of badges awarded for completing a task arranged in “missions” makes it clear and easy for new contributors to know what they can do and what they can do next to gradually build up their skills and basically ‘level up’ and feel happy, helpful, and productive. So there’s a clear alignment between badges and hubs in terms of goals.

So that was 2011, where are we going in 2016?

First thoughts about a badge widget

We have a couple of tickets relating to badges in the hubs issue tracker.

As we’ve been discovering while going through the needsmockup queue and building widgets, most widgets have at least two versions: the user version (what data in this widget relates to me? Across all projects, what bugs are assigned to me?) versus the project version (across all users, what bugs relate to this project?). You can’t just have one badges widget, because certain data related to that widget is more or less useful depending on the context it’s being viewed in.

Today, the Fedora badges widget in Hubs is not unlike the one on the Fedora wiki (I have both the sidebar version and the content side version on my profile.) It’s basically small versions of the badge icon art in a grid (screenshot from the wiki version):

screenshot of wiki badges widget

The mockup below (from issue #85) shows what a little work with the metadata we already have available can do to provide a clearer picture of the badge earner via the badges he or she has won (the left version is compressed, the right version expanded):

mockup of badges widget for hubs profiles

Layering on some more badgery

The above mockups are all just layer 0 stuff though. Layer 0? Yeh, here’s a hokey way of explaining how we’re starting to think about hubs development, particularly in the context of getting something out the door for Flock:

  • Layer 0 – stuff we already have in place in hubs, or refinements on what’s already implemented.
  • Layer 1 – new feature development at a base level – no whizbang or hoozits, and absolutely nothing involving modifications to ‘upstream’ / data-providing apps. (Remember that Hubs is really a front-end in front of fedmsg… we’re working with data coming from many other applications. If a particular type or format of data isn’t available to us, we have to modify the apps putting that data on the bus to be able to get it.)
  • Layer 2 – making things a bit nicer. We’re not talking base model here, we’re getting some luxury upgrades, but being pretty sensible about them. Maybe making some modifications to the provider apps.
  • Layer 3 – solid gold, gimme everything! This is the way we want things; having to make modifications to other apps isn’t a concern.

To get something out the door for Flock… we have to focus mostly on layer 0 and layer 1 stuff. This is hard, though, because when this team gets together we have really awesome, big, exciting ideas and it’s hard to scale back. 🙂 It’s really fun to brainstorm together and come up with those ideas too. In the name of fun, let’s talk through some of the layers we’ve been talking about for badges in hubs in particular, and through this journey introduce some of the big picture ideas we have.

Badges Layer 1: Tagging Along

An oft-requested feature of tahrir, the web app that powers badges.fedoraproject.org, is the notion of grouping badges together in a series (similar to the “missions” in the 2011 mockup above.) The badges in a series can be sequentially ordered, or they may have no particular order; some badges in a series may even have a sequential ordering while others do not.

Here’s an example of badges with a sequential ordering (this series goes on beyond these, but three examples illustrate the concept well enough):

Here’s an example of badges that are closely related but have no particular sequence or order to them:

You can see, I hope, how having these formally linked together would be a boon for onboarding contributors. If you earned the first badge artist badge, for example, the page could link you to the next in the series… you could view a summary of it and come to understand you’d need to make artwork for only four more badges to get to the next level. Even if there isn’t a specific order, having a group of badges that you have to complete to get the whole group, like a field of unchecked checkboxes (or unpopped packing bubbles), kind of gives you the urge to complete them all. (Pop!) If a set of badges correspond to a set of skills needed to ramp up for work on a given project, that set would make a nice bootstrapping set that you could make a prerequisite for any new join requests to your project hub. So on and so forth.

So here’s the BIG SECRET:

There’s no badge metadata that links these together at all.

How do we present badges in series without this critical piece of metadata? We use a system already in place – badge tags. Each series could have an agreed-upon tag, and all badges with that tag become a group. This won’t give us the sequential ordering that some of the badge series demand, but it’ll give us a good layer 1 to start. A mockup on this is forthcoming; it will also get us a nicer badge widget for project / team hubs (issue #17).
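As a rough illustration of the tag approach, here is a small Python sketch. The badge names, tag names, and data layout are invented for the example; this is not the real tahrir data model:

```python
# Hypothetical sketch of the "series via tags" idea: every badge carries
# tags, and all badges sharing an agreed-upon tag form one series/group.
from collections import defaultdict

badges = [
    {"name": "Badge Ideas I", "tags": ["badge-series-artist"]},
    {"name": "Badge Ideas II", "tags": ["badge-series-artist"]},
    {"name": "Speak Up!", "tags": ["badge-series-irc"]},
]

def group_by_tag(badges):
    """Group badge names under each tag they carry."""
    series = defaultdict(list)
    for badge in badges:
        for tag in badge["tags"]:
            series[tag].append(badge["name"])
    return dict(series)

print(group_by_tag(badges))
```

As noted above, this gives grouping but no sequential ordering within a series; ordering would need real badge metadata (layer 2).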

Badges Layer 2: Real Badge Metadata

Here’s layer 2 for the feature – and I thought this would be the end of the road before Remy set us straight (more on that in the next section on layer 3):

So this one is somewhat simple. We potentially modify the badges themselves by adding additional fields to their yaml files (example behind link), and modify tahrir, the web app that drives badges.fpo, to parse and store those new fields. I tried to piece together a plan of attack for achieving this in tahrir ticket #343.

The problem here is that this would necessarily require changing the data model. It’s possible, but also a bit of a pain, and not something you want to do routinely – so this has to be done carefully.

Part of this would also involve dropping our overloading of tags. We could then store descriptions for each badge series, store sequential ordering for individual badges, and enable a few other nice things tags couldn’t.

If we’re changing the data model for layer 2, may as well also change it for *LAYER 3!!*, which I am emphasizing out of excitement.

Layer 3: How the years I spent playing Final Fantasy games finally pay off

skill tree diagram

“A simplified example of a skill tree structure, in this case for the usage of firearms.” Created by user Some_guy on Wikipedia; used under a CC0 license.

Remy D. suggested that instead of linear and flat groupings of badges, we also add the ability to link them together into a skill tree. You may already have experience with, say, the Final Fantasy series, the Diablo series, Star Wars: The Old Republic, or other RPG-style games. Related skills are grouped together in particular zones of the tree, and depending on which zones of the tree you have filled out, you sort of fulfill a particular career path or paths. (e.g., in the case of Final Fantasy X, when you work towards filling out Lulu’s sphere grid area, you’re making your character a dark mage. When you work towards filling out Rikku’s area, you’re building skills towards becoming a skilled thief. So on, and so forth.)

Where this gets cool for Fedora is that we can not only help contributors get started and feel awesome about their progress by presenting them with clear sequences of badges to complete to earn a series (layer 2), but we can also help guide them towards building a ‘career’ or even multiple ‘careers’ (or ‘hats,’ heh) within the project and build their personal skill set as well. Today we already have five main categories for badges in terms of the artwork templates we use, but we can break these down further if need be – as-is, they map neatly to ‘careers’ in Fedora:

  • Community
  • Content
  • Development
  • Quality
  • Events

Fedora contributors could then choose to represent themselves using a radar chart (example displayed below), and others can get a quick visual sense of that contributor’s skillset:

<script async="async" src="http://jsfiddle.net/fxs2n8s5/embed/result/"></script>

So that’s layer 3. 🙂

Okay, so have you actually thought about what badges should be chained together for what teams?

Yes. 🙂 Remy D. and jflory7 started a list by researching the current onboarding procedures across a number of Fedora teams. Coming up with the actual arrangement of badges within each series is important work too, with a big influence on whether or not the system actually works for end users! The suggestions Remy and Justin put together are badges new contributors should complete while getting bootstrapped and ready to contribute to the corresponding team.

In some cases these involve existing badges; in other cases, additional badges we need to create to support the scenario have been uncovered. (This is great work, because over time badges have tended to be unbalanced, rewarding folks involved in packaging and those who go to a lot of events more than others. It makes sense – the packaging infrastructure was the first part of Fedora’s infrastructure to get hooked up to fedmsg, IIRC, so that data was more readily available.)

Here’s an excerpt of the first-cut of that work by Justin and Remy:

Ambassadors
  1. Get a FAS Account (sign the FPCA) (Involvement)
  2. Create a User Wiki Page
  3. Join Mailing Lists and IRC Channels
  4. Contact a Regional Mentor, get sponsored
  5. Get mentor approval
  6. Attend regional ambassador meeting, introduce yourself
CommOps
  1. If no fas account, create one (Involvement)
  2. Intro to commops mailing list
  3. Join IRC #fedora-commops
  4. Get with a mentor and start writing / editing blog / fed mag articles
Design
  1. Create a FAS account (sign the FPCA) (Involvement)
  2. Join the mailing list, introduce yourself: https://admin.fedoraproject.org/mailman/listinfo/design-team
  3. Claim a ticket in the Design Trac: https://fedorahosted.org/design-team/report/1
  4. Update your ticket, send updated information to Design List
  5. Once work is approved, request an invite for your FAS username for the design-team group on the design team list: https://admin.fedoraproject.org/mailman/listinfo/design-team
  6. Add yourself to the contributors list: http://fedoraproject.org/wiki/Artwork/Contributors
  7. Attend Design Team IRC meeting? (Speak Up)
  8. Subscribe to the design tasks fedocal: https://fedorapeople.org/groups/schedule/f-24/
Documentation
  1. Create a FAS Account (sign the FPCA) (Involvement)
  2. Create a GPG Key, and upload it to keyservers, one of which being keys.fedoraproject.org (Crypto Panda)
  3. Write a self-introduction to the mailing list with some ideas on how you would like to contribute: https://fedoraproject.org/wiki/Introduce_yourself_to_the_Docs_Project
  4. Create your own user wiki page, or update it with new info if one exists from another project (Let me Introduce myself Badge)
  5. Attend the freenode.net Internet Relay Chat channel #fedora-meeting meetings on Mondays. (Speak Up Badge)
  6. Hang out in the freenode.net Internet Relay Chat channel #fedora-docs
  7. Interact with other Fedora contributors (how to use fasinfo, look up others' wiki user pages, ask for sponsorship)
  8. Make a contribution: Choose an item from this page: https://fedoraproject.org/wiki/How_to_contribute_to_Docs
  9. Post to mailing list, describing which contribution you want to make, asking for feedback
  10. Post to mailing list with links to your contribution
Marketing
  1. Create a FAS Account (and sign the FPCA) (Involvement)
  2. Join the mailing list and introduce yourself: https://fedoraproject.org/wiki/Introduce_yourself_to_the_marketing_group
  3. Choose a marketing task you’d like to help with, and post to the mailing list asking for feedback: https://fedoraproject.org/wiki/Marketing/Schedule
  4. Post to the mailing list with a link to your contribution.
  5. Request to join the marketing group in FAS

Hopefully that gives a better picture of the specifics, and of what some of the bootstrapping series in particular would involve. You can see here why a skill tree makes more sense than flat badge series – you only need to create one FAS account, and you only need to participate in IRC and on a mailing list once initially to learn how those things work before you move on. So with this system, you could learn those “skills” while joining any team, and what completes the series for any particular team are the higher-numbered badges in that team’s bootstrap series. (Hope that makes sense.)
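The shared-prerequisite idea can be sketched in Python. The badge names and prerequisite links here are invented for illustration, not a real Fedora badge schema:

```python
# Hypothetical skill-tree sketch: a badge becomes "unlockable" once all of
# its prerequisites are earned. Shared skills (FAS account, IRC intro) sit
# at the root and feed every team's higher-numbered badges.
prereqs = {
    "create-fas-account": [],
    "speak-up": ["create-fas-account"],
    "ambassador-sponsored": ["create-fas-account", "speak-up"],
}

def unlockable(badge, earned):
    """True if badge is not yet earned and all its prerequisites are."""
    return badge not in earned and all(p in earned for p in prereqs[badge])

earned = {"create-fas-account"}
print(unlockable("speak-up", earned))              # unlockable now
print(unlockable("ambassador-sponsored", earned))  # still locked
```

Earning the shared root badges once would unlock the first team-specific badge on every branch of the tree, which is exactly the "learn it once, reuse it for any team" behavior described above.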

Get involved in this business!

we need your help!

Help us build fun yet effective RPG-like components into a platform that can power the free software communities of the future! How do you start? Sadly, we do not have the badge series / skill tree feature done yet, so I can’t simply point you at that. But here’s what I can point you to:

  • hubs-devel Mailing List – our list is powered by HyperKitty, so you don’t even need to have mail delivered to your inbox to participate! Mostly our weekly meeting minutes are posted here. I try to post summaries so you don’t have to read the whole log.
  • The Fedora Hubs Repo – the code with instructions on how to build a development instance and our issue tracker which includes tickets discussed above and many more!
  • Fedora Hubs weekly check-in meeting – our weekly meeting is at 14:00 UTC on Tuesdays in #fedora-hubs. Come meet us!

What do you think?

Questions? Comments? Feedback? Hit me up in the comments (except, don’t literally hit me. Because mean comments make me cry.)

kasapanda

April 26, 2016

Fedora Media Writer – The fastest way to create Live-USB boot media

This post will provide a quick tutorial about Fedora Media Writer, and its usage in both Fedora and Windows. Fedora Media Writer is a very small, lightweight, comprehensive tool that simplifies the linux getting started experience – it downloads and …

All systems go
New status good: Everything seems to be working. for services: The Koji Buildsystem, COPR Build System
Major service disruption
New status major: Build system network outage. Being looked into for services: COPR Build System, The Koji Buildsystem
How to create Fedora feed in Jekyll

Fedora Planet. A world of Fedora contributors

Across the Linux communities, there are several people who write and maintain their own blogs in all four corners of the world. From beginners to professionals, a lot of content is posted every day, and information at all levels is available on the Internet. What about staying in touch with the Fedora people who publish on the web?

The answer is Fedora Planet. It’s an aggregator service for blogs available to contributors within the Fedora Project community.

Using categories or tags on their own blogs, contributors can “link” their blog into Fedora Planet to have any posts they publish with the matching category or tag also appear on the Fedora Planet website. You can learn more about it on the Community Blog in this article.

In Jekyll, we do everything by hand

Ok, I'm a contributor to the Fedora Project and I have a small blog. So, I would like to see my posts on Fedora Planet.

Just a little problem: my blog is made with Jekyll and hosted on GitHub. This combination is amazing and simple, far from heavyweight blog platforms like WordPress and others, but it's also minimal by default, without an easy-to-use plug-in system. In Jekyll, everything is done by hand.

So, let’s go and let me develop a Fedora tagged RSS feed system for Jekyll.

JFF. The Jekyll Fedora Feed

In recent versions of Jekyll, adding an RSS feed to a blog is really simple.

You only need to create a file called feed.xml in the root of the site with the following contents:

---
layout: null
---
<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>{{ site.name | xml_escape }} - Articles</title>
    <description>{% if site.description %}{{ site.description | xml_escape }}{% endif %}</description>
    <link>{{ site.url }}</link>
    {% for post in site.posts %}
      {% unless post.link %}
      <item>
        <title>{{ post.title | xml_escape }}</title>
        {% if post.excerpt %}
          <description>{{ post.excerpt | xml_escape }}</description>
        {% else %}
          <description>{{ post.content | xml_escape }}</description>
        {% endif %}
        <pubDate>{{ post.date | date: "%a, %d %b %Y %H:%M:%S %z" }}</pubDate>
        <link>{{ site.url }}{{ post.url }}</link>
        <guid isPermaLink="true">{{ site.url }}{{ post.url }}</guid>
      </item>
      {% endunless %}
    {% endfor %}
  </channel>
</rss>

Then, in _config.yml add this code with your data:

...
name: Your blog name
description: Your amazing description.
url: http://your-url.com
...

Finally, add the link to the RSS feed to <head> in _layouts/default.html or where you put that part of code:

...
<link rel="alternate" type="application/rss+xml" title="Your Site RSS" href="/feed.xml" />
...

Ok, now your feed is working, but if you give this feed to Fedora Planet, people will read all your posts, including those that have nothing to do with Fedora. I tried to modify the code to implement selection by tags and categories, but nothing worked well.

So I decided to make a specific feed for Fedora-related articles, which is perfect for the Planet. This is my solution:

---
layout: null
---
<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>{{ site.title | xml_escape }}</title>
    <description>{{ site.description | xml_escape }}</description>
    <link>{{ site.url }}{{ site.baseurl }}/</link>
    <atom:link href="{{ "/fedora-feed.xml" | prepend: site.baseurl | prepend: site.url }}" rel="self" type="application/rss+xml"/>
    <pubDate>{{ site.time | date_to_rfc822 }}</pubDate>
    <lastBuildDate>{{ site.time | date_to_rfc822 }}</lastBuildDate>
    <generator>Jekyll v{{ jekyll.version }}</generator>
    {% for post in site.tags.Fedora limit:10 %}
      <item>
        <title>{{ post.title | xml_escape }}</title>
        <description>{{ post.content | xml_escape }}</description>
        <pubDate>{{ post.date | date_to_rfc822 }}</pubDate>
        <link>{{ post.url | prepend: site.baseurl | prepend: site.url }}</link>
        <guid isPermaLink="true">{{ post.url | prepend: site.baseurl | prepend: site.url }}</guid>
	{% for tag in post.tags %}<category term="{{ tag }}"/>{% endfor %}
        {% for tag in page.tags %}
        <category>{{ tag | xml_escape }}</category>
        {% endfor %}
        {% for cat in page.categories %}
        <category>{{ cat | xml_escape }}</category>
        {% endfor %}
      </item>
    {% endfor %}
  </channel>
</rss>

Save that as fedora-feed.xml in the root of your site, and remember to tag your posts with the word Fedora (or change the tag in the code above).
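For clarity, the Liquid loop in the feed (for post in site.tags.Fedora limit:10) behaves roughly like this Python filter; the post data here is made up for the example:

```python
# What the feed's Liquid loop does, expressed in Python: keep only posts
# tagged "Fedora" and cap the result at 10 items.
posts = [
    {"title": "Fedora 24 notes", "tags": ["Fedora", "Linux"]},
    {"title": "My holiday photos", "tags": ["Personal"]},
    {"title": "DNF tips", "tags": ["Fedora"]},
]

fedora_posts = [p for p in posts if "Fedora" in p["tags"]][:10]
print([p["title"] for p in fedora_posts])
```

Only the Fedora-tagged posts survive the filter, which is why this feed is safe to hand to the Planet.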

At last, join the Planet following these instructions and provide a .planet file like this:

[http://your-blog-with-jekyll.com/fedora-feed.xml]
name = Your Name
face = http://yourname.fedorapeople.org/yourpicture.png

Finally, it may take some time for the server to sync, but you can then check the Planet to see whether your blog posts appear there.

Mismatch in Pyparted Interfaces

Last week my co-worker Marek Hruscak, from Red Hat, found an interesting mismatch between the two interfaces provided by pyparted. In this article I'm going to give an example, using simplified code, and explain what is happening. From pyparted's documentation we learn the following:

pyparted is a set of native Python bindings for libparted. libparted is the library portion of the GNU parted project. With pyparted, you can write applications that interact with disk partition tables and filesystems.

The Python bindings are implemented in two layers. Since libparted itself is written in C without any real implementation of objects, a simple 1:1 mapping of externally accessible libparted functions was written. This mapping is provided in the _ped Python module. You can use that module if you want to, but it's really just meant for the larger parted module.

_ped       libparted Python bindings, direct 1:1: function mapping
parted     Native Python code building on _ped, complete with classes,
           exceptions, and advanced functionality.

The two interfaces are the _ped and parted modules. As a user I expect them to behave exactly the same but they don't. For example some partition properties are read-only in libparted and _ped but not in parted. This is the mismatch I'm talking about.
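The difference boils down to how Python properties are declared. Here is a minimal sketch with illustrative classes (not the real pyparted code):

```python
# A property with only a getter is read-only and raises AttributeError on
# assignment (like _ped); a property with both a getter and a setter
# silently accepts assignment (like parted's Partition.geometry).
class PedLike:
    def __init__(self):
        self._geom = "old"
    geom = property(lambda s: s._geom)  # no setter: read-only

class PartedLike:
    def __init__(self):
        self._geometry = "old"
    geometry = property(lambda s: s._geometry,
                        lambda s, v: setattr(s, "_geometry", v))  # read-write

p = PedLike()
try:
    p.geom = "new"
except AttributeError:
    print("geom is read-only")

q = PartedLike()
q.geometry = "new"  # succeeds, no exception raised
print(q.geometry)
```

The tests below probe exactly this: assignment to the _ped property raises, while assignment to the parted property does not.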

Consider the following tests (also available on GitHub)

diff --git a/tests/baseclass.py b/tests/baseclass.py
index 4f48b87..30ffc11 100644
--- a/tests/baseclass.py
+++ b/tests/baseclass.py
@@ -168,6 +168,12 @@ class RequiresPartition(RequiresDisk):
         self._part = _ped.Partition(disk=self._disk, type=_ped.PARTITION_NORMAL,
                                     start=0, end=100, fs_type=_ped.file_system_type_get("ext2"))
 
+        geom = parted.Geometry(self.device, start=100, length=100)
+        fs = parted.FileSystem(type='ext2', geometry=geom)
+        self.part = parted.Partition(disk=self.disk, type=parted.PARTITION_NORMAL,
+                                    geometry=geom, fs=fs)
+
+
 # Base class for any test case that requires a hash table of all
 # _ped.DiskType objects available
 class RequiresDiskTypes(unittest.TestCase):
diff --git a/tests/test__ped_partition.py b/tests/test__ped_partition.py
index 7ef049a..26449b4 100755
--- a/tests/test__ped_partition.py
+++ b/tests/test__ped_partition.py
@@ -62,8 +62,10 @@ class PartitionGetSetTestCase(RequiresPartition):
         self.assertRaises(exn, setattr, self._part, "num", 1)
         self.assertRaises(exn, setattr, self._part, "fs_type",
             _ped.file_system_type_get("fat32"))
-        self.assertRaises(exn, setattr, self._part, "geom",
-                                     _ped.Geometry(self._device, 10, 20))
+        with self.assertRaises((AttributeError, TypeError)):
+#            setattr(self._part, "geom", _ped.Geometry(self._device, 10, 20))
+            self._part.geom = _ped.Geometry(self._device, 10, 20)
+
         self.assertRaises(exn, setattr, self._part, "disk", self._disk)
 
         # Check that values have the right type.
diff --git a/tests/test_parted_partition.py b/tests/test_parted_partition.py
index 0a406a0..8d8d0fd 100755
--- a/tests/test_parted_partition.py
+++ b/tests/test_parted_partition.py
@@ -23,7 +23,7 @@
 import parted
 import unittest
 
-from tests.baseclass import RequiresDisk
+from tests.baseclass import RequiresDisk, RequiresPartition
 
 # One class per method, multiple tests per class.  For these simple methods,
 # that seems like good organization.  More complicated methods may require
@@ -34,11 +34,11 @@ class PartitionNewTestCase(unittest.TestCase):
         # TODO
         self.fail("Unimplemented test case.")
 
-@unittest.skip("Unimplemented test case.")
-class PartitionGetSetTestCase(unittest.TestCase):
+class PartitionGetSetTestCase(RequiresPartition):
     def runTest(self):
-        # TODO
-        self.fail("Unimplemented test case.")
+        with self.assertRaises((AttributeError, TypeError)):
+            #setattr(self.part, "geometry", parted.Geometry(self.device, start=10, length=20))
+            self.part.geometry = parted.Geometry(self.device, start=10, length=20)
 
 @unittest.skip("Unimplemented test case.")
 class PartitionGetFlagTestCase(unittest.TestCase):

The test in test__ped_partition.py works without problems; I've modified it for visual reference only. It was also the inspiration behind the test in test_parted_partition.py. However, the second test fails with:

======================================================================
FAIL: runTest (tests.test_parted_partition.PartitionGetSetTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/tmp/pyparted/tests/test_parted_partition.py", line 41, in runTest
    self.part.geometry = parted.Geometry(self.device, start=10, length=20)
AssertionError: (<type 'exceptions.AttributeError'>, <type 'exceptions.TypeError'>) not raised

----------------------------------------------------------------------

Now it's clear that something isn't quite the same between the two interfaces. If we look at src/parted/partition.py we see the following snippet

fileSystem = property(lambda s: s._fileSystem, lambda s, v: setattr(s, "_fileSystem", v))
geometry = property(lambda s: s._geometry, lambda s, v: setattr(s, "_geometry", v))
system = property(lambda s: s.__writeOnly("system"), lambda s, v: s.__partition.set_system(v))
type = property(lambda s: s.__partition.type, lambda s, v: setattr(s.__partition, "type", v))

The geometry property is indeed read-write but the system property is write-only. git blame leads us to the interesting commit 2fc0ee2b, which changes definitions for quite a few properties and removes the _readOnly method which raises an exception. Even more interesting is the fact that the Partition.geometry property hasn't been changed. If you look closer you will notice that the deleted definition and the new one are exactly the same. Looks like the problem existed even before this change.

Digging down even further we find commit 7599aa1 which is the very first implementation of the parted module. There you can see the _readOnly method and some properties like path and disk correctly marked as such but geometry isn't.

Shortly after this commit the first test was added (4b9de0e) and a bit later the second, empty test class, was added (c85a5e6). This only goes to show that every piece of software needs appropriate QA coverage, which pyparted was kind of lacking (and I'm trying to change that).

The reason this bug went unnoticed for so long is the limited exposure of pyparted. To my knowledge anaconda, the Fedora installer, is its biggest (if not only) consumer, and either it uses only the _ped interface (I didn't check) or it doesn't try to do silly things like setting a value on a read-only property.

The lesson from this story is to test all of your interfaces and also make sure they are behaving in exactly the same manner!

Thanks for reading and happy testing!

Using the Java Security Manager in Enterprise Application Platform 7

One of the major enhancements added in JBoss Enterprise Application Platform (EAP) 6 was modular classloading. Combined with the use of a Java Security Manager in EAP 7, we now have a flexible extra layer of security against threats to Java Enterprise Applications. However, that extra layer of security still comes at a cost in performance.

The main difference between EAP 6 and 7 is that EAP 7 implements the Java Enterprise Edition 7 specification. Part of that specification is the ability to add Java Security Manager permissions per application. In practice, the application server defines a minimum set of policies that must be enforced, as well as a maximum set of policies that an application is allowed to grant to itself.

Let’s say we have a web application which wants to read Java System Properties. For example:

System.getProperty("java.home");

If you ran with the Security Manager enabled, this call would throw an AccessControlException. In order to enable the security manager:

Start JBoss EAP 7 with the following option:

-secmgr

Or set SECMGR to "true" in the standalone or domain configuration files.

Now if you added the following permissions.xml file to the META-INF folder in the application archive, you could grant permissions for the Java System Property call:

Add to META-INF/permissions.xml of application:

<permissions ..>
        <permission>
                <class-name>java.util.PropertyPermission</class-name>
                <name>*</name>
                <actions>read,write</actions>
        </permission>
</permissions>

The custom Security Manager in EAP 7 also provides some extra methods for doing unchecked actions. These can be used by developers instead of PrivilegedActions in order to improve the performance of the application. There are a few of these optimized methods:

  • getPropertyPrivileged
  • getClassLoaderPrivileged
  • getCurrentContextClassLoaderPrivileged
  • getSystemEnvironmentPrivileged

Out of the box, EAP 7 ships with a minimum and maximum policy like so:

$EAP_HOME/standalone/configuration/standalone.xml:

        <subsystem xmlns="urn:jboss:domain:security-manager:1.0">
            <deployment-permissions>
                <maximum-set>
                    <permission class="java.security.AllPermission"/>
                </maximum-set>
            </deployment-permissions>
        </subsystem>

That doesn't enforce any particular permissions on applications, and grants them AllPermission if they don't define their own. If an administrator wanted to grant at least the permission to read System Properties to all applications, they could add this policy:

$EAP_HOME/standalone/configuration/standalone.xml:

        <subsystem xmlns="urn:jboss:domain:security-manager:1.0">
            <deployment-permissions>
                <minimum-set>
                    <permission class="java.util.PropertyPermission" name="*" actions="read,write"/>
                </minimum-set>
                <maximum-set>
                    <permission class="java.security.AllPermission"/>
                </maximum-set>
            </deployment-permissions>
        </subsystem>

Alternatively, if they wanted to restrict all permissions for all applications except a specific FilePermission, they could use a maximum policy like so:

        <subsystem xmlns="urn:jboss:domain:security-manager:1.0">
            <deployment-permissions>
                <maximum-set>
                    <permission class="java.io.FilePermission" name="/tmp/abc" actions="read,write"/>
                </maximum-set>
            </deployment-permissions>
        </subsystem>

Doing so would mean that the previously described web application, which required PropertyPermission, would fail to deploy, because it tries to grant itself a permission on Properties which the application administrator has not allowed. There will be a chapter on using the Security Manager in the official documentation for EAP 7 when it goes GA.
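The minimum/maximum logic can be pictured as simple set arithmetic. The following Python sketch is purely a conceptual model, not EAP code (real Permission objects have "implies" semantics rather than set membership): an application's effective permissions are its own requested permissions plus the minimum set, and deployment fails if any requested permission falls outside the maximum set.

```python
# Conceptual model of EAP 7's deployment-permission check (illustration only).

def effective_permissions(requested, minimum, maximum):
    """Return the set of permissions an application runs with,
    or raise if the request exceeds the administrator's maximum set."""
    if not requested:
        # No permissions.xml in the app: it gets the maximum set
        # (AllPermission in the default configuration) plus the minimum.
        return set(minimum) | set(maximum)
    excess = set(requested) - set(maximum)
    if excess:
        raise ValueError(f"deployment fails, not granted: {excess}")
    return set(requested) | set(minimum)

# A FilePermission-only maximum rejects the PropertyPermission app:
maximum = {"FilePermission:/tmp/abc:read,write"}
try:
    effective_permissions({"PropertyPermission:*:read,write"}, set(), maximum)
except ValueError as e:
    print(e)
```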

Enabling the Security Manager after development of an application can be troublesome, because a developer would then need to add the correct policies one at a time, as the AccessControlExceptions were hit. However, the custom Security Manager in EAP 7 will have a debug mode which, if enabled, doesn't enforce permission checks but logs violations of the policy. In this way, a developer could see all the permissions which need to be added after one test run of the application. This feature hasn't been backported from upstream yet; we raised a request to get it backported here. In the EAP 7 GA release you can get extra information about access violations by enabling DEBUG logging for org.wildfly.security.access.

When you run with the Security Manager in EAP 7, each module is able to declare its own set of unique permissions. If you don't define permissions for a module, a default of AllPermission is granted. Being able to define Security Manager policies per module is powerful, because you can limit the security impact if a sensitive or vulnerable feature of the application server is compromised. That gives Red Hat the ability to provide a workaround for a known security vulnerability via a configuration change to a module which limits the impact. For example, to restrict the permissions of the JGroups modules to only what is required, you could add the following permissions block to the JGroups module.xml:

$EAP_HOME/modules/system/layers/base/org/jgroups/main/module.xml:

   <permissions>
       <grant permission="java.io.FilePermission" name="${env.EAP_HOME}/modules/system/layers/base/org/jgroups/main/jgroups-3.6.6.Final-redhat-1.jar" actions="read"/>
       <grant permission="java.util.PropertyPermission" name="jgroups.logging.log_factory_class" actions="read"/>
       <grant permission="java.io.FilePermission" name="${env.EAP_HOME}/modules/system/layers/base/org/jboss/as/clustering/jgroups/main/wildfly-clustering-jgroups-extension-10.0.0.CR6-redhat-1.jar" actions="read"/>
       ...
   </permissions>

In EAP 7 GA the use of ${env.EAP_HOME} as above won't work yet. That feature has been implemented upstream and backporting can be tracked here. It will make file paths portable between systems by adding support for System Property and Environment Variable expansion in module.xml permission blocks, making the release of generic security permissions viable.

While the Security Manager could be used to provide multi-tenancy for the application server, Red Hat does not think it is suitable for that. Our Java multi-tenancy in OpenShift is achieved by running each tenant's application in a separate Java Virtual Machine, with the Operating System providing sandboxing via SELinux.

In conclusion, EAP 7 introduced a custom Java Security Manager which allows an application developer to define security policies per application, while also allowing an application administrator to define security policies per module, or a minimum or maximum set of security permissions for applications. Enabling the Security Manager will have an impact on performance. We recommend taking a holistic approach to the security of the application, and not relying solely on the Security Manager.

For more information see this presentation slide deck by David Lloyd.

ThatOneTimeI...
<figure>

<figcaption>src: https://openclipart.org/detail/61171/old-key</figcaption> </figure>

...replied to an important email thread, which caused me to discover a bug, which I reported, to then catch wind of a possible solution, which to attempt I had to hack my .ssh/config, to add a ProxyCommand, to go through a bastion host (which is more than just a hostname I found out today), to clone an ansible repo, hack on a python config, and wield git format-patch and interactive rebasing to squash commits, to attach a clean patch to a ticket, that got shouted out on the mailing list, and accepted during an infra freeze.

If that garbled mess of jargon didn't make any sense to you, fret not. You should find comfort in that at one point in my life--not even that long ago--it would not have made any sense to me either...

Don't be afraid to ask (lots of) questions.
Don't be afraid to get (very) productively lost.
Don't be afraid to be a (complete) beginner.
Don't be afraid.
Slightly Richer Man's CI

Not so long ago I wrote about my attempts to bring CI to Pagure. It was pointed out to me that a couple of assumptions I made are actually incorrect.

Here are the errata:

  • When a pull request is updated or rebased, there is no need to check the message body. Pagure already puts this information into the message (look for notification: true).

  • Using comments for indicating status is too clumsy, especially given the fact there is a feature designed to communicate exactly this type of information – flags. Setting a flag is pretty much the same as posting a comment, but they appear in a sidebar with a link and some text. You can also add a percentage to it that will determine the color of a badge.

  • The Fedmsg hook that you can enable in the project settings is actually not required. I misunderstood what it does. The notifications on new pull requests get sent automatically, without any change in configuration required.

    The hook is actually a git post-receive hook that will send you tons of e-mail through the Fedora notification system whenever a commit is pushed to master without going through a pull request.

    UPDATE: Since Pagure 2.0 the hook will send only one e-mail on each push.

Anyway, fixing these is quite simple.

While the setup described in previous post worked fine for my use case, it was not ideal. One of the biggest issues is the configuration: adding stuff to /etc is not a scalable model. First order of business was to create a web interface where the configuration could be managed. Added benefit: with a decent authentication system (yes, FAS does nicely) it's possible for anyone to configure their integration points.

<figure> Screenshot of index page<figcaption>Screenshot of index page</figcaption> </figure>

Jenkins wants a voice too

Another big drawback is the dependency on Fedora Infrastructure Jenkins. While setting up your own Jenkins is easy (approx. three clicks on OpenShift), connecting it to production Fedmsg is probably not (I don't know, did not try).

Since there is now a web server for the interface, it is not a big step to support web hooks. There is a Notification plugin for Jenkins that can ping a URL with JSON data whenever a job finishes.

The real message contains a ton of data, but for this use-case only project name and build number are really interesting. As long as this data is supplied, we are happy.

{
    "name": "asgard",
    "build": {
        "number": 1
    }
}

If instead of web requests the plugin gives you stack traces, try setting log lines to 1.
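For reference, pulling the two interesting fields out of such a payload takes only a few lines. This is a minimal sketch, not the service's actual code, and the handler name is hypothetical:

```python
import json

def handle_jenkins_notification(payload: str):
    """Extract the project name and build number from a Jenkins
    Notification-plugin payload; everything else is ignored."""
    data = json.loads(payload)
    return data["name"], data["build"]["number"]

name, number = handle_jenkins_notification('{"name": "asgard", "build": {"number": 1}}')
print(name, number)  # asgard 1
```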

<figure> Plugin configuration<figcaption>Plugin configuration</figcaption> </figure>

(No, that is not an actual token in the shown URL. No need to try.)

Here we are and there we go

The semi-finished service is available at http://poormanci.lsedlar.cz/. The documentation really is lacking, though. The best guide on how to use it is the previous blog post or a somewhat work-in-progress help page.

Now there is still a ton of things to improve. Currently, all requests to external services are sent directly from the fedmsg consumer or web app process. Since these are blocking and could potentially take a long time, it's a prime candidate for denial-of-service. I need to refactor this into a separate worker process.
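One common shape for that refactoring, sketched below purely as an illustration of the idea (not the actual code): the fedmsg consumer and web app only enqueue work, and a separate worker drains the queue, so slow external requests can no longer block message processing.

```python
import queue
import threading

work_queue = queue.Queue()

def enqueue(job):
    """Called from the fedmsg consumer / web app: cheap and non-blocking."""
    work_queue.put(job)

def worker(handle):
    """Runs in a separate thread (or process): performs the slow
    external HTTP requests one job at a time."""
    while True:
        job = work_queue.get()
        if job is None:  # sentinel to shut down
            break
        handle(job)
        work_queue.task_done()

# Demo: collect handled jobs; the enqueue calls return immediately.
handled = []
t = threading.Thread(target=worker, args=(handled.append,))
t.start()
enqueue({"pr": 1})
enqueue({"pr": 2})
enqueue(None)
t.join()
print(handled)  # [{'pr': 1}, {'pr': 2}]
```

In a real deployment the in-process queue would more likely be a message broker or a separate worker daemon, but the blocking/non-blocking split is the same.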

Another thing to add would be the support for a web hook sent from Pagure. This would make it possible to use a custom instance. First I need to learn what data is actually sent in the HTTP request.

UPDATE: This is actually documented and the hook contains the same information that the Fedmsg notification has.

Another point of improvement is the deployment of the whole thing. Currently, I build everything in COPR, install the RPM and do any database migrations almost by hand. The initial setup was also manual. I plan to write an Ansible playbook to make future deployments simpler. It will also document the process a bit.

Next item on the list is support for multi-configuration projects in Jenkins. The Matrix plugin in Jenkins allows a single test suite to run on multiple builders (e.g. with different Python versions). I want to support such configuration too, but the Fedmsg integration will not do here. The messages from such builds are not particularly helpful. I have reported the issue upstream, but I'm really in no position to go fix the Java code.

The reporting from the notification plugin works well, though.

April 25, 2016

FLISoL Venezuela 2016 in Numbers

First of all, CONGRATULATIONS to those who organized each one of the FLISoLs that were held in Venezuela, and THANK YOU to the hundreds of visitors that went to each one of those locations; without you, none of this would have been possible. From my position as National Coordinator I was able to see how 17 cities from our country joined the largest Open Source celebration in LATAM, at large locations and small ones, some with months of planning and some with just days. What matters is to multiply the knowledge that can help many.

As some might know, one of my tasks was to help with the social accounts, handle our @flisolve Twitter account, add the cities to flisol.org.ve and help those who found it difficult to edit the official wiki. I wanted to share with you some numbers about how FLISoL went: the reception and movement around the networks.


Twitter

April Stats

Top Tweet
<figure class="wpb_wrapper vc_figure">
toptwit
</figure>

  • Impressions: 66,500
  • New followers: 134
  • Published tweets: 361
  • Mentions: 640

FLISoL Stats - April 23

Total Views

  • Impressions: 13,854

Trending Topic

Total Interactions

  • Published tweets: 83
  • Clicks to links: 96
  • Likes: 39
  • Replies: 29
  • Retweets: 171
  • Mentions: 210
  • Interactions: 162

Tweets per Hour

Flisol.org.ve

April Stats

  • Views: 453
  • Form contacts (emails): 95

Most Viewed Pages

  • Home: 120
  • Location: 93
  • Contact: 89

Flisol.info

April Stats

  • Cities: 17
  • Wiki edits: 164

This post has nicer formatting at its original source, tatica.org, so feel free to follow the link and read the better version!
New badge: Docs FAD 2016 !
Docs FAD 2016: You teamed up with Docs folks at the 2016 FAD!
Blender and XDG App

For a while now, I've been wanting to play with XDG App.

And regularly, I was reading people on IRC saying that someone should make an XDG App bundle for Blender.

So I figured I'd have a go at building the latest Blender release 2.77.

This being my first XDG App, and Blender being such a big project, I must admit it wasn't the easiest packaging-related thing I've ever done.

But thanks to Alexander's help on IRC, and after much testing from Alexandre, I now have something that actually seems to work.

If you want to give it a try, follow these steps:

$ wget https://sdk.gnome.org/keys/gnome-sdk.gpg
$ xdg-app --user remote-add --gpg-import=./gnome-sdk.gpg gnome https://sdk.gnome.org/repo/
$ xdg-app --user install gnome org.freedesktop.Platform 1.4
$ xdg-app --user remote-add --no-gpg-verify bochecha https://www.daitauha.fr/static/xdg-app/repo-apps
$ xdg-app --user install bochecha org.blender.app

At this point, you should be able to run Blender from the XDG App bundle:

$ xdg-app run org.blender.app

It should also work if you try running it from the GNOME Shell application picker.

Since that's my first XDG App, it is very possible that I missed a few things, so do let me know how it works for you and if something is missing.

Refreshed Look of Fedora Developer Portal

I have just deployed a new version of the Fedora Developer Portal. The most visible part is the refreshed look with a more uniform layout. I have also compressed all the images in titles (from ~1.2MB to ~50kB on average), so loading should be much faster.

There is also a new section called Start a project. This section contains (or will contain) content from a problem-to-be-solved perspective, like ‘Create a Website’ or ‘Work with Arduino’. This new section is the counterpart of Languages and Databases, which is a ‘language-specific’ section.

As I have already mentioned, the new section is partly empty. This is another new feature – we have also published a selection of unfinished/empty pages that we would like to include. And this is where you come in! You can help us by contributing content. All information about contributing is on our new site for contributors – also linked from the portal.

If you would like to get involved, get in touch with us on our mailing list or ping us on IRC in #developer-portal on Freenode.

Installing a VM Fedora 23 Cloud Atomic with KVM – Part 1

Step 1 :

Download Fedora Cloud Atomic

Step 2:

Create the files used to generate init.iso:
user-data – here you can configure your SSH keys and the password for the user fedora
Download user-data
meta-data – here you can configure the instance-id name and the FQDN of the VM
Download meta-data
init.iso will be used during the boot process to apply the data above
Download init.iso
If you want to generate the init.iso yourself, just run this command:

~/l/r/v/cloud.init genisoimage -output cloudinit.iso -volid cidata -joliet  -rock user-data meta-data 
I: -input-charset not specified, using utf-8 (detected in locale settings)
Total translation table size: 0
Total rockridge attributes bytes: 331
Total directory bytes: 0
Path table size(bytes): 10
Max brk space used 0
183 extents written (0 MB)
~/l/r/v/cloud.init 
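For reference, a minimal user-data file for cloud-init might look like the following. Treat it as an illustrative sketch rather than the exact downloadable file; only the password atomicfedora and the hostname fedora.atomic.cloud come from this post, everything else is an assumption:

```
#cloud-config
password: atomicfedora
chpasswd: { expire: False }
ssh_pwauth: True
```

The matching meta-data file carries the instance identity (the instance-id value here is hypothetical):

```
instance-id: fedora-atomic-001
local-hostname: fedora.atomic.cloud
```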

Steps to create the VM:

(series of screenshots)

Steps to Connect:

Then connect to the server with the correct IP address, the user fedora, and the password set in user-data. If you use my init.iso, the password is atomicfedora.

Step to detect network interface:

Now you need to change the IP address from DHCP to static; edit the name of your interface accordingly.

[fedora@fedora ~]$ pwd
/home/fedora
[fedora@fedora ~]$ hostname -f
fedora.atomic.cloud
[fedora@fedora ~]$ ip a s |grep -v lo | egrep -i "^2:*"|awk  '{print $2}'|sed 's/://'
ens3
[fedora@fedora ~]$ 

 

Step to configure fixed network:

Fill in the values for your correct IP range:

[fedora@fedora network-scripts]$ cat ifcfg-ens3 
# Generated by dracut initrd
NAME="ens3"
DEVICE="ens3"
ONBOOT=yes
NETBOOT=yes
UUID="6092d4c8-9536-47b1-bf42-9567bd570863"
IPV6INIT=no
TYPE=Ethernet
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
PEERDNS=yes
PEERROUTES=yes
DNS1=192.168.122.1
IPADDR=192.168.122.233
GATEWAY=192.168.122.1
NETMASK=255.255.255.0
[fedora@fedora network-scripts]$ 

Then bring the interface down and up:

[fedora@fedora network-scripts]$ sudo su -
-bash-4.3# ifdown ens3 && ifup ens3
Device 'ens3' successfully disconnected.
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/1)
-bash-4.3# 

Then test the network: ping, DNS, etc.

 

Step to check Atomic:

-bash-4.3# atomic host status
  TIMESTAMP (UTC)         VERSION    ID             OSNAME            REFSPEC                                                
* 2016-04-19 19:04:34     23.106     05052ae3bb     fedora-atomic     fedora-atomic:fedora-atomic/f23/x86_64/docker-host     
-bash-4.3# 

Get some info about the container we will install next, and about our Docker configuration.

-bash-4.3# atomic info fedora/cockpitws
INSTALL: /usr/bin/docker run -ti --rm --privileged -v /:/host IMAGE /container/atomic-install
UNINSTALL: /usr/bin/docker run -ti --rm --privileged -v /:/host IMAGE /cockpit/atomic-uninstall
RUN: /usr/bin/docker run -d --privileged --pid=host -v /:/host IMAGE /container/atomic-run --local-ssh
-bash-4.3# docker info
Containers: 0
Images: 0
Server Version: 1.9.1
Storage Driver: devicemapper
 Pool Name: atomicos-docker--pool
 Pool Blocksize: 524.3 kB
 Base Device Size: 107.4 GB
 Backing Filesystem: xfs
 Data file: 
 Metadata file: 
 Data Space Used: 62.39 MB
 Data Space Total: 2.147 GB
 Data Space Available: 2.085 GB
 Metadata Space Used: 40.96 kB
 Metadata Space Total: 8.389 MB
 Metadata Space Available: 8.348 MB
 Udev Sync Supported: true
 Deferred Removal Enabled: true
 Deferred Deletion Enabled: true
 Deferred Deleted Device Count: 0
 Library Version: 1.02.109 (2015-09-22)
Execution Driver: native-0.2
Logging Driver: journald
Kernel Version: 4.4.6-301.fc23.x86_64
Operating System: Fedora 23 (Twenty Three)
CPUs: 1
Total Memory: 993.1 MiB
Name: fedora.atomic.cloud
ID: Z6GY:ZKDF:QAS5:SBFC:OO2J:75AW:KDJG:GGMN:CR3V:CAUS:SGMI:RJTE
-bash-4.3# 

Run atomic host upgrade in order to upgrade the system.

-bash-4.3# atomic  host upgrade
Updating from: fedora-atomic:fedora-atomic/f23/x86_64/docker-host

Receiving metadata objects: 

 

The next part will show you how to configure the Docker storage backend and install the Cockpit container as a service.

 


One hour hacks: Remote LUKS over SSH

I have a GNU/Linux server which I mount a few LUKS encrypted drives on. I only ever interact with the server over SSH, and I never want to keep the LUKS credentials on the remote server. I don’t have anything especially sensitive on the drives, but I think it’s a good security practice to encrypt it all, if only to add noise into the system and for solidarity with those who harbour much more sensitive data.

This means that every time the server reboots or whenever I want to mount the drives, I have to log in and go through the series of luksOpen and mount commands before I can access the data. This turned out to be a bit laborious, so I wrote a quick script to automate it! I also made sure that it was idempotent.

I decided to share it because I couldn’t find anything similar, and I was annoyed that I had to write this in the first place. Hopefully it saves you some anguish. It also contains a clever little bash hack that I am proud to have in my script.

Here’s the script. You’ll need to fill in the map of mount folder names to drive UUID’s, and you’ll want to set your server hostname and FQDN to match your environment of course. It will prompt you for your root password to mount, and the LUKS password when needed.

Example of mounting:

james@computer:~$ rluks.sh 
Running on: myserver...
[sudo] password for james: 
Mount/Unmount [m/u] ? m
Mounting...
music: mkdir ✓
LUKS Password: 
music: luksOpen ✓
music: mount ✓
files: mkdir ✓
files: luksOpen ✓
files: mount ✓
photos: mkdir ✓
photos: luksOpen ✓
photos: mount ✓
Done!
Connection to server.example.com closed.

Example of unmounting:

james@computer:~$ rluks.sh 
Running on: myserver...
[sudo] password for james: 
Sorry, try again.
[sudo] password for james: 
Mount/Unmount [m/u] ? u
Unmounting...
music: umount ✓
music: luksClose ✓
music: rmdir ✓
files: umount ✓
files: luksClose ✓
files: rmdir ✓
photos: umount ✓
photos: luksClose ✓
photos: rmdir ✓
Done!
Connection to server.example.com closed.
james@computer:~$

It’s worth mentioning that there are many improvements that could be made to this script. If you’ve got patches, send them my way. After all, this is only a: one hour hack.

Happy hacking,

James

PS: One day this sort of thing might be possible in mgmt. Let me know if you want to help work on it!


Tested the LibreOffice software.
Today I tested the LibreOffice suite. The latest version is 5.1.2.2 and it has many features.
I like to use it because I can export my work in Microsoft Office formats as well as OpenDocument.
You can download it from the official LibreOffice page.
This software can be used with many operating systems.
LibreOffice comes with Impress, Calc, Writer, Math and Draw to deal with any type of task. I don't like, for example, the way you rename a macro. You need to go to Tools > Macros > Organize Macros > LibreOffice Basic, press the Organizer button, and finally double-click the name of the macro, rename it, and close. I think this comes from the permissions of the macros.
You can also do whatever you want with macros. For example, here I delete some fields from a document: a cell range from A3 to B10.
<xmp>  Dim myDoc As Object
Dim mySheet As Object
Dim myCell As Object
Dim myRange As Object
Dim myFlags As Long

myDoc = ThisComponent
mySheet = myDoc.Sheets(0) ' Refers to Sheet1 as in 0, 1, 2 etc
myRange = mySheet.getCellRangeByName("A3:B10")

myFlags = com.sun.star.sheet.CellFlags.VALUE + _
com.sun.star.sheet.CellFlags.DATETIME + _
com.sun.star.sheet.CellFlags.STRING + _
com.sun.star.sheet.CellFlags.ANNOTATION + _
com.sun.star.sheet.CellFlags.FORMULA + _
com.sun.star.sheet.CellFlags.HARDATTR + _
com.sun.star.sheet.CellFlags.STYLES + _
com.sun.star.sheet.CellFlags.OBJECTS + _
com.sun.star.sheet.CellFlags.EDITATTR

myRange.clearContents(myFlags)</xmp>
This is a simple type of macro using the LibreOffice Basic language. You can also use Python, BeanShell, or JavaScript.
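As a taste of the Python route, the same idea can be expressed like this. This is only a sketch: the flag values are hard-coded from the com.sun.star.sheet.CellFlags constants so the mask can be shown outside LibreOffice, and clear_range expects a PyUNO sheet object obtained inside a running LibreOffice (e.g. XSCRIPTCONTEXT.getDocument().Sheets[0]).

```python
# CellFlags constants (values from the com.sun.star.sheet.CellFlags IDL;
# inside LibreOffice you would read them from the uno module instead).
VALUE, DATETIME, STRING, ANNOTATION = 1, 2, 4, 8
FORMULA, HARDATTR, STYLES, OBJECTS, EDITATTR = 16, 32, 64, 128, 256

# Same combination the Basic macro builds with "+".
FLAGS = (VALUE | DATETIME | STRING | ANNOTATION | FORMULA
         | HARDATTR | STYLES | OBJECTS | EDITATTR)

def clear_range(sheet, range_name="A3:B10", flags=FLAGS):
    """Clear the same content types as the Basic macro above.
    `sheet` is a PyUNO spreadsheet object from a running LibreOffice."""
    sheet.getCellRangeByName(range_name).clearContents(flags)

print(FLAGS)  # 511
```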
So if you like this software and you want it to survive, you can make a donation of at least 5 USD.

LibreOffice is Free Software and is made available free of charge. Your donation, which is purely optional, supports our worldwide community. If you like the software, please consider a donation.
Introducing the extra wallpapers for Fedora 24

In the Fedora 24 alpha release, you could preview an early version of the default wallpaper for Fedora 24. Each release, the Fedora Design team collaborates with the Fedora community to release a set of 16 additional backgrounds to install and use on Fedora. The Fedora Design team takes submissions from the wider community, then votes on the top 16 to include in the next release.

Voting is now closed on the supplemental wallpapers for Fedora 24. The results are available for all to see on the wallpaper voting app. In the Fedora 24 cycle, the Fedora Design team received 133 valid submissions from many existing and new contributors to supplemental wallpapers.

Take a look at Fedora 24 wallpapers

Out of the 133 submissions, the following 16 wallpapers were chosen for inclusion in Fedora 24:

  • Aurora over Iceland by Helena Bartosova -- CC-BY-SA
  • Sunrise in Florida II by afsilva -- CC-BY-SA
  • Lady Musgrave Blue by Lyle Wang -- CC-BY-SA
  • jellyfish by Allan Lyngby Lassen -- CC-BY-SA
  • old railroad by nask0 -- CC-BY-SA
  • Argentina Glacier by wesleyotugo -- CC0
  • Iceberg in greenland by lhirlimann -- CC-BY-SA
  • Paisaje by diegoestrada -- CC0
  • Tree in Winter by Franz Dietrich -- CC-BY-SA
  • waves by ali4129 -- CC-BY-SA
  • Morning Dew on Leaves by sethtrei -- CC-BY-SA
  • zen by hhlp -- CC-BY-SA
  • Blue Deep by alyaj2a -- Free Art
  • By the lake -- CC0
  • mistogan by espasmo -- CC0
  • Ice Lake by Oscar Osta -- CC-BY-SA

 

Crouton Fedora new version available!

Hey everyone! New version of my Crouton fork for Fedora is available at:

https://github.com/nmilosev/crouton-fedora/

I’ve managed to clean the scripts even further, and now it is using Docker images from Koji to set itself up instead of downloading RPM’s. You save a lot of data and a lot of time. It now can install a base Fedora system in less than a minute. See it in action:

http://webmshare.com/74VrR

Have fun and send patches! :)

Flisol 2016 – Santiago-Cusco-Curuzu Cuatia

This year I got to share FLISoL in Santiago de Chile, a beautiful city full of people with great energy and camaraderie. In parallel I was also able to give a talk for the people of Cusco (Peru) and send a video to the people of Curuzu Cuatia (Argentina).


During the run-up to the event I was able to take part on a few occasions, not at full capacity, but that let me integrate little by little. Then the day arrived, with the nerves and the excitement of being there, starting to welcome people and hoping for full classrooms and people coming to ask about everything!


The event went on with great energy; as always there were moments with more people and moments when the crowd thinned out. Even though it was held in a not-so-central location, the turnout was more than positive, and so was the reunion with friends :)…


And to wrap up, greetings to all of us who took part and to those who came by. Here's to more events and more free software!


 

April 24, 2016

Can we train our way out of security flaws?
I had a discussion with some people I work with, smarter than myself, about training developers. The usual training suggestions came up, but at the end of the day, and this will no doubt enrage some of you, we can't train developers to write secure code.

It's OK, my twitter handle is @joshbressers, go tell me how dumb I am, I can handle it.

So anyhow, training. It's a great idea in theory. It works in many instances, but security isn't one of them. If you look at where training is really successful, it's for things like how to use a new device, or how to work with a bit of software. Those are really single-purpose items; that's the trick. If you have a device that really only does one thing, you can train a person how to use it; it has a finite scope. Writing software has no scope. To quote myself from this discussion:

You have a Turing complete creature, using a Turing complete machine, writing in a Turing complete language, you're going to end up with Turing complete bugs.

The problem with training in this situation is that you can't train for infinite permutations. By its very definition, training can only cover a finite amount of content. Programming by definition requires you to draw on an infinite amount of content. The two are mutually exclusive.

Since you've made it this far, let's come to an understanding. Firstly, training, even in how to write software, is not a waste of time. Even though you can't train someone to write secure software, you can teach them to understand the problem (or a subset of it). The tech industry is notorious for seeing everything as all or none. It's a sliding scale.

So what's the point?

My thoughts on this matter are about how we can think about the challenges in a different way. Sometimes you have to understand the problem and the tools you have in order to find better solutions. We love to worry about how to teach everyone to be more secure, when in reality it's all about many layers with small bits of security in each spot.

I hate car analogies, but this time it sort of makes sense.

We don't proclaim that the way to stop people getting killed in road accidents is to train them to be better drivers. In fact, I've never heard anyone claim this is the solution. We have rules that dictate how the road is to be used (which humans ignore). We have cars with lots of safety features (which humans love to disable). We have humans on the road to ensure the rules are being followed. We have safety built into lots of roads, like guard rails and rumble strips. At the end of the day, even with layers of safety built in, there are accidents, lots of accidents, and almost no calls for more training.

You know what's currently the talk about how to make things safer? Self driving cars. It's ironic that software may be the solution to human safety. The point though is that every system reaches a point where the best you can ever do is marginal improvements. Cars are there, software is there. If we want to see substantial change we need new technology that changes everything.

In the meantime, we can continue to add layers of safety for software, this is where most effort seems to be today. We can leverage our existing knowledge and understanding of problems to work on making things marginally better. Some of this could be training, some of this will be technology. What we really need to do is figure out what's next though.

Just as humans are terrible drivers, we are terrible developers. We won't fix auto safety with training any more than we will fix software security with training. Of course there are basic rules everyone needs to understand, which is why some training is useful. We're not going to see any significant security improvements without some sort of new technology breakthrough. I don't know what that is; nobody does yet. What is self-driving software development going to look like?

Let me know what you think. I'm @joshbressers on Twitter.
Going to Bitcamp 2016

Over the weekend of April 9th – 10th, the Fedora Project Ambassadors of North America attended the Bitcamp 2016 hackathon at the University of Maryland. But what is Bitcamp? The organizers describe it as the following.

Bitcamp is a place for exploration. You will have 36 hours to delve into your curiosities, learn something new, and make something awesome. With world-class mentors and hundreds of fellow campers, you’re in for an amazing time. If you’re ready for an adventure, see you by the fire!

The Fedora Project attended as an event sponsor this year. At the event, we held a table in the hacker arena. The Ambassadors offered mentorship and help to Bitcamp 2016 programmers, gave away some free Fedora swag, and offered an introduction to Linux, open source, and our community. This report recollects some highlights from the event.

Bitcamp 2016: The Fedora Ambassadors of Bitcamp 2016

The Fedora Ambassadors at Bitcamp 2016. Left to right: Chaoyi Zha (cydrobolt), Justin W. Flory (jflory7), Mike DePaulo (mikedep333), Corey Sheldon (linuxmodder)

Getting to Bitcamp 2016

Bitcamp 2016: Chaoyi Zha (cydrobolt) helping hackers with code

Fedora Ambassador Chaoyi Zha (cydrobolt) helps two other students working on their projects.

I left Rochester, New York around 4:00pm after my classes for the day had finished. Bitcamp check-in started at 7:00pm on Friday, April 8th. It was about a six hour drive for me to get there, and I got to Maryland right around 9:30pm.

Once I arrived, walking in was a crazy experience. Tables upon tables of hackers were lined up by the hundreds. Most were already brainstorming. I meandered my way through the crowds to the Fedora table, where Corey Sheldon, Mike DePaulo, and Chaoyi Zha were set up.

Meeting the hackers

Bitcamp 2016: Corey Sheldon (linuxmodder) helps a student install Fedora

Fedora Ambassador Corey Sheldon (linuxmodder) works with a student trying to set up dual-boot on his laptop.

Many students came up to the table before the hackathon officially began. We interacted with several of them and established ourselves as mentors early on. We also had a Fedora Badge that attendees could scan to have it added to their FAS accounts!

Once the event officially began, teams of people began working on their projects. Many people had grand ideas of projects to cram into the one weekend. For a brief time, the Ambassadors had a chance to rest from answering questions and helping people with their own hardware.

The hackers began settling into a groove for the evening.

Spending the night

Bitcamp 2016: Over 1,000 hackers attended at the University of Maryland

Over a thousand hackers were present at Bitcamp 2016.

As the day turned into night, the home stretch of the hackathon was beginning. Those with firm ideas were deeply focused on their projects. Others were taking their plans back to the drawing board to overcome unexpected difficulties. Things began settling down for the night. The same cycle repeated itself for both Friday and Saturday nights.

Around this time, we had waves of interested hackers in Fedora, open source software, and Linux approach the table. This time was great for personalized, one-on-one conversations with visitors. Many excellent connections happened during this time!

Mentoring

Bitcamp 2016: Mike DePaulo (mikedep333) at the Fedora Bitcamp 2016 table

Fedora Ambassador Mike DePaulo (mikedep333) demonstrated his triple-boot MacBook with OS X, Windows, and Fedora at Bitcamp 2016.

During Bitcamp, there were several opportunities and connections made between Fedora Ambassadors and university students.

For most of one night, Corey worked with one student who was aiming to do a full dual-boot installation on his laptop alongside Windows 10 with UEFI. Due to a variety of issues, he had been unable to get Fedora working properly on his system. With Corey's help, he was able to install and use Fedora on his laptop. He was very excited to finally get it working and hopes to use it for development in both classwork and personal projects. He was also a repeat visitor from BrickHack and remembered some of the booth members from the last hackathon.

Chaoyi traveled around the hacker space and worked with students looking for help on web development projects. Chaoyi was able to give advice and help to students working with HTML, JavaScript / NodeJS, and Python. He traveled around the room for most of both nights teaching students, showing them how to work on their projects, and promoting the benefits of doing their work in the open.

Bitcamp 2016: whatcanidoforfedora.org was a popular tool

The whatcanidoforfedora.org site proved a useful tool for students looking to contribute to open source.

Mike also helped several students at Bitcamp, and like at BrickHack, his triple-booted MacBook with OS X, Windows, and Fedora was a popular item. Students with Macs often came and asked him about his setup and how he got it working. Mike was also able to answer questions about developing in Fedora and share his experience with the tools available in Fedora for working on projects, both for work and for fun.

Many students were looking for ways to gain software experience for their future careers. As a student familiar with open source, I enjoyed talking to these students about what a great resource open source was for them. I explained how open source is a great way to get real-world experience without working an "official" job, showed how they could make an impact on the world and start doing things, and explained why we do open source. It was gratifying to see these students get something out of our discussions and build something awesome in the open by the end of the hackathon.

Bitcamp 2016: 3D printed Fedora Badges

We 3-D printed a few Fedora Badges using STL files at another vendor’s table.

Overall, I feel like the Fedora Project’s impact was notable and concentrated at the event. I am extremely thankful and fortunate to have been sponsored to attend Bitcamp as an Ambassador for the Fedora Project.

The post Going to Bitcamp 2016 appeared first on Justin W. Flory's Blog.

Today ...
I wrote about a game on another blog, here.
My concern for today is that blogspot and how to improve it.
Maybe I will need to make some changes, because I have just one subscriber :) .
I will try to make better content for subscribers, or maybe promote it over the internet.
Also, I don't want to delete the blogspot, because it has some great posts.

If you have some ideas to improve that blog, just comment below.
Thank you. Regards.


runc and libcontainer on Fedora 23/24

In this post, I will share my notes on getting runc working and then using libcontainer on Fedora. The first step is to install golang:

$ sudo dnf -y install golang
$ go version
go version go1.6 linux/amd64

We will set GOPATH=~/golang/ and then do the following:

$ mkdir -p ~/golang/src/github.com/opencontainers
$ cd ~/golang/src/github.com/opencontainers
$ git clone https://github.com/opencontainers/runc.git
$ cd runc

$ sudo dnf -y install libseccomp-devel
$ make
$ sudo make install
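
Note that the Go toolchain expects the sources to live under your GOPATH; if the build complains about package paths, it may help to export the variable explicitly. A small sketch, matching the ~/golang layout assumed in this post:

```shell
# Export GOPATH for this shell session; ~/golang is the layout
# assumed in the rest of this post.
export GOPATH="$HOME/golang"
mkdir -p "$GOPATH/src"
echo "$GOPATH"
```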

At this stage, runc should be installed and ready to use:

$ runc --version
runc version 0.0.9
commit: 89ab7f2ccc1e45ddf6485eaa802c35dcf321dfc8
spec: 0.5.0-dev

Now we need a rootfs to use for our container. We will use the "busybox" Docker image: pull it and export a tar archive:

$ sudo dnf -y install docker
$ sudo systemctl start docker
$ sudo docker pull busybox
$ sudo docker export $(sudo docker create busybox) > busybox.tar
$ mkdir ~/rootfs
$ tar -C ~/rootfs -xf busybox.tar

Now that we have a rootfs, we have one final step - generate the spec for our container:

$ runc spec

This will generate a config.json (config) file and then we can start a container using the rootfs above:

$ sudo /usr/local/bin/runc start test
/ # ps
PID   USER     TIME   COMMAND
    1 root       0:00 sh
    8 root       0:00 ps
/ # exit
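
The generated config.json can be edited before starting the container; the command the container runs lives under process.args. As a rough, heavily trimmed sketch of its shape (the real file produced by runc spec has many more fields, such as mounts, capabilities, and namespaces):

```shell
# A heavily trimmed sketch of a runc config.json, written to a separate
# example file so the real config.json is untouched. Field names follow
# the OCI runtime spec; this is an illustration, not the full file.
cat > config.json.example <<'EOF'
{
    "process": {
        "args": ["sh"]
    },
    "root": {
        "path": "rootfs"
    }
}
EOF
# Changing "args" changes the command the container starts with:
grep '"args"' config.json.example
```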

Getting started with libcontainer

runc is built upon libcontainer. This means that we can write our own Golang programs which start a container and do stuff in it. An example program is available here (thanks to the fine folks in #opencontainers on Freenode for helpful pointers). It starts a container using the above rootfs, runs ps in it, and exits.

Once you have saved it somewhere on your GOPATH (or go get github.com/amitsaha/libcontainer_examples), we will first need to fetch all the dependent packages:

$ cd ~/golang/src/github.com/amitsaha/libcontainer_examples
$ go get
$ sudo GOPATH=/home/asaha/golang go run example1.go /home/asaha/rootfs/
[sudo] password for asaha:
PID   USER     TIME   COMMAND
    1 root       0:00 ps

LinuxWochen, MiniDebConf Vienna and Linux Presentation Day

Over the coming week, there are a vast number of free software events taking place around the world.

I'll be at the LinuxWochen Vienna and MiniDebConf Vienna, the events run over four days from Thursday, 28 April to Sunday, 1 May.

At MiniDebConf Vienna, I'll be giving a talk on Saturday (schedule not finalized yet) about our progress with free Real-Time Communications (RTC) and welcoming 13 new GSoC students (and their mentors) working on this topic under the Debian umbrella.

On Sunday, Iain Learmonth and I will be collaborating on a workshop/demonstration on Software Defined Radio from the perspective of ham radio and the Debian Ham Radio Pure Blend. If you want to be an active participant, an easy way to get involved is to bring an RTL-SDR dongle. It is highly recommended that instead of buying any cheap generic dongle, you buy one with a high quality temperature compensated crystal oscillator (TXCO), such as those promoted by RTL-SDR.com.

Saturday, 30 April is also Linux Presentation Day in many places. There is an event in Switzerland organized by the local FSFE group in Basel.

DebConf16 is only a couple of months away now. Registration is still open, and the team is keenly looking for additional sponsors. Sponsors are a vital part of such a large event; if your employer or any other organization you know benefits from Debian, please encourage them to contribute.

Fedora @ GNOME.Asia

I attended GNOME.Asia 2016 last week, held in New Delhi from 22 April to 23 April. It was held at a university, MRIU, in Faridabad, and we thought it would be a good opportunity to spread awareness about Fedora among students and faculty, so we organized a Fedora booth there. It was my first time hosting a booth, and it was more fun than I had thought it would be. There was a rush of students curious about Fedora, and we happily spread awareness about Linux and Fedora.

Many students asked about ISO images, and we gave them away. Some students had trouble or doubts installing Fedora, or Linux in general, and we helped them kickstart their Fedora sessions.

There were also interesting discussions with faculty, who welcomed us and wanted us to hold introductory sessions on Fedora starting from the next academic session. They were also interested in installing Fedora on all the systems in their laboratories. We were pleased to get such a positive response from everyone at the university and appreciate their curiosity to learn more and more about Fedora.

Besides university students and faculty, we also interacted with people from PyDelhi, a local Python group, and discussed the possibility of organizing something related to Fedora in collaboration with the PyDelhi group in the Delhi region.

We have taken note of the challenges that Fedora faces, for example not being as well known among students and faculty as other distributions like Ubuntu. I hope that with the sessions planned, both at the university and outside in Delhi, and thanks to the various people we met at the booth, we will be able to make progress in the right direction in this region. So, overall, I think the booth was a huge success, accomplishing everything we had in mind.

I would like to thank all the people involved, pjp, pravins, and prth, who helped attend to visitors at the booth and made it a success in the end. It would not have been possible without their help in making sure that we attended to everyone and their queries.

Lastly, thanks to Fedora for sponsoring me to get to GNOME.Asia, and to my employer, Collabora, for allowing me to attend the event. Thanks also to the university and the GNOME organizing committee for allowing a Fedora booth there.

April 23, 2016

Simple animated clock with SVG file.
Today I made a simple clock using an animated SVG file. I also made a tutorial to show how it works; just take a look at Make clock with animated SVG file.

<html><head>
<script>
var minutehand = document.getElementById("minutes");
var hourhand = document.getElementById("hours");
function updateTime() {
    var date = new Date();
    // 6 degrees per minute, plus a little for the elapsed seconds
    var m = date.getMinutes() * 6 + date.getSeconds() / 10;
    // 30 degrees per hour, plus a little for the elapsed minutes
    var h = date.getHours() * 30 + m / 12;
    minutehand.setAttribute("transform", "rotate(" + m + ")");
    hourhand.setAttribute("transform", "rotate(" + h + ")");
    window.setTimeout("updateTime()", 1000);
}
updateTime();
</script>
</head><body>
<embed height="150" src="clock.svg" width="150">
</body></html>
Linux Fest North West Day 0
We had about 250 people at Fedora Game Night, gave away shirts to table winners, and ran out of early sign-in badges.
Fedora BTRFS+Snapper – The Fedora 24 Edition

History

In the past I have configured my personal computers to be able to snapshot and rollback the entire system. To do this I am leveraging the BTRFS filesystem, a tool called snapper, and a patched version of Fedora's grub2 package. The patches needed from grub2 come from the SUSE guys and are documented well in this git repo.

This setup is not new. I have fully documented the steps I took in the past for my Fedora 22 systems in two blog posts: part1 and part2. This is a condensed continuation of those posts for Fedora 24.

NOTE: I'm using Fedora 24 alpha, but everything should be the same for the released version of Fedora 24.

Setting up System with LUKS + LVM + BTRFS

The manual steps for setting up the system are detailed in the part1 blog post from Fedora 22. This time around I have created a script that will quickly configure the system with LUKS + LVM + BTRFS. The script will need to be run in an Anaconda environment just like the manual steps were done in part1 last time.

You can easily enable ssh access to your Anaconda booted machine by adding inst.sshd to the kernel command line arguments. After booting up you can scp the script over and then execute it to build the system. Please read over the script and modify it to your liking.
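
The workflow might look like the following sketch; the VM address and script name here are assumptions for illustration, not values from the post:

```shell
# Hypothetical values: adjust the VM address and script name to your setup.
ANACONDA_HOST=192.168.122.50
SETUP_SCRIPT=setup-luks-lvm-btrfs.sh

echo "1. boot the installer with inst.sshd appended to the kernel command line"
echo "2. scp ${SETUP_SCRIPT} root@${ANACONDA_HOST}:/tmp/"
echo "3. ssh root@${ANACONDA_HOST} bash /tmp/${SETUP_SCRIPT}"
```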

Alternatively, for an automated install I have embedded that same script into a kickstart file that you can use. The kickstart file doesn't really leverage Anaconda at all because it simply runs a %pre script and then reboots the box. It's basically like just telling Anaconda to run a bash script, but allows you to do it in an automated way. None of the kickstart directives at the top of the kickstart file actually get used.

Installing and Configuring Snapper

After the system has booted for the first time, let's configure it for doing snapshots. I still want to be able to track how much space each snapshot takes, so I'll go ahead and enable quota support on BTRFS. I covered how to do this in a previous post:

[root@localhost ~]# btrfs quota enable /
[root@localhost ~]# btrfs qgroup show /
qgroupid         rfer         excl 
--------         ----         ---- 
0/5           1.08GiB      1.08GiB

Next up is installing/configuring snapper. I am also going to install the dnf plugin for snapper so that rpm transactions will automatically get snapshotted:

[root@localhost ~]# dnf install -y snapper python3-dnf-plugins-extras-snapper
...
Complete!
[root@localhost ~]# snapper --config=root create-config /
[root@localhost ~]# snapper ls
Type   | # | Pre # | Date | User | Cleanup | Description | Userdata
-------+---+-------+------+------+---------+-------------+---------
single | 0 |       |      | root |         | current     |         
[root@localhost ~]# snapper list-configs
Config | Subvolume
-------+----------
root   | /        
[root@localhost ~]# btrfs subvolume list /
ID 259 gen 57 top level 5 path .snapshots

So we used the snapper command to create a configuration for the BTRFS filesystem mounted at /. As part of this process, we can see from the btrfs subvolume list / command that snapper also created a .snapshots subvolume. This subvolume will house the COW snapshots taken of the system.

Next, we'll workaround a bug that is causing snapper to have the wrong SELinux context on the .snapshots directory:

[root@localhost ~]# restorecon -v /.snapshots/
restorecon reset /.snapshots context system_u:object_r:unlabeled_t:s0->system_u:object_r:snapperd_data_t:s0

Finally, we'll add an entry to fstab so that regardless of what subvolume we are actually booted in we will always be able to view the .snapshots subvolume and all nested subvolumes (snapshots):

[root@localhost ~]# echo '/dev/vgroot/lvroot /.snapshots btrfs subvol=.snapshots 0 0' >> /etc/fstab
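
As a small sanity check, the same line can be exercised against a temporary file before touching the real /etc/fstab (a sketch; the device path /dev/vgroot/lvroot is the one used in this post):

```shell
# Demonstrate the fstab entry against a temporary file; on the real system
# you would append it to /etc/fstab and then run `mount /.snapshots`.
fstab=$(mktemp)
echo '/dev/vgroot/lvroot /.snapshots btrfs subvol=.snapshots 0 0' >> "$fstab"
grep 'subvol=.snapshots' "$fstab"
```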

Taking Snapshots

OK, now that we have snapper installed and the .snapshots subvolume in /etc/fstab we can start creating snapshots:

[root@localhost ~]# btrfs subvolume get-default /
ID 5 (FS_TREE)
[root@localhost ~]# snapper create --description "BigBang"
[root@localhost ~]# snapper ls
Type   | # | Pre # | Date                            | User | Cleanup | Description | Userdata
-------+---+-------+---------------------------------+------+---------+-------------+---------
single | 0 |       |                                 | root |         | current     |         
single | 1 |       | Sat 23 Apr 2016 01:04:51 PM UTC | root |         | BigBang     |         
[root@localhost ~]# btrfs subvolume list /
ID 259 gen 64 top level 5 path .snapshots
ID 260 gen 64 top level 259 path .snapshots/1/snapshot
[root@localhost ~]# ls /.snapshots/1/snapshot/
bin  boot  dev  etc  home  lib  lib64  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var

We made our first snapshot, called BigBang, and then ran btrfs subvolume list / to verify that a new snapshot was actually created. Notice at the top of the output that we ran btrfs subvolume get-default /. This outputs the currently set default subvolume for the BTRFS filesystem. Right now we are booted into the root subvolume, but that will change as soon as we decide we want to use one of the snapshots for rollback.

Since we took a snapshot let's go ahead and make some changes to the system by updating the kernel:

[root@localhost ~]# dnf update -y kernel
...
Complete!
[root@localhost ~]# rpm -q kernel
kernel-4.5.0-0.rc7.git0.2.fc24.x86_64
kernel-4.5.2-300.fc24.x86_64
[root@localhost ~]# snapper ls
Type   | # | Pre # | Date                            | User | Cleanup | Description                   | Userdata
-------+---+-------+---------------------------------+------+---------+-------------------------------+---------
single | 0 |       |                                 | root |         | current                       |         
single | 1 |       | Sat 23 Apr 2016 01:04:51 PM UTC | root |         | BigBang                       |         
single | 2 |       | Sat 23 Apr 2016 01:08:18 PM UTC | root | number  | /usr/bin/dnf update -y kernel |

So we updated the kernel and the snapper dnf plugin automatically created a snapshot for us. Let's reboot the system and see if the new kernel boots properly:

[root@localhost ~]# reboot 
...
[dustymabe@media ~]$ ssh root@192.168.122.188 
Warning: Permanently added '192.168.122.188' (ECDSA) to the list of known hosts.
root@192.168.122.188's password: 
Last login: Sat Apr 23 12:18:55 2016 from 192.168.122.1
[root@localhost ~]# 
[root@localhost ~]# uname -r
4.5.2-300.fc24.x86_64

Rolling Back

Say we don't like that new kernel. Let's go back to the earlier snapshot we made:

[root@localhost ~]# snapper rollback 1
Creating read-only snapshot of current system. (Snapshot 3.)
Creating read-write snapshot of snapshot 1. (Snapshot 4.)
Setting default subvolume to snapshot 4.
[root@localhost ~]# reboot

snapper created a read-only snapshot of the current system and then a new read-write subvolume based on the snapshot we wanted to go back to. It then set the default subvolume to the newly created read-write subvolume. After reboot, you'll be in the newly created read-write subvolume, exactly back in the state your system was in at the time the snapshot was created.

In our case, after reboot we should now be booted into snapshot 4 as indicated by the output of the snapper rollback command above and we should be able to inspect information about all of the snapshots on the system:

[root@localhost ~]# btrfs subvolume get-default /
ID 263 gen 87 top level 259 path .snapshots/4/snapshot
[root@localhost ~]# snapper ls
Type   | # | Pre # | Date                            | User | Cleanup | Description                   | Userdata
-------+---+-------+---------------------------------+------+---------+-------------------------------+---------
single | 0 |       |                                 | root |         | current                       |         
single | 1 |       | Sat 23 Apr 2016 01:04:51 PM UTC | root |         | BigBang                       |         
single | 2 |       | Sat 23 Apr 2016 01:08:18 PM UTC | root | number  | /usr/bin/dnf update -y kernel |         
single | 3 |       | Sat 23 Apr 2016 01:17:43 PM UTC | root |         |                               |         
single | 4 |       | Sat 23 Apr 2016 01:17:43 PM UTC | root |         |                               |         
[root@localhost ~]# ls /.snapshots/
1  2  3  4
[root@localhost ~]# btrfs subvolume list /
ID 259 gen 88 top level 5 path .snapshots
ID 260 gen 81 top level 259 path .snapshots/1/snapshot
ID 261 gen 70 top level 259 path .snapshots/2/snapshot
ID 262 gen 80 top level 259 path .snapshots/3/snapshot
ID 263 gen 88 top level 259 path .snapshots/4/snapshot

And the big test is to see if the change we made to the system was actually reverted:

[root@localhost ~]# uname -r
4.5.0-0.rc7.git0.2.fc24.x86_64
[root@localhost ~]# rpm -q kernel
kernel-4.5.0-0.rc7.git0.2.fc24.x86_64

Enjoy!

Dusty

KeepassX 2

What happened with keepassx in Fedora is a bit confusing. First, an update to version 2.0 was released for Fedora 23, which was then withdrawn again after some protest.

For reasons I cannot comprehend, the package's Epoch was bumped to 1 in the process, which means that, at the latest when upgrading to a newer Fedora version, keepassx gets a forced downgrade to version 0.44. Since the databases of keepassx 0.44 and 2.0 are unfortunately not compatible, anyone who has already upgraded to version 2 is left out in the cold, unable to access their stored passwords, unless they still have the old database in the 0.44 format.

Since I was one of those people left out in the cold after upgrading to Fedora 24, I grabbed the keepassx package in version 2.0, renamed it keepassx2, and configured it so that it replaces Fedora's keepassx package. This also ensures that Fedora will not force another downgrade to 0.4x on me in the future.

For all fellow sufferers, and for anyone who would like to upgrade to version 2.x, I have made my keepassx package available in a COPR.

PS: For this action, which can hardly be surpassed in foolishness, the keepassx maintainer should be made to write "I must not force a downgrade when the format of the program's database has changed" on a blackboard 500 times! At least! 👿

Visiting Spain and first travelling to Europe

My recent trip to Spain was one of my greatest experiences and a most exciting time of travel.


This was my very first trip to Spain, and to Europe, for a conference that brings people from all over the world together to talk about freedom of expression and technology. It was a working conference, combining hands-on training, policy discussions, and impromptu hacking on established tools as well as prototypes. It was a great learning experience, with tons of knowledge that I could bring back to my country, Cambodia. I am still keen to continue working on and researching these matters and applying them where they can make a great impact on the community.


Apart from the conference, I also enjoyed:

Visiting Valencia: It is an old and great city. It is not a big city, but it is full of friendly people and great old architecture in the old town.


Visiting Madrid

I was lucky enough that one of my old colleagues, who had worked with me in Cambodia, hosted me warmly at his house. I felt like I was on a home stay, enjoying many great Spanish foods and a city tour.

[Photo: Oldest city gate]

Foods

Of course, many people who travel to Spain love the local foods and never forget them.

My Spanish friend taught me how to make this:

[Photo: Fresh strawberries with sugar]


Yeah, a real football match. I made it!


[Photo: The Royal Palace]

Location: Spain (Madrid and Valencia)

Date: March 2016


Fedora QA goings-on: Test Days, Fedora 24 Beta testing, LinuxFest NorthWest and more!

I’m on a train and I haven’t been blogging enough lately, so I figured I’d write something up!

We’ve had a couple of great Test Days in the last couple of weeks: i18n Test Day and Live Media Writer Test Day. Thanks a lot to everyone who came out and tested – the attendance was awesome and we got a huge amount of valuable feedback from both events.

This train I’m on is heading to beautiful Bellingham, WA, where I’ll be attending the awesome LinuxFest NorthWest, along with several other Fedora and RH folks. There’ll be a Fedora booth as always, where I’ll be hanging out some of the time, and I’m also giving a joint openQA presentation with openSUSE’s Richard Brown, which should be really awesome. That’s on Sunday afternoon at 3pm in CC-208, please do come along if you can!

Speaking of openQA, we got a big shiny new box to use for hosting openQA workers, so the production openQA instance now has 18 workers. Which means tests run much faster. Mmmmm, fast tests. We’ll be doing interesting stuff with this extra capacity soon, I hope! Both production and staging are also now running a recent git snapshot of openQA, which tweaks a few things here and there and saves me maintaining >20 backported patches.

Fedora 24 has been moving along pretty well recently; we got a big stable push with a bunch of important fixes in it done today, so I’m hoping that in tomorrow’s compose, 32-bit images will be working again and so will the Atomic installer image. There are also some useful anaconda fixes, so we should be able to get down to completing the Beta validation tests for tomorrow’s nightly compose and finding any remaining lurking blockers.

I’ve also been keeping an eye on Rawhide and trying to get major bugs fixed lately; there’ve been some interesting ones like these, but I’m hoping we’ll hit a clear patch soon…

We have several interesting automation projects ATM. On the openQA side, I’m working on initial desktop testing, while jsedlak is working on ARM testing and adding KDE and Server upgrade tests. On the taskotron side, there’s some interesting work going on to add package ABI diffing using libabigail, where the Fedora QA team is working together with Sinny Kumari and Dodji Seketeli – some awesome collaboration going on there!

Fedora 24 Beta is coming up fast: the Go/No-Go meeting is next Thursday, so we’ll be working hard this coming week to try and complete the Beta validation tests and shepherd fixes for the known and surely-yet-to-come blocker bugs. Interested in helping out with this or any of the other fun QA stuff going on? Come help us!

GSoC-2016

“Woaaahhh! It is accepted!” That was me when I saw my proposal got accepted. I had goosebumps; it is a very big achievement for me to get through GSoC.

This started way back when I went to last year's BangPypers meetup. I met Sayan there, wearing that Dgplug T-shirt, and he gave me all the links he could for getting started with open source.

I enrolled myself in the Dgplug Summer Training, and that was it: I learnt so many things and got amazing, supportive mentors like Sayan, Kushal, and Pierre (pingou).

My connection with Fedora projects started then and there, and cutting to now, I am in GSoC because of Pagure. This makes me feel so humble and excited; you just need to keep moving and working hard, and things will fall into place.

FSMK has been a great support for me, introducing me to FOSS and encouraging us to spread the movement. Vignesh has been a constant support and motivator; he is someone who actually pushed me beyond my limits to make things work.

My mentors at Jnaapti, Gautham Pai Sir and Shreelakshmi Ma’am, have been a constant source of inspiration; they saw the urge to learn in me and have been working with me to make me better.

I don’t know what to say, I am just overwhelmed with the result.

I will be spending my summer working on Pagure, making it the best code-reviewing system out there.

I will put the proposal link as soon as it is uploaded to fedora-wiki.

EDIT: The link to the fedora-wiki proposal.


April 22, 2016

Data science and Fedora
Fedora Project

I’ve decided to use Fedora as my default GNU/Linux operating system for developing and testing data science stuff. Fedora is pretty nice because it has regular releases and includes the most up-to-date mainstream packages.

Data science projects change really fast and require the most up-to-date packages, and I think Fedora handles this very well. I have also applied to become a Fedora contributor, and I would like to thank the whole Fedora Community for the warm welcome! I hope to make relevant contributions to the project. By the way, this is my first post on the Fedora Planet! 🙂

 

The post Data science and Fedora appeared first on Christiano Anderson.

Quick kernel hacking with QEMU + buildroot

For much of my development work, I typically use QEMU. QEMU is often used in conjunction with KVM for virtualization of complete images and hardware, but a full image is overkill for what I want: 99% of the time, I want to boot a kernel I just built and get to a shell so I can run some commands. buildroot is a project primarily designed to create embedded Linux distributions. It's also useful for creating a quick, stripped-down system.

Getting this set up is fairly fast. Grab a kernel from kernel.org:

$ git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
$ cd linux

The kernel comes with a set of config files in tree:

$ ls arch/x86/configs/
i386_defconfig  kvm_guest.config  tiny.config  x86_64_defconfig  xen.config

These contain a minimal set of config options to boot. I typically just start with the x86_64_defconfig

$ make x86_64_defconfig

Then build

$ make

You can add -j(number of CPUs - 1) to speed up the build. Compared to building a Fedora kernel, this will finish quickly. This gives you a kernel ready to boot, but you still need a root file system.
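
One way to compute that job count automatically, as a small sketch using nproc:

```shell
# Use one less than the CPU count for parallel make jobs, with a floor of 1.
cpus=$(nproc)
jobs=$(( cpus > 1 ? cpus - 1 : 1 ))
echo "make -j${jobs}"
```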

Start by cloning buildroot:

$ git clone git://git.buildroot.net/buildroot
$ cd buildroot

Buildroot uses the same configuration interface as the kernel (ncurses based; make sure you have ncurses-devel installed):

$ make menuconfig

Start out by setting the appropriate architecture (assuming x86_64)

Target options -> Target Architecture -> x86_64

And then set the file system

Filesystem images -> cpio the root filesystem

Install some dependencies:

$ sudo dnf install perl-Thread-Queue

and build

$ make

This will take some time, mostly because buildroot has to build a lot of things (including gcc). Once that's done, you should have the components necessary to boot in QEMU. The script I use for booting is based on one that 0-day testing spat out to me when I submitted a bad patch:

#!/bin/bash

kernel=$1
initrd=$2

if [ -z "$kernel" ]; then
    echo "pass the kernel argument"
    exit 1
fi

if [ -z "$initrd" ]; then
    echo "pass the initrd argument"
    exit 1
fi

kvm=(
    qemu-system-x86_64
    -enable-kvm
    -cpu kvm64,+rdtscp
    -kernel "$kernel"
    -m 300
    -device e1000,netdev=net0
    -netdev user,id=net0
    -boot order=nc
    -no-reboot
    -watchdog i6300esb
    -rtc base=localtime
    -serial stdio
    -vga qxl
    -initrd "$initrd"
    -spice port=5930,disable-ticketing
    -s
)

append=(
    hung_task_panic=1
    earlyprintk=ttyS0,115200
    systemd.log_level=err
    debug
    apic=debug
    sysrq_always_enabled
    rcupdate.rcu_cpu_stall_timeout=100
    panic=-1
    softlockup_panic=1
    nmi_watchdog=panic
    oops=panic
    load_ramdisk=2
    prompt_ramdisk=0
    console=tty0
    console=ttyS0,115200
    vga=normal
    root=/dev/ram0
    rw
    drbd.minor_count=8
)

"${kvm[@]}" --append "${append[*]}"

And invoke it with ./qemu-cmd.sh ~/linux/arch/x86/boot/bzImage ~/buildroot/output/images/rootfs.cpio. You may need to poke options in your BIOS to make KVM work (I had to on one laptop). If all goes well, you should end up with a login prompt. Enter username root to log in (this is the default for buildroot).
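A small variation I find convenient is to give the script default paths so it can be invoked with no arguments. Both defaults below are illustrative; substitute your own trees:

```shell
# Fall back to common build locations when no arguments are given
# (both paths are illustrative defaults, not fixed conventions).
kernel=${1:-$HOME/linux/arch/x86/boot/bzImage}
initrd=${2:-$HOME/buildroot/output/images/rootfs.cpio}
echo "kernel=$kernel initrd=$initrd"
```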

If all does not go well, you can use gdb to help you along. Enable CONFIG_DEBUG_INFO and CONFIG_GDB_SCRIPTS in the kernel config. The -s flag already in the QEMU command line makes QEMU listen for a gdb connection on TCP port 1234. In another terminal:

$ gdb path/to/vmlinux
(gdb) target remote localhost:1234
(gdb)

You are now attached. I haven't used this much for actual runtime debugging; I mostly use it for grabbing dmesg output when I crash the kernel before the console gets initialized:

(gdb) lx-dmesg

Note: you may need to add add-auto-load-safe-path path/to/kernel/scripts/gdb/vmlinux-gdb.py to your ~/.gdbinit.
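The attached session can also drive ordinary breakpoint debugging. A minimal sketch, with an illustrative path and an arbitrary breakpoint choice:

```
# In ~/.gdbinit, allow the kernel's gdb helpers to load:
add-auto-load-safe-path ~/linux/scripts/gdb/vmlinux-gdb.py

# Then, in the attached session:
#   (gdb) break start_kernel    # stop early in boot
#   (gdb) continue              # QEMU runs until the breakpoint hits
```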

This setup is easily expandable for testing other architectures. I often test arm and arm64. My arm boot tricks are really hacky but arm64 is relatively standard. Fedora provides cross toolchains

$ dnf install gcc-aarch64-linux-gnu.x86_64

and building a cross compiled kernel is not too difficult.

$ make ARCH=arm64 defconfig
$ make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- -j8

For buildroot, you change the target architecture to 'AArch64 (little endian)' and rebuild. I use a much simpler command for booting arm64:

qemu-system-aarch64 \
    -s \
    -machine virt \
    -cpu cortex-a57 \
    -smp 4  \
    -nographic \
    -m 2048 \
    -kernel ~/arm64_kernel/arch/arm64/boot/Image \
    --append "console=ttyAMA0" \
    -initrd ~/buildroot/output/images/rootfs.cpio

To avoid needing to rebuild buildroot each time I change architectures, I typically save the rootfs in a folder somewhere.
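Something along these lines works for stashing the images; the paths and naming scheme are just what I'd pick, not a convention:

```shell
# Stash the current rootfs under an arch-specific name so switching
# architectures later doesn't force a buildroot rebuild (paths illustrative).
arch=x86_64
src=$HOME/buildroot/output/images/rootfs.cpio
dst=$HOME/rootfs/rootfs-$arch.cpio
mkdir -p "$(dirname "$dst")"
cp "$src" "$dst"
```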

This setup isn't perfect. Getting extra files (scripts, modules) into the rootfs is kind of a pain. I also never touch networking so I have no idea if that actually works. It works well enough for me and might be useful to others (I make no guarantees about it working).
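One escape hatch for the extra-files problem, if you don't mind a rebuild, is buildroot's rootfs overlay: point BR2_ROOTFS_OVERLAY at a directory and its contents get copied over the root filesystem at build time. A sketch (the overlay path is whatever you choose):

```
BR2_ROOTFS_OVERLAY="/home/me/rootfs-overlay"
```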