Fedora People

Shift on Stack: api_port failure

Posted by Adam Young on January 19, 2020 12:55 AM

I finally got a right-sized flavor for an OpenShift deployment: 25 GB Disk, 4 VCPU, 16 GB Ram. With that, I tore down the old cluster and tried to redeploy. Right now, the deploy is failing at the stage of the controller nodes querying the API port. What is going on?

Here is the reported error on the console:

[console screenshot not preserved]

The IP address is attached to the following port:

$ openstack port list | grep "0.0.5"
| da4e74b5-7ab0-4961-a09f-8d3492c441d4 | demo-2tlt4-api-port       | fa:16:3e:b6:ed:f8 | ip_address='', subnet_id='50a5dc8e-bc79-421b-aa53-31ddcb5cf694'      | DOWN   |

That final “DOWN” is the port state. It is also showing as detached. It is on the internal network:

[screenshot of the port's network details not preserved]

Looking at the installer code, the one place I can find a reference to the api_port is in the template data/data/openstack/topology/private-network.tf, used to build the value openstack_networking_port_v2. This value is used quite heavily in the rest of the installer's Go code.

Looking in the terraform data built by the installer, I can find references to both the api_port and openstack_networking_port_v2. Specifically, there are several objects of type openstack_networking_port_v2 with the names:

$ cat moc/terraform.tfstate  | jq -jr '.resources[] | select( .type == "openstack_networking_port_v2") | .name, ", ", .module, "\n" '
api_port, module.topology
bootstrap_port, module.bootstrap
ingress_port, module.topology
masters, module.topology
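The same state file can also show whether each of those ports is attached to anything, since the openstack provider records a device_id attribute per instance. Here is a sketch of that check run against a fabricated miniature tfstate (the real file is far larger; verify the attribute name against your own state):

```shell
# Fabricated miniature tfstate for illustration; the real file is far larger.
cat > /tmp/sample.tfstate <<'EOF'
{
  "resources": [
    {
      "type": "openstack_networking_port_v2",
      "name": "api_port",
      "module": "module.topology",
      "instances": [
        {"attributes": {"device_id": "", "admin_state_up": true}}
      ]
    }
  ]
}
EOF

# Print each port resource with its device_id; an empty device_id means
# the port is not attached to any device.
jq -r '.resources[]
       | select(.type == "openstack_networking_port_v2")
       | "\(.name) device_id=\(.instances[0].attributes.device_id)"' \
   /tmp/sample.tfstate
```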

On a baremetal install, we need an explicit A record for api-int.<cluster_name>.<base_domain>. That requirement does not exist for OpenStack, however, and I did not have one the last time I installed.

api-int is the internal access to the API server. Since the controllers are hanging trying to talk to it, I assume that we are still at the stage where we are building the control plane, and that it should be pointing at the bootstrap server. However, since the port above is detached, traffic cannot get there. There are a few hypotheses in my head right now:

  1. The port should be attached to the bootstrap device
  2. The port should be attached to a load balancer
  3. The port should be attached to something that is acting like a load balancer.

I’m leaning toward 3 right now.

The install-config.yaml has the line:

octaviaSupport: "1"

But I don’t think any Octavia resources are being used.

$ openstack loadbalancer pool list

$ openstack loadbalancer list

$ openstack loadbalancer flavor list
Not Found (HTTP 404) (Request-ID: req-fcf2709a-c792-42f7-b711-826e8bfa1b11)

No censorship! Please!

Posted by Luigi Votta on January 18, 2020 10:14 PM
If I use an IT tool such as a blog, syndicated on a Planet, I expect it to appear on the Planet immediately!!! My posts have no need of censorship!

Ubuntu as Fedora!

Posted by Luigi Votta on January 18, 2020 09:35 PM
At least if you are in an Ubuntu environment and would like to build the docs, as I was for Samba:
Go into the docs directory (mine was docs-xml) and then:

sudo apt install build-essential
sudo apt install xsltproc
sudo apt install fop
sudo apt install docbook 

sudo make all

DOCS are very very important to READ!

Fedora program update: 2020-03

Posted by Fedora Community Blog on January 17, 2020 07:50 PM
Fedora Program Manager weekly report on Fedora Project development and progress

Here’s your report of what has happened in Fedora this week.

I will not hold office hours next week due to travel, but if you’ll be at DevConf.CZ, you can catch me in person.


  • Orphaned packages seeking maintainers
  • Long term FTBFS packages will be retired in February.
  • Spins and labs must send a keepalive by 22 January.
  • The start of the Fedora 32 mass rebuild is delayed until 27 January. The mass rebuild window will still end on 11 February, so no subsequent schedule dates are changed.


| Event | Location | Dates | CfP closes |
|---|---|---|---|
| Open Source Summit NA | Austin, TX, US | 22–24 Jun 2020 | 16 Feb |

Help wanted


Fedora 32


  • 2020-01-21 Submission deadline for Self-Contained Changes
  • 2020-01-22 Spins keepalive deadline
  • 2020-01-27 Mass rebuild begins


| Change | Type | Status |
|---|---|---|
| Enable fstrim.timer by default | System-Wide | Accepted |
| Use update-alternatives for /usr/bin/cc and /usr/bin/c++ | System-Wide | Accepted |
| LTO by default for package builds | System-Wide | Accepted |
| Adopting sysusers.d format | System-Wide | Accepted |
| Move fonts language Provides to Langpacks | System-Wide | Accepted |
| Golang 1.14 | System-Wide | Accepted |
| Restart services at end of rpm transaction | System-Wide | Accepted |
| Systemd presets for user units | System-Wide | Accepted |
| Enable EarlyOOM | System-Wide | Ready for FESCo |
| Mono 6.6 | Self-Contained | Accepted |
| Provide OpenType Bitmap Fonts | Self-Contained | Ready for FESCo |
| Additional buildroot to test x86-64 micro-architecture update | Self-Contained | Ready for FESCo |
| Reduce installation media size by improving the compression ratio of SquashFS filesystem | System-Wide | Ready for FESCo |
| Deprecate python-nose | Self-Contained | Announced |

Changes approved, rejected, or withdrawn will be removed from this table the next week. See the ChangeSet page for a full list of approved changes.

Fedora 33


| Change | Type | Status |
|---|---|---|
| Binutils 2.34 | System-Wide | Accepted |
| Retire python26 | Self-Contained | Ready for FESCo |
| Retire python34 | Self-Contained | Ready for FESCo |

Changes approved, rejected, or withdrawn will be removed from this table the next week. See the ChangeSet page for a full list of approved changes.

CPE update

  • Anitya is being prepared for a new release (kanban board, GitHub milestone)
  • RabbitMQ support for jms-messaging-plugin is merged
  • UI around the Bugzilla override support in src.fedoraproject.org was reworked
  • Erlang stack in Fedora was rebuilt so it would work on the s390x architecture
  • OSBS aarch64 is in staging
    • Deploy OpenShift cluster on aarch64 boxes
    • Configured OSBS on the x86_64 and aarch64 OpenShift cluster in staging
    • Test build is failing

The post Fedora program update: 2020-03 appeared first on Fedora Community Blog.

Project roadmap 2020

Posted by Kiwi TCMS on January 17, 2020 05:40 PM

Hello testers, the Kiwi TCMS team sat down together last week and talked about what we feel is important for us during the upcoming year. This blog post outlines our roadmap for 2020!

roadmap image 2020

Project sustainability

The big goal towards which we are striving is to turn Kiwi TCMS into a sustainable open source project. For now this means several key areas:

1) Team
2) Technical
3) Community


Right now we have a core team with 6 newcomers on-boarding. Engineering performance is all over the place, with some people contributing too much while others contribute too little. More importantly, there is no consistent pace of contributions, which makes planning the timely completion of technical tasks impossible.

At the moment we do operate as a bunch of disconnected people who happen to talk to each other from time to time.

We are going to adjust our internal processes and how we on-board new members. In fact we did our first "scrum-like" meeting this week and agreed to change our existing practice and strive to become better as a team!

Goal: to have a cohesive team at the end of the year which operates with a predictable capacity.

Goal: 1 PR/week/person as a broad measure of individual performance.


The areas shown on the picture above will receive more priority.

Goal: complete remaining Telemetry features.

Goal: complete bug-tracker integration milestone.

Goal: all pylint issues resolved.

Goal: migrate all remaining legacy templates to Patternfly UI. See patternfly-migration milestone.

Goal: where the front-end sends AJAX requests to back-end views, replace them with JSON RPC API calls instead.

Extra: start tackling the JavaScript mess that we have. This depends on, and is related to, the Patternfly migration and the overall refactoring.

Extra: make it easier for downstream installations to extend and override parts of Kiwi TCMS in order for users to adjust the system to their own needs. The system is pretty flexible as-is but there have been requests, both online and offline, to provide some extra features! We'll start looking into them, likely making partial progress in the next 12 months.


Last year Kiwi TCMS had massive success at every single conference that we've been to. Both the project and the team have been well received. While we are going to continue being part of various communities around the world, we are trying to limit extensive travel and focus on functionality and partnerships which will grow the Kiwi TCMS ecosystem, make the project even more popular, and drive further adoption!

Goal: extended GitHub integration via kiwitcms-github-app plugin.

Goal: release the following test automation framework plugins for Kiwi TCMS:

For more information see test-automation-plugins milestone.

Ongoing: work with our partners from the proprietary and open source worlds. This is hard to quantify and lots of it doesn't actually depend on the team. However we are continuing to talk to them regularly. Expect new feedback to become available under GitHub Issues.

Extra: see what we can do about testing productivity! This has always been part of our mission but we have not been able to produce anything worth sharing. We do have ideas in this space but we are generally looking for partnerships and collaborations. It is very likely that there will not be very much progress on this front because it is hard to define it properly :-(.


At the end of the day most of these goals complement each other and help drive all of them to completion. Many of the still on-boarding people have expressed a desire to improve their Python & Django skills. Working to resolve issues in the above specific areas will give them this opportunity! I expect they will show good progress on their respective tasks so we can write more about them on this blog.

Happy testing!

Plug and play support for (Gaming) keyboards with a builtin LCD panel

Posted by Hans de Goede on January 17, 2020 01:39 PM
A while ago, as a spin-off of my project to improve support for Logitech wireless keyboards and mice, I also did some work on improving support for (Gaming) keyboards with a builtin LCD panel.

Specifically if you have a Logitech MX5000, G15, G15 v2 or G510 and you want the LCD panel to show something somewhat useful then on Fedora 31 you can now install the lcdproc package and it will automatically recognize the keyboard and show "top" like information on it. No need to manually write an LCDd.conf or anything, this works fully plug and play:

sudo dnf install lcdproc
sudo udevadm trigger

If you have an MX5000 and you do not want the LCD panel to show "top"-like info, you may still want to install the mx5000tools package; it will automatically send the system time to the keyboard, after which it will display the time.

Once the 5.5 kernel becomes available as an update for Fedora, you will also be able to use the keys surrounding the LCD panel to control the lcdproc menus on it. The 5.5 kernel will also export key backlight brightness control through the standardized /sys/class/leds API, so that you can control it from e.g. the GNOME control-center's power settings, and you get a nice OSD when toggling the brightness level using the key on the keyboard.

The 5.5 kernel will also make the "G" keys send standard input events (evdev events). Once userspace support for the new key-codes they send has landed, this will allow e.g. binding them to actions in GNOME control-center's keyboard settings. But only under Wayland, as the new keycodes are > 255 and X11 does not support this.

Fedora CoreOS out of preview

Posted by Fedora Magazine on January 17, 2020 10:40 AM

The Fedora CoreOS team is pleased to announce that Fedora CoreOS is now available for general use. Here are some more details about this exciting delivery.

Fedora CoreOS is a new Fedora Edition built specifically for running containerized workloads securely and at scale. It’s the successor to both Fedora Atomic Host and CoreOS Container Linux and is part of our effort to explore new ways of assembling and updating an OS. Fedora CoreOS combines the provisioning tools and automatic update model of Container Linux with the packaging technology, OCI support, and SELinux security of Atomic Host.  For more on the Fedora CoreOS philosophy, goals, and design, see the announcement of the preview release.

Some highlights of the current Fedora CoreOS release:

  • Automatic updates, with staged deployments and phased rollouts
  • Built from Fedora 31, featuring:
    • Linux 5.4
    • systemd 243
    • Ignition 2.1
  • OCI and Docker Container support via Podman 1.7 and Moby 18.09
  • cgroups v1 enabled by default for broader compatibility; cgroups v2 available via configuration

Fedora CoreOS is available on a variety of platforms:

  • Bare metal, QEMU, OpenStack, and VMware
  • Images available in all public AWS regions
  • Downloadable cloud images for Alibaba, AWS, Azure, and GCP
  • Can run live from RAM via ISO and PXE (netboot) images

Fedora CoreOS is under active development.  Planned future enhancements include:

  • Addition of the next release stream for extended testing of upcoming Fedora releases.
  • Support for additional cloud and virtualization platforms, and processor architectures other than x86_64.
  • Closer integration with Kubernetes distributions, including OKD.
  • Aggregate statistics collection.
  • Additional documentation.

Where do I get it?

To try out the new release, head over to the download page to get OS images or cloud image IDs.  Then use the quick start guide to get a machine running quickly.

How do I get involved?

It’s easy!  You can report bugs and missing features to the issue tracker. You can also discuss Fedora CoreOS in Fedora Discourse, the development mailing list, in #fedora-coreos on Freenode, or at our weekly IRC meetings.

Are there stability guarantees?

In general, the Fedora Project does not make any guarantees around stability.  While Fedora CoreOS strives for a high level of stability, this can be challenging to achieve in the rapidly evolving Linux and container ecosystems.  We’ve found that the incremental, exploratory, forward-looking development required for Fedora CoreOS — which is also a cornerstone of the Fedora Project as a whole — is difficult to reconcile with the iron-clad stability guarantee that ideally exists when automatically updating systems.

We’ll continue to do our best not to break existing systems over time, and to give users the tools to manage the impact of any regressions.  Nevertheless, automatic updates may produce regressions or breaking changes for some use cases. You should make your own decisions about where and how to run Fedora CoreOS based on your risk tolerance, operational needs, and experience with the OS.  We will continue to announce any major planned or unplanned breakage to the coreos-status mailing list, along with recommended mitigations.

How do I migrate from CoreOS Container Linux?

Container Linux machines cannot be migrated in place to Fedora CoreOS.  We recommend writing a new Fedora CoreOS Config to provision Fedora CoreOS machines.  Fedora CoreOS Configs are similar to Container Linux Configs, and must be passed through the Fedora CoreOS Config Transpiler to produce an Ignition config for provisioning a Fedora CoreOS machine.
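For illustration, a minimal Fedora CoreOS Config might look like the sketch below (the SSH key is a placeholder; add your own); it is then passed through the Config Transpiler to produce the Ignition config used at first boot:

```yaml
variant: fcos
version: 1.0.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        # placeholder - substitute your own public key
        - ssh-rsa AAAA...
```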

Whether you’re currently provisioning your Container Linux machines using a Container Linux Config, handwritten Ignition config, or cloud-config, you’ll need to adjust your configs for differences between Container Linux and Fedora CoreOS.  For example, on Fedora CoreOS network configuration is performed with NetworkManager key files instead of systemd-networkd, and time synchronization is performed by chrony rather than systemd-timesyncd.  Initial migration documentation will be available soon and a skeleton list of differences between the two OSes is available in this issue.

CoreOS Container Linux will be maintained for a few more months, and then will be declared end-of-life.  We’ll announce the exact end-of-life date later this month.

How do I migrate from Fedora Atomic Host?

Fedora Atomic Host has already reached end-of-life, and you should migrate to Fedora CoreOS as soon as possible.  We do not recommend in-place migration of Atomic Host machines to Fedora CoreOS. Instead, we recommend writing a Fedora CoreOS Config and using it to provision new Fedora CoreOS machines.  As with CoreOS Container Linux, you’ll need to adjust your existing cloud-configs for differences between Fedora Atomic Host and Fedora CoreOS.

Welcome to Fedora CoreOS.  Deploy it, launch your apps, and let us know what you think!

All systems go

Posted by Fedora Infrastructure Status on January 17, 2020 09:05 AM
Service 'The Koji Buildsystem' now has status: good: Everything seems to be working.

Major service disruption

Posted by Fedora Infrastructure Status on January 17, 2020 08:08 AM
Service 'The Koji Buildsystem' now has status: major: koji is not responsive, the problem is under investigation

Software tips for nerds

Posted by Jakub Kadlčík on January 17, 2020 12:00 AM

This article should rather be called What software I started using in 2019 but I just didn’t like that title. It is going to be the first post in a yearly series on this topic.

Admittedly, I have unconventional preferences for the user interfaces of the applications I use daily. This manifests itself in a strong Vim modal-editing addiction, a tiling window manager, and not having a mouse on my desk. I figured this series might be useful for other weirdos like me. Also, it will be fun to look back and see the year-by-year progression.

See /r/unixporn if you are aroused by these kinds of pictures.

My daily driver is a Lenovo X1 Carbon, which I use almost exclusively for DevOps tasks. It has only a 13” display, so I usually go one maximized application per workspace. This is very conveniently achievable with a tiling window manager, in my case Qtile.

Now, what changed in 2019? Quite a lot …

Vim for everything

I have been using Vim for almost a decade now, which is probably the longest I've stuck with any application. During that time, I repeatedly tried to use it as an IDE but inevitably failed each time. Let's remember eclim, my one-time Java IDE. I work almost exclusively on projects written in Python, which can be done beautifully in Vim, but because of a gap in my skills I was reliant on PyCharm. Thankfully, not anymore.

My biggest issue was misusing tabs instead of buffers, and poor navigation within projects. Reality check: do you open one file per tab? This is common practice in other text editors, but please know that this is not the purpose of tabs in Vim; you should be using buffers instead. Please give them a chance and read Buffers, buffers, buffers.
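For reference, the buffer workflow boils down to a handful of built-in Vim commands (nothing plugin-specific here):

```vim
" open a file into a new buffer
:edit foo.py
" list all open buffers
:ls
" switch to a buffer by number or (partial) name
:buffer foo
" cycle forward / backward through buffers
:bnext
:bprevious
" close the current buffer
:bdelete
```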

Regarding project navigation, have you ever tried the shift-shift search in PyCharm or another JetBrains IDE? It's exactly the kind of thing you wouldn't even imagine, but after using it for the first time you don't understand how you lived without it. It interactively fuzzy-finds files and tags (classes, functions, etc.) matching your input, so you can easily open them. In my opinion, this unquestionably beats any other way of navigating a project, such as using a file manager, NerdTree, or find on the command line.

Fortunately, both of these problems can be solved by fzf.vim, which quickly became one of my most favorite Vim plugins. Please read this section about fzf plugin.
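As a sketch of how this looks in practice (the <leader> bindings are a personal choice; :Files, :Buffers and :Tags are commands provided by fzf.vim):

```vim
" ~/.vimrc: fuzzy navigation via fzf.vim (bindings are a personal choice)
" fuzzy-find files in the project
nnoremap <leader>f :Files<CR>
" switch between open buffers
nnoremap <leader>b :Buffers<CR>
" jump to tags (classes, functions, etc.)
nnoremap <leader>t :Tags<CR>
```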

I am forever grateful to Ian Langworth for writing VIM AFTER 11 YEARS, EVERYTHING I MISSED IN “VIM AFTER 11 YEARS” and VIM AFTER 15 YEARS articles. If you are a Vim user, those are an absolute must-read.


Although I spend a considerable amount of my work time in a terminal, I couldn't care less which terminal emulator I use. The majority of them support the exact same features, and in the end you are just sitting there typing commands into a black screen.

After migrating from PyCharm to Vim, my time spent in the terminal increased even more. Thinking about it, I use a terminal and a web browser. That's it. Up until this point, I'd been using gnome-terminal because it comes preinstalled with Fedora. While it is a perfectly fine piece of software, for me its customizability sucks. I just don't want to configure my core tools by clicking around a settings window. It has limitations, it is harder to manage in git, and so on.

Ultimately, the last straw that made me abandon gnome-terminal was its inability to use third-party color schemes. Among many others, there is a great project called base16, which defines palettes of colors for creating beautiful schemes and then provides configurations for countless applications. For most applications the process is very simple: put a color scheme file into an expected directory, then edit the config file and specify which scheme you want to use. Unfortunately, it doesn't work that way for gnome-terminal: you need to run a hundred-line bash script, possibly do other shenanigans, and hope for the best.

Currently, my terminal emulator of choice is Urxvt (aka rxvt-unicode). Here comes my sales pitch: Urxvt is an old, ugly-looking application with a horrible user interface. How about that? Joking aside, it looks terrifying at first sight. However, it can be easily configured through ~/.Xresources, and with very little effort it looks as beautiful as a terminal can get.
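For the record, a minimal ~/.Xresources sketch (the colors below are placeholders; a base16 scheme file would normally supply the full palette):

```conf
! ~/.Xresources: minimal urxvt configuration (colors are placeholders)
URxvt.font:        xft:monospace:size=11
URxvt.scrollBar:   false
URxvt.foreground:  #d8d8d8
URxvt.background:  #181818
! reload with: xrdb -merge ~/.Xresources
```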

Color scripts from stark/Color-Scripts


Tmux is a terminal multiplexer, tmux is a replacement for GNU Screen, tmux is steroids for your terminal, tmux is the greatest thing since free porn. You. Want. Tmux!

It allows you to:

  • horizontally or vertically split the terminal window
  • run a command, close the terminal window, and then attach back to it when needed
  • throw your mouse to trash
  • add support for tabs, scrolling, searching, and everything that may be missing in lightweight terminal emulators

Recommended read - Boost Your Productivity In The Terminal With tmux
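The features above map onto a small ~/.tmux.conf; here is a sketch (the split bindings are a personal preference, not defaults):

```conf
# ~/.tmux.conf: a few quality-of-life settings
set -g history-limit 50000
# vi-style keys in copy/scroll mode
setw -g mode-keys vi
# more memorable split bindings: prefix + | and prefix + -
bind | split-window -h
bind - split-window -v
```

Detaching and re-attaching is then just `tmux detach` (or prefix + d) followed by `tmux attach`.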


The IRC protocol supports only plain-text communication. Sharing images, videos, or audio can't be done directly, but rather by uploading them somewhere and embedding a link. That being said, there is no reason not to use a terminal client. I've finally migrated from HexChat (and previously XChat) to Weechat.

My weechat configuration looks quite chaotic in a small window like this. It is optimized for fullscreen.


RSS is a well-known method for subscribing to news feeds from various websites. It appears to be stupidly simple to use and it is the most effective way to keep an eye on interesting articles. So why haven't I been able to stick with it? My typical pattern was: why don't I use RSS? Then installing a client, subscribing to some feeds, being happy, then forgetting that I have an RSS client. And then going back to step one.

Newsboat with a panel indicator seems to do the trick for me.


What is your current system for taking notes and managing to-do lists? Or do you even have one? For the longest time, my approach was to remember everything. Surprisingly enough, it worked fine throughout high school, college, and all my previous jobs. Now, however, as I grow older and my responsibilities increase, the beloved “if I don’t remember it, it wasn’t that important” philosophy is just not sufficient anymore. Also, repeatedly excusing myself during work meetings because I forgot to do something was just unprofessional.

Well, where to keep notes and to-do lists? There are a gazillion tools for desktops, smartphones, and everything else. Apparently, they are even still making sketchbooks. Like … from paper. Who knew? Anyway, I had some criteria.

  1. In my team, we have week-long sprints, so I want to track tasks for that period. While I want to write detailed information about them, as well as fragmenting them into sub-tasks, I am not interested in specifying attributes such as explicit deadlines, locations, projects, etc.
  2. There may be tasks that need to be done on a specific day, so it would be useful to also have a page for each day.
  3. No mouse! Tools that require me to click buttons to create tasks, or to drag them with a mouse to reorder them, are a hard no-go. This is not negotiable.

I’d been using Joplin for some time. It lets you create notebooks and notes within them. Each note is a markdown page, and you can do whatever you want in it. Joplin served its purpose, but I didn’t really enjoy using it. First, it is written in Javascript and bundled with Electron, so it is kind of slow and clumsy. I also missed my Vim key bindings (it has a Vim mode now, so that shouldn’t be a problem anymore), and finally, it used a different color scheme than the rest of my system, which, simply speaking, bothered me. This could probably have been solved by using its CLI interface, but then I discovered, and migrated to, Vimwiki.

I would say it has exactly the same features, but it is a Vim plugin.
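A minimal vimrc sketch for Vimwiki (the wiki path and the markdown syntax choice are example settings, not defaults):

```vim
" ~/.vimrc: one wiki kept as markdown files (path/syntax are example choices)
let g:vimwiki_list = [{'path': '~/wiki/',
      \ 'syntax': 'markdown', 'ext': '.md'}]
```

With that in place, `<Leader>ww` opens the wiki index.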

Notes and long term ideas on the left, weekly plan on the right

GTK: OSX a11y support

Posted by Alberto Ruiz on January 16, 2020 05:49 PM

Everybody knows that I have always been a firm believer in Gtk+’s potential to be a great cross-platform toolkit beyond Linux. GIMP and Inkscape, for example, are loyal users that ship builds for those platforms. The main challenge is the small number of maintainers running, testing and improving those platforms.

Gtk+ has a few shortcomings; one of the biggest is the lack of a11y support outside of Linux. Since I have regular access to a modern OSX machine I decided to give this a go (and maybe learn some Obj-C in the process).

So I started by having a look at how ATK works and how it relates to the GTK DOM, my main goal was to have a GTK3 module that would walk through the toplevels and build an OSX accessibility tree.

So my initial/naive attempt is in this git repo, which you can build by installing gtk from brew.

Some of the shortcomings that I have found to actually test this and move forward:

  • Running gtk3-demo creates an NSApp that has no accessibility enabled; you can tell because the a11y inspector that comes with XCode won’t show metadata even for the window decorator controls. I have no idea how to enable that manually; it looks like starting an actual NSApp, like Inkscape and GIMP do, would give you part of that.
  • Inkscape and GIMP have custom code to register themselves as an actual NSApp as well as to override XDG env vars at runtime to set the right paths. I suspect this is something we could move to G and GtkApplication.
  • The .dylib I generate with this repo will not actually load in Inkscape for some reason, so as of now I am stuck with having to somehow build a replacement gtk dylib for Inkscape with my code instead of going through an actual module.

So this is my progress thus far, I think once I get to a point where I can iterate over the concept, it would be easier to start sketching the mapping between ATK and NSAccessibility. I would love feedback or help, so if you are interested please reach out by filing an issue on the gitlab project!

duplicity for EPEL-8

Posted by Gwyn Ciesla on January 16, 2020 02:44 PM

Finally, it’s on its way to updates-testing. This is 0.8.09, with Python 3. Please test, give karma/feedback, and enjoy!

Kiwi TCMS 7.3

Posted by Kiwi TCMS on January 15, 2020 11:00 PM

We're happy to announce Kiwi TCMS version 7.3!

IMPORTANT: this is a critical security update for CVE-2019-19844: Potential account hijack via password reset form!

It also migrates to Django 3.0 and includes several other improvements and bug fixes!

You can explore everything at https://public.tenant.kiwitcms.org!

Supported upgrade paths:

5.3   (or older) -> 5.3.1
5.3.1 (or newer) -> 6.0.1
6.0.1            -> 6.1
6.1              -> 6.1.1
6.1.1            -> 6.2 (or newer)

Docker images:

kiwitcms/kiwi       latest  4026ee62e488    556 MB
kiwitcms/kiwi       6.2     7870085ad415    957 MB
kiwitcms/kiwi       6.1.1   49fa42ddfe4d    955 MB
kiwitcms/kiwi       6.1     b559123d25b0    970 MB
kiwitcms/kiwi       6.0.1   87b24d94197d    970 MB
kiwitcms/kiwi       5.3.1   a420465852be    976 MB

Changes since Kiwi TCMS 7.2


  • Update Django from 2.2.8 to 3.0.2


  • Update python-gitlab from 1.13.0 to 1.15.0
  • Update pygithub from 1.44.1 to 1.45
  • Update django-grappelli from 2.13.2 to 2.13.3
  • Bump django-uuslug from 1.1.9 to 1.2.0
  • Bump django-attachments from 1.4.1 to 1.5
  • Bump django-vinaigrette from 1.2.0 to 2.0.1
  • Update marked to version 0.8.0
  • Update prismjs to version 1.19.0
  • Generalize existing kiwitcms.telemetry.plugins handling code by renaming the entry point to kiwitcms.plugins
  • Refactor views to class based (Svetlozar Stoyanov)
  • Teach Kiwi TCMS to automatically report bugs to GitHub when the user selects such action. Fall back to opening a new browser window for manually entering the bug if something goes wrong


  • When migrating from the older Bug model to LinkReference skip bugs which are attached directly to test cases instead of test executions. See SO #59321756
  • Remove AutoField.max_length because it is ignored by Django 3


  • TestCase.update() method now allows updating the author field. Fixes Issue #630

Bug fixes

  • Modify template pass object as test_plan. Fixes Issue #1307 (Ed Oswald S. Go)
  • Enable version selection in test plan search page. Fixes Issue #1276
  • Apply percentage rounding for completed test executions. Fixes Issue #1230
  • Fix a logical bug in conditional expression when deciding whether or not reporting bugs to selected issue tracker is disabled


  • Add code of conduct. Fixes Issue #1185 (Rosen Sasov)
  • Add test for KIWI_DONT_ENFORSE_HTTPS. Closes Issue #1274
  • Replace ugettext_lazy with gettext_lazy for Django 3
  • Remove BaseCaseSearchForm.bug_id field
  • Refactor testcase edit view to class-based
  • Happy New Year pylint

GitHub integration

The hosted version of Kiwi TCMS ships with additional GitHub integration. See GitHub App announcement and github-app for more information!

Upcoming conferences

The next two events we are going to participate in are:

If you are around come and say "Happy testing"!

How to upgrade

Backup first! If you are using Kiwi TCMS as a Docker container then:

cd path/containing/docker-compose/
docker-compose down
docker pull kiwitcms/kiwi
docker pull centos/mariadb
docker-compose up -d
docker exec -it kiwi_web /Kiwi/manage.py migrate

WHERE: docker-compose.yml has been updated from your private git repository! The file provided in our GitHub repository is an example. Not for production use!

WARNING: kiwitcms/kiwi:latest and docker-compose.yml will always point to the latest available version! If you have to upgrade in steps, e.g. between several intermediate releases, you have to modify the above workflow:

# starting from an older Kiwi TCMS version
docker-compose down
docker pull kiwitcms/kiwi:<next_upgrade_version>
# edit docker-compose.yml to point at kiwitcms/kiwi:<next_upgrade_version>
docker-compose up -d
docker exec -it kiwi_web /Kiwi/manage.py migrate
# repeat until you have reached latest

Happy testing!

All systems go

Posted by Fedora Infrastructure Status on January 15, 2020 10:54 PM
New status good: Everything seems to be working. for services: Ipsilon, Badges, Blockerbugs, Package Updates Manager, Fedora Infrastructure Cloud, COPR Build System, Documentation website, Fedora elections, Account System, Fedora Messaging Bus, Fedora Calendar, Fedora pastebin service, The Koji Buildsystem, Koschei Continuous Integration, Kerberos, Mailing Lists, Module Build Service, Mirror List, Mirror Manager, Fedora Packages App, Pagure, Fedora People, Package Database, Package maintainers git repositories, Fedora Container Registry, ABRT Server, Tagger, Fedora websites, Fedora Wiki, Zodbot IRC bot

There are scheduled downtimes in progress

Posted by Fedora Infrastructure Status on January 15, 2020 09:04 PM
New status scheduled: Everything seems to be working. for services: Ipsilon, Badges, Blockerbugs, Package Updates Manager, Fedora Infrastructure Cloud, COPR Build System, Documentation website, Fedora elections, Account System, Fedora Messaging Bus, Fedora Calendar, Fedora pastebin service, The Koji Buildsystem, Koschei Continuous Integration, Kerberos, Mailing Lists, Module Build Service, Mirror List, Mirror Manager, Fedora Packages App, Pagure, Fedora People, Package Database, Package maintainers git repositories, Fedora Container Registry, ABRT Server, Tagger, Fedora websites, Fedora Wiki, Zodbot IRC bot

Self Service Speedbumps

Posted by Adam Young on January 15, 2020 05:18 PM

The OpenShift installer is fairly specific in what it requires, and will not install into a virtual machine that does not have sufficient resources. These limits are:

  • 16 GB RAM
  • 4 Virtual CPUs
  • 25 GB Disk Space

This is fairly frustrating if your cloud provider does not give you a flavor that matches this. The last item specifically is an artificial limitation as you can always create an additional disk and mount it, but the installer does not know to do that.
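For illustration, the manual workaround would look roughly like this (the server and volume names here are hypothetical, and the commands assume you have the OpenStack CLI and enough volume quota):

```
$ openstack volume create --size 15 extra-disk
$ openstack server add volume my-master-0 extra-disk
# inside the guest: partition, create a filesystem, and mount the new device
```

Each of those steps is self service today; it is only the flavor definition that requires an administrator.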

In my case, there is a flavor that almost matches; it has 10 GB of Disk space instead of the required 25. But I cannot use it.

Instead, I have to use a larger flavor that has double the VCPUs, and thus eats up more of my VCPU quota….to the point that I cannot afford more than 4 Virtual machines of this size, and thus cannot create more than one compute node; OpenShift needs 3 nodes for the control plane.

I do not have permissions to create a flavor on this cloud. Thus, my only option is to open a ticket. Which has to be reviewed and acted upon by an administrator. Not a huge deal.

This is how self service breaks down: a non-security decision (linking disk size to the other characteristics of a flavor) plus access control rules that prevent end users from customizing. So the end user waits for a human to respond.

In my case, that means that I have to provide an alternative place to host my demonstration, just in case things don’t happen in time. Which costs my organization money.

This is not a ding on my cloud provider. They have the same OpenStack API as anyone else deploying OpenStack.

This is not a ding on Keystone; create flavor is not a project scoped operation, so I can’t even blame my favorite bug.

This is not a ding on the Nova API. It is reasonable to reserve the ability to create flavors to system administrators and, if instances have storage attached, to provide it in reasonably sized chunks.

My problem just falls at the junction of several different zones of responsibility. It is the overlap that causes the pain in this case. This is not unusual.

Would it be possible to have a more granular API, like “create customer flavor” that built a flavor out of pre-canned parts and sizes? Probably. That would solve my problem. I don’t know if this is a general problem, though.

This does seem like something that could be addressed by a GitOps-type approach. To perform an operation like this, I should be able to issue a command that gets checked in to git, confirmed, and posted for code review. An administrator could then confirm it or suggest an alternative approach. Today this happens in the ticketing system, which is human-resource-intensive. If no one says “yes”, the default is no…and the thing just sits there.

What would be a better long term solution? I don’t know. I’m going to let this idea set for a while.

What do you think?


Posted by Bodhi on January 15, 2020 12:11 PM

This is a bugfix release.

Bug fixes

  • Fix the Fedora Messaging exception caught for publish backoff (#3871).
  • Only pass scalar arguments to celery tasks to avoid lingering database
    transactions (#3902).
  • Fix bug title escaping to prevent a JS crash while editing updates
  • Fix potential race condition with the celery worker accessing an update
    before the web request was committed (#3858).


The following developers contributed to this release of Bodhi:

  • Aurélien Bompard
  • Clement Verna
  • Mattia Verga

Running syslog-ng in BastilleBSD

Posted by Peter Czanik on January 15, 2020 11:39 AM

Bastille is a container management system for FreeBSD. If you are coming from the Linux world, it is a bit like Docker or Podman / Buildah from Red Hat, at least in some of its functionality. I learned about BastilleBSD right before my Christmas holidays. Currently my primary work platform is Linux and I am just preparing to learn about Kubernetes and OpenShift. I planned not to do anything work related during my holidays – which is quite difficult if your hobby heavily overlaps with your work. Having strong FreeBSD roots (I started using FreeBSD in 1994), BastilleBSD arrived just in time to be a good excuse to do something IT related :-)

Getting started

Before you begin, make sure that you have FreeBSD 12.1 installed. You should get started by completing the Getting Started guide of BastilleBSD. It helps you to upgrade your system to have the latest security updates, set up the pkg package system and install BastilleBSD. The easiest way is to install it using a package:

pkg install bastille

If you are just testing Bastille this is not strictly necessary, but you can enable containers to start automatically at boot with the following command:

sysrc bastille_enable=YES

For added flexibility and security, you can use the ZFS filesystem. As I was testing in a limited virtual environment, I rather skipped this possibility. BastilleBSD works fine without it.

When you come to the packet filter configuration, make sure that you configure the external interface correctly. In my case, I had to change “vtnet0” to “em0”.

Once firewall configuration was ready, I bootstrapped the 12.1 release for my containers:

bastille bootstrap 12.1-RELEASE update

After running this command, your system is finally ready to create containers. I tested all the bastille commands listed in the getting started guide: create, start, list, console, stop and destroy.

Now that you are done with the setup and basic testing, you are ready for the next step: running syslog-ng in bastille!

Syslog-ng in bastille

As I am not too creative when it comes to inventing names and IP addresses, I just used the examples from the getting started guide. If you are more creative, replace “alcatraz” with a better name and “” with a different IP address.

First, create a new container:

bastille create alcatraz 12.1-RELEASE

Here alcatraz is the name of the container, the 12.1-RELEASE is the FreeBSD release it is based on, and at the end of the command line, the IP address is where the container will run. Obviously, if your local network is running on IP addresses, then you should rather pick from or

Start the new container:

bastille start alcatraz

It was a bit of a surprise to me that further configuration only worked when the container was started. Once the container is started, you can start configuring it:

Disable syslogd:

bastille sysrc alcatraz syslogd_enable="NO"

Install syslog-ng:

bastille pkg alcatraz install syslog-ng

Start syslog-ng automagically when starting the container:

bastille sysrc alcatraz syslog_ng_enable="YES"

Edit the syslog-ng configuration of the container. Add a tcp source listening on port 514 and disable any log paths sending logs to /dev/console as it is unavailable in the container. Here is a diff of changes:

root@fb121:~ # diff /usr/local/etc/syslog-ng.conf /usr/local/bastille/jails/alcatraz/root/usr/local/etc/syslog-ng.conf
< 	     udp(); internal(); };
> 	     udp(); tcp(port(514)); internal(); };
< log { source(src); filter(f_err); destination(console); };
< log { source(src); filter(f_kern); filter(f_warning); destination(console); };
< log { source(src); filter(f_auth); filter(f_notice); destination(console); };
< log { source(src); filter(f_mail); filter(f_crit); destination(console); };
> #log { source(src); filter(f_err); destination(console); };
> #log { source(src); filter(f_kern); filter(f_warning); destination(console); };
> #log { source(src); filter(f_auth); filter(f_notice); destination(console); };
> #log { source(src); filter(f_mail); filter(f_crit); destination(console); };
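Put together, the change amounts to adding a tcp() driver to the existing src source statement. A sketch of the modified statement (the diff above shows only its tail; the drivers before udp() stay as shipped):

```
source src {
    # ... stock FreeBSD drivers, unchanged ...
    udp();
    tcp(port(514));
    internal();
};
```

The four log statements routing to the console destination are simply commented out, since /dev/console is unavailable in the container.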

And finally restart the container for the configuration to take effect:

bastille restart alcatraz

We are almost there. One thing is still missing: firewall configuration. While Linux container tools do this automagically, some extra work is needed when using BastilleBSD. The container is using an IP address on an internal network. If we want to send log messages to port 514 of the container from another host, we need to forward the connection from the public IP address of the host to the internal IP of the container.

Open /etc/pf.conf in your favorite text editor and add a line like this below the commented-out example:

rdr pass inet proto tcp from any to any port {514} ->

514 is the port we specified in the syslog-ng configuration and the IP address is the one we used when we started the container. Change either of them if you used different values earlier. Once you have saved the file, reload the PF configuration:

service pf restart


Most likely due to the PF configuration, you cannot test your syslog-ng container from the host running BastilleBSD. You get a “connection refused” when you try to connect to port 514 of the public IP address of your host. So, you need to use a second host for testing. If you do not have syslog(-ng) running there, you can use loggen or telnet to test syslog-ng in BastilleBSD. In my case, the IP address of the host running BastilleBSD is For testing, I used:

telnet 514

Just enter some random text, hit enter, and repeat it a couple of times:

Connected to
Escape character is '^]'.
this is a test
another one

Close the connection and check the logs. You should see something similar in /usr/local/bastille/jails/alcatraz/root/var/log/messages:

root@fb121:~ # tail /usr/local/bastille/jails/alcatraz/root/var/log/messages
Jan 10 14:15:38 alcatraz syslog-ng[1732]: Syslog connection accepted; fd='23', client='AF_INET(', local='AF_INET('
Jan 10 14:15:47 this is a test
Jan 10 14:16:12 another one

What is next?

Once I was ready with the first version of this blog, I sent it to Christer Edwards – author of BastilleBSD – for a review. Within a few hours, I received some very useful feedback. And not just feedback, he also prepared a template, making the above tasks a lot easier.

The getting started guide includes a section on networking. The method described there involves private IP addresses and port forwarding. It is quite complex; on the other hand, it works everywhere, as it does not require additional external IP addresses for the host. There is an easier method as well, which uses IP aliases on the host. Learn more about it from the documentation: https://docs.bastillebsd.org/en/latest/chapters/networking.html

When it comes to testing in my blogs, I always focus on functional testing of syslog-ng. Here are a few more ways to test and troubleshoot the freshly created container:

root@fb121:~ # bastille cmd alcatraz sockstat -4
root     syslog-ng  871   19 udp4       *:*
root     syslog-ng  871   20 tcp4       *:*

root@fb121:~ # bastille cmd alcatraz ps -auxw
root  870  0.0  0.3 19256 5340  -  IJ   07:46   0:00.01 /usr/local/sbin/syslog-ng -p /var/run/syslog.pid
root  871  0.0  0.4 23956 8560  -  IsJ  07:46   0:00.16 /usr/local/sbin/syslog-ng -p /var/run/syslog.pid
root  927  0.0  0.1 11408 2560  -  IsJ  07:46   0:00.01 /usr/sbin/cron -J 60 -s
root 1083  0.0  0.1 11684 2720  0  R+J  07:50   0:00.00 ps -auxw

The previous two examples ran troubleshooting tools in the container called alcatraz from the command line of the host. If you expect to run more commands, using the console of the container might be more convenient:

root@fb121:~ # bastille console alcatraz
Last login: Fri Jan 10 14:45:32 on pts/1

Welcome to FreeBSD!

Release Notes, Errata: https://www.FreeBSD.org/releases/
Security Advisories:   https://www.FreeBSD.org/security/
FreeBSD Handbook:      https://www.FreeBSD.org/handbook/
FreeBSD FAQ:           https://www.FreeBSD.org/faq/
Questions List: https://lists.FreeBSD.org/mailman/listinfo/freebsd-questions/
FreeBSD Forums:        https://forums.FreeBSD.org/

Documents installed with the system are in the /usr/local/share/doc/freebsd/
directory, or can be installed later with:  pkg install en-freebsd-doc
For other languages, replace "en" with a language code like de or fr.

Show the version of FreeBSD installed:  freebsd-version ; uname -a
Please include that output and any error messages when posting questions.
Introduction to manual pages:  man man
FreeBSD directory layout:      man hier

Edit /etc/motd to change this login announcement.
root@alcatraz:~ # sockstat -4
root     syslog-ng  871   19 udp4       *:*
root     syslog-ng  871   20 tcp4       *:*

Using BastilleBSD templates

As mentioned earlier, based on the original version of my blog, Christer Edwards prepared a syslog-ng template for BastilleBSD: https://gitlab.com/bastillebsd-templates/syslog-ng This means that instead of building a syslog-ng container from the ground up, you can easily use a template.

First, create a new container, configure the firewall for it if necessary and start it (as mentioned earlier: you can work only on running containers). Next, download the syslog-ng template:

bastille bootstrap https://gitlab.com/BastilleBSD-Templates/syslog-ng

And apply it to the freshly created and started container (replace TARGET with the name of the container):

bastille template TARGET BastilleBSD-Templates/syslog-ng

You should be able to test syslog-ng the same way as described above.

If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or even chat with us. For a list of possibilities, check our GitHub page under the “Community” section at https://github.com/syslog-ng/syslog-ng. On Twitter, I am available as @PCzanik.

Develop GUI apps using Flutter on Fedora

Posted by Fedora Magazine on January 15, 2020 08:00 AM

When it comes to app development frameworks, Flutter is the latest and greatest. Google seems to be planning to take over the entire GUI app development world with Flutter, starting with mobile devices, which are already perfectly supported. Flutter allows you to develop cross-platform GUI apps for multiple targets — mobile, web, and desktop — from a single codebase.

This post will go through how to install the Flutter SDK and tools on Fedora, as well as how to use them both for mobile development and web/desktop development.

Installing Flutter and Android SDKs on Fedora

To get started building apps with Flutter, you need to install

  • the Android SDK;
  • the Flutter SDK itself; and,
  • optionally, an IDE and its Flutter plugins.

Installing the Android SDK

Flutter requires the installation of the Android SDK with the entire Android Studio suite of tools. Google provides a tar.gz archive. The Android Studio executable can be found in the android-studio/bin directory and is called studio.sh. To run it, open a terminal, cd into the aforementioned directory, and then run:

$ ./studio.sh

Installing the Flutter SDK

Before you install Flutter you may want to consider what release channel you want to be on.

The stable channel is least likely to give you a headache if you just want to build a mobile app using mainstream Flutter features.

On the other hand, you may want to use the latest features, especially for desktop and web app development. In that case, you might be better off installing either the latest version of the beta or even the dev channel.

Either way, you can switch between channels after you install using the flutter channel command explained later in the article.

Head over to the official SDK archive page and download the latest installation bundle for the release channel most appropriate for your use case.

The installation bundle is simply an xz-compressed tarball (.tar.xz extension). You can extract it wherever you want, provided that you add the flutter/bin subdirectory to the PATH environment variable.
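For example, assuming you extracted the bundle under ~/development (the directory is arbitrary), the following makes the tools available in the current shell; append the export line to your ~/.bash_profile or ~/.bashrc to make it permanent:

```shell
# add the Flutter SDK's command-line tools to PATH for this shell session
export PATH="$PATH:$HOME/development/flutter/bin"
# verify the directory is now on PATH
echo "$PATH"
```

After opening a new terminal (or sourcing the file), which flutter should resolve to the extracted SDK.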

Installing the IDE plugins

To install the plugin for Visual Studio Code, you need to search for Flutter in the Extensions tab. Installing it will also install the Dart plugin.

The same will happen when you install the plugin for Android Studio by opening the Settings, then the Plugins tab and installing the Flutter plugin.

Using the Flutter and Android CLI Tools on Fedora

Now that you’ve installed Flutter, here’s how to use the CLI tool.

Upgrading and Maintaining Your Flutter Installations

The flutter doctor command is used to check whether your installation and related tools are complete and don’t require any further action.

For example, the output you may get from flutter doctor right after installing on Fedora is:

Doctor summary (to see all details, run flutter doctor -v):

[✓] Flutter (Channel stable, v1.12.13+hotfix.5, on Linux, locale it_IT.UTF-8)

[!] Android toolchain - develop for Android devices (Android SDK version 29.0.2)

    ✗ Android licenses not accepted.  To resolve this, run: flutter doctor --android-licenses

[!] Android Studio (version 3.5)

    ✗ Flutter plugin not installed; this adds Flutter specific functionality.

    ✗ Dart plugin not installed; this adds Dart specific functionality.

[!] Connected device

    ! No devices available

! Doctor found issues in 3 categories.

Of course the issue with the Android toolchain has to be resolved in order to build for Android. Run this command to accept the licenses:

$ flutter doctor --android-licenses

Use the flutter channel command to switch channels after installation. It’s just like switching branches on Git (and that’s actually what it does). You use it in the following way:

$ flutter channel <channel_name>

…where you’d replace <channel_name> with the release channel you want to switch to.

After doing that, or whenever you feel the need to do it, you need to update your installation. You might consider running this every once in a while or when a major update comes out if you follow Flutter news. Run this command:

$ flutter upgrade

Building for Mobile

You can build for Android very easily: the flutter build command supports it by default, and it allows you to build both APKs and newfangled app bundles.

All you need to do is to create a project with flutter create, which will generate some code for an example app and the necessary android and ios folders.

When you’re done coding you can either run:

  • flutter build apk or flutter build appbundle to generate the necessary app files to distribute, or
  • flutter run to run the app on a connected device or emulator directly.

When you run the app on a phone or emulator with flutter run, you can use the R button on the keyboard to use stateful hot reload. This feature updates what’s displayed on the phone or emulator to reflect the changes you’ve made to the code without requiring a full rebuild.

If you input a capital R character to the debug console, you trigger a hot restart. This restart doesn’t preserve state and is necessary for bigger changes to the app.

If you’re using a GUI IDE, you can trigger a hot reload using the bolt icon button and a hot restart with the typical refresh button.

Building for the Desktop

To build apps for the desktop on Fedora, use the flutter-desktop-embedding repository. The flutter create command doesn’t have templates for desktop Linux apps yet. That repository contains examples of desktop apps and files required to build on desktop, as well as examples of plugins for desktop apps.

To build or run apps for Linux, you also need to be on the master release channel and enable Linux desktop app development. To do this, run:

$ flutter config --enable-linux-desktop

After that, you can use flutter run to run the app on your development workstation directly, or run flutter build linux to build a binary file in the build/ directory.

If those commands don’t work, run this command in the project directory to generate the required files to build in the linux/ directory:

$ flutter create .

Building for the Web

Starting with Flutter 1.12, you can build Web apps using Flutter with the mainline codebase, without having to use the flutter_web forked libraries, but you have to be running on the beta channel.

If you are (you can switch to it using flutter channel beta and flutter upgrade as we’ve seen earlier), you need to enable web development by running flutter config --enable-web.

After doing that, you can run flutter run -d web and a local web server will be started from which you can access your app. The command returns the URL at which the server is listening, including the port number.

You can also run flutter build web to build the static website files in the build/ directory.

If those commands don’t work, run this command in the project directory to generate the required files to build in the web/ directory:

$ flutter create .

Packages for Installing Flutter

Other distributions have packages or community repositories that make installing and updating software more straightforward and intuitive. However, at the time of writing, no such thing exists for Flutter. If you have experience packaging RPMs for Fedora, consider contributing to this GitHub repository for this COPR package.

The next step is learning Flutter. You can do that in a number of ways:

  • Read the good API reference documentation on the official site
  • Watch some of the introductory video courses available online
  • Read one of the many books out there today. [Check out the author’s bio for a suggestion! — Ed.]

Photo by Randall Ruiz on Unsplash.

All systems go

Posted by Fedora Infrastructure Status on January 14, 2020 11:51 PM
Service 'The Koji Buildsystem' now has status: good: Everything seems to be working.

There are scheduled downtimes in progress

Posted by Fedora Infrastructure Status on January 14, 2020 09:02 PM
Service 'The Koji Buildsystem' now has status: scheduled: scheduled buildsystem outage

Major service disruption

Posted by Fedora Infrastructure Status on January 14, 2020 09:01 PM
Service 'The Koji Buildsystem' now has status: major: Build systems will be offline for reboots

Fedora 31 : Install Yii framework.

Posted by mythcat on January 14, 2020 08:27 PM
Today I will show you how to install the Yii framework:
Yii is a fast, secure, and efficient PHP framework. Flexible yet pragmatic. Works right out of the box. Has reasonable defaults.
First, let's install PHP; see the official webpage about Fedora for PHP development.
[mythcat@desk ~]$ sudo dnf install php-cli
Is this ok [y/N]: y
Downloading Packages:
(1/2): php-common-7.3.13-1.fc31.x86_64.rpm 794 kB/s | 760 kB 00:00
(2/2): php-cli-7.3.13-1.fc31.x86_64.rpm 2.2 MB/s | 2.8 MB 00:01
Total 1.6 MB/s | 3.5 MB 00:02
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
Preparing : 1/1
Installing : php-common-7.3.13-1.fc31.x86_64 1/2
Installing : php-cli-7.3.13-1.fc31.x86_64 2/2
Running scriptlet: php-cli-7.3.13-1.fc31.x86_64 2/2
Verifying : php-cli-7.3.13-1.fc31.x86_64 1/2
Verifying : php-common-7.3.13-1.fc31.x86_64 2/2

php-cli-7.3.13-1.fc31.x86_64 php-common-7.3.13-1.fc31.x86_64

Next, install these packages: PHPUnit for unit tests and Composer for dependency management.
[mythcat@desk ~]$ sudo dnf install phpunit composer
Install mysqli extension:
[mythcat@desk ~]$ sudo dnf install php-mysqli

You can start the server:
[mythcat@desk ~]$ sudo php --server localhost:8080 --docroot  .
PHP 7.3.13 Development Server started at Tue Jan 14 21:58:19 2020
Listening on http://localhost:8080
Document root is /home/mythcat
Press Ctrl-C to quit.
[Tue Jan 14 21:58:40 2020] [::1]:32988 [404]: / - No such file or directory
[Tue Jan 14 21:58:40 2020] [::1]:32990 [404]: / - No such file or directory
[Tue Jan 14 21:58:42 2020] [::1]:33014 [404]: /favicon.ico - No such file or directory
[Tue Jan 14 21:59:48 2020] [::1]:33080 [404]: / - No such file or directory
The composer command has these arguments:
[mythcat@desk ~]$ composer
Composer version 1.9.1 2019-11-01 17:20:17

command [options] [arguments]

-h, --help Display this help message
-q, --quiet Do not output any message
-V, --version Display this application version
--ansi Force ANSI output
--no-ansi Disable ANSI output
-n, --no-interaction Do not ask any interactive question
--profile Display timing and memory usage information
--no-plugins Whether to disable plugins.
-d, --working-dir=WORKING-DIR If specified, use the given directory as working directory.
--no-cache Prevent use of the cache
-v|vv|vvv, --verbose Increase the verbosity of messages: 1 for normal output,
 2 for more verbose output and 3 for debug

Available commands:
about Shows the short information about Composer.
archive Creates an archive of this composer package.
browse Opens the package's repository URL or homepage in your browser.
check-platform-reqs Check that platform requirements are satisfied.
clear-cache Clears composer's internal package cache.
clearcache Clears composer's internal package cache.
config Sets config options.
create-project Creates new project from a package into given directory.
depends Shows which packages cause the given package to be installed.
diagnose Diagnoses the system to identify common errors.
dump-autoload Dumps the autoloader.
dumpautoload Dumps the autoloader.
exec Executes a vendored binary/script.
global Allows running commands in the global composer dir ($COMPOSER_HOME).
help Displays help for a command
home Opens the package's repository URL or homepage in your browser.
i Installs the project dependencies from the composer.lock file if present,
or falls back on the composer.json.
info Shows information about packages.
init Creates a basic composer.json file in current directory.
install Installs the project dependencies from the composer.lock file if present,
or falls back on the composer.json.
licenses Shows information about licenses of dependencies.
list Lists commands
outdated Shows a list of installed packages that have updates available, including
their latest version.
prohibits Shows which packages prevent the given package from being installed.
remove Removes a package from the require or require-dev.
require Adds required packages to your composer.json and installs them.
run Runs the scripts defined in composer.json.
run-script Runs the scripts defined in composer.json.
search Searches for packages.
show Shows information about packages.
status Shows a list of locally modified packages, for packages installed from source.
suggests Shows package suggestions.
u Upgrades your dependencies to the latest version according to composer.json,
and updates the composer.lock file.
update Upgrades your dependencies to the latest version according to composer.json,
and updates the composer.lock file.
upgrade Upgrades your dependencies to the latest version according to composer.json,
and updates the composer.lock file.
validate Validates a composer.json and composer.lock.
why Shows which packages cause the given package to be installed.
why-not Shows which packages prevent the given package from being installed.
Let's install the Yii framework:
[mythcat@desk ~]$ composer create-project --prefer-dist yiisoft/yii2-app-basic basic

Installing yiisoft/yii2-app-basic (2.0.31)
- Installing yiisoft/yii2-app-basic (2.0.31): Downloading (100%)
Created project in basic
Loading composer repositories with package information
Updating dependencies (including require-dev)
Writing lock file
Generating autoload files
> yii\composer\Installer::postCreateProject
chmod('runtime', 0777)...done.
chmod('web/assets', 0777)...done.
chmod('yii', 0755)...done.
> yii\composer\Installer::postInstall
Let's run it:
[mythcat@desk ~]$ cd basic/
[mythcat@desk basic]$ ./yii serve
Server started on http://localhost:8080/
Document root is "/home/mythcat/basic/web"
Quit the server with CTRL-C or COMMAND-C.
[Tue Jan 14 22:20:22 2020] [::1]:34072 [200]: /
[Tue Jan 14 22:20:22 2020] [::1]:34074 [200]: /assets/dd70c73/css/bootstrap.css
[Tue Jan 14 22:20:22 2020] [::1]:34076 [200]: /css/site.css
[Tue Jan 14 22:20:22 2020] [::1]:34078 [200]: /assets/afa8d426/jquery.js
[Tue Jan 14 22:20:22 2020] [::1]:34080 [200]: /assets/3235fb02/yii.js
[Tue Jan 14 22:20:22 2020] [::1]:34082 [200]: /assets/dd70c73/js/bootstrap.js
[Tue Jan 14 22:20:23 2020] [::1]:34084 [200]: /index.php?r=debug%2Fdefault%2Ftoolbar&tag=5e1e228626c18

F31-20200113 updated Lives released

Posted by Ben Williams on January 14, 2020 05:08 PM

The Fedora Respins SIG is pleased to announce the latest release of Updated F31-20200116 Live ISOs, carrying the 5.4.8-200 kernel.

This set of updated ISOs will save a considerable amount of updates after a new install. (New installs of Workstation have 1GB+ of updates.)

A huge thank you goes out to irc nicks dowdle and Southern-Gentleman for testing these ISOs.

We would also like to thank Fedora QA for running the following tests on our ISOs:



As always, our ISOs can be found at http://tinyurl.com/Live-respins.

Introducing GVariant schemas

Posted by Alexander Larsson on January 14, 2020 03:02 PM

GLib supports a binary data format called GVariant, which is commonly used to store various forms of application data. For example, it is used to store the dconf database and as the on-disk data in OSTree repositories.

The GVariant serialization format is very interesting. It has a recursive type system (based on the DBus types) and is very compact. At the same time it includes padding to correctly align types for direct CPU reads and has constant-time element lookup for arrays and tuples. This makes GVariant a very good format for efficient in-memory read-only access.

Unfortunately the APIs that GLib has for accessing variants are not always great. They are based on using type strings and accessing children via integer indexes. While this is very dynamic and flexible (especially when creating variants) it isn’t a great fit for the case where you have serialized data in a format that is known ahead of time.

Some negative aspects are:

  • Each g_variant_get_child() call allocates a new object.
  • There is a lot of unavoidable (atomic) refcounting.
  • It always uses generic codepaths even if the format is known.

If you look at some other binary formats, like Google protobuf or Cap’n Proto, they work by describing the types your program uses in a schema, which is compiled into code that you use to work with the data.

For many use-cases this kind of setup makes a lot of sense, so why not do the same with the GVariant format?

With the new GVariant Schema Compiler you can!

It uses an interface definition language where you define the types, including extra information like field names and other attributes, from which it generates C code.

For example, given the following schema:

type Gadget {
  name: string;
  size: {
    width: int32;
    height: int32;
  };
  array: []int32;
  dict: [string]int32;
};

It generates (among other things) these accessors:

const char *    gadget_ref_get_name   (GadgetRef v);
GadgetSizeRef   gadget_ref_get_size   (GadgetRef v);
Arrayofint32Ref gadget_ref_get_array  (GadgetRef v);
const gint32 *  gadget_ref_peek_array (GadgetRef v,
                                       gsize    *len);
GadgetDictRef   gadget_ref_get_dict   (GadgetRef v);

gint32 gadget_size_ref_get_width  (GadgetSizeRef v);
gint32 gadget_size_ref_get_height (GadgetSizeRef v);

gsize  arrayofint32_ref_get_length (Arrayofint32Ref v);
gint32 arrayofint32_ref_get_at     (Arrayofint32Ref v,
                                    gsize           index);

gboolean gadget_dict_ref_lookup (GadgetDictRef v,
                                 const char   *key,
                                 gint32       *out);

Not only are these accessors easier to use and understand due to using C types and field names instead of type strings and integer indexes, they are also a lot faster.

I wrote a simple performance test that just decodes a structure over and over. It’s clearly a very artificial test, but the generated code is over 600 times faster than the code using g_variant_get(), which I think still says something.

Additionally, the compiler has a lot of other useful features:

  • You can add a custom prefix to all generated symbols.
  • All fixed size types generate C struct types that match the binary format, which can be used directly instead of the accessor functions.
  • Dictionary keys can be declared sorted: [sorted string] { ... } which causes the generated lookup function to use binary search.
  • Fields can declare endianness: foo: bigendian int32 which will be automatically decoded when using the generated getters.
  • Typenames can be declared ahead of time and used like foo: []Foo, or declared inline: foo: [] 'Foo { ... }. If you don’t name the type it will be named based on the fieldname.
  • All types get generated format functions that are (mostly) compatible with g_variant_print().

Let’s write a new vision statement for Fedora

Posted by Fedora Community Blog on January 14, 2020 01:00 PM

This is part one of a four-part series recapping the Fedora Council’s face-to-face meeting in November 2019.

A few years ago, the Fedora Council set out to update the project’s guiding statements. At the time, we were particularly focused on the mission statement, because we felt that what we had previously was too broad to be actionable. The result of that is:

Fedora creates an innovative platform for hardware, clouds, and containers that enables software developers and community members to build tailored solutions for their users.

… which we quite like. But, this focus has somewhat of a downside: it’s functional but not particularly inspiring. It talks about what we’re doing, but not much about the why. So, this year, we worked on a new vision statement to serve as the proverbial “banner on a hilltop” that we can use to rally our existing community and to attract new contributors.

Because this statement needs to reflect the actual vision of the collective Fedora community, the Council sees the draft we came up with as a starting point for a conversation as we work towards a final version. Our draft is:

We envision a world where free and open source software is accessible and usable. In this world, software is built by communities that are inclusive, welcoming, and encourage experimentation. The Fedora Project will be a reference for everyone who shares this vision.

This statement reflects our values, the four foundations of Freedom, Friends, Features, and First.

We talked a lot about Fedora’s Freedom foundation. As a project, we want everyone to live in a universe of free and open source software; the user should be in control of their computing. But we also recognize the reality that we have to lead people there, not push them. People have hardware that requires closed drivers, and sometimes the software they need for their jobs or life isn’t open either. We want people to be able to use open source for everything, but often the real world doesn’t let them. We need to provide a path so people can get to the ideal, not demand that they teleport there or else. We want our vision statement to encourage a productive approach rather than to act as a weapon.

We also want the statement to reflect our community approach — the Friends foundation. Fedora isn’t bits and bytes. Fedora is our people, and we want the statement to include our vision of a healthy community. As the saying goes, none of us is as smart as all of us. A welcoming and inclusive project produces better results.

And finally, we want to keep our focus on innovation, both by incorporating the latest upstream code and in the work we do to build our releases. While long-term support is important, it’s not our focus — and many other communities do a great job providing this already. Fedora advances the state of the art in Linux operating systems. We try new things, many of them succeed, but some do not — we learn from those and move on.

So, what do you think? Does this statement accomplish these goals? Is there something big that’s missing? Is there wording we can improve? Let’s work together to refine this draft and define Fedora’s vision for the 2020s! Give us your feedback on the council-discuss thread in the next few weeks. We want to ratify a final statement in February.

The post Let’s write a new vision statement for Fedora appeared first on Fedora Community Blog.

Goals – an experimental new tool which generalizes “make”

Posted by Richard W.M. Jones on January 14, 2020 12:26 PM

For the past few weeks I’ve been working on a new tool called goals which generalizes make. I did a quick talk at Red Hat about this tool which you can download from the link below:

Video link (MP4 format, 31 minutes, 217 MB): http://oirase.annexia.org/rwmj.wp.com/2020-01-goals-preview.mp4

There are also my written notes from the talk here.

Creating password input widget in PyQt

Posted by Kushal Das on January 14, 2020 05:40 AM

A common requirement when writing a desktop tool that takes password input is a widget that can show/hide the password text. In Qt, we can add a QAction to a QLineEdit to do this. The only thing to remember is that the icons for the QAction must be square in aspect ratio; otherwise, they look super bad.

The following code creates such a password input, and you can see it working at the GIF at the end of the blog post. I wrote this for the SecureDrop client project.

class PasswordEdit(QLineEdit):
    """
    A LineEdit with icons to show/hide password entries
    """

    CSS = '''QLineEdit {
        border-radius: 0px;
        height: 30px;
        margin: 0px 0px 0px 0px;
    }
    '''

    def __init__(self, parent):
        self.parent = parent
        super().__init__(self.parent)

        # Set styles
        self.setStyleSheet(self.CSS)

        self.visibleIcon = load_icon("eye_visible.svg")
        self.hiddenIcon = load_icon("eye_hidden.svg")

        # Hide the password by default
        self.setEchoMode(QLineEdit.Password)
        self.togglepasswordAction = self.addAction(self.visibleIcon, QLineEdit.TrailingPosition)
        self.togglepasswordAction.triggered.connect(self.on_toggle_password_Action)
        self.password_shown = False

    def on_toggle_password_Action(self):
        if not self.password_shown:
            self.setEchoMode(QLineEdit.Normal)
            self.password_shown = True
            self.togglepasswordAction.setIcon(self.hiddenIcon)
        else:
            self.setEchoMode(QLineEdit.Password)
            self.password_shown = False
            self.togglepasswordAction.setIcon(self.visibleIcon)

Setting a firewalld zone for libvirt network

Posted by Lukas "lzap" Zapletal on January 14, 2020 12:00 AM

Setting a firewalld zone for libvirt network

All interfaces managed by libvirt (virbr0, virbr1, etc.) are put into a firewalld zone called “libvirt”. This zone allows NAT, but it is set quite strictly: only a few services are open, such as ssh, http, https, dns, dhcp, and icmp. In case you want to open an additional port, the command is:

firewall-cmd --zone=libvirt --add-port=12345/tcp
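A firewall-cmd change like the one above applies only to the runtime configuration. A sketch for persisting the rule across reboots (standard firewalld usage, not from the original post):

```shell
# Persist the rule, then activate it in the running firewall
firewall-cmd --permanent --zone=libvirt --add-port=12345/tcp
firewall-cmd --reload

# Verify the open ports in the libvirt zone
firewall-cmd --zone=libvirt --list-ports
```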

Alternatively, if you don’t care because you use this network only for testing and you need to access all ports of the gateway, it’s possible to change the zone from “libvirt” to “trusted”. You cannot do this using firewall-cmd, however, as the change will not survive a reboot:

firewall-cmd --zone=trusted --change-interface=virbr0 --permanent # WILL NOT WORK

The trick here is to edit the network definition and add a zone="trusted" attribute:

  <forward mode="nat">
    <nat>
      <port start="1024" end="65535"/>
    </nat>
  </forward>
  <bridge name="virbr0" zone="trusted" stp="on" delay="0"/>
  <mac address="52:54:00:5e:a1:12"/>
  <domain name="nat.lan"/>
  <ip address="" netmask="">
    <dhcp>
      <range start="" end=""/>
    </dhcp>
  </ip>
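Editing the definition is done through virsh; a sketch, assuming the network is libvirt's default network named "default" (adjust to your network's name):

```shell
# Edit the persistent network definition (opens $EDITOR)
virsh net-edit default

# The zone change only takes effect after the network is restarted
virsh net-destroy default
virsh net-start default
```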

That’s all I have for today.

Fedora program update: 2020-02

Posted by Fedora Community Blog on January 13, 2020 03:31 PM
Fedora Program Manager weekly report on Fedora Project development and progress

Here’s your report of what has happened in Fedora this last week.

I have weekly office hours in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else.



Event | Location | Dates | CfP deadline
OSCON | Portland, OR, US | 13–16 Jul 2020 | closes 14 Jan
Open Source Summit NA | Austin, TX, US | 22–24 Jun 2020 | closes 16 Feb

Help wanted

  • The COPR team wants your feedback on the work to happen in 2020. Vote online for your preferences.

Upcoming meetings


Fedora 32


  • 2020-01-21 Submission deadline for Self-Contained Changes
  • 2020-01-22 Spins keepalive deadline
  • 2020-01-22 Mass rebuild begins


Proposal | Type | Status
Drop Optical Media Release Criterion | System-Wide | Accepted
Ruby 2.7 | System-Wide | Accepted
Enable fstrim.timer by default | System-Wide | Ready for FESCo
Use update-alternatives for /usr/bin/cc and /usr/bin/c++ | System-Wide | Ready for FESCo
LTO by default for package builds | System-Wide | Accepted
Adopting sysusers.d format | System-Wide | Ready for FESCo
Move fonts language Provides to Langpacks | System-Wide | Ready for FESCo
GCC 10 | System-Wide | Ready for FESCo
Golang 1.14 | System-Wide | Ready for FESCo
Restart services at end of rpm transaction | System-Wide | Ready for FESCo
Systemd presets for user units | System-Wide | Ready for FESCo
Enable EarlyOOM | System-Wide | Ready for FESCo
Mono 6.6 | Self-Contained | Ready for FESCo
Provide OpenType Bitmap Fonts | Self-Contained | Announced
Additional buildroot to test x86-64 micro-architecture update | Self-Contained | Announced

Changes approved, rejected, or withdrawn will be removed from this table the next week. See the ChangeSet page for a full list of approved changes.

Fedora 33


Proposal | Type | Status
Binutils 2.34 | System-Wide | Ready for FESCo
Retire python26 | Self-Contained | Announced
Retire python34 | Self-Contained | Announced

Changes approved, rejected, or withdrawn will be removed from this table the next week. See the ChangeSet page for a full list of approved changes.

The post Fedora program update: 2020-02 appeared first on Fedora Community Blog.

How to setup a DNS server with bind

Posted by Fedora Magazine on January 13, 2020 09:00 AM

The Domain Name System, or DNS, as it’s more commonly known, translates or converts domain names into the IP addresses associated with that domain. DNS is the reason you are able to find your favorite website by name instead of typing an IP address into your browser. This guide shows you how to configure a Master DNS system and one client.

Here are system details for the example used in this article:

dns01.fedora.local     (192.168.1.160) - Master DNS server
client.fedora.local    (192.168.1.136) - Client 

DNS server configuration

Install the bind packages using sudo:

$ sudo dnf install bind bind-utils -y

The /etc/named.conf configuration file is provided by the bind package to allow you to configure the DNS server.

Edit the /etc/named.conf file:

sudo vi /etc/named.conf

Look for the following line:

listen-on port 53 { 127.0.0.1; };

Add the IP address of your Master DNS server as follows:

listen-on port 53 { 127.0.0.1; 192.168.1.160; };

Look for the next line:

allow-query  { localhost; };

Add your local network range. The example system uses IP addresses in the 192.168.1.X range. This is specified as follows:

allow-query  { localhost; 192.168.1.0/24; };

Specify a forward and reverse zone. Zone files are simply text files that contain the DNS information, such as IP addresses and host names, for your system. The forward zone file makes it possible to translate a host name to its IP address. The reverse zone file does the opposite: it allows a remote system to translate an IP address to the host name.

Look for the following line at the bottom of the /etc/named.conf file:

include "/etc/named.rfc1912.zones";

Here, you’ll specify the zone file information directly above that line as follows:

zone "dns01.fedora.local" IN {
type master;
file "forward.fedora.local";
allow-update { none; };
};

zone "1.168.192.in-addr.arpa" IN {
type master;
file "reverse.fedora.local";
allow-update { none; };
};

forward.fedora.local and reverse.fedora.local are just the names of the zone files you will be creating. They can be called anything you like.

Save and exit.

Create the zone files

Create the forward and reverse zone files you specified in the /etc/named.conf file:

$ sudo vi /var/named/forward.fedora.local

Add the following lines:

$TTL 86400
@   IN  SOA     dns01.fedora.local. root.fedora.local. (
        2011071001  ;Serial
        3600        ;Refresh
        1800        ;Retry
        604800      ;Expire
        86400       ;Minimum TTL
)
@       IN  NS          dns01.fedora.local.
@       IN  A           192.168.1.160
dns01           IN  A   192.168.1.160
client          IN  A   192.168.1.136
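
The Serial value appears to follow the common YYYYMMDDNN convention (date of the last change plus a two-digit revision number); it must be increased on every edit so secondary servers notice the change. A sketch for generating today's serial:

```shell
# First revision of today's date, e.g. 2020011301
serial="$(date +%Y%m%d)01"
echo "$serial"
```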

The host names and IP addresses shown are specific to your environment. Save the file and exit. Next, edit the reverse.fedora.local file:

$ sudo vi /var/named/reverse.fedora.local

Add the following lines:

$TTL 86400
@   IN  SOA     dns01.fedora.local. root.fedora.local. (
        2011071001  ;Serial
        3600        ;Refresh
        1800        ;Retry
        604800      ;Expire
        86400       ;Minimum TTL
)
@       IN  NS          dns01.fedora.local.
@       IN  PTR         fedora.local.
dns01           IN  A   192.168.1.160
client          IN  A   192.168.1.136
160     IN  PTR         dns01.fedora.local.
136     IN  PTR         client.fedora.local.

Again, the host names and IP addresses are specific to your environment. Save the file and exit.

You’ll also need to configure SELinux and add the correct ownership for the configuration files.

sudo chgrp named -R /var/named
sudo chown -v root:named /etc/named.conf
sudo restorecon -rv /var/named
sudo restorecon /etc/named.conf

Configure the firewall:

sudo firewall-cmd --add-service=dns --permanent
sudo firewall-cmd --reload

Check the configuration for any syntax errors

sudo named-checkconf /etc/named.conf

Your configuration is valid if no output or errors are returned.

Check the forward and reverse zone files.

$ sudo named-checkzone forward.fedora.local /var/named/forward.fedora.local

$ sudo named-checkzone reverse.fedora.local /var/named/reverse.fedora.local

You should see a response of OK:

zone forward.fedora.local/IN: loaded serial 2011071001

zone reverse.fedora.local/IN: loaded serial 2011071001

Enable and start the DNS service

$ sudo systemctl enable named
$ sudo systemctl start named
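
With named running, you can query the new zones directly, pointing dig at the server address used in this example (192.168.1.160):

```shell
# Forward lookup of a record defined in forward.fedora.local
dig @192.168.1.160 dns01.fedora.local A +short

# Reverse lookup via the 1.168.192.in-addr.arpa zone
dig @192.168.1.160 -x 192.168.1.136 +short
```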

Configuring the resolv.conf file

Edit the /etc/resolv.conf file:

$ sudo vi /etc/resolv.conf

Look for your current name server line or lines. On the example system, a cable modem/router is serving as the name server and so it currently looks like this:


This needs to be changed to the IP address of the Master DNS server:

nameserver 192.168.1.160
Save your changes and exit.

Unfortunately there is one caveat to be aware of. NetworkManager overwrites the /etc/resolv.conf file if the system is rebooted or networking gets restarted. This means you will lose all of the changes that you made.

To prevent this from happening, make /etc/resolv.conf immutable:

$ sudo chattr +i /etc/resolv.conf 
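You can confirm the flag with lsattr; an "i" in the attribute column means the file is immutable:

```shell
# An 'i' in the attribute listing indicates the immutable flag is set
lsattr /etc/resolv.conf
```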

If you want to set it back and allow it to be overwritten again:

$ sudo chattr -i /etc/resolv.conf

Testing the DNS server

$ dig fedoramagazine.org
; <<>> DiG 9.11.13-RedHat-9.11.13-2.fc30 <<>> fedoramagazine.org
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 8391
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 3, ADDITIONAL: 6

 ; EDNS: version: 0, flags:; udp: 4096
 ; COOKIE: c7350d07f8efaa1286c670ab5e13482d600f82274871195a (good)
 ;fedoramagazine.org.        IN  A

 fedoramagazine.org.    50  IN  A

 fedoramagazine.org.    86150   IN  NS  ns05.fedoraproject.org.
 fedoramagazine.org.    86150   IN  NS  ns02.fedoraproject.org.
 fedoramagazine.org.    86150   IN  NS  ns04.fedoraproject.org.

 ns02.fedoraproject.org.    86150   IN  A
 ns04.fedoraproject.org.    86150   IN  A
 ns05.fedoraproject.org.    86150   IN  A
 ns02.fedoraproject.org.    86150   IN  AAAA    2610:28:3090:3001:dead:beef:cafe:fed5
 ns05.fedoraproject.org.    86150   IN  AAAA    2001:4178:2:1269:dead:beef:cafe:fed5

 ;; Query time: 830 msec
 ;; WHEN: Mon Jan 06 08:46:05 CST 2020
 ;; MSG SIZE  rcvd: 266

There are a few things to look at to verify that the DNS server is working correctly. Obviously getting results back is important, but that by itself doesn’t mean the DNS server is actually doing the work.

The QUERY, ANSWER, and AUTHORITY fields at the top should be non-zero, as they are in our example:

;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 3, ADDITIONAL: 6

And the SERVER field should have the IP address of your DNS server:

;; SERVER: 192.168.1.160#53(192.168.1.160)
In case this is the first time you’ve run the dig command, notice how it took 830 milliseconds for the query to complete:

;; Query time: 830 msec

If you run it again, the query will run much quicker:

$ dig fedoramagazine.org 
;; Query time: 0 msec

Client configuration

The client configuration will be a lot simpler.

Install the bind utilities:

$ sudo dnf install bind-utils -y

Edit the /etc/resolv.conf file and configure the Master DNS as the only name server:

$ sudo vi /etc/resolv.conf

This is how it should look:

nameserver 192.168.1.160
Save your changes and exit. Then, make the /etc/resolv.conf file immutable to prevent it from being overwritten and reverting to its default settings:

$ sudo chattr +i /etc/resolv.conf

Testing the client

You should get the same results as you did from the DNS server:

$ dig fedoramagazine.org
; <<>> DiG 9.11.13-RedHat-9.11.13-2.fc30 <<>> fedoramagazine.org
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 8391
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 3, ADDITIONAL: 6

 ; EDNS: version: 0, flags:; udp: 4096
 ; COOKIE: c7350d07f8efaa1286c670ab5e13482d600f82274871195a (good)
 ;fedoramagazine.org.        IN  A

 fedoramagazine.org.    50  IN  A

 fedoramagazine.org.    86150   IN  NS  ns05.fedoraproject.org.
 fedoramagazine.org.    86150   IN  NS  ns02.fedoraproject.org.
 fedoramagazine.org.    86150   IN  NS  ns04.fedoraproject.org.

 ns02.fedoraproject.org.    86150   IN  A
 ns04.fedoraproject.org.    86150   IN  A
 ns05.fedoraproject.org.    86150   IN  A
 ns02.fedoraproject.org.    86150   IN  AAAA    2610:28:3090:3001:dead:beef:cafe:fed5
 ns05.fedoraproject.org.    86150   IN  AAAA    2001:4178:2:1269:dead:beef:cafe:fed5

 ;; Query time: 1 msec
 ;; WHEN: Mon Jan 06 08:46:05 CST 2020
 ;; MSG SIZE  rcvd: 266

Make sure the SERVER output has the IP Address of your DNS server.

Your DNS server is now ready to use, and all requests from the client should now be going through it!

Episode 178 - Are CVEs important and will ransomware put you out of business?

Posted by Open Source Security Podcast on January 13, 2020 12:03 AM

Josh and Kurt talk about a discussion on Twitter about whether discovering CVE IDs is important for a resume. We don't think it is. We also discuss the idea of ransomware putting a company out of business. Did it really? Possibly, but it probably won't create any substantial change in the industry.


Show Notes

    Installing Unifi Controller on CentOS 8

    Posted by Lukas "lzap" Zapletal on January 13, 2020 12:00 AM

    Installing Unifi Controller on CentOS 8

    This tutorial covers installation of UniFi from the RPMFusion repository on OpenJDK, with MongoDB from the official community repository. It kinda works, but I’ve hit a few bugs here and there, so I’d suggest using the “tarball” installation with Oracle JDK instead. I plan to do a new blog post on this topic soon - search my blog.

    First, enable PowerTools CentOS repository:

    dnf install dnf-plugins-core
    yum config-manager --set-enabled PowerTools

    You can enable all the official repositories as they contain pretty useful stuff, but it’s not needed:

    yum config-manager --set-enabled PowerTools --set-enabled centosplus --set-enabled extras

    Then enable EPEL and very much needed RPMFusion:

    dnf -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
    dnf -y install --nogpgcheck https://download1.rpmfusion.org/free/el/rpmfusion-free-release-8.noarch.rpm
    dnf -y install --nogpgcheck https://download1.rpmfusion.org/nonfree/el/rpmfusion-nonfree-release-8.noarch.rpm

    And you guessed it:

    dnf -y install unifi

    There’s also a unifi-lts Long Term Support release available if you want to avoid frequent updates. Due to licensing issues, MongoDB has been removed from Fedora and CentOS 8, so install MongoDB Community edition:

    dnf -y install https://repo.mongodb.org/yum/redhat/8/mongodb-org/4.2/x86_64/RPMS/mongodb-org-server-4.2.2-1.el8.x86_64.rpm

    Enable and start the unifi service. Note that the mongod service does not need to be started; the unifi process starts its own instance:

    systemctl enable --now unifi

    Beware, logs are in non-standard location:

    tail -f /usr/share/unifi/logs/server.log
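    If the web UI is not reachable, the controller ports may still be blocked by firewalld. A sketch using the standard UniFi ports (an assumption; the original post does not cover the firewall):

```shell
# Device inform (8080), web UI (8443), and STUN (3478) ports
firewall-cmd --permanent --add-port=8080/tcp --add-port=8443/tcp
firewall-cmd --permanent --add-port=3478/udp
firewall-cmd --reload
```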

    Visit the web UI for the initial configuration: https://nuc.home.lan:8443 and have fun!

    Kiwi TCMS is migrating from OAuth to GitHub App

    Posted by Kiwi TCMS on January 12, 2020 04:31 PM

    Hello testers, Kiwi TCMS is migrating from its OAuth backend to the so-called "GitHub App" backend in order to enable further integration with GitHub's PR flow, as stated previously in our yearly goals. This blog post outlines the differences between the old and the new!

    The old OAuth application only had access to your username, name and email for authentication purposes. Its authorization screen looked like so:

    OAuth login screen

    GitHub Apps on the other hand are designed for more granular access and tighter integration with the GitHub platform. This type of application still allows you to perform 1-click login into https://public.tenant.kiwitcms.org. If this is your first time logging into Kiwi TCMS after the migration you will see the following screen:

    App login screen

    Notice how the heading, information section and action button are slightly different! The important section is Resources on your account! We still only need your name, username and email address! Existing Kiwi TCMS accounts (from before the migration) will continue to work and they will still have access to all of their data previously created. Authorization of this new GitHub app (e.g. login only) does not give it permissions to access your repositories and act on your behalf.

    To permit this GitHub App to access your repositories and/or act on your behalf you must Install it first. That is, tell the Kiwi TCMS GitHub integration code what kind of resources from your GitHub account it is allowed to access. You may install into your personal GitHub account or an organizational account! You may do this by following the Install & Authorize button on our home page or directly from https://github.com/apps/kiwi-tcms! The screen should look like this:

    App installation screen

    Initially we ask for read-only access to a few resources so Kiwi TCMS can start receiving webhooks from GitHub and synchronize information about your repositories into our database. This is documented both on the app installation screen itself (required by GitHub) and on https://github.com/kiwitcms/github-app!

    Further ideas about integration between GitHub and Kiwi TCMS, including the original idea about status checks from Issue #700, can be found at https://github.com/kiwitcms/github-app/issues.

    Help us grow

    After this migration we're back to zero! The thousands of authorizations we've had on our legacy OAuth app can't be migrated to the new app. This also means our listing on GitHub Marketplace will be taken down and we have to qualify through the entire process from scratch.

    Please help us get back on track! Here's what we ask you to do (in this order):

    Thank you! Happy testing and happy new year!

    Deploy PhotoPrism in CentOS 8

    Posted by Lukas "lzap" Zapletal on January 12, 2020 12:00 AM

    Deploy PhotoPrism in CentOS 8

    PhotoPrism is a great web photo library and a great fit for browsing and managing photos on the home NAS server I’ve recently built. It has some unique features like photo tagging using TensorFlow, but the most appealing features for me are the slick interface, easy use and administration, support for RAW/HEIC formats (conversion to JPEG), the ability to configure a “read only” mode (my originals are never touched), and the “photo stream” approach (all photos in one huge pile with search/tagging capabilities).

    This is a tutorial on how to deploy this app on CentOS 8 in a rootless container with SELinux. Before starting, it’s important to plan permissions and SELinux file labels. On my NAS system, I have a user and group called “data” with UID/GID 1000, which is the initial regular user on CentOS (and most distributions) after installation. I keep all my photos and files under this user and group (Samba, NFS).

    # ls -laZ /mnt/int/data/photo/ -d
    drwxrwxr-x. 4 data data system_u:object_r:public_content_rw_t:s0 70 Jan 12 21:20 /mnt/int/data/photo/

    Note the SELinux file label is set to public_content_rw_t because I want my services (Samba, NFS, PhotoPrism) to be able to read and write data. If you don’t want PhotoPrism to write into the originals folder, use the public_content_r_t label. Here is how to set the SELinux label on a directory and all its subdirectories and files permanently:

    # semanage fcontext -a -t public_content_rw_t "/mnt/int(/.*)?"
    # restorecon -RvvF /mnt/int

    For Samba and NFS configuration, see more details in my previous article. Do not turn off SELinux, get it right this time! First, install podman and the SELinux development files:

    # dnf install podman policycoreutils-devel

    Compile a custom policy rule to read and write files and directories labelled as public_content_rw_t:

    # mkdir selinux
    # cd selinux
    # cat >photoprism.te <<EOF
    policy_module(photoprism, 1.0)
    require {
            type container_t;
    }
    miscfiles_manage_public_files(container_t)
    EOF
    # make -f /usr/share/selinux/devel/Makefile && semodule -i photoprism.pp

    No other actions need to be done; the module is now loaded permanently, and there is no service to restart, as SELinux applies the new rules immediately. It will also survive restarts. It’s easy, isn’t it?

    Before we start, a small recap. The photos I want to browse via PhotoPrism are stored on an 8th-gen Intel NUC on the internal 2TB drive in the directory /mnt/int/photos/phone, and I have a separate LVM volume on an NVMe SSD mounted as /mnt/fast where I want to store PhotoPrism thumbnails and the database. Run this command as the user that has UNIX permissions to read (or write) the photos directory and read/write permissions to the thumbnails and database directories:

    $ podman run -d --name photoprism \
      --userns=keep-id -p 2342:2342 \
      -v /mnt/int/photo_ovl/merged/telefon:/home/photoprism/Pictures/Originals \
      -v /mnt/fast/thumbs:/home/photoprism/.cache/photoprism \
      -v /mnt/fast/photoprismdb:/home/photoprism/.local/share/photoprism/resources/database \
      photoprism/photoprism

    When starting for the first time, you can omit the -d option to start the container in the foreground to see log messages. To stop the container:

    $ podman stop photoprism

    To start it again (use -a to attach to the photoprism process to see logs):

    $ podman start photoprism

    Note that in the run command above, I used the --userns=keep-id argument, which tells podman to keep the UID/GID. Remember when I told you my “data” user has UID/GID 1000? The PhotoPrism application also runs as 1000, so it maps nicely to the host OS. This is the simplest option; alternatively, this argument can be removed and podman will automatically map UIDs/GIDs according to the OS mapping tables. In my case, the container process would have UID 100999. In that case, modify UNIX permissions and/or ACLs so that this user (or group) can read (and/or write) the directories.

    Alternatively, an explicit mapping can be provided. Let’s say:

    $ podman run -d ...
      --uidmap 0:100000:5000

    The --uidmap option tells podman to map a range of 5000 UIDs inside the container, starting with UID 100000 outside the container (so the range is 100000-104999), to a range starting at UID 0 inside the container (so the range is 0-4999). This can be tricky to understand, which is why I opted for the keep-id approach.

    For experimenting with UIDs and GIDs, you don’t need the PhotoPrism container; just grab a temporary shell within Fedora, then create a user “test” and try to access the required files. Remember SELinux is turned on: if you see permission errors and you think you have your UNIX/ACL permissions right, check audit.log. Note the container will be automatically removed on exit:

    $ podman run --rm -it -v /mnt/fast:/mnt fedora:31 /bin/bash
    > useradd test
    > su test
    > touch /mnt/TEST
    > exit

    One trick before you proceed: if you don’t want PhotoPrism to convert HEIC/RAW files into the originals folder, increasing the overall size of the originals, use an overlay fs:

    # mount -t overlay overlay -o lowerdir=/mnt/int/data/photo,upperdir=/mnt/int/photo_ovl/upper,workdir=/mnt/int/photo_ovl/work /mnt/int/photo_ovl/merged

    This command creates an overlay folder /mnt/int/photo_ovl/merged with source /mnt/int/data/photo. The other two, named work and upper, must be empty directories. Then use the /mnt/int/photo_ovl/merged folder for the -v podman option instead of the original directory. All files created by PhotoPrism will not be stored in the original folder but in the upper overlay, which can be removed at any point.
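    The mount above does not survive a reboot. A sketch of an equivalent /etc/fstab entry (an assumption, not from the original post; verify the paths for your system):

```shell
# /etc/fstab entry to persist the photo overlay across reboots (one line)
overlay  /mnt/int/photo_ovl/merged  overlay  lowerdir=/mnt/int/data/photo,upperdir=/mnt/int/photo_ovl/upper,workdir=/mnt/int/photo_ovl/work  0 0
```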

    Now, access your server via http://nuc.home.lan:2342, go to Settings (password is set to “photoprism” by default) and initiate reindex process.

    If you need to add more SELinux rules because you haven’t labelled files correctly, here is how to do it. The trick is to disable the dontaudit rules:

      # semodule -BD

    Then the required rule can be easily found and added to the policy:

    # sepolgen-ifgen
    # audit2allow -RaM photoprism
    # semodule -i photoprism.pp
    # semodule -B

    Although podman comes with the systemd unit generator command podman generate systemd, you must also configure headless systemd login sessions in order to manage the service via systemd. This is currently (CentOS 8.1) quite some work, and I believe it is not worth the effort for a server-based deployment.
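    For completeness, a sketch of what that rootless systemd setup involves, assuming the container runs as the “data” user (an assumption; the original post does not cover this):

```shell
# Generate a user unit from the existing container
mkdir -p ~/.config/systemd/user
podman generate systemd --name photoprism > ~/.config/systemd/user/photoprism.service
systemctl --user daemon-reload
systemctl --user enable --now photoprism

# Allow the user's services to run without an active login session
sudo loginctl enable-linger data
```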

    That’s all, the module is active from now on and will persist across restarts. SELinux is easy if you know what to do. Granted, those dontaudit rules can be tricky, as the denials won’t appear until you temporarily disable this behavior.

    I look forward to new developments of this software. I would love to see WebP preview support to save some more space on the cache volume. Android and iOS client apps are in development but not yet available on the app stores (I don’t have an iOS development account yet, but I would love to start testing it). Browsing photos on my iPad is really exciting. I really hope the authors get the pricing model right and develop a sustainable offering for both DIY and regular users.

    On the Passing of Neil Peart

    Posted by Adam Young on January 11, 2020 11:57 PM

    I’m a nerdy male of Jewish, Eastern European Descent. I was born in 1971. My parents listened to John Denver, Simon and Garfunkle, Billy Joel, Mac Davis, Anne Murray and Carly Simon. My Uncle Ben started me on Saxophone in second grade.

    <figure class="wp-block-image"><figcaption>Image From “The Buffalo News”</figcaption></figure>

    Second grade was also the year that I moved from one side of Stoughton to the other, to 86 Larson Road, 3 houses up from the Grabers. While Brian Graber was my age, and destined to be one of my best friends through high school, it was his older brother, Stephen, that introduced me to two things that would change my life; the game of Dungeons and Dragons, and the band Rush.

    <figure class="wp-block-image"></figure>

    I said Nerdy.

    I can’t say enough about how D&D got me into history, reading, and all that propelled me through life.

    The soundtrack to that life was provided by Rush. Why? The stories that they told. Xanadu. The Trees, 2112. Hemispheres.

    But the song that grabbed me? The Spirit of Radio.

    <figure class="wp-block-image"></figure>

    I even had the name wrong for all my life: I thought it was the Spirit of THE Radio. But that was not the tag line from the radio station that Neal used when he was inspired to write the song.

    Invisible airways Crackle with Life

    Bright Antennae Bristle with the Energy

    Emotional Feedback on a Timeless Wavelength

    Bearing a gift beyond price, Almost Free.

    The Spirit of Radio

    The opening Riff is still my favorite thing to play on Guitar.

    The chords, all four of them, are simple, and yet just enough of a variation from the “Same Three Chords” to give the song its own sound.

    “Begin the Day with a Friendly voice, companion unobtrusive.”

    How many trips to school started with the radio? In 1986, when my sister was driving me, and Brian, and her friend Heidi, the radio was our start of the day.

    You can choose a ready guide in some celestial voice

    If you choose not to decide, you still have made a choice

    You can choose from phantom fears and kindness that can kill

    I will choose a path that’s clear

    I will choose free will.


    My Mother was (and is) a huge Robert Frost fan. We often talked about how he was sometimes belittled by other poets for being too “pretty” in his rhymes. Was Neal Peart a poet? A philosopher? He certainly got me started on Ayn Rand (a stage I moved beyond, eventually) but also taught me the term “Free Will.”

    <figure class="wp-block-image"></figure>

    Leaving my homeland
    Playing a lone hand
    My life begins today

    Fly By Night

    My cousin Christopher Spelman came up for a week in July when I was 12 and he ended up staying all summer. We listened to Rush endlessly, discussed lyrics and drum technique. It was part of the cement that held our life long friendship together. I remember riding down the “Lazy River” trying to figure out which song sounded like “da DAHn da da DUH DUH…” and finally realizing it was the bridge from “Fly By Night.”

    <figure class="wp-block-image"></figure>

    Any escape might help to smooth
    The unattractive truth
    But the suburbs have no charms to soothe
    The restless dreams of youth


    I didn’t like the synthesized 80s. I followed Rush through them, but wished they would write music like I had heard on their earlier albums. But they had all grown. Neal moved from storytelling to philosophy. Many of his lyrics on “Presto” could have been written for me as an adolescent.

    <figure class="wp-block-image"><figcaption>Counterparts</figcaption></figure>

    As the years went by, we drifted apart
    When I heard that he was gone
    I felt a shadow cross my heart
    But he’s nobody’s

    Nobody’s Hero

    And so I grew apart from my favorite band. Music, the pillar of my life in High School, took a second seat during my Army years. By the time I emerged, Rush was in hiatus. I had my own stories.

    Keep on riding North and West
    Then circle South and East
    Show me beauty, but there is no peace
    For the ghost rider

    Ghost Rider

    And that was 20-plus years ago. They went fast. In the past several years, I’ve shared a love of rock music with my elder son, with Rush holding a special place in our discussions. The song “Ghost Rider” grabbed his imagination. We’ve both read the graphic novel of “Clockwork Angels.” He was the first person I told when I heard the news. The second was my friend Steve, a bassist I jam with far less frequently, and a member of our weekly D20 Future game night…a direct descendant of that D&D obsession from my elementary school years.

    Living in a fisheye lens
    Caught in the camera eye
    I have no heart to lie
    I can’t pretend a stranger
    Is a long awaited friend


    Neal held a certain fascination for me. He was an introvert, a technician, a writer, and a perfectionist. The nicest of people, he seemed to have had to learn how to protect himself. I don’t think he wrote anything more personal than “Limelight,” where he explained to his fanbase the feelings fame gave him. We all knew him, but we were strangers, not long-awaited friends.

    Suddenly, you were gone
    From all the lives you left your mark upon


    As I process the loss of one of the most important artists in my life, I am listening to their albums. I started in chronological order, but I was drawn to different songs. “Afterimage,” the song where he says goodbye to an old friend, was the first that came to my mind. And so many others. Most people think of him as a drummer (and that he was indeed), but I think of him as a lyricist, and I keep seeking out his words. Lucky for me, he wrote so many of them. I am mostly drawn now to the ones I know less well, from the end of their career, the last few albums.

    Geddy Lee is the voice of Rush, and we hear the words in his performances, but they are Neal’s words. And they have meant a lot to me.

    <figure class="wp-block-image"></figure>

    Samba and NFS shared folder on CentOS 8

    Posted by Lukas "lzap" Zapletal on January 11, 2020 12:00 AM

    Samba and NFS shared folder on CentOS 8

    Setting up a shared (guest) read-write folder across Samba and NFS was a piece of cake on CentOS 8. I’ve also thrown Avahi daemon into the mix so all three platforms we have in our family can easily access our data. Here are my notes.

    First off, I’ve put my ethernet interface into the trusted zone because I don’t want to deal with a firewall in my home network; I am running one on the router. You should probably not do this.

    firewall-cmd --zone=trusted --change-interface=eno1
    firewall-cmd --zone=trusted --change-interface=eno1 --permanent

    For the record, my new NAS server is simply an 8th gen Intel NUC with an SSD for the OS and a 2 TB Seagate drive for data. I don’t have much content, just photos basically; unfortunately the bay is 9mm and there are no bigger 2.5-inch disks available. My plan is to extend it later with an external 4-6 TB USB3 or Thunderbolt HDD. I’ve configured LVM and the internal drive is mounted at /mnt/int. I don’t use RAID on my home NAS servers because I believe it’s not necessary - I can live with a couple of days of downtime until a new disk arrives. Remember: RAID is not a backup! I regularly back up all the data to a remote location.

    The overall goal is simple: a single folder shared between Samba and NFS, mounted read-write, with SELinux turned on and as little configuration as possible.

    I’ve installed Samba, NFS server, SELinux utilities and Avahi daemon:

    dnf install samba samba-client nfs-utils policycoreutils-python-utils avahi

    The Samba configuration could have been simpler, as many of these values are probably the defaults; I was just experimenting a bit and the extra settings won’t hurt. This is /etc/samba/smb.conf:

    [global]
        netbios name = NUC
        workgroup = WORKGROUP
        local master = yes
        security = user
        passdb backend = tdbsam
        guest account = nobody
        map to guest = Bad User
        logging = systemd
        log level = 0
        load printers = no

    [data]
        comment = Data
        path = /mnt/int/data
        browseable = yes
        writeable = yes
        public = yes
        read only = no
        guest ok = yes
        guest only = yes
        force create mode = 0664
        force directory mode = 0775
        force user = nobody
        force group = nobody

    The important SELinux “trick” was to configure the file context correctly, so both NFS and Samba can read and write it:

    semanage fcontext -a -t public_content_rw_t "/mnt/int(/.*)?"
    restorecon -RvvF /mnt/int

    Both services also need to be allowed to write content:

    setsebool -P allow_smbd_anon_write=1
    setsebool -P allow_nfsd_anon_write=1

    Configuration of NFSv4 in RHEL 8 (CentOS 8) is super easy. If, like me, you remember the pain of configuring firewalls with older NFS versions, you will want to disable those services completely. Skip this step if you want to allow legacy NFS clients:


    Stop and disable NFS services which are not needed for NFSv4:

    systemctl mask --now rpc-statd.service rpcbind.service rpcbind.socket

    End of the optional step. Configuration of the NFS server is super easy (compared to Samba); this single line goes into /etc/exports:

    /mnt/int/data *(rw,async,all_squash,anonuid=65534,anongid=65534)

    Since the NFS server comes preinstalled in the default CentOS 8 server installation profile, I just restarted it:

    systemctl restart nfs-server

    And enabled Samba and Avahi services:

    systemctl enable --now nmb.service smb.service nfs-server.service avahi-daemon.service
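    Avahi announces the host as nuc.local out of the box. If you also want the Samba share to show up in network browsers, a standard Avahi service file can be dropped in — this is a generic sketch at an assumed path (/etc/avahi/services/smb.service), not part of my original setup:

    ```xml
    <?xml version="1.0" standalone='no'?>
    <!DOCTYPE service-group SYSTEM "avahi-service.dtd">
    <service-group>
      <!-- %h expands to the hostname -->
      <name replace-wildcards="yes">%h</name>
      <service>
        <type>_smb._tcp</type>
        <port>445</port>
      </service>
    </service-group>
    ```
    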

    That’s really all. Testing is easy; install the samba-client package and do:

    smbclient -U guest //nuc/data

    To test NFS, just mount the directory:

    mount -t nfs nuc:/mnt/int/data /mnt/nuc
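    If you want the client to mount the share automatically at boot, a line along these lines in the client’s /etc/fstab should work (the /mnt/nuc mount point is assumed to exist; this wasn’t covered above):

    ```
    # NFS share from the NAS; _netdev waits for the network before mounting
    nuc:/mnt/int/data  /mnt/nuc  nfs  defaults,_netdev  0 0
    ```
    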

    Hope the article helped you to set up a shared folder at home. This is not a recommended setup for work or coffee shops. And remember to do regular backups (not copies), because people can accidentally rename, overwrite or delete files!

    Drop me a comment or share via Twitter please. Have a good one.

    How to get rid of Activate web console in CentOS 8

    Posted by Lukas "lzap" Zapletal on January 11, 2020 12:00 AM

    How to get rid of Activate web console in CentOS 8

    I guess you’ve seen this already:

    $ ssh root@nas.local
    Activate the web console with: systemctl enable --now cockpit.socket
    Last login: Sat Jan 11 13:05:33 2020 from
    # _

    That’s why you are here, isn’t it? Well, the fix is easy:

    rm -f /etc/issue.d/cockpit.issue /etc/motd.d/cockpit

    Problem solved. If you want it back, simply reinstall the cockpit package. I am not sure whether an upgrade would put the files back or not, though.

    GNOME 3.34.3 in Fedora 31 updates-testing

    Posted by Kalev Lember on January 09, 2020 06:03 PM

    Just a quick heads up that GNOME 3.34.3 just hit Fedora 31 updates-testing repo. It’s a fairly small update; mostly just gnome-shell/mutter fixes and translation updates to leaf applications.

    If you are a GNOME user, please install the update from updates-testing and give it a quick spin and leave karma in the feedback section at https://bodhi.fedoraproject.org/updates/FEDORA-2020-194da76ba0


    Copr: review of 2019 and vote for features in 2020

    Posted by Fedora Community Blog on January 09, 2020 03:13 PM

    I want to sum up what happened in Copr during 2019. At the end of this post, you can see our TODO list and cast your vote on what we should focus on in 2020.

    In 2019

    In the last year, we:

    • Added native AARCH64 builders
    • Added emulated ARMhfp builders
    • Released eight new versions of Mock, including features such as Jinja templates in configs, Dynamic BuildRequires, subscription-manager support (which enables us to build on top of RHEL), Fedora Toolbox support, and container image support, which allows building with an incompatible RPM version. To give credit, some of these Mock features were contributed by community members. Thank you!
    • Removed outdated chroots, which allowed us to reclaim terabytes of disk space. At the same time, we give you the option to keep those old repos if you want them.
    • Provided an RSS feed
    • Added project discussions by integration with https://discussion.fedoraproject.org
    • Added the ability to mark your project as temporary, so we delete it automatically after a specified number of days. This is great for CI projects.
    • Migrated from fedmsg to fedora-messaging
    • Provided anonymized DB dump so that you can play with our data
    • Added the ability to pin your favorite projects
    • Used more builders thanks to Amazon providing AWS builders for free
    • Contributed to speeding up createrepo_c, because it is our biggest bottleneck. For projects like CRAN or Python rebuilds, the createrepo task runs longer than the package builds and cannot be parallelized
    • Added per-package config option to blacklist the package from building against particular chroots
    • Added support for multilib projects
    • Refined modularity support, module dependency, module_hotfixes flags, …
    • Added the ability to set Copr permissions via API and CLI
    • Started removing old builds automatically, per option that only keeps a maximum number of builds per given package

    In addition to the work we did, the community has done great work using Copr:

    For 2020

    What are our plans for 2020? We have some mandatory tasks:

    • Migrate to new datacenter together with the whole Fedora infrastructure
    • Install and use new and bigger storage

    Yet we have quite a long list of RFEs and tasks to do. You are our customers, so I would like to hear your opinion on what is crucial for you. Please cast your vote for these options:

    • Allow more parallel builds – everyone wants faster builds. I am afraid we cannot speed up the builds themselves, but we can focus on running more builds in parallel to handle peaks
    • Mock development – we spend a lot of time on Mock development. We utilize those new features in Copr, but they are useful even for your local workflow with Mock. Should we spend more time on longstanding RFEs?
    • Build Flatpak application from your project – we have a viable idea how to build Flatpak app from your project with just a few clicks, and we can upload the result to some registry. For example, to Quay
    • New commands for our API and copr-cli
    • Run lints like rpmlint or rpm-inspect after each build and give you hints on how to improve your spec files
    • Automatically rebuild PyPI and Rubygems – we already did this in the past, but we did not rebuild it for new Fedoras
    • Allow you to vote on the quality of projects with thumbs up/down; high-quality repos would be automatically promoted and enabled as one big “editors’ pick” repo
    • Focus on rpm spec generators – See the user documentation for what we support right now
    • Add emulated s390x architecture – while we would like to add it as a native architecture, that will likely not happen next year, but we can do builds using QEMU
    • Add RHEL as a target – right now you can build on top of CentOS, but we can allow you to build on top of RHEL
    • Better automatic builds triggered by GitHub or GitLab, i.e., mimic the Copr CI we have for Pagure – the git server sends a request and Copr replies back with the build status
    • Runtime dependency configuration between Copr projects, so that dnf copr enable <user/foo> enables other projects transitively
    • Something else – is something blocking you from using Copr? Please share it with us.

    Please cast your vote or join the discussion on the devel mailing list.

    The post Copr: review of 2019 and vote for features in 2020 appeared first on Fedora Community Blog.

    PHP version 7.3.14RC1 and 7.4.2RC1

    Posted by Remi Collet on January 09, 2020 02:00 PM

    Release Candidate versions are available in testing repository for Fedora and Enterprise Linux (RHEL / CentOS) to allow more people to test them. They are available as Software Collections, for a parallel installation, perfect solution for such tests, and also as base packages.

    RPM of PHP version 7.4.2RC1 are available as SCL in remi-test repository and as base packages in the remi-php74-test repository for Fedora 29-31 and Enterprise Linux 7-8.

    RPM of PHP version 7.3.14RC1 are available as SCL in remi-test repository and as base packages in the remi-test repository for Fedora 30-31 or remi-php73-test repository for Fedora 29 and Enterprise Linux.

    emblem-notice-24.pngPHP version 7.2 is now in security mode only, so no more RC will be released.

    emblem-notice-24.pngInstallation : read the Repository configuration and choose your version.

    Parallel installation of version 7.4 as Software Collection:

    yum --enablerepo=remi-test install php74

    Parallel installation of version 7.3 as Software Collection:

    yum --enablerepo=remi-test install php73

    Update of system version 7.4:

    yum --enablerepo=remi-php74,remi-php74-test update php\*

    or, the modular way (Fedora and RHEL 8):

    dnf module reset php
    dnf module enable php:remi-7.4
    dnf --enablerepo=remi-modular-test update php\*

    Update of system version 7.3:

    yum --enablerepo=remi-php73,remi-php73-test update php\*

    or, the modular way (Fedora and RHEL 8):

    dnf module reset php
    dnf module enable php:remi-7.3
    dnf --enablerepo=remi-modular-test update php\*

    Notice: version 7.4.2RC1 is in Fedora rawhide and version 7.3.14RC1 in updates-testing for Fedora 31, for QA.

    emblem-notice-24.pngEL-8 packages are built using RHEL-8.0

    emblem-notice-24.pngEL-7 packages are built using RHEL-7.7

    emblem-notice-24.pngRC version is usually the same as the final version (no change accepted after RC, exception for security fix).

    Software Collections (php73, php74)

    Base packages (php)

    Keeping syslog-ng portable

    Posted by Peter Czanik on January 08, 2020 02:13 PM

    I define syslog-ng as an “Enhanced logging daemon with a focus on portability and high-performance central log collection”. One of the original goals of syslog-ng was portability: running the same application on a wide variety of architectures and operating systems. After one of my talks mentioning syslog-ng, I was asked how we ensure that syslog-ng stays portable when almost all CI infrastructure focuses on the 64-bit x86 architecture and Linux.

    For many years, before a new pull request was merged to the syslog-ng master branch on GitHub, the code was compiled and tested by Travis – on Ubuntu, x86. Changes were merged only if all tests ran successfully. But this is only one operating system on one architecture. Luckily, not any more.

    Recently Mac builds were added. While still x86, it is running on a completely different operating system. Automatic test cases are also executed for OS X binaries.

    A few years ago I also started to do regular git snapshot builds for RPM distros. Of course, not from every commit, but at least two or three times a month. As the test framework used by syslog-ng is not yet packaged for RPM distributions, these are just compile tests, but they cover many different platforms.

    On the operating systems side:

    • all active openSUSE Leap releases, Tumbleweed and SLES 12 & 15 from the SUSE world

    • all active Fedora releases, Rawhide, and CentOS 7 & 8 from the Red Hat family

    This ensures that old operating systems remain supported and that the bleeding edge does not break syslog-ng. If there is a problem, it can be debugged and fixed before a release.

    On the architecture side, syslog-ng is compiled not just on 64-bit x86, but also on 32-bit x86 and ARMv7, and various 64-bit architectures: AMD64, ARMv8, POWER and POWER little endian, and even s390. These are just compile tests, but I do occasional manual testing as well. Of course, that does not catch all problems, like an Aarch64 bug on Fedora Rawhide when I do my testing on a more conservative openSUSE Leap. If someone reports a problem and I have the hardware, I can still install the problematic OS and do some testing. I have a wide variety of ARMv7 development boards, but recently I do most of the testing on a SoftIron OverDrive box. It is Aarch64 and I received it courtesy of ARM.

    Not as often, but I also test syslog-ng git snapshots on FreeBSD. Mostly on AMD64, but sometimes also on Aarch64, just to make sure that one more operating system outside of Linux and OS X is regularly tested. Why FreeBSD? First of all, I have been using FreeBSD almost since the day it was born, even a few months before I started to use Linux. And it is also the largest platform outside Linux where syslog-ng is used, including some appliances built around FreeBSD.

    Travis announced support for ARM just recently: https://blog.travis-ci.com/2019-10-07-multi-cpu-architecture-support. It needed some extra work on the syslog-ng side, but now each pull request is also tested on ARM before merging. This is not just a simple compile test – as I do most of the time – but it includes unit tests as well.
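    For illustration, a multi-architecture build matrix in .travis.yml looks roughly like this — a generic sketch, with a placeholder build script rather than syslog-ng’s actual CI configuration:

    ```yaml
    language: c
    arch:
      - amd64
      - arm64   # pull requests are also built and unit-tested on ARM
    script:
      - ./autogen.sh && ./configure && make && make check
    ```
    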

    Does this approach work? It seems to. For example, syslog-ng compiles on all architectures supported by Debian. That also includes MIPS, which I have only tested with syslog-ng once. And I learned about a new architecture just by checking which CPU architecture the BMW i3 uses to run syslog-ng :) It is the SuperH.

    If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or even chat with us. For a list of possibilities, check our GitHub page under the “Community” section at https://github.com/syslog-ng/syslog-ng. On Twitter, I am available as @PCzanik.

    How to setup multiple monitors in sway

    Posted by Fedora Magazine on January 08, 2020 09:00 AM

    Sway is a tiling Wayland compositor that has mostly the same features, look and workflow as the i3 X11 window manager. Because Sway uses Wayland instead of X11, tools made for X11 don’t always work in Sway. This includes tools like xrandr, which is used in X11 window managers or desktops to set up monitors. This is why monitors have to be set up by editing the Sway config file, and that’s what this article is about.

    Getting your monitor IDs

    First, you have to get the names sway uses to refer to your monitors. You can do this by running:

    $ swaymsg -t get_outputs

    You will get information about all of your monitors, every monitor separated by an empty line.

    You have to look at the first line of every section, at what comes after “Output”. For example, when you see a line like “Output DVI-D-1 ‘Philips Consumer Electronics Company’”, the output ID is “DVI-D-1”. Note these IDs and which physical monitors they belong to.

    Editing the config file

    If you haven’t edited the Sway config file before, you have to copy it to your home directory by running these commands:

    mkdir -p ~/.config/sway
    cp /etc/sway/config ~/.config/sway/config

    Now the default config file is located in ~/.config/sway and called “config”. You can edit it using any text editor.

    Now you have to do a little bit of math. Imagine a grid with the origin in the top left corner. The units of the X and Y coordinates are pixels. The Y axis points downward. This means that if you start at the origin and move 100 pixels to the right and 80 pixels down, your coordinates will be (100, 80).

    You have to calculate where your displays are going to end up on this grid. The locations of the displays are specified with the top left pixel. For example, if we want to have a monitor with name HDMI1 and a resolution of 1920×1080, and to the right of it a laptop monitor with name eDP1 and a resolution of 1600×900, you have to type this in your config file:

    output HDMI1 pos 0 0
    output eDP1 pos 1920 0

    You can also specify the resolutions manually by using the res option: 

    output HDMI1 pos 0 0 res 1920x1080
    output eDP1 pos 1920 0 res 1600x900
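    The arithmetic above (each monitor’s X offset is the sum of the widths of the monitors to its left) can be sketched as a tiny helper; the function name and layout are mine, but the output format matches the config lines shown above:

    ```python
    def layout_outputs(monitors):
        """Given (name, width_px, height_px) tuples ordered left to right,
        return sway 'output' config lines placing the monitors side by side
        along the top edge (y = 0)."""
        lines = []
        x = 0
        for name, width, height in monitors:
            lines.append(f"output {name} pos {x} 0 res {width}x{height}")
            x += width  # next monitor starts where this one ends
        return lines

    # Reproduces the HDMI1 + eDP1 example from above:
    for line in layout_outputs([("HDMI1", 1920, 1080), ("eDP1", 1600, 900)]):
        print(line)
    ```
    
    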

    Binding workspaces to monitors

    Using sway with multiple monitors can be a little bit tricky with workspace management. Luckily, you can bind workspaces to a specific monitor, so you can easily switch to that monitor and use your displays more efficiently. This can simply be done by the workspace command in your config file. For example, if you want to bind workspace 1 and 2 to monitor DVI-D-1 and workspace 8 and 9 to monitor HDMI-A-1, you can do that by using:

    workspace 1 output DVI-D-1
    workspace 2 output DVI-D-1
    workspace 8 output HDMI-A-1
    workspace 9 output HDMI-A-1

    That’s it! These are the basics of multi-monitor setup in Sway. A more detailed guide can be found at https://github.com/swaywm/sway/wiki#Multihead

    Configuring HDD to spin down in Linux via SMART

    Posted by Lukas "lzap" Zapletal on January 08, 2020 12:00 AM

    Configuring HDD to spin down in Linux via SMART

    I had a chat with Standa Graf from Red Hat about idle3ctl, the utility for WD disks; it turns out some WD disks do actually honor the SMART/APM parameter, and the spindown timeout can be changed using the standardized utility. Standa showed me a nifty trick to do this automatically, and he claims it works for all his internal and external (USB) drives. I am gonna share the trick here:

    cat >/etc/udev/rules.d/69-hdparm.rules <<EOF
    ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", RUN+="/usr/sbin/smartctl --set apm,128 --set lookahead,on --set wcache,on --set standby,241 /dev/%k"
    EOF

    Unfortunately, my WD model (RED 2TB) does not accept the APM parameter and I need to stick with the idle3ctl utility:

    # /usr/sbin/smartctl --set apm,128 --set lookahead,on --set wcache,on --set standby,242 /dev/sdc
    smartctl 7.0 2019-03-31 r4903 [x86_64-linux-5.3.15-300.fc31.x86_64] (local build)
    Copyright (C) 2002-18, Bruce Allen, Christian Franke, www.smartmontools.org
    APM enable failed: scsi error badly formed scsi parameters
    Read look-ahead enabled
    Write cache enabled
    Standby timer set to 242 (01:00:00, a vendor-specific minimum applies)

    I thought it might be useful. Thanks Standa!

    Cockpit 210 and Cockpit-podman 12

    Posted by Cockpit Project on January 08, 2020 12:00 AM

    Cockpit is the modern Linux admin interface. We release regularly. Here are the release notes from version 210.

    Overview: Add CPU utilization to usage card

    Display CPU usage information in the usage card of the Overview page. For detailed CPU usage information, please consult the graphs page.


    Dashboard: Support SSH identity unlocking when adding new machines

    Cockpit now offers an inline feature to unlock SSH identities when adding a new machine to the Dashboard.


    SElinux: Introduce an Ansible automation script

    SELinux modifications can be reviewed and exported also as Ansible tasks. These exported tasks can be applied to other servers.

    SElinux autoscript ansible

    Machines: Support “bridge” type network interfaces

    Virtual machines can now use “bridge” networking. VMs with bridged networking have full incoming and outgoing network access on a LAN, just like a physical machine.

    VM NIC bridge

    Machines: Support bus type disk configuration

    Adding and editing VM disks now supports different bus types, such as SATA, SCSI, USB, or Virtio.

    Using Virtio is generally the best option, for performance reasons. Other bus types may be used if an operating system doesn’t support Virtio. (For example: Windows does not support Virtio by default; additional drivers need to be installed.) Another valid reason to choose something other than Virtio is if the guest OS expects the disk to show up as another type of device.

    Disk choose bus type

    Podman: Configure CPU share for system containers

    System containers now have a configuration option to adjust CPU shares.

    By default, all containers have the same proportion of CPU cycles. A container’s CPU share weighting can be changed relative to the weighting of all other running containers. For more information, please refer to podman run’s documentation, under “cpu-shares”.

    Cockpit podman CPU limit

    Try it out

    Cockpit 210 and Cockpit-podman 12 are available now:


    Fedora 30

    Fedora 31

    Fedora Firefox team at 2019

    Posted by Martin Stransky on January 07, 2020 02:43 PM

    I think the last year was the strongest one in the whole Fedora Firefox team history. We have always contributed to Mozilla, but in 2019 we finished some major outstanding projects upstream and also shipped them in Fedora.

    The first finished project I’d like to mention is the system titlebar being disabled by default on Gnome. Firefox UI on Linux finally matches Windows/MacOS and provides a similar user experience. We also implemented various tweaks like styled and HiDPI titlebar button rendering and left/right button placement.

    A change rather small in terms of code but with a high impact was GCC optimization with PGO/LTO. In cooperation with Jakub Jelinek and the SuSE guys, we managed to match and even slightly outperform the default Mozilla Firefox binaries, which are built with clang. I’m going to post more accurate numbers in a follow-up post, as some were already published by a Czech Linux magazine.

    The Firefox Gnome search provider is another small but useful feature we introduced last year. It’s not integrated upstream yet because it needs an update for an upcoming async history lookup API on the Firefox side, but we ship it as a tech preview to get more user feedback.

    And then there’s our biggest project so far – Firefox with a native Wayland backend. Fedora 31 ships it by default on Gnome, which closes the initial development phase, and we can focus on polishing, bug fixing and adding more features now. It’s the biggest project we have worked on so far and it also extends the Gtk2 to Gtk3 transition. Many people from inside and outside of Mozilla helped with it, and some of them are brand new contributors to Firefox, which is awesome.

    The Wayland backend is going to get more and more features in the future. We’re investigating possible advantages of a DMA-BUF backend, which can be used for HW accelerated video playback or direct WebGL rendering. We need to address the missing Xvfb on Wayland so that we can run tests and build Firefox with PGO/LTO there. We’re also going to look at other Wayland compositors like Plasma and Sway to make sure Firefox works fine there – so many challenges and a lot of fun are waiting for fearless fox hackers! 😉

    Values for WD idle time in Linux

    Posted by Lukas "lzap" Zapletal on January 07, 2020 12:00 AM

    Values for WD idle time in Linux

    There is a tool called idle3ctl which can disable, get or set the idle3 timer on Western Digital hard drives. It can be used as an alternative to the official wdidle3.exe proprietary utility, without the need to reboot into a DOS environment. Idle3ctl is an independent project, unrelated in any way to Western Digital Corp.

    To set the idle3 timer raw value, the -s option must be used. The value must be an integer between 1 and 255. The idle3 timer is set in 0.1 s steps for the 1-128 range, and in 30 s steps for the 129-255 range. Example:

    # idle3ctl -s190 /dev/sdc
    Idle3 timer set to 190 (0xbe)
    Please power cycle your drive off and on for the new
    setting to be taken into account. A reboot will not be enough!
    # idle3ctl -g /dev/sdc
    Idle3 timer set to 190 (0xbe)

    Here is the snag: I had to do some math to figure out the correct value. So I created a table for you, so you don’t have to! Enjoy.
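    The conversion behind the table is also easy to script. Below is a small Python sketch of the math as I read the description above — the offset-from-128 interpretation of the 30 s range is my assumption, so double-check any value against the actual table:

    ```python
    def idle3_seconds(raw):
        """Convert an idle3 raw value (1-255) into seconds.

        Assumption (mine, not verbatim from idle3ctl's docs): values 1-128
        count in 0.1 s units, and values 129-255 count in 30 s units
        offset from 128.
        """
        if not 1 <= raw <= 255:
            raise ValueError("raw idle3 value must be between 1 and 255")
        if raw <= 128:
            return raw * 0.1
        return (raw - 128) * 30

    # The example value from above, 190, would come out as (190-128)*30 s:
    print(idle3_seconds(190))
    ```
    
    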

    All systems go

    Posted by Fedora Infrastructure Status on January 06, 2020 06:25 PM
    Service 'Package Updates Manager' now has status: good: Everything seems to be working.

    Minor service disruption

    Posted by Fedora Infrastructure Status on January 06, 2020 06:25 PM
    Service 'Package Updates Manager' now has status: minor: a

    All systems go

    Posted by Fedora Infrastructure Status on January 06, 2020 05:26 PM
    Service 'Package Updates Manager' now has status: good: Everything seems to be working.