Fedora People

Celebrating Code of Conduct violations

Posted by Daniel Pocock on October 24, 2020 11:15 AM

When Toastmasters was formed in 1924, women were not permitted to join. When Helen Blanchard joined in 1970, she used a man's name, Homer, on the application form. Simply by joining, Blanchard was breaking a Code of Conduct, at least in the way that free software organizations use that term. This was not a little white lie; it was an act of courage. Fortunately, the rules were changed, and Blanchard went on to become the first female president of Toastmasters International. Why couldn't a woman call herself a Toastmaster?

When Australia federated in 1901, it wasn't considered necessary for Aboriginal Australians to vote. When volunteers in free software organizations ask about voting rights and financial transparency, we are accused of running a campaign of harassment. Nonetheless, after 66 years, Australia granted voting rights to the Aboriginal population in 1967. In comparison, women in Switzerland had to continue their campaign for the right to vote up to 1991. Swiss men felt they were victims of a campaign of harassment. During the period when Australia's Aboriginal population was prohibited from voting, a number of communities allowed them to vote anyway; in free-software-speak, they were breaking the Code of Conduct for Aboriginal people at that time.

Appenzell, Switzerland

One of the most remarkable stories of a Code of Conduct violation could be the petition for gay rights at the Salamanca Market in Tasmania. This wasn't a petition for gay marriage, they were simply asking to revoke laws that made homosexuality illegal. A massive police response was organized and 120 people were arrested for running a booth about diversity. Gay relationships and men holding hands were a violation of Tasmania's Code of Conduct. The charges against their petitions and leaflets were as overzealous as the accusations of spamming and trolling in free software organizations.

In the world of free software, we have now overreached in the other direction. For example, in one organization, a volunteer was expelled in the week before Christmas when he used the wrong pronoun for a transgender developer. That type of expulsion is every bit as extreme as the arrests at Hobart's Salamanca market. Just as gay Tasmanians work and pay taxes, the volunteer in question had spent 10 years packaging the LaTeX software used widely in the academic world.

Fedora is still considering the Community Publishing Platforms proposal. It is important to keep in mind that many of these reforms that we take for granted today didn't come about through submission and obedience. It is more important than ever to look for constructive alternatives to the Code of Conduct mania that has encouraged a culture of accusations and intimidation in free software organizations.

Contribute at the Fedora Test Week for Kernel 5.9

Posted by Fedora Magazine on October 24, 2020 10:34 AM

The kernel team is working on final integration for kernel 5.9. This version was just recently released, and will arrive soon in Fedora. As a result, the Fedora kernel and QA teams have organized a test week from Monday, October 26, 2020 through Monday, November 02, 2020. Refer to the wiki page for links to the test images you’ll need to participate. Read below for details.

How does a test week work?

A test week is an event where anyone can help make sure changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.

To contribute, you only need to be able to do the following things:

  • Download test materials, which include some large files
  • Read and follow directions step by step

The wiki page for the kernel test day has a lot of good information on what and how to test. After you’ve done some testing, you can log your results in the test day web application. If you’re available on or around the day of the event, please do some testing and report your results. We have a document that provides all the steps in writing.

Happy testing, and we hope to see you on test day.

Installing Kubernetes with Minikube

Posted by Fedora fans on October 24, 2020 09:13 AM

There are various tools and methods for installing and setting up Kubernetes. One of them is Minikube, which lets you run a Kubernetes cluster on your own PC or laptop. Minikube is not suitable for production use; it is intended mainly for learning and lab environments.

Minikube supports several hypervisors and container runtimes, including Docker, Hyperkit, Hyper-V, KVM, Parallels, Podman, VirtualBox, and VMware. In this article we will explain how to install Minikube using VirtualBox and KVM.


1. Installing a hypervisor:

As mentioned, in this article we will install Minikube using VirtualBox or KVM. If you want to use VirtualBox, simply read the article on installing VirtualBox and install it.

If you want to use KVM, read the article on installing the KVM hypervisor and install it. After installing KVM, you must add your user to the libvirt group by running the following commands:

# usermod -a -G libvirt hos7ein
$ newgrp libvirt

Note: replace hos7ein with your own username.


2. Installing Docker:

To install Docker, read the article on installing and setting up Docker.


3. Installing docker-machine-driver-kvm2:

To install docker-machine-driver-kvm2, first download it:

$ curl -LO https://storage.googleapis.com/minikube/releases/latest/docker-machine-driver-kvm2

Then make it executable:

$ chmod +x docker-machine-driver-kvm2

Now move it to the directory shown below:

# mv docker-machine-driver-kvm2 /usr/local/bin/


4. Installing Minikube:

Now, to install Minikube, first download it:

$ curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64

Then make the file executable:

$ chmod +x minikube

Now move it to the directory shown below:

# mv minikube /usr/local/bin/


5. Installing kubectl:

kubectl is a command-line tool for interacting with Kubernetes. If you are on Fedora and want to install kubectl from the Fedora repositories, run the following command:

# dnf install kubernetes-client

Alternatively, download its binary:

$ curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl

Then make the file executable and move it to the directory shown below:

$ chmod +x kubectl

# mv kubectl /usr/local/bin


6. Creating a Minikube cluster:

With the tools and software above installed, you can now start your Minikube cluster. If you want to use VirtualBox, run the following command; note that Minikube uses VirtualBox by default:

$ minikube start

If you want to use KVM, run the following command:

$ minikube start --vm-driver kvm2

To start a Minikube cluster with a custom name using the KVM hypervisor, run:

$ minikube start -p cluster1-kvm --vm-driver=kvm2

A sample output of this command is shown in the image below:

To make KVM the default driver, run:

$ minikube config set vm-driver kvm2


7. Some management commands:

To stop a Minikube cluster, run:

$ minikube stop

If your cluster has a custom name, use this command:

$ minikube stop -p MY_CLUSTER_NAME

Note: replace MY_CLUSTER_NAME with the name of your cluster.

To delete a cluster, you can use this command:

$ minikube delete

If your cluster has a custom name, use this command:

$ minikube delete -p MY_CLUSTER_NAME

Note: replace MY_CLUSTER_NAME with the name of your cluster.

If you used KVM to start the Minikube cluster, the following command shows that a virtual machine (VM) named after your cluster has been created:

# virsh list

If you started the Minikube cluster with VirtualBox, you will see a virtual machine named after your Minikube cluster in the VirtualBox panel.

To SSH into this virtual machine, use the following command:

$ minikube ssh

If you assigned a name to your cluster, use this command:

$ minikube ssh -p cluster1-kvm

A sample output of this command is shown in the image below:

To check the status of the Kubernetes cluster, use the following command:

$ kubectl cluster-info

To check the status of the Kubernetes components, use the following command:

$ kubectl get cs

To list the nodes of the Kubernetes cluster, use the following command:

$ kubectl get nodes

A sample output of these commands is shown below:


8. Enabling the Kubernetes Dashboard:

Minikube ships with several addons, one of which is the Kubernetes Dashboard; it is usually disabled by default. To list Minikube's addons, you can use the command below:

$ minikube addons list

If your Minikube cluster has a name, you can use this command:

$ minikube addons list -p cluster1-kvm

Note: replace cluster1-kvm with the name of your cluster.

To enable the Dashboard, run the following command:

$ minikube addons enable dashboard

If your Minikube cluster has a name, you can use this command:

$ minikube addons enable dashboard -p cluster1-kvm

Note: replace cluster1-kvm with the name of your cluster. Below is a screenshot of the output of these commands:

Now, to open the Kubernetes Dashboard in your default browser, run the following command:

$ minikube dashboard

If your Minikube cluster has a name, you can use this command:

$ minikube dashboard -p cluster1-kvm

Note: replace cluster1-kvm with the name of your cluster. You can also get the URL of the Kubernetes Dashboard by running:

$ minikube dashboard --url

If your Minikube cluster has a name, you can use this command:

$ minikube dashboard --url -p cluster1-kvm

Note: replace cluster1-kvm with the name of your cluster. Below is a sample screenshot of the Kubernetes Dashboard:


The post Installing Kubernetes with Minikube first appeared on Fedora Fans (طرفداران فدورا).

Secure NTP with NTS

Posted by Fedora Magazine on October 23, 2020 08:00 AM

Many computers use the Network Time Protocol (NTP) to synchronize their system clocks over the internet. NTP is one of the few unsecured internet protocols still in common use. An attacker that can observe network traffic between a client and server can feed the client with bogus data and, depending on the client’s implementation and configuration, force it to set its system clock to any time and date. Some programs and services might not work if the client’s system clock is not accurate. For example, a web browser will not work correctly if the web servers’ certificates appear to be expired according to the client’s system clock. Use Network Time Security (NTS) to secure NTP.

Fedora 33¹ is the first Fedora release to support NTS. NTS is a new authentication mechanism for NTP. It enables clients to verify that the packets they receive from the server have not been modified while in transit. The only thing an attacker can do when NTS is enabled is drop or delay packets. See RFC 8915 for further details about NTS.

NTP can be secured well with symmetric keys. Unfortunately, the server has to have a different key for each client and the keys have to be securely distributed. That might be practical with a private server on a local network, but it does not scale to a public server with millions of clients.

NTS includes a Key Establishment (NTS-KE) protocol that automatically creates the encryption keys used between the server and its clients. It uses Transport Layer Security (TLS) on TCP port 4460. It is designed to scale to very large numbers of clients with a minimal impact on accuracy. The server does not need to keep any client-specific state. It provides clients with cookies, which are encrypted and contain the keys needed to authenticate the NTP packets. Privacy is one of the goals of NTS. The client gets a new cookie with each server response, so it doesn’t have to reuse cookies. This prevents passive observers from tracking clients migrating between networks.

The default NTP client in Fedora is chrony. Chrony added NTS support in version 4.0. The default configuration hasn’t changed. Chrony still uses public servers from the pool.ntp.org project and NTS is not enabled by default.

Currently, there are very few public NTP servers that support NTS. The two major providers are Cloudflare and Netnod. The Cloudflare servers are in various places around the world. They use anycast addresses that should allow most clients to reach a close server. The Netnod servers are located in Sweden. In the future we will probably see more public NTP servers with NTS support.

A general recommendation for configuring NTP clients for best reliability is to have at least three working servers. For best accuracy, it is recommended to select close servers to minimize network latency and asymmetry caused by asymmetric network routing. If you are not concerned about fine-grained accuracy, you can ignore this recommendation and use any NTS servers you trust, no matter where they are located.

If you do want high accuracy, but you don’t have a close NTS server, you can mix distant NTS servers with closer non-NTS servers. However, such a configuration is less secure than a configuration using NTS servers only. The attackers still cannot force the client to accept arbitrary time, but they do have a greater control over the client’s clock and its estimate of accuracy, which may be unacceptable in some environments.

Enable client NTS in the installer

When installing Fedora 33, you can enable NTS in the Time & Date dialog in the Network Time configuration. Enter the name of the server and check the NTS support before clicking the + (Add) button. You can add one or more servers or pools with NTS. To remove the default pool of servers (2.fedora.pool.ntp.org), uncheck the corresponding mark in the Use column.

[Image: Network Time configuration in Fedora installer]

Enable client NTS in the configuration file

If you upgraded from a previous Fedora release, or you didn’t enable NTS in the installer, you can enable NTS directly in /etc/chrony.conf. Specify the server with the nts option in addition to the recommended iburst option. For example:

server time.cloudflare.com iburst nts
server nts.sth1.ntp.se iburst nts
server nts.sth2.ntp.se iburst nts

You should also allow the client to save the NTS keys and cookies to disk, so it doesn’t have to repeat the NTS-KE session on each start. Add the following line to chrony.conf, if it is not already present:

ntsdumpdir /var/lib/chrony

If you don’t want NTP servers provided by DHCP to be mixed with the servers you have specified, remove or comment out the following line in chrony.conf:

sourcedir /run/chrony-dhcp

After you have finished editing chrony.conf, save your changes and restart the chronyd service:

systemctl restart chronyd

Check client status

Run the following command under the root user to check whether the NTS key establishment was successful:

# chronyc -N authdata
Name/IP address             Mode KeyID Type KLen Last Atmp  NAK Cook CLen
time.cloudflare.com          NTS     1   15  256  33m    0    0    8  100
nts.sth1.ntp.se              NTS     1   15  256  33m    0    0    8  100
nts.sth2.ntp.se              NTS     1   15  256  33m    0    0    8  100

The KeyID, Type, and KLen columns should have non-zero values. If they are zero, check the system log for error messages from chronyd. One possible cause of failure is a firewall blocking the client’s connection to the server’s TCP port 4460.

Another possible cause of failure is a certificate that is failing to verify because the client’s clock is wrong. This is a chicken-or-the-egg type problem with NTS. You may need to manually correct the date or temporarily disable NTS in order to get NTS working. If your computer has a real-time clock, as almost all computers do, and it’s backed up by a good battery, this operation should be needed only once.

If the computer doesn’t have a real-time clock or battery, as is common with some small ARM computers like the Raspberry Pi, you can add the -s option to /etc/sysconfig/chronyd to restore time saved on the last shutdown or reboot. The clock will be behind the true time, but if the computer wasn’t shut down for too long and the server’s certificates were not renewed too close to their expiration, it should be sufficient for the time checks to succeed. As a last resort, you can disable the time checks with the nocerttimecheck directive. See the chrony.conf(5) man page for details.

Run the following command to confirm that the client is making NTP measurements:

# chronyc -N sources
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
^* time.cloudflare.com           3   6   377    45   +355us[ +375us] +/-   11ms
^+ nts.sth1.ntp.se               1   6   377    44   +237us[ +237us] +/-   23ms
^+ nts.sth2.ntp.se               1   6   377    44   -170us[ -170us] +/-   22ms

The Reach column should have a non-zero value; ideally 377. The value 377 shown above is an octal number. It indicates that the last eight requests all had a valid response. The validation check will include NTS authentication if enabled. If the value only rarely or never gets to 377, it indicates that NTP requests or responses are getting lost in the network. Some major network operators are known to have middleboxes that block or limit rate of large NTP packets as a mitigation for amplification attacks that exploit the monitoring protocol of ntpd. Unfortunately, this impacts NTS-protected NTP packets, even though they don’t cause any amplification. The NTP working group is considering an alternative port for NTP as a workaround for this issue.

Enable NTS on the server

If you have your own NTP server running chronyd, you can enable server NTS support to allow its clients to be synchronized securely. If the server is a client of other servers, it should use NTS or a symmetric key for its own synchronization. The clients assume the synchronization chain is secured between all servers up to the primary time servers.

Enabling server NTS is similar to enabling HTTPS on a web server. You just need a private key and certificate. The certificate could be signed by the Let’s Encrypt authority using the certbot tool, for example. When you have the key and certificate file (including intermediate certificates), specify them in chrony.conf with the following directives:

ntsserverkey /etc/pki/tls/private/foo.example.net.key
ntsservercert /etc/pki/tls/certs/foo.example.net.crt

Make sure the ntsdumpdir directive mentioned previously in the client configuration is present in chrony.conf. It allows the server to save its keys to disk, so the clients of the server don’t have to get new keys and cookies when the server is restarted.

Restart the chronyd service:

systemctl restart chronyd

If there are no error messages in the system log from chronyd, it should be accepting client connections. If the server has a firewall, it needs to allow both the UDP 123 and TCP 4460 ports for NTP and NTS-KE respectively.

You can perform a quick test from a client machine with the following command:

$ chronyd -Q -t 3 'server foo.example.net iburst nts maxsamples 1'
2020-10-13T12:00:52Z chronyd version 4.0 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
2020-10-13T12:00:52Z Disabled control of system clock
2020-10-13T12:00:55Z System clock wrong by -0.001032 seconds (ignored)
2020-10-13T12:00:55Z chronyd exiting

If you see a System clock wrong message, it’s working correctly.

On the server, you can use the following command to check how many NTS-KE connections and authenticated NTP packets it has handled:

# chronyc serverstats
NTP packets received       : 2143106240
NTP packets dropped        : 117180834
Command packets received   : 16819527
Command packets dropped    : 0
Client log records dropped : 574257223
NTS-KE connections accepted: 104
NTS-KE connections dropped : 0
Authenticated NTP packets  : 52139

If you see non-zero NTS-KE connections accepted and Authenticated NTP packets, it means at least some clients were able to connect to the NTS-KE port and send an authenticated NTP request.

— Cover photo by Louis. K on Unsplash —

1. The Fedora 33 Beta installer contains an older chrony prerelease which doesn’t work with current NTS servers because the NTS-KE port has changed. Consequently, in the Network Time configuration in the installer, the servers will always appear as not working. After installation, the chrony package needs to be updated before it will work with current servers.

Release of osbuild 23

Posted by OSBuild Project on October 23, 2020 12:00 AM

We are happy to announce version 23 of osbuild. This release makes it possible to build Fedora on RHEL systems.

Below you can find the official changelog from the osbuild-23 sources. All users are recommended to upgrade!

  • The org.osbuild.rpm stage now includes the SIGPGP and SIGGPG fields of each installed package in the returned metadata. Additionally, its docs have been improved to specify what metadata is returned.

  • The spec file has been changed so that the shebang for assemblers, stages and runners are not automatically mangled anymore. Runners were already changed to have the correct shebang for their target systems. Assemblers and stages are not meant to be run on the host itself, but always inside a build root container, which must bring the correct dependencies to run all stages and assemblers. For now, Python3 (>= 3.6), located at /usr/bin/python3, is required. This change will enable building Fedora systems on RHEL hosts.

  • Unit tests have been fixed to run on RHEL by dynamically selecting a runner that is suitable for the host system.

  • The stages unit tests are now using generated manifests via mpp, and have been updated to use Fedora 32. Additionally, the current mirror was replaced with rpmrepo, which should provide a faster, more reliable package source.

  • The CI has dropped Fedora 31 but instead includes Fedora 33 as systems to run the composer reverse dependency tests for.

Contributions from: Christian Kellner, Lars Karlitski

— Berlin, 2020-10-23

Introducing RPMrepo

Posted by OSBuild Project on October 23, 2020 12:00 AM

The Image Builder project builds Fedora and RHEL operating system images; as such, its test infrastructure relies on RPM package repositories to verify that previously succeeding builds do not fail with newer Image Builder updates. It is not unusual for CI systems to require external resources, but in the case of RPM repositories we are suddenly confronted with a moving target: If a test fails, was it our Image Builder modification that broke, or does the RPM repository include broken updates?

A way to tackle this is to require all external dependencies to be immutable and persistent. Most RPM repositories have neither of those properties, so we as the Image Builder team introduced RPMrepo, our immutable and persistent RPM repository snapshots.

Issue Report

Continuous Integration tests software updates on every single change. This ensures that problems are caught early and the need for bisects is reduced significantly. For the software that is part of Image Builder, this means test-building OS Artifacts in the CI. We want to verify that Image Builder still correctly assembles, for example, Fedora Cloud Images, that non-standard architecture builds still succeed, and that the different file-system types are all still correctly compiled. This requires us to build Fedora, RHEL, and other Operating System images from their official sources. In case of RPM based systems, this means RPM Repositories.

It is not unusual for CI test suites to require external dependencies. However, as soon as you deal with moving targets, when your dependencies are not content addressable, or when their availability depends on other factors, you end up with a test suite that can fail even though the software under test did not introduce the failure. That does not necessarily mean the software is not at fault, but it significantly disrupts further development: when your test suite starts failing without you introducing any changes, you can no longer use it to evaluate proposed changes to your software.

If we look at the software repositories of the Fedora project, we will face a moving target. While the release repositories of Fedora are immutable, the update repositories are not. They can change any time, and there is no way to pin them at a specific point in time. This means anyone testing against those repositories needs to be prepared with their test dependencies changing without control.

Immutable and Persistent Dependencies

There are different approaches to solve this problem. For Image Builder, we decided to require all test input and dependencies to be immutable and persistent.


Immutability means that we know upfront the content that goes into our test-suite, and thus can deduce at every step which change introduced possible failure. We explicitly depend on a specific content hash of the RPM Repository Metadata that is used, and thus control the data that ends up in our tests. Any test failure thus means that this particular change caused the failure. As long as changes come self-contained and granular, you can rely on test results to represent a change.

This comes at a cost, though. You now need to pin the input to your test suite, and more importantly, you need to keep it up-to-date. You thus regularly have to update the pinned dependencies. Those updates will preferably trigger test suite runs as well, this time telling you whether external changes caused the failure. If those external changes cause failure, you would have to delay the dependency update until those are resolved.

Unfortunately, this also means that all dependencies need to be persistent, so as to retain access to the data even after the owner of that data decided to drop it.


Persistency is usually easily solved by pulling dependencies into your project, also often known as bundling or vendoring. While it neatly avoids any availability and persistence issues, it becomes unwieldy when the dependencies grow in size or change regularly. With RPM repositories as dependencies, you need them to be hosted on separate infrastructure, as they can easily surpass 50 GiB in size.

This leaves you with only one option: Mirror the required data yourself on some central infrastructure.


After trying a lot of workarounds, mirroring subsets of repositories, avoiding update-repositories, and other tricks, we decided that none of this worked out. Instead, we started to create snapshots of our RPM repository dependencies, and now control their properties ourselves. This infrastructure we called RPMrepo.

RPMrepo has a set of input repositories, which it creates snapshots of on a regular basis. For now, this is done on a bi-weekly schedule, but it can be amended with more snapshots at any time. The data is stored on our own infrastructure. As those repositories can become quite big, we deploy a content-addressable shared storage system, which allows us to share any file between snapshots in the same group. This way, we were able to significantly reduce the required storage space of our snapshots, and we can create snapshots more often and more regularly.

These snapshots are available publicly, but we retain the option to restrict access if this starts to exceed our budget. Furthermore, we ask everyone to drop us a short notice if they start using the snapshots, so we can communicate upcoming changes.

Using the Snapshots

For a detailed enumeration of the different parameters, check out the home of RPMrepo. It lists the different snapshot parameters as well as the set of repositories we currently take snapshots of.

Details: osbuild.org/rpmrepo/

Next, we will walk through a short example of how to access these repositories. Let's assume you currently use the Fedora updates-released repository and need a recent snapshot. You can list all available snapshots via:

$ curl https://rpmrepo.osbuild.org/control/snapshots | jq .

From this list, you can pick a snapshot of your choice. If you wanted Fedora-32 on x86_64, with the updates-released repository, you would possibly pick:


This snapshot was taken on October 10th, 2020. You now fill in the template URL:


with your information:


and this will be your Base URL ready to be used with dnf and yum:

$ curl -L https://rpmrepo.osbuild.org/v1/anon/f32/f32-x86_64-updates-released-20201010/repodata/repomd.xml

<?xml version="1.0" encoding="UTF-8"?>
<repomd xmlns="http://linux.duke.edu/metadata/repo" xmlns:rpm="http://linux.duke.edu/metadata/rpm">
  <data type="primary">
    <checksum type="sha256">ffb5f6c80f20d9d9d972ba89bab61c564d82f65e6a1e20449056864a6787d92a</checksum>
    <open-checksum type="sha256">9891249bf4ec277851bba6349a51b8a46622ef265f4916b34c3feb6a4bfb7c68</open-checksum>
    <location href="repodata/ffb5f6c80f20d9d9d972ba89bab61c564d82f65e6a1e20449056864a6787d92a-primary.xml.gz"/>

This is it. This snapshot is immutable and will not change during its lifetime. Do not hesitate to contact the Image Builder team if you have questions or further requests.

Why I Dislike Switch Statements

Posted by Robbie Harwood on October 22, 2020 04:00 AM


In most coding styles (and in particular, in K&R, OTBS, and Linux), switch statements are written like so:

int func(int i) {
    switch (i) {
    case 1:
    case 2:
    case 3:
    case 4: {
        int j = i * 2;
        return j;
    }
    case 5:
        /* Fall through. */
    default:
        return 2;
    }
    return i;
}

Of course this is a contrived example, but readers will hopefully agree it's representative of the construct.


First, there are several things I consider clunky about using switch.

  1. The different case "arms" are deindented to the same layer as switch. Conceptually this is odd because they're still within the switch. However, the alternatives aren't better: indenting the arms wastes another 4 characters on the line (or tab width if you're like that), and half-indents are an excellent demonstration of why argumentum ad temperantiam is fallacious.
  2. Instead of being delimited by braces ({ and }), execution flow is instead delimited by : and break.
  3. As in case 2 and case 3, adding another value introduces another line in many styles. Thus, using switch is invariably at least as much code as the equivalent if/else flow.
  4. Each arm of the switch is not a separate scope. This leads to the pattern in case 4, wherein an explicit scope is needed to declare j.
  5. Because the break is needed to delimit cases, compilers warn on its absence - i.e., when control flow "falls through". However, since this is desirable in some cases, they do not warn when it's commented as intentional. Thus, the comment in case 5 becomes syntactically necessary.
  6. No ordering on arms is imposed. In practice this is generally helpful, but it leads to the weird situation where default doesn't have to be the final arm.

Finally, default prohibits flattening control flow in a helpful way. For example, consider this code:

int func(int i) {
    if (i == 0) {
        return 1;
    } else if (i == 1) {
        return 0;
    } else {
        i = do_some_stuff();
        if (i == 3)
            return 2;
        return -1;
    }
}
Ideally, we would like to rewrite this as:

int func(int i) {
    if (i == 0)
        return 1;
    else if (i == 1)
        return 0;

    i = do_some_stuff();
    if (i == 3)
        return 2;

    return -1;
}
By flattening the control flow, we make the code significantly easier to reason about. For instance, it's now very clear whether all paths through the function lead to a return.

The need for default makes this pattern much less elegant. The equivalent is:

int func(int i) {
    switch (i) {
    case 0:
        return 1;
    case 1:
        return 0;
    default:
        break;
    }

    i = do_some_stuff();
    if (i == 3)
        return 2;

    return -1;
}

and to my mind it's debatable whether an empty default: arm helps that much.

Interaction with loops

More than the general awkwardness, though, I'm bothered by how switch interacts with loops. (And no, I'm not referring to Duff's Device which, while horrible, shouldn't appear in modern code anyway because we have since optimized memmove()/memcpy().)

Consider code that looks like the following:

int func(int i) {
    while (1) {
        switch (i) {
        case 1:
            break;
        case 2:
            return 2;
        }
        /* Point of interest. */
    }
    return 0;
}

Control flow here is complex. Consider what happens in each of the three cases. In particular, switch is "kind of" loop-like in that it services break, but "kind of" conditional-like in that continue is handled by a higher scope. This makes trying to run code at the marked point of interest surprisingly involved, as well as making it difficult to terminate the loop itself.
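One common workaround (my sketch, not from the post) is an explicit flag, so that stopping the loop is spelled differently from leaving the switch:

```c
/* Sketch: an explicit flag separates "leave the switch" from "leave the loop". */
int func_flag(int i) {
    int done = 0;
    while (!done) {
        switch (i) {
        case 1:
            done = 1;   /* a bare break here would only exit the switch */
            break;
        case 2:
            return 2;
        default:
            done = 1;   /* keep this sketch finite for unexpected values */
            break;
        }
        /* Point of interest: runs on every pass that did not return. */
    }
    return 0;
}
```

A goto past the loop achieves the same thing, at a similar cost in clarity.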

Contrast with this similar (but importantly different) code:

int func(int i) {
    while (1) {
        if (i == 1)
            break;
        else if (i == 2)
            return 2;
        /* Point of interest. */
    }
    return 0;
}

From inside the if/else stanzas, it's very clear how to get to the point of interest, and break and continue behave in clear ways.


Historically, the most important reason to use switch statements was that the compiler could optimize them into jump tables. These days compilers are capable enough to perform this transformation where it's needed - in particular, whether a construct is expressed as if or switch doesn't prohibit it from being a jump table.

The other major reason I'm aware of to use a switch is for exhaustiveness checking on enums. The theory is that the compiler can check that all defined values for an enum are handled at any given point. While true, this typically doesn't matter:

  1. Most of the time, not all enum values represent expected program states.
  2. Compilers and static analysis tools also warn about a missing default: branch, which negates the exhaustiveness check.
  3. There are other, more-or-less equally clear ways to write enum-driven state machines (e.g., dispatch table, callbacks, continuation-passing style, etc.).
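As a sketch of the dispatch-table alternative mentioned in item 3 (all names here are invented for illustration), each enum value maps to a handler function, and a table lookup replaces the switch:

```c
#include <stddef.h>

/* Hypothetical states and handlers; only the dispatch-table shape matters. */
enum state { ST_IDLE, ST_RUNNING, ST_STOPPED, ST_COUNT };

static int on_idle(int x)    { return x + 1; }
static int on_running(int x) { return x * 2; }
static int on_stopped(int x) { (void)x; return 0; }

/* Designated initializers index the table by enum value; the fixed array
 * size gives a compile-time hint when a state is forgotten. */
static int (*const handlers[ST_COUNT])(int) = {
    [ST_IDLE]    = on_idle,
    [ST_RUNNING] = on_running,
    [ST_STOPPED] = on_stopped,
};

int dispatch(enum state s, int x) {
    if ((unsigned)s >= ST_COUNT || handlers[s] == NULL)
        return -1;  /* plays the role of a default: arm */
    return handlers[s](x);
}
```

The enum-to-handler mapping stays visible in one place, and adding a state means adding exactly one handler and one table entry.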

I'm not about to issue an ultimatum and say "don't use switch" or anything similar. These are just the reasons that I happen to not like it. Certainly Python not having a close equivalent should be demonstration enough that it's not needed in the language.

And all that being said, when the idea of switch is generalized (into pattern matching), I do prefer it to if. Languages like Haskell (and to a lesser extent, Rust) provide nicer constructs, but they're also reliant on Algebraic Data Types (which Rust clarifies by treating them as generalized enums).

Adding Nodes to Ironic

Posted by Adam Young on October 22, 2020 03:14 AM

TheJulia was kind enough to update the docs for Ironic to show me how to include IPMI information when creating nodes.

Delete all the old nodes

for UUID in `openstack baremetal node list -f json | jq -r '.[] | .UUID' ` ; do openstack baremetal node delete $UUID; done

Nodes definition

I removed the common IPMI data from each definition, as there is a password in it; I will set that afterwards on all nodes.

{
  "nodes": [
    {
      "ports": [
        { "address": "00:21:9b:93:d0:90" }
      ],
      "name": "zygarde",
      "driver": "ipmi",
      "driver_info": {
        "ipmi_address": ""
      }
    },
    {
      "ports": [
        { "address": "00:21:9b:9b:c4:21" }
      ],
      "name": "umbreon",
      "driver": "ipmi",
      "driver_info": {
        "ipmi_address": ""
      }
    },
    {
      "ports": [
        { "address": "00:21:9b:98:a3:1f" }
      ],
      "name": "zubat",
      "driver": "ipmi",
      "driver_info": {
        "ipmi_address": ""
      }
    }
  ]
}

Create the nodes

openstack baremetal create  ./nodes.ipmi.json 

Check that the nodes are present

$ openstack baremetal node list
+--------------------------------------+---------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name    | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+---------+---------------+-------------+--------------------+-------------+
| 3fa4feae-0d5c-4e38-a012-29258d40651b | zygarde | None          | None        | enroll             | False       |
| 00965ad4-c972-46fa-948a-3ce87aecf5ac | umbreon | None          | None        | enroll             | False       |
| 8702ea0c-aa10-4542-9292-3b464fe72036 | zubat   | None          | None        | enroll             | False       |
+--------------------------------------+---------+---------------+-------------+--------------------+-------------+

Update IPMI common data

for UUID in `openstack baremetal node list -f json | jq -r '.[] | .UUID' ` ; 
do  openstack baremetal node set $UUID --driver-info ipmi_password=`cat ~/ipmi.password`  --driver-info   ipmi_username=admin   ; 
done

EDIT: I had ipmi_user before and it does not work. Needs to be ipmi_username.

Final Check

And if I look in the returned data for the definition, we see the password is not readable:

$ openstack baremetal node show zubat  -f yaml | grep ipmi_password
  ipmi_password: '******'

Power On

for UUID in `openstack baremetal node list -f json | jq -r '.[] | .UUID' ` ; do  openstack baremetal node power on $UUID  ; done

Change “on” to “off” to power off.

Fedora 32 : Can be better? part 016.

Posted by mythcat on October 21, 2020 09:38 PM
Today I tested Unity 3D version 2020 on Fedora 32 Linux. Maybe it would be better to integrate Unity 3D or Unity Hub into the Fedora repos, just like other useful software such as Blender 3D and GIMP. It would improve the user experience and attract new users and developers to this distro. I downloaded the AppImage from the Unity website and ran it with these commands:
[mythcat@desk Downloads]$ chmod a+x UnityHub.AppImage 
[mythcat@desk Downloads]$ ./UnityHub.AppImage
r: 0
License accepted

Incremental backup with Butterfly Backup

Posted by Fedora Magazine on October 21, 2020 08:00 AM


This article explains how to make incremental or differential backups, with a catalog available to restore (or export) at the point you want, with Butterfly Backup.


Butterfly Backup is a simple wrapper around rsync written in Python; the first requirement is Python 3.3 or higher (plus the cryptography module for the init action). Other requirements are openssh and rsync (version 2.5 or higher). OK, let's go!

[Editors note: rsync version 3.2.3 is already installed on Fedora 33 systems]

$ sudo dnf install python3 openssh rsync git
$ sudo pip3 install cryptography


After that, installing Butterfly Backup is very simple by using the following commands to clone the repository locally, and set up Butterfly Backup for use:

$ git clone https://github.com/MatteoGuadrini/Butterfly-Backup.git
$ cd Butterfly-Backup
$ sudo python3 setup.py
$ bb --help
$ man bb

To upgrade, you would use the same commands too.


Butterfly Backup is a server-to-client tool and is installed on a server (or workstation). The restore process restores files onto the specified client. This process shares some of the options available to the backup process.

Backups are organized according to a precise catalog; this is an example:

$ tree destination/of/backup
├── destination
│   ├── hostname or ip of the PC under backup
│   │   ├── timestamp folder
│   │   │   ├── backup folders
│   │   │   ├── backup.log
│   │   │   └── restore.log
│   │   ├─── general.log
│   │   └─── symlink of last backup
├── export.log
├── backup.list
└── .catalog.cfg

Butterfly Backup has six main operations, referred to as actions; you can get information about them with the --help command.

$ bb --help
usage: bb [-h] [--verbose] [--log] [--dry-run] [--version]
          {config,backup,restore,archive,list,export} ...

Butterfly Backup

optional arguments:
  -h, --help            show this help message and exit
  --verbose, -v         Enable verbosity
  --log, -l             Create a log
  --dry-run, -N         Dry run mode
  --version, -V         Print version

  Valid action

                        Available actions
    config              Configuration options
    backup              Backup options
    restore             Restore options
    archive             Archive options
    list                List options
    export              Export options


Configuration mode is straightforward; if you're already familiar with exchanging OpenSSH keys, you probably won't need it. First, you must create a configuration (RSA keys), for instance:

$ bb config --new
SUCCESS: New configuration successfully created!

After creating the configuration, the keys will be installed (copied) onto the hosts you want to back up:

$ bb config --deploy host1
Copying configuration to host1; write the password:
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/arthur/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
arthur@host1's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'arthur@host1'"
and check to make sure that only the key(s) you wanted were added.

SUCCESS: Configuration copied successfully on host1!


There are two backup modes: single and bulk.
The most relevant features of the two backup modes are parallelism and retention of old backups. See the two parameters --parallel and --retention in the documentation.

Single backup

The backup of a single machine consists of taking the files and folders indicated on the command line and putting them into the cataloging structure indicated above. In other words, it copies all the files and folders of a machine into a path.

Here is an example:

$ bb backup --computer host1 --destination /mnt/backup --data User Config --type Unix
Start backup on host1
SUCCESS: Command rsync -ah --no-links arthur@host1:/home :/etc /mnt/backup/host1/2020_09_19__10_28

Bulk backup

Bulk mode backups share the same options as single mode, with the difference that they accept a file containing a list of hostnames or IPs. In this mode, backups are performed in parallel (by default, 5 machines at a time). If you want to back up fewer or more machines in parallel, specify the --parallel parameter.

For instance, an incremental backup based on the previous one:

$ cat /home/arthur/pclist.txt
host1
host2
host3
$ bb backup --list /home/arthur/pclist.txt --destination /mnt/backup --data User Config --type Unix
ERROR: The port 22 on host2 is closed!
ERROR: The port 22 on host3 is closed!
Start backup on host1
SUCCESS: Command rsync -ahu --no-links --link-dest=/mnt/backup/host1/2020_09_19__10_28 arthur@host1:/home :/etc /mnt/backup/host1/2020_09_19__10_50

There are four backup modes, which you specify with the --mode flag: Full (back up all files), Mirror (back up all files in mirror mode), Differential (based on the latest Full backup) and Incremental (based on the latest backup).
The default mode is Incremental; when the flag is not specified and there is no previous backup to build on, a Full backup is performed.

Listing catalog

The first time you run backup commands, the catalog is created. The catalog is used for future backups and all the restores that are made through Butterfly Backup. To query this catalog use the list command.
First, let’s query the catalog in our example:

$ bb list --catalog /mnt/backup


Backup id: aba860b0-9944-11e8-a93f-005056a664e0
Hostname or ip: host1
Timestamp: 2020-09-19 10:28:12

Backup id: dd6de2f2-9a1e-11e8-82b0-005056a664e0
Hostname or ip: host1
Timestamp: 2020-09-19 10:50:59

Press q to exit, then select a backup-id:

$ bb list --catalog /mnt/backup --backup-id dd6de2f2-9a1e-11e8-82b0-005056a664e0
Backup id: dd6de2f2-9a1e-11e8-82b0-005056a664e0
Hostname or ip: host1
Type: Incremental
Timestamp: 2020-09-19 10:50:59
Start: 2020-09-19 10:50:59
Finish: 2020-09-19 11:43:51
OS: Unix
ExitCode: 0
Path: /mnt/backup/host1/2020_09_19__10_50
List: backup.log

To export the catalog list so you can read it with an external tool like cat, include the --log flag:

$ bb list --catalog /mnt/backup --log
$ cat /mnt/backup/backup.list


The restore process is the exact opposite of the backup process: it takes the files from a specific backup and pushes them to the destination computer.
This command performs a restore onto the same machine the backup was taken from, for instance:

$ bb restore --catalog /mnt/backup --backup-id dd6de2f2-9a1e-11e8-82b0-005056a664e0 --computer host1 --log
Want to do restore path /mnt/backup/host1/2020_09_19__10_50/etc? To continue [Y/N]? y
Want to do restore path /mnt/backup/host1/2020_09_19__10_50/home? To continue [Y/N]? y
SUCCESS: Command rsync -ahu -vP --log-file=/mnt/backup/host1/2020_09_19__10_50/restore.log /mnt/backup/host1/2020_09_19__10_50/etc arthur@host1:/restore_2020_09_19__10_50
SUCCESS: Command rsync -ahu -vP --log-file=/mnt/backup/host1/2020_09_19__10_50/restore.log /mnt/backup/host1/2020_09_19__10_50/home/* arthur@host1:/home

If you do not specify the “type” flag, which indicates the operating system onto which the data is being restored, Butterfly Backup selects it directly from the catalog via the backup-id.

Archive old backup

Archive operations are used to store backups by saving disk space.

$ bb archive --catalog /mnt/backup/ --days 1 --destination /mnt/archive/ --verbose --log
INFO: Check archive this backup f65e5afe-9734-11e8-b0bb-005056a664e0. Folder /mnt/backup/host1/2020_09_18__17_50
INFO: Check archive this backup 4f2b5f6e-9939-11e8-9ab6-005056a664e0. Folder /mnt/backup/host1/2020_09_15__07_26
SUCCESS: Delete /mnt/backup/host1/2020_09_15__07_26 successfully.
SUCCESS: Archive /mnt/backup/host1/2020_09_15__07_26 successfully.
$ ls /mnt/archive
$ ls /mnt/archive/host1

After that, look in the catalog and see that the backup was actually archived:

$ bb list --catalog /mnt/backup/ -i 4f2b5f6e-9939-11e8-9ab6-005056a664e0
Backup id: 4f2b5f6e-9939-11e8-9ab6-005056a664e0
Hostname or ip: host1
Type: Incremental
Timestamp: 2020-09-15 07:26:46
Start: 2020-09-15 07:26:46
Finish: 2020-09-15 08:43:45
OS: Unix
ExitCode: 0
Path: /mnt/backup/host1/2020_09_15__07_26
Archived: True


Butterfly Backup was born from a very complex need; this tool gives superpowers to rsync and automates the backup and restore process. In addition, the catalog allows you to have a system similar to a “time machine”.

In conclusion, Butterfly Backup is a lightweight, versatile, simple and scriptable backup tool.

One more thing; Easter egg: bb -Vv

Thank you for reading my post.

Full documentation: https://butterfly-backup.readthedocs.io/
Github: https://github.com/MatteoGuadrini/Butterfly-Backup

Photo by Manu M on Unsplash.

Fixing errors on my blog's feed

Posted by Kushal Das on October 21, 2020 06:34 AM

For the last few weeks, my blog feed was not showing up in the Fedora Planet. While trying to figure out what was wrong, Nirik pointed me to the 4 errors in the feed according to the W3C validator. If you don't know, I use a self-developed Rust application called khata for my static blog. This means I had to fix these errors.

  • Missing guid; just adding the guid to the feed items solved this.
  • Relative URLs; this had to be fixed via the pulldown_cmark parser.
  • Datetime error: the validator said the value was "not RFC822". I am using the chrono library and was using the to_rfc2822 call; now I create the RFC822 value by hand with a format string.
  • There is still one open issue that depends on an upstream fix.

The changes are in the git. I am using a build from there. I will make a release after the final remaining issue is fixed.

Oh, I also noticed how bad the code looks now as I can understand Rust better :)

Also, the other Planets, like Python and Tor, are still working for my feed.

Running Cassandra on Fedora 32

Posted by Adam Young on October 20, 2020 01:10 PM

This is not a tutorial. These are my running notes from getting Cassandra to run on Fedora 32. The debugging steps are interesting in their own right. I’ll provide a summary at the end for any sane enough not to read through the rest.

<figure class="wp-block-image"></figure>

Old Instructions

So…starting with https://www.liquidweb.com/kb/how-to-install-cassandra-2-on-fedora-20/. The dsc-20 package is, I think, version specific, so I want to see if there is something more appropriate for F32 (has it really been so many years?).

Looking in here https://rpm.datastax.com/community/noarch/ I see that there is still a dsc-20 series of packages, but also dsc-30…which might be a bit more recent of a release.

Dependencies resolved.
 Package                      Architecture            Version                     Repository                 Size
 dsc30                        noarch                  3.0.9-1                     datastax                  1.9 k
Installing dependencies:
 cassandra30                  noarch                  3.0.9-1                     datastax                   24 M

Transaction Summary
Install  2 Packages

I’d be interested to see what is in the dsc30 package versus Cassandra.

$ rpmquery --list dsc30
(contains no files)

OK. But…there is no systemd unit file:

sudo systemctl start cassandra
Failed to start cassandra.service: Unit cassandra.service not found.

Garbage Collection Configuration

We’ll, let’s just try to run it.

sudo /usr/sbin/cassandra

Unrecognized VM option 'UseParNewGC'

Seems like it was built for an older set of Java CLI params, which are now gone. Where does this come from?

$ rpmquery --list cassandra30 | xargs grep UseParNewGC  2>&1 | grep -v "Is a direc" 

We can remove it there. According to this post, the appropriate replacement is -XX:+UseG1GC

OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
Unrecognized VM option 'PrintGCDateStamps'

OK, let's take care of both of those. According to this post, the GC line we put in above should cover UseConcMarkSweepGC.

The second option is in the logging section. It is not included in the jvm.options. However, if I run it with just the first option removed, I now get:

$ sudo /usr/sbin/cassandra
[0.000s][warning][gc] -Xloggc is deprecated. Will use -Xlog:gc:/var/log/cassandra/gc.log instead.
Unrecognized VM option 'PrintGCDateStamps'
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.

More trial and error shows I need to comment out all of the GC logging values at the bottom of the file:


-Xloggc is deprecated. Will use -Xlog:gc:/var/log/cassandra/gc.log instead. This is not from the jvm.options file (it was already commented out above).

$ rpmquery --list cassandra30 | xargs grep loggc  2>&1 | grep -v "Is a direc" 
/etc/cassandra/default.conf/cassandra-env.sh:JVM_OPTS="$JVM_OPTS -Xloggc:/var/log/cassandra/gc.log"
/etc/cassandra/default.conf/cassandra-env.sh.orig:JVM_OPTS="$JVM_OPTS -Xloggc:${CASSANDRA_HOME}/logs/gc.log"

I’m going to replace this with -Xlog:gc:/var/log/cassandra/gc.log as the message suggests in /etc/cassandra/default.conf/cassandra-env.sh

Thread Priority Policy

$ sudo /usr/sbin/cassandra
intx ThreadPriorityPolicy=42 is outside the allowed range [ 0 ... 1 ]
Improperly specified VM option 'ThreadPriorityPolicy=42'
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
$ rpmquery --list cassandra30 | xargs grep ThreadPriorityPolicy  2>&1 | grep -v "Is a direc" 
/etc/cassandra/default.conf/cassandra-env.sh:JVM_OPTS="$JVM_OPTS -XX:ThreadPriorityPolicy=42"
/etc/cassandra/default.conf/cassandra-env.sh.orig:JVM_OPTS="$JVM_OPTS -XX:ThreadPriorityPolicy=42"

Looks like that was never a legal value. Since I am running a pretty tip-of-tree Linux distribution and OpenJDK version, I am going to set this to 1.

And with that, Cassandra will run. Too much output to include here. Let's try to connect:

cqlsh doesn’t run

Connection error: ('Unable to connect to any servers', {'': OperationTimedOut('errors=Timed out creating connection (5 seconds), last_host=None',)})

OK…let’s dig: First, is it listening:

$ ps -ef | grep java
root      618809    5573  7 14:30 pts/3 ....
java    618809 root   61u     IPv4           32477117       0t0        TCP localhost:7199 (LISTEN)
java    618809 root   62u     IPv4           32477118       0t0        TCP localhost:46381 (LISTEN)
java    618809 root   70u     IPv4           32477124       0t0        TCP localhost:afs3-fileserver (LISTEN)
$ grep afs3-file /etc/services 
afs3-fileserver 7000/tcp                        # file server itself
afs3-fileserver 7000/udp                        # file server itself

I’m not sure off the top of my head which of those is the Query language port, but I can telnet to 7000, 7199, and 46381

Running cqlsh --help I see:

Connects to by default. These defaults can be changed by
setting $CQLSH_HOST and/or $CQLSH_PORT. When a host (and optional port number)
are given on the command line, they take precedence over any defaults.

Let's give that a try:

[ayoung@ayoungP40 ~]$ cqlsh 
Connection error: ('Unable to connect to any servers', {'': ConnectionShutdown('Connection <asyncoreconnection> is already closed',)})
[ayoung@ayoungP40 ~]$ export CQLSH_PORT=7100
[ayoung@ayoungP40 ~]$ cqlsh 
Connection error: ('Unable to connect to any servers', {'': error(111, "Tried connecting to [('', 7100)]. Last error: Connection refused")})
[ayoung@ayoungP40 ~]$ export CQLSH_PORT=46381
[ayoung@ayoungP40 ~]$ cqlsh 
nOPConnection error: ('Unable to connect to any servers', {'': ConnectionShutdown('Connection <asyncoreconnection> is already closed',)})

Nope. OK, maybe there is a log file. Perhaps the Cassandra process is stuck.

[ayoung@ayoungP40 ~]$ ls -lah /var/log/cassandra/
total 52M
drwxr-xr-x.  2 cassandra cassandra 4.0K Oct 19 15:41 .
drwxr-xr-x. 23 root      root      4.0K Oct 19 11:49 ..
-rw-r--r--.  1 root      root       19M Oct 19 15:41 debug.log

That is a long log file. I’m going to stop the process, wipe this directory and start again. Note that just hitting Ctrl C on the terminal was not enough to stop the process, I had to send a kill by pid.

This time the shell script exited on its own, but the cassandra process is running in the background of that terminal. lsof provides similar output. The high number port is now 44823 which means that I can at least rule that out; I think it is an ephemeral port anyway.

[ayoung@ayoungP40 ~]$ export CQLSH_PORT=7199
[ayoung@ayoungP40 ~]$ cqlsh 
Connection error: ('Unable to connect to any servers', {'': ConnectionShutdown('Connection <asyncoreconnection> is already closed',)})

According to this post, the port for the query language is not open. That would be 9042. The two open ports are for data sync and for Java Management Extensions (JMX).

Why don’t I get Query port? Lets look in the log:

INFO  [main] 2020-10-19 15:46:11,640 Server.java:160 - Starting listening for CQL clients on localhost/ (unencrypted)...
INFO  [main] 2020-10-19 15:46:11,665 CassandraDaemon.java:488 - Not starting RPC server as requested. Use JMX (StorageService->startRPCServer()) or nodetool (enablethrift) to start it

But starting it seems to trigger a cascading failure: I now have a lot of log files. Let me see if I can find the first error. Nah, they are all zipped up. Going to wipe and restart, using tail -f on the log file before asking to restart thrift.

$ grep start_rpc /etc/cassandra/conf/*
/etc/cassandra/conf/cassandra.yaml:start_rpc: false
/etc/cassandra/conf/cassandra.yaml.orig:start_rpc: false
grep: /etc/cassandra/conf/triggers: Is a directory

Since trying to start it with nodetool enablethrift failed, let me try changing that value in the config file and restarting. My log file now ends with:

INFO  [main] 2020-10-19 16:12:47,695 ThriftServer.java:119 - Binding thrift service to localhost/
INFO  [Thread-1] 2020-10-19 16:12:47,699 ThriftServer.java:136 - Listening for thrift clients...
$ cqlsh 
Connection error: ('Unable to connect to any servers', {'': OperationTimedOut('errors=Timed out creating connection (5 seconds), last_host=None',)})

Something is not happy. Let me see where the errors start. tail the log and tee it into a file in /tmp so I can look at it in the end.

ERROR [SharedPool-Worker-6] 2020-10-19 16:24:34,069 Message.java:617 - Unexpected exception during request; channel = [id: 0x0e698e4a, / => /]
java.lang.RuntimeException: Unable to access address of buffer
        at io.netty.channel.epoll.Native.read(Native Method) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
        at io.netty.channel.epoll.EpollSocketChannel$EpollSocketUnsafe.doReadBytes(EpollSocketChannel.java:678) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
        at io.netty.channel.epoll.EpollSocketChannel$EpollSocketUnsafe.epollInReady(EpollSocketChannel.java:714) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
        at io.netty.channel.epoll.EpollSocketChannel$EpollSocketUnsafe$3.run(EpollSocketChannel.java:755) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
        at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:380) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
        at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:268) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
        at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
        at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
        at java.base/java.lang.Thread.run(Thread.java:834) ~[na:na]
ERROR [SharedPool-Worker-2] 2020-10-19 16:24:34,069 Message.java:617 - Unexpected exception during request; channel = [id: 0x0e698e4a, / => /]

Note: At this point, I suspected SELinux, so I put my machine in permissive mode. No change.

Native Transport

So I turned to Minecraft. Turns out they have the same problem there, and the solution is to disable native transport. Let's see if that applies to Cassandra.

$ grep start_native_transport /etc/cassandra/conf/*
/etc/cassandra/conf/cassandra.yaml:start_native_transport: true
/etc/cassandra/conf/cassandra.yaml.orig:start_native_transport: true

Ok, and looking in that file I see:

# Whether to start the native transport server.
# Please note that the address on which the native transport is bound is the
# same as the rpc_address. The port however is different and specified below.

Let me try disabling that and see what happens. No love, but…in the log file I now see:

INFO  [main] 2020-10-19 17:23:28,272 ThriftServer.java:119 - Binding thrift service to localhost/
INFO  [main] 2020-10-19 17:23:28,272 ThriftServer.java:119 - Binding thrift service to localhost/

So let me try on that port.

[ayoung@ayoungP40 ~]$ export CQLSH_PORT=9160
[ayoung@ayoungP40 ~]$ cqlsh

Maybe it needs the native transport, and it should not be on the same port? Sure enough, further down the conf I find:

rpc_port: 9160

Change the value for start_native_transport back to true and restart the server.

Now it fails with no message as to why.

This native_transport intrigues me. Let's see what else we can find…hmm, it seems that thrift is an old protocol, and the native transport has been the default for 5 or so years…which would explain why thrift shows up in the F22 page, but is not really supported. I should probably turn it off.

Interlude: nodetool status

OK…what else can I do to test my cluster? Nodetool?

$ nodetool status
WARN  21:37:34 Only 51050 MB free across all data volumes. Consider adding more capacity to your cluster or removing obsolete snapshots
Datacenter: datacenter1
|/ State=Normal/Leaving/Joining/Moving
--  Address    Load       Tokens       Owns (effective)  Host ID                               Rack
UN  186.68 KB  256          100.0%            6cf084ed-a30a-4b80-9efb-acbcd22362c2  rack1

Better install repository

OK….wipe the system, try with a different repo. Before this, I wiped out all of my old config files, and will need to remake any changes that I noted above.

sudo yum install http://apache.mirror.digitalpacific.com.au/cassandra/redhat/311x/cassandra-3.11.8-1.noarch.rpm http://apache.mirror.digitalpacific.com.au/cassandra/redhat/311x/cassandra-tools-3.11.8-1.noarch.rpm

Still no systemd scripts. Maybe in the 4 Beta; I'll check that later. Make the same config changes to jvm.options. Note that the thread priority option has moved there, too. Also, it does not want to run as root any more…progress. Do this so it can write logs:

sudo chown -R ayoung:ayoung /var/log/cassandra/

Insufficient permissions on directory /var/lib/cassandra/data

Get that one too.

sudo chown -R ayoung:ayoung /var/lib/cassandra/

telnet localhost 9160

Connects….I thought that would be turned off…interesting

$ cqlsh
Connection error: ('Unable to connect to any servers', {'': ConnectionShutdown('Connection to was closed',)})
$ strace cqlsh 2>&1 | grep -i connect\(
connect(5, {sa_family=AF_INET, sin_port=htons(9160), sin_addr=inet_addr("")}, 16) = -1 EINPROGRESS (Operation now in progress)

This despite the fact that the docs say

Connects to by default.

Possibly picking up from config files.

Anticlimax: Works in a different terminal

I just opened another window, typed cqlsh, and it worked…go figure. Maybe some phantom env var from a previous incantation.


Summary

  • For stable (not beta) use, install the 3.11 version.
  • Running as a non-root user is now enforced.
  • Change ownership of the /var/log/cassandra and /var/lib/cassandra directories so your user can read and write.
  • Remove the GC options from jvm.options.
  • Set the thread priority policy to 1 (or 0).
  • Make sure that you have a clean environment when running cqlsh.

It would be nice to have a better understanding of what went wrong running cqlsh. Problems that go away by themselves tend to return by themselves.

Web of Trust, Part 2: Tutorial

Posted by Fedora Magazine on October 19, 2020 08:00 AM

The previous article looked at how the Web of Trust works in concept, and how the Web of Trust is implemented at Fedora. In this article, you’ll learn how to do it yourself. The power of this system lies in everybody being able to validate the actions of others—if you know how to validate somebody’s work, you’re contributing to the strength of our shared security.

Choosing a project

Remmina is a remote desktop client written in GTK+. It aims to be useful for system administrators and travelers who need to work with lots of remote computers in front of either large monitors or tiny netbooks. In the current age, where many people must work remotely or at least manage remote servers, the security of a program like Remmina is critical. Even if you do not use it yourself, you can contribute to the Web of Trust by checking it for others.

The question is: how do you know that a given version of Remmina is good, and that the original developer—or distribution server—has not been compromised?

For this tutorial, you’ll use Flatpak and the Flathub repository. Flatpak is intentionally well-suited for making verifiable rebuilds, which is one of the tenets of the Web of Trust. It’s easier to work with since it doesn’t require users to download independent development packages. Flatpak also uses techniques to prevent in‑flight tampering, using hashes to validate its read‑only state. As far as the Web of Trust is concerned, Flatpak is the future.

For this guide, you use Remmina, but this guide generally applies to every application you use. It’s also not exclusive to Flatpak, and the general steps also apply to Fedora’s repositories. In fact, if you’re currently reading this article on Debian or Arch, you can still follow the instructions. If you want to follow along using traditional RPM repositories, make sure to check out this article.

Installing and checking

To install Remmina, use the Software Center or run the following from a terminal:

flatpak install flathub org.remmina.Remmina -y

After installation, you’ll find the files in:


Open a terminal in that directory and list its contents using ls -la:

total 44
drwxr-xr-x.  2 root root  4096 Jan  1  1970 bin
drwxr-xr-x.  3 root root  4096 Jan  1  1970 etc
drwxr-xr-x.  8 root root  4096 Jan  1  1970 lib
drwxr-xr-x.  2 root root  4096 Jan  1  1970 libexec
-rw-r--r--.  2 root root 18644 Aug 25 14:37 manifest.json
drwxr-xr-x.  2 root root  4096 Jan  1  1970 sbin
drwxr-xr-x. 15 root root  4096 Jan  1  1970 share

Getting the hashes

In the bin directory you will find the main binaries of the application, and in lib you’ll find all the dependencies that Remmina uses. Now calculate the hashes of the files in ./bin/:

sha256sum ./bin/*

This will give you a list of numbers: checksums. Copy them to a temporary file; this is the current version of Remmina that Flathub is distributing. These checksums have a special property: only an exact copy of Remmina can produce the same numbers. Any change in the code, no matter how minor, will produce different numbers.
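To see why only an exact copy produces the same numbers, here is a quick illustrative sketch (not part of the tutorial) hashing two byte strings that differ in a single character:

```python
import hashlib

# Illustrative only: two inputs differing by one byte produce
# completely unrelated SHA-256 checksums.
original = b"remmina build, version 1"
tampered = b"remmina build, version 2"

h1 = hashlib.sha256(original).hexdigest()
h2 = hashlib.sha256(tampered).hexdigest()

print(h1)
print(h2)
print(h1 == h2)  # False: any change, however minor, changes the checksum
```

This is the same property sha256sum relies on when you compare your rebuild against Flathub’s binaries.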

Like Fedora’s Koji and Bodhi build and update services, Flathub has all its build servers in plain view. In the case of Flathub, look at Buildbot to see who is responsible for the official binaries of a package. Here you will find all of the logs, including all the failed builds and their paper trail.

<figure class="wp-block-image size-large">Illustration image, which shows the process-graph of Buildbot on Remmina.</figure>

Getting the source

The main Flathub project is hosted on GitHub, where the exact compile instructions (“manifest” in Flatpak terms) are visible for all to see. Open a new terminal in your Home folder. Clone the instructions, and possible submodules, using one command:

git clone --recurse-submodules https://github.com/flathub/org.remmina.Remmina

Developer tools

Start off by installing the Flatpak Builder:

sudo dnf install flatpak-builder

After that, you’ll need the right SDK to rebuild Remmina. In the manifest, you’ll find which SDK is required:

    "runtime": "org.gnome.Platform",
    "runtime-version": "3.38",
    "sdk": "org.gnome.Sdk",
    "command": "remmina",

This indicates that you need the GNOME SDK, which you can install with:

flatpak install org.gnome.Sdk//3.38

This provides the latest versions of the Free Desktop and GNOME SDKs. There are additional SDKs for other options, but those are beyond the scope of this tutorial.

Generating your own hashes

Now that everything is set up, compile your version of Remmina by running:

flatpak-builder build-dir org.remmina.Remmina.json --force-clean

After this, your terminal will print a lot of text, your fans will start spinning, and you’re compiling Remmina. If things do not go so smoothly, refer to the Flatpak Documentation; troubleshooting is beyond the scope of this tutorial.

Once complete, you should have the directory ./build-dir/files/, which should contain the same layout as above. Now the moment of truth: open a terminal in that directory and generate the hashes for the built project:

sha256sum ./bin/*
<figure class="aligncenter size-large is-resized">Illustrative image, showing the output of sha256sum. To discourage copy-pasting old hashes, they are not provided as in-text.</figure>

You should get exactly the same numbers. This proves that the version on Flathub is indeed the version that the Remmina developers and maintainers intended for you to run. This is great, because this shows that Flathub has not been compromised. The web of trust is strong, and you just made it a bit better.

Going deeper

But what about the ./lib/ directory? And what version of Remmina did you actually compile? This is where the Web of Trust starts to branch. First, you can also double-check the hashes of the ./lib/ directory. Repeat the sha256sum command using a different directory.
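When comparing many files, doing it by eye gets tedious. Here is a small helper sketch of my own (the digests and paths are made up) that parses two sha256sum outputs and lists the paths whose checksums differ:

```python
def parse(output):
    # Each sha256sum line looks like: "<hex digest>  <path>"
    return {path: digest for digest, path in
            (line.split(maxsplit=1) for line in output.strip().splitlines())}

def mismatches(a, b):
    """Return the sorted paths present in both outputs whose digests differ."""
    left, right = parse(a), parse(b)
    return sorted(p for p in left.keys() & right.keys() if left[p] != right[p])

# Made-up sample data standing in for Flathub's files and your rebuild:
flathub = "aa11  ./bin/remmina\nbb22  ./bin/remmina-file-wrapper\n"
rebuild = "aa11  ./bin/remmina\ncc33  ./bin/remmina-file-wrapper\n"
print(mismatches(flathub, rebuild))  # ['./bin/remmina-file-wrapper']
```

In practice you would feed it the saved output of the two sha256sum runs.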

But what version of Remmina did you compile? Well, that’s in the Manifest. In the text file you’ll find (usually at the bottom) the git repository and branch that you just used. At the time of this writing, that is:

    "type": "git",
    "url": "https://gitlab.com/Remmina/Remmina.git",
    "tag": "v1.4.8",
    "commit": "7ebc497062de66881b71bbe7f54dabfda0129ac2"

Here, you can decide to look at the Remmina code itself:

git clone --recurse-submodules https://gitlab.com/Remmina/Remmina.git 
cd ./Remmina 
git checkout tags/v1.4.8

The last two commands are important, since they ensure that you are looking at the right version of Remmina. Make sure you check out the tag referenced in the Manifest file. Now you can see everything that you just built.

What if…?

The question on some minds is: what if the hashes don’t match? Quoting a famous novel: “Don’t Panic.” There are multiple legitimate reasons as to why the hashes do not match.

It might be that you are not looking at the same version. If you followed this guide to a T, it should give matching results, but minor errors will cause vastly different results. Repeat the process, and ask for help if you’re unsure if you’re making errors. Perhaps Remmina is in the process of updating.

But if that still doesn’t justify the mismatch in hashes, go to the maintainers of Remmina on Flathub and open an issue. Assume good intentions, but you might be onto something that isn’t totally right.

The most obvious upstream issue is that Remmina does not properly support reproducible builds yet. The code of Remmina needs to be written in such a way that repeating the same build twice gives the same result. For developers, there is an entire guide on how to do that. If this is the case, there should be an issue on the upstream bug tracker; if it is not there, make sure that you create one by explaining your steps and their impact.

If all else fails, and you’ve informed upstream about the discrepancies and they don’t know what is happening either, then it’s time to send an email to the administrators of Flathub and the developer in question.


At this point, you’ve gone through the entire process of validating a single piece of a bigger picture. Here, you can branch off in different directions:

  • Try another Flatpak application you like or use regularly
  • Try the RPM version of Remmina
  • Do a deep dive into the C code of Remmina
  • Relax for a day, knowing that the Web of Trust is a collective effort

In the grand scheme of things, we can all carry a small part of responsibility in the Web of Trust. By taking free/libre open source software (FLOSS) concepts and applying them in the real world, you can protect yourself and others. Last but not least, by understanding how the Web of Trust works you can see how FLOSS software provides unique protections.

Update hell due to not updating for a long time

Posted by Kushal Das on October 19, 2020 06:29 AM

SecureDrop right now runs on Ubuntu Xenial. We are working on moving to Ubuntu Focal. Here is the EPIC on the issue tracker.

While I was creating the Docker development environment on Focal, I noticed our tests were failing with the following message:

Traceback (most recent call last):                                                                                            
  File "/opt/venvs/securedrop-app-code/bin/pytest", line 5, in <module>              
    from pytest import console_main
  File "/opt/venvs/securedrop-app-code/lib/python3.8/site-packages/pytest/__init__.py", line 5, in <module>
    from _pytest.assertion import register_assert_rewrite
  File "/opt/venvs/securedrop-app-code/lib/python3.8/site-packages/_pytest/assertion/__init__.py", line 8, in <module>
    from _pytest.assertion import rewrite
  File "/opt/venvs/securedrop-app-code/lib/python3.8/site-packages/_pytest/assertion/rewrite.py", line 31, in <module>
    from _pytest.assertion import util
  File "/opt/venvs/securedrop-app-code/lib/python3.8/site-packages/_pytest/assertion/util.py", line 14, in <module>
    import _pytest._code
  File "/opt/venvs/securedrop-app-code/lib/python3.8/site-packages/_pytest/_code/__init__.py", line 2, in <module>
    from .code import Code
  File "/opt/venvs/securedrop-app-code/lib/python3.8/site-packages/_pytest/_code/code.py", line 29, in <module>
    import pluggy
  File "/opt/venvs/securedrop-app-code/lib/python3.8/site-packages/pluggy/__init__.py", line 16, in <module>
    from .manager import PluginManager, PluginValidationError
  File "/opt/venvs/securedrop-app-code/lib/python3.8/site-packages/pluggy/manager.py", line 6, in <module>
    import importlib_metadata
  File "/opt/venvs/securedrop-app-code/lib/python3.8/site-packages/importlib_metadata/__init__.py", line 471, in <module>
    __version__ = version(__name__)
  File "/opt/venvs/securedrop-app-code/lib/python3.8/site-packages/importlib_metadata/__init__.py", line 438, in version
    return distribution(package).version
  File "/opt/venvs/securedrop-app-code/lib/python3.8/site-packages/importlib_metadata/__init__.py", line 411, in distribution
    return Distribution.from_name(package)
  File "/opt/venvs/securedrop-app-code/lib/python3.8/site-packages/importlib_metadata/__init__.py", line 179, in from_name
    dists = resolver(name)
  File "<frozen importlib._bootstrap_external>", line 1382, in find_distributions
  File "/usr/lib/python3.8/importlib/metadata.py", line 466, in find_distributions
    found = cls._search_paths(context.name, context.path)
AttributeError: 'str' object has no attribute 'name'
make: *** [Makefile:238: test-focal] Error 1

It turned out that the pluggy dependency was too old. We update all application dependencies whenever there is a security update, but that is not the case with the development or testing requirements. These requirements only get installed on the developers' systems or in the CI. Then I figured out that we were using a version of pytest that is 3 years old. That is why the code refused to run on Python 3.8 on Focal.

The update hell

Now, to update pluggy, I also had to update pytest and pytest-xdist, and that solved the initial issue. But this broke testinfra, which we use in various molecule scenarios, say to test a staging or production server configuration or to test the Debian package builds. As I updated testinfra, molecule also required an update, which broke due to the old version of molecule in our pinned dependencies. To fix that, I had to update the molecule.yml and create.yml files for the different scenarios and get molecule-vagrant 0.3. After I could run the molecule scenarios again, I noticed that our old way of injecting variables into the pytest namespace via the pytest_namespace function no longer works; that function was dropped in the meantime. So I had to fix that as the next step. This whole work is going on in a draft PR, and meanwhile some new changes were merged along with a new scenario. This means I will be spending more time rebasing properly without breaking these scenarios. Each of them takes time to test, which frustrates me while fixing them one by one.

Lesson learned for me

We should look at all of our dependencies regularly and keep them updated. Otherwise, if we get into a similar situation again, someone else will have to cry in a similar fashion :) That said, this is difficult to keep doing in a small team.

Episode 220 – Securing network time and IoT

Posted by Josh Bressers on October 19, 2020 12:01 AM

Josh and Kurt talk about Network Time Security (NTS): how it works and what it means for the world (probably not very much). We also talk about Singapore’s Cybersecurity Labelling Scheme (CLS). It probably won’t do a lot in the short term, but we hope it’s a beacon of hope for the future.

<audio class="wp-audio-shortcode" controls="controls" id="audio-2014-1" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_220_Securing_network_time_and_IoT.mp3?_=1" type="audio/mpeg">https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_220_Securing_network_time_and_IoT.mp3</audio>

Show Notes

Secure Azure blobs pre-signing in Elixir

Posted by Josef Strzibny on October 19, 2020 12:00 AM

If you use Azure block storage, chances are you want to take advantage of a shared access signature (SAS), a feature usually known as pre-signing URLs. Shared access signatures allow you to share access to your blobs with a given time limit.

The Elixir project I am on is hosted on Azure, so it’s to nobody’s surprise that we are using the Azure Blob Storage for our uploads. We started with ExAzure, a wrapper for Erlang’s :erlazure, to do the uploads from Elixir.

However, when it comes to pre-signing, we are on our own.

I am including an example of a minimal module that can pre-sign URLs, but make sure you go through and understand the documentation on SAS and official examples.

Below is a MyApp.Storage.Blob module that can pre-sign given paths using the shared access signature.

As a configuration you will need to provide a hostname, account_name and access_key:

defmodule MyApp.Storage.Blob do
  @hostname Application.get_env(:digi, MyApp.Storage)[:hostname]
  @account_name Application.get_env(:digi, MyApp.Storage)[:account_name]
  @access_key Application.get_env(:digi, MyApp.Storage)[:access_key]

The signed_uri/2 function is the main function of the module. It accepts the path to sign and options.

It gives read-only access for a fixed 30 minutes by default, but options let you adjust the permissions and the exact start and expiry datetimes:

  def signed_uri(path, options \\ %{}) do
    now = DateTime.truncate(DateTime.utc_now(), :second)
    expiry = Timex.shift(now, minutes: +30)

    default_options = %{
      path: path,
      permissions: "r",
      start: DateTime.to_iso8601(now),
      expiry: DateTime.to_iso8601(expiry),
      stg_version: "2017-11-09",
      protocol: ""
    }

    opts = Map.merge(default_options, options)
    signable_string = signable_string_for_blob(opts)
    signature = sign(signable_string)

    query_hash = %{
      "sp" => opts[:permissions],
      "sv" => opts[:stg_version],
      "sr" => "b",
      "st" => opts[:start],
      "se" => opts[:expiry],
      "sig" => signature
    }

    "https://#{@hostname}/#{path}?" <> URI.encode_query(query_hash)
  end

As you can see, the main purpose is to build an actual URI.

We also need to prepare a signable string using signable_string_for_blob/1 and generate the HMAC signature with the sign/1 function:

  defp sign(body) do
    {:ok, key} = @access_key |> Base.decode64()

    :crypto.hmac(:sha256, key, body)
    |> Base.encode64()
  end

  defp signable_string_for_blob(options) do
    signable_opts = [

    Enum.join(signable_opts, "\n")

And that’s pretty much everything to get you started.
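To make the signing step easy to experiment with outside the project, here is the same operation (HMAC-SHA256 over the signable string, Base64-encoded) sketched in Python; the key and signable string below are made up for illustration only:

```python
import base64
import hashlib
import hmac

# Made-up values for illustration; Azure hands you a Base64-encoded
# account access key, and the signable string is built per the SAS docs.
access_key = base64.b64encode(b"not-a-real-azure-account-key").decode()
signable_string = "r\n2020-10-19T00:00:00Z\n2020-10-19T00:30:00Z"

# Decode the key, sign the string, Base64-encode the raw digest:
key = base64.b64decode(access_key)
digest = hmac.new(key, signable_string.encode(), hashlib.sha256).digest()
signature = base64.b64encode(digest).decode()

print(signature)  # this value goes into the "sig" query parameter
```

The Elixir sign/1 above does exactly this with :crypto.hmac and Base.encode64.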

The usage of the signed_uri/2 function is simple (paths depend on your storage):

  def get_tmp_url(id) do

SEDC Academy For Software Testing is running Kiwi TCMS

Posted by Kiwi TCMS on October 18, 2020 02:25 PM

Hello testers, recently we had a chat with Gjore Zaharchev, a QA manager at Seavus and Testing Coach at SEDC Software Testing Academy in Skopje. Here is their story and how they use Kiwi TCMS!

Seavus Educational and Development Center is a private company as part of the Seavus Group and a specialized training center for education of staff in the fields of programming, design, computer networks and software testing. Around 90 students pass through their software testing academy every year with 60 students enrolled thus far. The training program is 6 months and covers many basic IT skills, manual testing, ISTQB fundamentals and automation testing. SEDC is located in Skopje, North Macedonia.

Hands-On Lab Activities

The study program includes multiple individual and team projects, intermediate exams and a final project. These are intended to exercise the most commonly used test design techniques and practice writing up the test scenarios. All scenarios are written directly in Kiwi TCMS. We've even seen test plans and test cases created during Christmas and the New Year holidays last year!

“The software under test is usually the programs developed by students from our Academy for Programming,” says Gjore. “Later in the program we use real websites in order to show some bugs in the wild,” he continues. On occasion students have found interesting problems with the websites of Booking.com and WizzAir. They've also managed to find a critical issue on one of our local systems. These are the trials and tribulations of teaching and testing in the wild.

The Kiwi TCMS team still remembers one of Alex's training sessions where we used the website of an actual cinema. Unfortunately they went out of business and shut down the victimized website right in the middle of the session. ;-)

Why did you decide to use Kiwi TCMS

When searching for a TCMS platform for the academy, one of the decisive factors was cost. Being open source, Kiwi TCMS has the side benefit of zero initial usage cost, which was very important for us. Beyond that, Kiwi TCMS is very easy to install and set up using Docker, very easy to on-board new users onto, and generally well received by everyone.

This is a huge benefit for students because they can experiment with Kiwi TCMS and immediately see how things work when executing testing workflows. For example, they can see what a regression test run looks like compared to a test run for a critical security fix, or they can simulate working in groups to cover the execution of a larger test plan.

What do your students say

Overall they like the workflow and can easily navigate within the user interface. They feel very positive because there is no complexity in the system and it is very intuitive. One of the areas which often receives questions is the ability to record test automation results!

Answer: Kiwi TCMS can fetch test names and results directly via plugins for several test automation frameworks, while others are on our backlog - TestNG, Jenkins, C#/NUnit! Anyone interested is welcome to subscribe to each GitHub issue and follow the progress. Some issues are also part of our open source bounty program, so we urge students to take a look and contribute!

Anything you want to ask our team

At SEDC we'd like to know what are your plans for Kiwi TCMS in the future?

Answer: Our plans, like our software, are transparent; check out the posts tagged roadmap! For 2020 this means refactoring the last remaining legacy bits, continued work on our Telemetry feature, and more work towards integration with various bug trackers and test automation frameworks. An extension of that is tighter integration with the GitHub platform!

Help us do more

If you like what we're doing and how Kiwi TCMS supports various communities please help us!

Thanks for reading and happy testing!

Fedora 32 : About positive and negative lookahead with Bash commands.

Posted by mythcat on October 18, 2020 09:43 AM
Today I will talk about something more complex in Linux commands: positive and negative lookahead.
This technique is available in several programming languages, including tools you can use from Bash.
The lookahead process is part of regular expressions.
A lookahead looks ahead in the string and checks whether the given pattern matches, but then disregards it and moves on.
It is very useful when we want to go through the strings.
The lookahead process can be both positive and negative depending on the purpose.
Negative lookahead is indispensable if you want to match something not followed by something else, and looks like this: q(?!s).
The string is analyzed, and q matches only if it is not followed by s.
The positive lookahead works the same way, only now q matches only if it is followed by s.
The positive lookahead looks like this: q(?=s).
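In a PCRE-style engine, positive lookahead is written (?=pattern) and negative lookahead (?!pattern). A quick Python sketch of both flavors (my own example strings, not from this post):

```python
import re

sample = "qs qa qb qs"

# Positive lookahead: a "q" that IS followed by "s" (the "s" is not consumed).
print(re.findall(r"q(?=s)", sample))  # ['q', 'q']

# Negative lookahead: a "q" that is NOT followed by "s".
print(re.findall(r"q(?!s)", sample))  # ['q', 'q']
```

The same patterns work with grep -P, since it uses PCRE as well.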
Let's look at a simple example of detecting the PAE option for the processor.
We can use this command, but it will print a lot of information ...
[root@desk mythcat]# cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 58
model name : Intel(R) Celeron(R) CPU G1620 @ 2.70GHz
stepping : 9
In some cases, the resulting information can be extracted using a pipe and grep, but it will be increasingly fragmented.
I will use the same command cpuinfo and we will look for the pae information in the flags.
All CPU flags can be found here.
Let's use a lookahead to find the pae flag.
[root@desk mythcat]# cat /proc/cpuinfo | grep -oP '(?=pae)...'
This result gives me additional information, namely that there are two cores.
Do you have a question?

Fedora 32 : Visual Code and C# on Fedora distro.

Posted by mythcat on October 17, 2020 08:23 PM
Today I will show you how to use Visual Code with C#.
sudo rpm --import https://packages.microsoft.com/keys/microsoft.asc
sudo sh -c 'echo -e "[code]\nname=Visual Studio Code\nbaseurl=https://packages.microsoft.com/yumrepos/vscode\nenabled=1\ngpgcheck=1\ngpgkey=https://packages.microsoft.com/keys/microsoft.asc" > /etc/yum.repos.d/vscode.repo'
Then use dnf to check and install this editor.
#dnf check-update
#dnf install code
Use the Extensions button or the Ctrl+Shift+X keys to open the extensions area on the left side, and install the C# extension from Microsoft by pressing the Install button, see:

Create a folder named CSharpProjects and, using the Linux terminal, execute the following command:
[mythcat@desk CSharpProjects]$ dotnet new mvc -au None -o aspnetapp
The template "ASP.NET Core Web App (Model-View-Controller)" was created successfully.
This template contains technologies from parties other than Microsoft,
see https://aka.ms/aspnetcore/3.1-third-party-notices for details.

Processing post-creation actions...
Running 'dotnet restore' on aspnetapp/aspnetapp.csproj...
Restore completed in 112.76 ms for /home/mythcat/CSharpProjects/aspnetapp/aspnetapp.csproj.

Restore succeeded.

[mythcat@desk CSharpProjects]$ cd aspnetapp/
[mythcat@desk aspnetapp]$ code .
This command will open Visual Code. At this point, the aspnetapp folder contains an ASP.NET project open in Visual Code. You can run this project with the command:
[mythcat@desk aspnetapp]$ dotnet run
warn: Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[35]
No XML encryptor configured. Key {4c284989-9a5d-4ea7-89e2-a383828fd7ab} may be persisted
to storage in unencrypted form.
info: Microsoft.Hosting.Lifetime[0]
Now listening on: https://localhost:5001
info: Microsoft.Hosting.Lifetime[0]
Now listening on: http://localhost:5000
info: Microsoft.Hosting.Lifetime[0]
Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
Hosting environment: Development
info: Microsoft.Hosting.Lifetime[0]
Content root path: /home/mythcat/CSharpProjects/aspnetapp
You can open the https://localhost:5001/ and see the default Welcome page from ASP.NET.

Fedora 32 : First example with C# on Fedora distro.

Posted by mythcat on October 17, 2020 07:45 PM
Let's enable the COPR repository for dotnet:
[mythcat@desk ~]$ sudo dnf copr enable @dotnet-sig/dotnet
[sudo] password for mythcat:
Enabling a Copr repository. Please note that this repository is not part
of the main distribution, and quality may vary.

The Fedora Project does not exercise any power over the contents of
this repository beyond the rules outlined in the Copr FAQ at
and packages are not held to any quality or security level.

Please do not file bug reports about these packages in Fedora
Bugzilla. In case of problems, contact the owner of this repository.

Do you really want to enable copr.fedorainfracloud.org/@dotnet-sig/dotnet? [y/N]: y
Repository successfully enabled.
Install the .NET Core package:
[mythcat@desk ~]$ sudo dnf install dotnet
Copr repo for dotnet owned by @dotnet-sig 42 kB/s | 59 kB 00:01
Dependencies resolved.
Package Arch Version Repository Size
dotnet x86_64 3.1.106-1.fc32 updates 11 k
Installing dependencies:
aspnetcore-runtime-3.1 x86_64 3.1.6-1.fc32 updates 6.2 M
aspnetcore-targeting-pack-3.1 x86_64 3.1.6-1.fc32 updates 945 k
dotnet-apphost-pack-3.1 x86_64 3.1.6-1.fc32 updates 70 k
dotnet-host x86_64 3.1.6-1.fc32 updates 104 k
dotnet-hostfxr-3.1 x86_64 3.1.6-1.fc32 updates 164 k
dotnet-runtime-3.1 x86_64 3.1.6-1.fc32 updates 27 M
dotnet-sdk-3.1 x86_64 3.1.106-1.fc32 updates 41 M
dotnet-targeting-pack-3.1 x86_64 3.1.6-1.fc32 updates 1.8 M
dotnet-templates-3.1 x86_64 3.1.106-1.fc32 updates 1.8 M
netstandard-targeting-pack-2.1 x86_64 3.1.106-1.fc32 updates 1.3 M

Transaction Summary
Install 11 Packages

Total download size: 79 M
Installed size: 298 M
Is this ok [y/N]:
Use this tutorial to install Visual Studio Code. Press the Ctrl+P keys and install the C# extension by OmniSharp:
ext install ms-dotnettools.csharp
The last step is to create a HelloWorld application:
[mythcat@desk ~]$ dotnet new console -o HelloWorld

Welcome to .NET Core 3.1!
SDK Version: 3.1.106

Explore documentation: https://aka.ms/dotnet-docs
Report issues and find source on GitHub: https://github.com/dotnet/core
Find out what's new: https://aka.ms/dotnet-whats-new
Learn about the installed HTTPS developer cert: https://aka.ms/aspnet-core-https
Use 'dotnet --help' to see available commands or visit: https://aka.ms/dotnet-cli-docs
Write your first app: https://aka.ms/first-net-core-app
Getting ready...
The template "Console Application" was created successfully.

Processing post-creation actions...
Running 'dotnet restore' on HelloWorld/HelloWorld.csproj...
Restore completed in 119.48 ms for /home/mythcat/HelloWorld/HelloWorld.csproj.

Restore succeeded.
You can run it with the dotnet run command:
[mythcat@desk ~]$ cd HelloWorld/
[mythcat@desk HelloWorld]$ ls
HelloWorld.csproj obj Program.cs
[mythcat@desk HelloWorld]$ dotnet run
Hello World!

F32-20201016 Updated isos Released

Posted by Ben Williams on October 16, 2020 05:11 PM

The Fedora Respins SIG is pleased to announce the latest release of Updated F32-20201016-Live ISOs, carrying the 5.8.15-201 kernel.

This set of updated ISOs will save a considerable amount of updates after a new install. (New installs of Workstation have about 1GB+ of updates.)

A huge thank you goes out to IRC nicks dowdle, ledeni and Southern-Gentleman for testing these ISOs.

And as always our isos can be found at http://tinyurl.com/Live-respins2

systemd-resolved: introduction to split DNS

Posted by Fedora Magazine on October 16, 2020 08:00 AM

Fedora 33 switches the default DNS resolver to systemd-resolved. In simple terms, this means that systemd-resolved will run as a daemon. All programs wanting to translate domain names to network addresses will talk to it. This replaces the current default lookup mechanism where each program individually talks to remote servers and there is no shared cache.

If necessary, systemd-resolved will contact remote DNS servers. systemd-resolved is a “stub resolver”—it doesn’t resolve all names itself (by starting at the root of the DNS hierarchy and going down label by label), but forwards the queries to a remote server.

A single daemon handling name lookups provides significant benefits. The daemon caches answers, which speeds answers for frequently used names. The daemon remembers which servers are non-responsive, while previously each program would have to figure this out on its own after a timeout. Individual programs only talk to the daemon over a local transport and are more isolated from the network. The daemon supports fancy rules which specify which name servers should be used for which domain names—in fact, the rest of this article is about those rules.

Split DNS

Consider the scenario of a machine that is connected to two semi-trusted networks (wifi and ethernet), and also has a VPN connection to your employer. Each of those three connections has its own network interface in the kernel. And there are multiple name servers: one from a DHCP lease from the wifi hotspot, two specified by the VPN and controlled by your employer, plus some additional manually-configured name servers. Routing is the process of deciding which servers to ask for a given domain name. Do not confuse this with the process of deciding where to send network packets, which is also called routing.

The network interface is king in systemd-resolved. systemd-resolved first picks one or more interfaces which are appropriate for a given name, and then queries one of the name servers attached to that interface. This is known as “split DNS”.

There are two flavors of domains attached to a network interface: routing domains and search domains. They both specify that the given domain and any subdomains are appropriate for that interface. Search domains have the additional function that single-label names are suffixed with that search domain before being resolved. For example, a lookup for “server” is treated as a lookup for “server.example.com” if the search domain is “example.com.” In systemd-resolved config files, routing domains are prefixed with the tilde (~) character.

Specific example

Now consider a specific example: your VPN interface tun0 has a search domain private.company.com and a routing domain ~company.com. If you ask for mail.private.company.com, it is matched by both domains, so this name would be routed to tun0.

A request for www.company.com is matched by the second domain and would also go to tun0. If you ask for www, (in other words, if you specify a single-label name without any dots), the difference between routing and search domains comes into play. systemd-resolved attempts to combine the single-label name with the search domain and tries to resolve www.private.company.com on tun0.

If you have multiple interfaces with search domains, single-label names are suffixed with all search domains and resolved in parallel. For multi-label names, no suffixing is done; search and routing domains are used to route the name to the appropriate interface. The longest match wins. When there are multiple matches of the same length on different interfaces, they are resolved in parallel.

A special case is when an interface has a routing domain ~. (a tilde for a routing domain and a dot for the root DNS label). Such an interface always matches any names, but with the shortest possible length. Any interface with a matching search or routing domain has higher priority, but the interface with ~. is used for all other names. Finally, if no routing or search domains matched, the name is routed to all interfaces that have at least one name server attached.
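The routing rules above can be modeled in a few lines. This is a toy sketch of my own, not systemd’s actual code; the interface and domain names come from the earlier example:

```python
def route(name, interfaces):
    """Toy model of systemd-resolved's domain routing (not its real code).

    interfaces maps an interface name to its list of domains; a leading
    "~" marks a routing domain, and "~." matches every name with the
    shortest possible match length.  Returns the set of interfaces the
    query would be routed to: the longest match wins, ties go to all
    tied interfaces, and with no match at all every interface is used
    (simplifying "every interface with a name server attached").
    """
    best_len, best = -1, set()
    for iface, domains in interfaces.items():
        for dom in domains:
            d = dom.lstrip("~")
            if d == ".":
                match_len = 0          # "~." matches anything, length 0
            elif name == d or name.endswith("." + d):
                match_len = len(d)     # domain is a suffix of the name
            else:
                continue
            if match_len > best_len:
                best_len, best = match_len, {iface}
            elif match_len == best_len:
                best.add(iface)
    return best or set(interfaces)

ifaces = {"tun0": ["private.company.com", "~company.com"], "wlp4s0": ["~."]}
print(route("mail.private.company.com", ifaces))  # {'tun0'}
print(route("www.fedoraproject.org", ifaces))     # {'wlp4s0'}
```

Search-domain suffixing of single-label names is left out here; in the real daemon it happens before this routing step.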

Lookup routing in systemd-resolved

Domain routing

This seems fairly complex, partially because of the historic names which are confusing. In actual practice it’s not as complicated as it seems.

To introspect a running system, use the resolvectl domain command. For example:

$ resolvectl domain
Link 4 (wlp4s0): ~.
Link 18 (hub0):
Link 26 (tun0): redhat.com

You can see that www would resolve as www.redhat.com. over tun0. Anything ending with redhat.com resolves over tun0. Everything else would resolve over wlp4s0 (the wireless interface). In particular, a multi-label name like www.foobar would resolve over wlp4s0, and most likely fail because there is no foobar top-level domain (yet).

Server routing

Now that you know which interface or interfaces should be queried, the server or servers to query are easy to determine. Each interface has one or more name servers configured. systemd-resolved will send queries to the first of those. If the server is offline and the request times out or if the server sends a syntactically-invalid answer (which shouldn’t happen with “normal” queries, but often becomes an issue when DNSSEC is enabled), systemd-resolved switches to the next server on the list. It will use that second server as long as it keeps responding. All servers are used in a round-robin rotation.
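The server-selection behavior just described can likewise be sketched as a toy model (my own illustration, not systemd’s implementation):

```python
class ServerRotation:
    """Toy model: stay on the current server while it responds; on a
    timeout or syntactically-invalid answer, advance to the next server,
    wrapping around (round-robin)."""

    def __init__(self, servers):
        self.servers = list(servers)
        self.index = 0

    def current(self):
        return self.servers[self.index]

    def mark_failed(self):
        self.index = (self.index + 1) % len(self.servers)

rotation = ServerRotation(["dns-a", "dns-b", "dns-c"])
print(rotation.current())   # dns-a
rotation.mark_failed()      # dns-a timed out
print(rotation.current())   # dns-b, used for all queries until it fails
```

Note there is no automatic fallback to the first server; the rotation simply continues around the list.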

To introspect a running system, use the resolvectl dns command:

$ resolvectl dns
Link 4 (wlp4s0):
Link 18 (hub0):
Link 26 (tun0):

When combined with the previous listing, you know which servers will be queried: for www.redhat.com, systemd-resolved will ask the name servers attached to tun0, falling back to the next one if the first doesn’t respond; for www.google.com, it will ask the name servers attached to wlp4s0.

Differences from nss-dns

Before going into further detail, you may ask how this differs from the previous default implementation (nss-dns). With nss-dns there is just one global list of up to three name servers and a global list of search domains (specified as nameserver and search in /etc/resolv.conf).

Each name to query is sent to the first name server. If it doesn’t respond, the same query is sent to the second name server, and so on. In contrast, systemd-resolved implements split DNS and remembers which servers are currently considered active.

For single-label names, the query is performed with each of the search domains suffixed. This is the same with systemd-resolved. For multi-label names, a query for the unsuffixed name is performed first, and if that fails, a query for the name suffixed by each of the search domains in turn is performed. systemd-resolved doesn’t do that last step; it only suffixes single-label names.
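
The difference in search-domain handling can be illustrated by generating the candidate query lists each implementation would try (a simplified sketch):

```python
# Candidate query lists for a name under nss-dns vs systemd-resolved
# (simplified illustration of the behaviour described above).

def candidates_nss_dns(name, search):
    if "." not in name:  # single-label: only suffixed queries
        return [f"{name}.{d}" for d in search]
    # multi-label: try the name as-is first, then each suffixed form
    return [name] + [f"{name}.{d}" for d in search]

def candidates_resolved(name, search):
    if "." not in name:
        return [f"{name}.{d}" for d in search]
    return [name]  # never suffixes multi-label names

search = ["redhat.com"]
assert candidates_nss_dns("www.foobar", search) == ["www.foobar", "www.foobar.redhat.com"]
assert candidates_resolved("www.foobar", search) == ["www.foobar"]
assert candidates_resolved("www", search) == ["www.redhat.com"]
```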

A second difference is that with nss-dns, this module is loaded into each process. The process itself communicates with remote servers and implements the full DNS stack internally. With systemd-resolved, the nss-resolve module is loaded into the process, but it only forwards the query to systemd-resolved over a local transport (D-Bus) and doesn’t do any work itself. The systemd-resolved process is heavily sandboxed using systemd service features.

The third difference is that with systemd-resolved all state is dynamic and can be queried and updated using D-Bus calls. This allows very strong integration with other daemons or graphical interfaces.

Configuring systemd-resolved

So far, this article talked about servers and the routing of domains without explaining how to configure them. systemd-resolved has a configuration file (/etc/systemd/resolved.conf) where you specify name servers with DNS= and routing or search domains with Domains= (routing domains with ~, search domains without). This corresponds to the Global: lists in the two listings above.
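
As a sketch, a minimal resolved.conf might look like this (the server address and domains are placeholder values, not recommendations):

```ini
# /etc/systemd/resolved.conf (illustrative values)
[Resolve]
DNS=192.0.2.1
# A domain prefixed with ~ is a routing domain; without the ~ it is
# also used as a search domain for single-label names.
Domains=~example.com example.org
```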

In this article’s examples, both lists are empty. Most of the time configuration is attached to specific interfaces, and “global” configuration is not very useful. Interfaces come and go and it isn’t terribly smart to contact servers on an interface which is down. As soon as you create a VPN connection, you want to use the servers configured for that connection to resolve names, and as soon as the connection goes down, you want to stop.

How then does systemd-resolved acquire the configuration for each interface? This happens dynamically, with the network management service pushing this configuration over D-Bus into systemd-resolved. The default in Fedora is NetworkManager, and it has very good integration with systemd-resolved. Alternatives like systemd’s own systemd-networkd implement similar functionality. The interface is open, though, and other programs can make the appropriate D-Bus calls.

Alternatively, resolvectl can be used for this (it is just a wrapper around the D-Bus API). Finally, resolvconf provides similar functionality in a form compatible with the Debian tool of the same name.

Scenario: Local connection more trusted than VPN

The important thing is that in the common scenario, systemd-resolved follows the configuration specified by other tools, in particular NetworkManager. So to understand how systemd-resolved resolves names, you need to see what NetworkManager tells it to do. Normally NetworkManager will tell systemd-resolved to use the name servers and search domains received in a DHCP lease on some interface. For example, look at the source of the configuration for the two listings shown above:

There are two connections: “Parkinson” wifi and “Brno (BRQ)” VPN. In the first panel DNS:Automatic is enabled, which means that the DNS server received as part of the DHCP lease is passed to systemd-resolved. Additionally, two other servers are listed as alternative name servers. This configuration is useful if you want to resolve the names of other machines in the local network, which the DHCP-provided server knows about. Unfortunately the hotspot DNS server occasionally gets stuck, and the other two servers provide backup when that happens.

The second panel is similar, but doesn’t provide any special DNS configuration. NetworkManager combines routing domains for a given connection from DHCP, SLAAC RDNSS, and VPN, and finally manual configuration, and forwards this to systemd-resolved. This is the source of the search domain redhat.com in the listing above.

There is an important difference between the two interfaces though: in the second panel, “Use this connection only for resources on its network” is checked. This tells NetworkManager to tell systemd-resolved to only use this interface for names under the search domain received as part of the lease (Link 26 (tun0): redhat.com in the first listing above). In the first panel, this checkbox is unchecked, and NetworkManager tells systemd-resolved to use this interface for all other names (Link 4 (wlp4s0): ~.). This effectively means that the wireless connection is more trusted.

Scenario: VPN more trusted than local network

In a different scenario, a VPN would be more trusted than the local network and the domain routing configuration reversed. If a VPN without “Use this connection only for resources on its network” is active, NetworkManager tells systemd-resolved to attach the default routing domain to this interface. After unchecking the checkbox and restarting the VPN connection:

$ resolvectl domain
Link 4 (wlp4s0):
Link 18 (hub0):
Link 28 (tun0): ~. redhat.com
$ resolvectl dns
Link 4 (wlp4s0):
Link 18 (hub0):
Link 28 (tun0):

Now all domain names are routed to the VPN. The network management daemon controls systemd-resolved and the user controls the network management daemon.

Additional systemd-resolved functionality

As mentioned before, systemd-resolved provides a common name lookup mechanism for all programs running on the machine. Right now the effect is limited: shared resolver and cache and split DNS (the lookup routing logic described above). systemd-resolved provides additional resolution mechanisms beyond the traditional unicast DNS. These are the local resolution protocols MulticastDNS and LLMNR, and an additional remote transport DNS-over-TLS.

Fedora 33 does not enable MulticastDNS and DNS-over-TLS in systemd-resolved. MulticastDNS is implemented by nss-mdns4_minimal and Avahi. Future Fedora releases may enable these as the upstream project improves support.

Implementing this all in a single daemon with runtime state allows smart behaviour: DNS-over-TLS may be enabled in opportunistic mode, with automatic fallback to classic DNS if the remote server does not support it. Without a daemon that can hold complex logic and runtime state, this would be much harder. When enabled, those additional features will apply to all programs on the system.

There is more to systemd-resolved: in particular LLMNR and DNSSEC, which only received brief mention here. A future article will explore those subjects.

Release of osbuild-composer 22

Posted by OSBuild Project on October 16, 2020 12:00 AM

We are happy to announce that we released osbuild-composer 22.

Below you can find the official change log, compiled by Ondřej Budai. Everyone is encouraged to upgrade!

  • Support for building Fedora 33 images is now available as a tech preview.

  • The osbuild-composer-cloud binary is gone. The osbuild-composer binary now serves the Composer API along with Weldr and Koji APIs.

  • The testing setup was reworked. All files related to tests are now shipped in the tests subpackage. A script to run the test suite locally is now also available. See HACKING.md for more details.

  • GPG keys in Koji API are no longer marked as required.

  • Osbuild-composer RPM is now buildable on Fedora 33+ and Fedora ELN.

  • Osbuild-composer for Fedora 34 and higher now obsoletes lorax-composer.

Contributions from: Alexander Todorov, Jacob Kozol, Lars Karlitski, Martin Sehnoutka, Ondřej Budai, Tom Gundersen

— Liberec, 2020-10-16

Migrating WordPress blog to Jekyll

Posted by Josef Strzibny on October 16, 2020 12:00 AM

This year I migrated my blog from the famous WordPress blogging platform to Jekyll static site generator. The main reasons were the handling of code snippets, simplicity, and security. I think that WordPress is fine, but my own time with WordPress is certainly up.

Jekyll 101

Jekyll is a blog aware static site generator. It’s one of the oldest and most popular static site generators out there. The three pillars of Jekyll are:

  • It’s simple

    No database, no comments, no updates. After a little setup, Jekyll frees your mind and lets you focus on your content.

  • It’s static

    Bring your Markdown, HTML and CSS. The result is a static site. No backends.

  • It’s blog-aware

    Jekyll features all the bits to make a successful blog such as posts, pages, categories, custom layouts and many other things via its plugins.

Jekyll is not a content management system (CMS), so the comparison with WordPress might be unfair (both ways). However, both are widely used these days for blogging, and Jekyll got a +1 from me this year.


Installation

Jekyll is written in Ruby. This is great for the ability to extend its core functionality with plugins, but it means you need Ruby installed on your system.

So first, we need to install Ruby. On Fedora you can install system Ruby with:

$ dnf install ruby -y

Then we can install Jekyll:

$ gem install bundler jekyll

Bundler, a Ruby gems manager, is optional but can help you manage all the site dependencies in the Gemfile.
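
As a sketch, a minimal Gemfile for a Jekyll site might look like this (the version constraints are illustrative):

```ruby
# Gemfile — illustrative; pin versions to whatever you actually use
source "https://rubygems.org"

gem "jekyll", "~> 4.1"

group :jekyll_plugins do
  gem "jekyll-feed"
  gem "jekyll-seo-tag"
end
```

With this in place, `bundle install` fetches the pinned dependencies and `bundle exec jekyll serve` runs Jekyll against them.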


Once installed, it’s easy to tell Jekyll to generate a new site to have a peek inside:

$ jekyll new my-awesome-site
$ tree my-awesome-site
├── 404.html
├── about.markdown
├── _config.yml
├── Gemfile
├── Gemfile.lock
├── index.markdown
└── _posts
    └── 2020-10-16-welcome-to-jekyll.markdown

Each project has a few key files and directories. Here are the main ones:

  • Gemfile and Gemfile.lock are standard Bundler files to manage Ruby dependencies.
  • _config.yml contains the main Jekyll configuration.
  • _posts directory is a jar for published blog posts.
  • index.markdown as your site homepage.
  • about.markdown as your site example standalone page.
  • 404.html as your site fallback for HTTP 404 Not Found error.

An important thing to keep in mind is that Jekyll requires post files to be named in the following format:

YEAR-MONTH-DAY-title.MARKUP

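As an illustration, here is a small hypothetical helper that derives such a filename from a title and date:

```python
# Hypothetical helper building a Jekyll post filename from a title and
# a date, following the YEAR-MONTH-DAY-title.MARKUP convention.
import datetime
import re

def post_filename(title, date, ext="markdown"):
    # Slugify the title: lowercase, non-alphanumerics become hyphens.
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    return f"{date:%Y-%m-%d}-{slug}.{ext}"

name = post_filename("Welcome to Jekyll!", datetime.date(2020, 10, 16))
print(name)  # 2020-10-16-welcome-to-jekyll.markdown
```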
There are also other directories in a typical Jekyll site that might make their way in. Here are some that you might come across:

  • css directory as a jar for site stylesheets.
  • img directory as a jar for site images.
  • _drafts directory as a jar for unpublished drafts.
  • _uploads directory as a jar for post images (I like to keep design images and post images separate).
  • _layouts directory containing your page, post, and tag layouts.
  • _includes directory containing repeatable elements such as header and footer.


Configuration

The _config.yml YAML file is the source of configuration.

Here are some of its parts.

Basic name, description, author, layout:

name: Notes to self
title: Notes to self
tagline: Software engineering, ...and bootstrapping
description: Josef Strzibny's publication on software engineering...
author: strzibny
permalink: /:title/
layout: post

Markdown used:

markdown: kramdown
kramdown:
  input: GFM
  syntax_highlighter: rouge

Sass setup:

sass:
  sass_dir: _sass
  style: compressed

Default value:

defaults:
  - scope:
      path: ""
    values:
      image: /img/nts.png

Plugin definitions:

plugins:
  - jekyll-sitemap
  - jekyll-paginate
  - jekyll-seo-tag
  - jekyll-archives

And some of the settings for plugins:

jekyll-archives:
  enabled:
    - tags
    - month
  layouts:
    month: month
    tag: tag
  permalinks:
    month: '/:year/:month/'
    tag: '/tag/:name/'
  title_prefix: Archives
paginate: 5
paginate_path: "/page/:num"

I mention some of these settings below. Feel free to start with the default and incrementally add new settings with each added feature.


Liquid

Jekyll uses the Liquid templating language from Shopify to process site templates.

Content is injected by using variables surrounded by two curly braces, as shown in the following snippet of this blog’s _layouts/default.html layout:

  <a title="Notes to self by Josef Strzibny" href="/"><strong>/home/nts</strong></a>

{{ content }}

  <a href="/about/">About</a>

content above is a variable that is provided for us by Jekyll.

Filters can modify the provided values:

{{ "Hello world!" | number_of_words }}

Tags provide control flow and helpers. They are started with {% and ended with %}:

{% link /assets/files/doc.pdf %}

I recommend going through the generated files to get the feel for Liquid and to see official docs on tags and filters.

Front Matter

A regular Liquid template in Jekyll will start with a so-called Front Matter, a special meta YAML block.

Here is the start of one of the posts on this blog:

---
title: Getting Vagrant with libvirt support on your Fedora 20
layout: post
permalink: /getting-vagrant-with-libvirt-support-on-your-fedora-20/
tags:
  - fedora
  - vagrant
---
I was ...

It defines the post title, layout, permalink and tags.

This block must come first in the template and be separated by triple-dashed lines, as you can see above.
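
A toy splitter shows the mechanics (real Jekyll uses a full YAML parser; this naive sketch only handles flat key: value pairs):

```python
# Naive Front Matter splitter: everything between the two "---" lines
# is metadata, the rest is the post body. Illustration only.
def split_front_matter(text):
    if not text.startswith("---\n"):
        return {}, text
    header, _, body = text[4:].partition("\n---\n")
    meta = {}
    for line in header.splitlines():
        # Skip indented lines and list items; a real YAML parser is needed
        # for nested structures such as tags.
        if ":" in line and not line.startswith((" ", "-")):
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return meta, body

doc = """---
title: Hello
layout: post
---
I was ...
"""
meta, body = split_front_matter(doc)
assert meta["layout"] == "post"
assert body.startswith("I was")
```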

Build and production

We can build our new site with jekyll build or use jekyll serve which will build the site and serve it immediately for preview:

$ bundle exec jekyll serve --drafts

By default, Jekyll serves the site on a local address, so just open it with any browser. If we need to preview our drafts as well, we should add the --drafts option.

Once the site is ready for publishing we should build it with JEKYLL_ENV set to production:

$ JEKYLL_ENV=production jekyll build

This environment is useful for building up features we want to only see in production such as a snippet for tracking visitors. In any template we can use the following if statement to separate the production environment:

{% if jekyll.environment == "production" %}
  ...only in production...
{% endif %}


Writing routine

The writing routine is certainly something I had to change. Before, I would log in to my WordPress instance and write my posts entirely from the WordPress admin interface. I could schedule publishing, save drafts, or see the number of clicks per post.

Since the build and management happen locally, I recommend leveraging your preferred Markdown editor. I moved entirely to Sublime Text 3, where I already program and write. Here is a post on the jekyll-sublime plugin that helps me with this.

Some people might still want some kind of admin interface. Jekyll Admin, Netlify CMS, and Prose.io are worth checking out.

If you host your site’s git repository on GitHub, you can also leverage their online editor, which is handy for quick fixes. I also started using GitHub issues for my post ideas and notes.


Hosting

I am done with WordPress hosting. While I can imagine a project that I would start on WordPress, I don’t want to handle the hosting part of it, because I don’t want to deal with PHP hosting anymore. Some hosting providers have a one-click WordPress install, but then leave you on an old version of PHP, some plugins won’t install, and so on. Managed WordPress would be my choice today, but it’s quite expensive.

Hosting is one of the best reasons to migrate off WordPress. By building the blog with Jekyll I don’t have to manage the PHP backend nor do I have to care about running a MySQL database. It’s also a reason I specifically evaluated only static site generators. It’s liberating.

And the cherry on top? There are many completely free static site hosting providers and many of them even integrate directly with Jekyll. They usually handle TLS certificates and offer a CDN as well.

GitHub Pages and Netlify integrate with Jekyll nicely. Render is another option for completely free static site hosting with a CDN.


Assets

One thing to consider for blogs with a lot of images and media would be separate hosting for your assets. Keeping a lot of images inside the main git repository does not scale well. Object storage like S3 or DigitalOcean Spaces could be an option. I am still thinking of a good setup for the future.

Error pages

Error page redirects depend on your hosting.

If you decide to use Netlify like me, see their redirects guide.

WordPress migration

Almost sold on Jekyll? Curious about how much work it takes to migrate a WP site?

There is an official importer that supports WordPress. The importer only converts posts with YAML front matter. It does not import any layouts, styling, or external files (images, CSS, etc.). If your site is asset-heavy, I would not recommend it.

There are however WordPress plugins that can get you started with the migration. One of them is WordPress to Jekyll Exporter.

It should create drafts and posts directories. Assets will be referenced from the wp-content directory that you either leave as it is or optionally migrate the files out of it.

It does not migrate your design or comments though. Personally, I took this as an opportunity to create a new design for my blog from scratch and simply remove comments. I direct people to drop me a note on Twitter instead.

Building blocks

Let’s see how to implement and work with the main bits and pieces on a typical Jekyll programmer blog.

This section covers:

  • Excerpts
  • Tags
  • Code snippets
  • SEO
  • Social images
  • Sitemap
  • Post updates
  • Permalinks
  • Favicon
  • RSS and Atom feeds
  • Search


Excerpts

Each post can define its own excerpt separator with excerpt_separator in the Front Matter:

excerpt_separator: <!--more-->

This can be useful when migrating from systems such as WordPress.

Otherwise, the excerpt will be the first paragraph. If you are building a post listing, you can edit the excerpt using Liquid filters.

For example, you can strip HTML and truncate to a designated length with the following:

{{ post.excerpt | strip_html | truncatewords:75 }}
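
Rough Python equivalents of those two filters make the behaviour concrete (illustrative, not Jekyll's implementation):

```python
# Approximations of Liquid's strip_html and truncatewords filters.
import re

def strip_html(text):
    # Remove anything that looks like an HTML tag.
    return re.sub(r"<[^>]*>", "", text)

def truncatewords(text, count, ellipsis="..."):
    # Liquid keeps the first `count` words and appends an ellipsis.
    words = text.split()
    if len(words) <= count:
        return text
    return " ".join(words[:count]) + ellipsis

excerpt = "<p>Jekyll is a <em>static</em> site generator.</p>"
assert strip_html(excerpt) == "Jekyll is a static site generator."
assert truncatewords("one two three four", 3) == "one two three..."
```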

Tags and archives

I recommend the jekyll-archives plugin, which can handle both archive and tag pages.

Add to Gemfile:

gem "jekyll-archives"

and _config.yml:

plugins:
  - jekyll-archives

And you can start with the similar settings I use on this blog (also in _config.yml):

jekyll-archives:
  enabled:
    - tags
    - month
  layouts:
    month: month
    tag: tag
  permalinks:
    month: '/:year/:month/'
    tag: '/tag/:name/'
  title_prefix: Archives

What do these little settings do?

  • It enables tags and monthly archives.
  • It tells Jekyll to use layouts/tag.html for tag pages and layouts/month.html for archive pages.
  • It sets permalinks and a title prefix.

If you are curious, here is the simple tag layout I use:

---
layout: default
---

<h1><span>{{ page.title }}{{ page.date | date: "%B %Y" }}</span></h1>

<section class="main">
  <ul>
    {% for post in page.posts %}
      <li><a href="{{ post.url }}">{{ post.title }}</a></li>
    {% endfor %}
  </ul>
</section>

And archive template:

---
layout: default
---

<h1><span>{{ page.date | date: "%B %Y" }}</span></h1>

<section class="main">
  <ul>
    {% for post in page.posts %}
      <li><a href="{{ post.url }}">{{ post.title }}</a></li>
    {% endfor %}
  </ul>
</section>

Code snippets

One of the main reasons for leaving WordPress was the handling of code snippets. I never managed to make them work to my satisfaction on WordPress.

Luckily this is actually very straightforward on Jekyll.

Rouge is a pure-ruby syntax highlighter that Jekyll will happily use with a little configuration in _config.yml:

markdown: kramdown
kramdown:
  input: GFM
  syntax_highlighter: rouge

Make sure the syntax_highlighter is set to rouge.

Then you need to pick a color theme. You can start with github or monokai.

Generate the styles you want with rougify:

$ rougify style monokai > /path/to/css/file.css

And include the generated styles in your <head> as:

<link href="/path/to/css/file.css" rel="stylesheet">

Code snippets will be automatically highlighted if the programming language used is specified:

def function():
    pass


SEO

There are a few basic things to set up for decent search engine optimization. Most of it can be handled by jekyll-seo-tag. It takes care of the following (taken from the project README.md):

  • Page title, with site title or description appended
  • Page description
  • Canonical URL
  • Next and previous URLs on paginated pages
  • JSON-LD Site and post metadata for richer indexing
  • Open Graph title, description, site title, and URL (for Facebook, LinkedIn, etc.)
  • Twitter Summary Card metadata

Add as any other plugin in Gemfile:

gem "jekyll-seo-tag", "~> 2.6"

And to _config.yml:

plugins:
  - jekyll-seo-tag

Then add the following just before </head>:

{% seo %}

Finally, adjust as you like. On this blog, I am using the following in the config file:

social:
  name: Josef Strzibny
  links:
    - https://twitter.com/strzibnyj
    - https://www.linkedin.com/in/strzibny
    - https://github.com/strzibny
    - https://gitlab.com/strzibny
twitter:
  username: strzibnyj
  card: summary

I also recommend doing a Google site verification if you haven’t done it previously.

Social images

With jekyll-seo-tag plugin mentioned above we can easily specify social images per post or page in Front Matter with:

image: /path/to/post/image.img

And adjust its properties by providing the whole object if needed:

image:
  path: /img/twitter.png
  height: 100
  width: 100

You can also set a default image as a placeholder with the following snippet in the _config.yml file:

defaults:
  - scope:
      path: ""
    values:
      image: /assets/images/default-card.png

If you want to automate the whole generation of social media images, you can look into the open-source Open Graph Image as a Service.


Sitemap

There is a jekyll-sitemap plugin to handle sitemap generation for you.

Add to Gemfile:

gem "jekyll-sitemap"

And to _config.yml:

url: "https://hostname.com"
plugins:
  - jekyll-sitemap

Read the plugin documentation if you need to exclude some pages.

Post updates

Post updates can be managed by setting the last_modified_at directive in Front Matter:

date: 2016-02-13T20:47:31+08:00
last_modified_at: 2016-02-14T00:00:00+08:00

Then in the template you can include the date within the HTML <time> tag:

<p class="post-meta">
  <time datetime="2020-10-16T00:00:00+00:00" itemprop="datePublished">Oct 16, 2020</time>
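
If a post was updated, last_modified_at can be surfaced alongside the published date; a sketch (the itemprop attribute follows schema.org, and date_to_xmlschema is a built-in Jekyll filter):

```liquid
{% if page.last_modified_at %}
  <time datetime="{{ page.last_modified_at | date_to_xmlschema }}" itemprop="dateModified">
    Updated {{ page.last_modified_at | date: "%b %-d, %Y" }}
  </time>
{% endif %}
```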

Permalinks

You can define custom permalinks with the permalink directive in Front Matter:

permalink: /about/

Here is how to use many of the variables available, such as :categories, :year, :month, and :day.

If you need to tweak the permalinks even further, I have previously written a post Changing URLs in Jekyll.


Favicon

If you need a new favicon, you can generate one with Favic-o-Matic.

RSS and Atom feeds

With jekyll-feed plugin you can have an Atom feed available in no time.

Add to Gemfile:

gem "jekyll-feed", "~> 0.12"

And to the _config file:

plugins:
  - jekyll-feed

The plugin will generate the feed at /feed.xml. The path is configurable.

Then you can include the link to the feed as:

<link rel="alternate" type="application/rss+xml" title="RSS feed" href="http://your-site.com/feed.xml" />

Search

One thing that is not possible with a typical Jekyll site is a database-powered search. Your options are to implement a separate backend service (which would need hosting and management), go with a search engine like Google (you could redirect to a search with the site: scope), or do a purely pre-generated front-end search.

For Fedora Developer, we successfully used the jekyll-lunr-js-search plugin. Unfortunately, it’s unmaintained by now. You can look into Simple-Jekyll-Search as an alternative, but I don’t have personal experience with it.

In my case, I decided not to implement a search function for now. I am not sure it’s needed for a typical blog. Nobody will miss it.


Analytics

With WordPress, I used a server-side analytics plugin which was very easy to set up. It gave me a simple overview of how I was doing. While I got free hosting for my static blog with Netlify, I would have to pay $10 every month for the most basic plan to get access to analytics.

So I started exploring my options, which I summarized in Privacy-oriented alternatives to Google Analytics for 2020. The article mentions a lot of client-side services and open source tools, of which I tried Fathom and Plausible in the last couple of months.

I am staying with Plausible for now, but I also recommend reading about my experience with Netlify Analytics, which I gave two months to compare. Choosing between client-side and server-side analytics is not easy; you might as well run both.


Most people won’t test their static sites the same way they won’t test their WordPress installation. And mostly you don’t need to. But I would actually recommend doing some simple testing.

Personally, I see value in checking that relative links work. You could then extend that to test your external references as well (to catch dead links).

Here is my approach to testing static sites.

Is Jekyll worth it?

I spent a few days on the migration, but I am so relieved and happy with it! It also made me create a new minimal theme, which I keep improving as time goes by. It’s light, fast, and easy to manage. I don’t miss WordPress one bit.

As a Rubyist and someone that used Jekyll before, the choice was obvious. I want to be able to tweak my sites with Ruby. If you don’t like Jekyll in particular, try Hugo.

Running Ironic Standalone on RHEL

Posted by Adam Young on October 15, 2020 09:26 PM

This is only going to work if you have access to the OpenStack code. If you are not an OpenStack customer, you will need an evaluation entitlement; that is beyond the scope of this article.

For me, I had to enable the repo openstack-16-for-rhel-8-x86_64-rpms. With that enabled I was able to install via yum:

yum install openstack-ironic-api mariadb-server python3-openstackclient python3-ironicclient openstack-ironic-conductor

Set up Maria DB

systemctl enable mariadb 
Created symlink /etc/systemd/system/mysql.service → /usr/lib/systemd/system/mariadb.service.
Created symlink /etc/systemd/system/mysqld.service → /usr/lib/systemd/system/mariadb.service.
Created symlink /etc/systemd/system/multi-user.target.wants/mariadb.service → /usr/lib/systemd/system/mariadb.service.
[root@nuzleaf ~]# systemctl start mariadb 
mysql -u root 
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 8
Server version: 10.3.17-MariaDB MariaDB Server

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> create database ironic;
Query OK, 1 row affected (0.001 sec)

MariaDB [(none)]> CREATE USER 'ironic'@'localhost' IDENTIFIED BY 'ironic';
Query OK, 0 rows affected (0.001 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost';
Query OK, 0 rows affected (0.001 sec)

MariaDB [(none)]> \q

Make sure I can log in as the ironic user

mysql ironic -u ironic -p

Back up the config file:

cp /etc/ironic/ironic.conf /etc/ironic/ironic.conf.orig

I’ll use these values:

auth_strategy = noauth
enabled_hardware_types = ipmi
enabled_power_interfaces = ipmitool
rpc_transport = json-rpc

Now let me see if I can sync the database:

INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
INFO  [alembic.runtime.migration] Running upgrade  -> 2581ebaf0cb2, initial migration
INFO  [alembic.runtime.migration] Running upgrade 2581ebaf0cb2 -> 21b331f883ef, Add provision_updated_at

INFO  [alembic.runtime.migration] Running upgrade 28c44432c9c3 -> 2aac7e0872f6, Create deploy_templates and deploy_template_steps tables.
INFO  [alembic.runtime.migration] Running upgrade 2aac7e0872f6 -> 1e15e7122cc9, add extra column to deploy_templates

Seemed to work. Let’s start conductor:

systemctl start openstack-ironic-conductor
# systemctl status openstack-ironic-conductor
● openstack-ironic-conductor.service - OpenStack Ironic Conductor service
   Loaded: loaded (/usr/lib/systemd/system/openstack-ironic-conductor.service; disabled; vendor preset: disabled)
   Active: active (running) since Thu 2020-10-15 17:11:36 EDT; 1s ago
 Main PID: 491085 (ironic-conducto)
    Tasks: 1 (limit: 99656)
   Memory: 40.0M
   CGroup: /system.slice/openstack-ironic-conductor.service
           └─491085 /usr/bin/python3 /usr/bin/ironic-conductor

Oct 15 17:11:36 nuzleaf.home.younglogic.net systemd[1]: Started OpenStack Ironic Conductor service.

So far so good.

The API server is going to be an interesting piece. For a first run, I’ll try the command line version I ran on the upstream. However, my longer term approach will be to run it as a wsgi App behind the Apache server that is already running on this host.

Run the API server in one terminal


and open up a second one to try to hit it with curl.

$ curl
{"name": "OpenStack Ironic API", "description": "Ironic is an OpenStack project which aims to provision baremetal machines.", "versions": [{"id": "v1", "links": [{"href": "", "rel": "self"}], "status": "CURRENT", "version": "1.58", "min_version": "1.1"}], "default_version": {"id": "v1", "links": [{"href": "", "rel": "self"}], "status": "CURRENT", "version": "1.58", "min_version": "1.1"}}
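
The version document can be consumed programmatically; a small illustration of pulling out the supported microversion range (the JSON below is abbreviated from the response above):

```python
import json

# Abbreviated version document, as returned by the Ironic API root above.
response = '''{"name": "OpenStack Ironic API",
  "versions": [{"id": "v1", "status": "CURRENT",
                "version": "1.58", "min_version": "1.1"}]}'''

doc = json.loads(response)
v1 = doc["versions"][0]
print(f'{v1["id"]}: microversions {v1["min_version"]} to {v1["version"]}')
# v1: microversions 1.1 to 1.58
```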

Excellent. let’s try the command line. The python3-ironicclient RPM does not ship the baremetal executable, but using the openstack common client, we can see all of the baremetal sub-commands. To list the drivers and the nodes, we can use the following.

$ openstack baremetal driver list
+---------------------+-----------------------------+
| Supported driver(s) | Active host(s)              |
+---------------------+-----------------------------+
| ipmi                | nuzleaf.home.younglogic.net |
+---------------------+-----------------------------+
$ openstack baremetal node list

Edit: Next up was going to be configuring Apache to run the Ironic API as a WSGI App, but I decided to use the systemd files to enable and run Ironic for the time being. Here’s all the commands together.

systemctl start mariadb 
systemctl start openstack-ironic-conductor
systemctl start openstack-ironic-api
systemctl enable mariadb 
systemctl enable openstack-ironic-conductor
systemctl enable openstack-ironic-api

Edit 2: Seems I am having trouble connecting. I’m back to running the API from the command line. More to come.

Introduction to Ironic

Posted by Adam Young on October 15, 2020 07:27 PM

“I can do any thing. I can’t do everything.”

The sheer number of projects and problem domains covered by OpenStack was overwhelming. I never learned several of the other projects under the big tent. One project that is becoming relevant to my day job is Ironic, the bare metal provisioning service. Here are my notes from spelunking the code.

The Setting

I want just Ironic. I don’t want Keystone (personal grudge) or Glance or Neutron or Nova.

Ironic will write files to e.g. /var/lib/tftp and /var/www/html/pxe and will not handle DHCP, but it can make use of static DHCP configurations.

Ironic is just an API server at this point (a Python-based web service) that manages the above files, and that can also talk to the IPMI ports on my servers to wake them up and perform configurations on them.

I need to provide ISO images to Ironic so it can put them in the right place to boot them.

Developer steps

I checked the code out of git. I am working off the master branch.

I ran tox to ensure the unit tests are all at 100%

I have mysql already installed and running, but with a Keystone Database. I need to make a new one for ironic. The database name, user, and password are all going to be ironic, to keep things simple.

CREATE USER 'ironic'@'localhost' IDENTIFIED BY 'ironic';
create database ironic;
GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost';

Note that I did this as the Keystone user. That dude has way too much privilege…good thing this is JUST for DEVELOPMENT. This will be used to follow the steps in the developer quickstart docs. I also set the mysql URL in the config file to this:

connection = mysql+pymysql://ironic:ironic@localhost/ironic

Then I can run the Ironic db sync. Let’s see what I got:

mysql ironic --user ironic --password
MariaDB [ironic]> show tables;
+-------------------------------+
| Tables_in_ironic              |
+-------------------------------+
| alembic_version               |
| allocations                   |
| bios_settings                 |
| chassis                       |
| conductor_hardware_interfaces |
| conductors                    |
| deploy_template_steps         |
| deploy_templates              |
| node_tags                     |
| node_traits                   |
| nodes                         |
| portgroups                    |
| ports                         |
| volume_connectors             |
| volume_targets                |
+-------------------------------+
15 rows in set (0.000 sec)

OK, so the first table shows that Ironic uses Alembic to manage migrations. Unlike the SQLAlchemy migrations table, you can’t just query this table to see how many migrations have been performed; it only records the current revision:

MariaDB [ironic]> select * from alembic_version;
| version_num  |
| cf1a80fdb352 |
1 row in set (0.000 sec)

Running The Services

The script to start the API server is:
ironic-api -d --config-file etc/ironic/ironic.conf.local

Looking in the file requirements.txt, I see that the web framework for Ironic is Pecan:

$ grep pecan requirements.txt 
pecan!=1.0.2,!=1.0.3,!=1.0.4,!=1.2,>=1.0.0 # BSD

This is new to me. On Keystone, we converted from no framework to Flask. I’m guessing that if I follow the chain that starts with the ironic-api file, I will find a Pecan launcher for a web application. We can find that file with:

$ which ironic-api

Looking in that file, it references ironic.cmd.api, which is the file ironic/cmd/api.py which in turn refers to ironic/common/wsgi_service.py. This in turn refers to ironic/api/app.py from which we can finally see that it imports pecan.

Now I am ready to run the two services. Like most of OpenStack, there is an API server and a “worker” server; in Ironic, the worker is called the Conductor. This maps fairly well to the Operator pattern in Kubernetes. In this pattern, the user makes changes to the API server via a web VERB on a URL, possibly with a body. These changes represent a desired state, and the state change is then performed asynchronously. In OpenStack, the asynchronous communication is performed via a message queue, usually RabbitMQ. The Ironic team has a simpler mechanism used for development: JSON-RPC. This happens to be the same mechanism used in FreeIPA.
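The JSON-RPC exchange between the API server and the Conductor can be sketched in a few lines of Python. The method name and parameters below are invented for illustration, not Ironic's actual RPC API:

```python
import json

# Hypothetical JSON-RPC 2.0 request; "provision_node" and its params
# are made up for this sketch, not Ironic's real conductor interface.
request = {
    "jsonrpc": "2.0",
    "method": "provision_node",
    "params": {"node_uuid": "1234", "target_state": "active"},
    "id": 1,
}

# The API server serializes the call and POSTs it to the conductor...
wire = json.dumps(request)

# ...which parses it, dispatches on "method", and answers with a
# response carrying the same "id" so the caller can match them up.
parsed = json.loads(wire)
response = {"jsonrpc": "2.0", "result": "ok", "id": parsed["id"]}

print(response)
```

The real transport adds authentication and error members, but pairing requests and responses by id is the heart of the protocol.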

Command Line

OK, once I got the service running, I had to do a little fiddling around to get the command lines to work. There was an old reference to


which needed to be replaced with


Both are in the documentation, but only the second one will work.

I can run the following commands:

$ baremetal driver list
| Supported driver(s) | Active host(s) |
| fake-hardware       | ayoungP40      |
$ baremetal node list


Let's see if I can figure out with curl which APIs those are… There is only one version, and one link, so:

curl | jq '.versions  | .[] | .links | .[] |  .href'


Doing curl against that second link gives a list of the top level resources:

  • media_types
  • chassis
  • nodes
  • drivers

And I assume that, if I use curl to GET the drivers, I should see the fake driver entry from above:

$ curl "" | jq '.drivers |.[] |.name'
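The same filtering that jq does can be sketched in Python. Since the real URL is elided above, the payload here is a hand-written stand-in shaped like the drivers listing shown earlier:

```python
import json

# Stand-in for the body returned by the drivers endpoint; the real
# one would come from an HTTP GET against the API server.
body = '{"drivers": [{"name": "fake-hardware", "hosts": ["ayoungP40"]}]}'

data = json.loads(body)
# Equivalent of: jq '.drivers | .[] | .name'
names = [driver["name"] for driver in data["drivers"]]
print(names)  # ['fake-hardware']
```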


OK, that is enough to get started. I am going to try and do the same with the RPMs that we ship with OSP and see what I get there.

But that is a tale for another day.

Thank You

I had a conversation with Julia Kreger, a long-time core member of the Ironic project. This helped get me oriented.

Kiwi TCMS is partnering with Vola Software

Posted by Kiwi TCMS on October 15, 2020 12:51 PM

We are happy to announce that Kiwi TCMS is going to partner with Vola Software to provide 2 interns with opportunities for hacking open source and bootstrapping their careers!

Vola Software is a custom software development company in one of the poorest regions of the European Union and a long-time contributor to their local ecosystem via Vratsa Software Community. They are located in Vratsa, Bulgaria.

Internship program

Alexander Tsvetanov and Vladislav Ankov are joining the Kiwi TCMS team for a 10 month adventure until the end of July 2021 with the opportunity to continue for another year afterwards!

Both Sasho and Vladi are students at the Professional Technical Gymnasium in Vratsa and are required to work part-time as junior software developers during the last 2 years of their education. Given their very limited practical experience and the additional red tape around hiring youngsters, many software companies avoid this kind of relationship altogether. This creates a catch-22 for both employers, who are looking to hire somewhat experienced young people, and youngsters, who are looking to advance their practical skills.

Here's where Kiwi TCMS steps in! What better way to improve practical knowledge than contributing to actively used and actively maintained open source software! We are nearing the 200K downloads mark on Docker Hub, so changes made by Sasho and Vladi will be visible to a very big pool of our users and customers!

Both started their open source adventure last week and are currently going through some training. They have already faced real problems, which resulted in a bug discovery and a pull request for python-bugzilla, underlined by a 20-year-old issue in Python. How's that for a start?


Vola Software is the direct employer for Sasho and Vladi because they have the necessary permits and experience required for hiring youngsters. Kiwi TCMS is the direct technical mentor and will be acting as a customer to Vola Software!

Vola Software will be paying our interns the minimal salary required by law. Kiwi TCMS will reimburse the full amount, while Vola Software covers their accounting and administrative expenses. Both Sasho and Vladi will also be eligible for our open source bounty program as extra motivation!

All expenses will be fully transparent and visible via our Open Collective page!

Help us do more

If you like what we're doing and how Kiwi TCMS supports various communities please help us!

Thanks for reading and happy testing!

Ultra low cost webcam studio

Posted by Daniel Pocock on October 14, 2020 08:00 PM

Most of us are starting to realize there will be no quick fix or silver bullet for the pandemic.

Online meetings are not as much fun as the real world. Nonetheless, we can make them more stimulating with some very low cost effects.

Photographic shops and online stores offer some very high quality equipment for these purposes but it isn't always necessary. You may already have some unused things, like a spare curtain pole, that you can use. If you only have a small space, you may not have space for a full studio setup but the hacks described here will help you achieve a solution that integrates with your space and furniture.

Sound check

Having a Hollywood-sized budget for visual effects is not much use if people can't hear you.

Before spending any time or money on visual improvements, such as lighting, it is important to conquer sound.

A cost effective solution for many people is something like the Rode smartLav+ combined with Rode's TRRS-to-3.5mm adapter and a 3.5mm extension lead so you can move around the room wearing it.

The extension cable is both cheaper and easier to use than any wireless microphone.

As a bonus, smartLav+ can be used anywhere with a smartphone to record speeches and interviews, like a premium dictaphone.

extension sc3 smartLav+

Correct webcam height

Take a minute to watch this promo for the Sachtler Flowtech tripod.

More affordable for most people is an adhesive velcro patch. Use the velcro to stick a small webcam to your wall at the correct height.

Another low budget alternative is the pocket tripod with flexible legs. You can wrap the legs around a piece of furniture or hang it from a shelf so that the camera looks downwards.

velcro webcam tripod

Green screen

Photographic shops typically sell large green photographic curtains. Your software can chroma key to replace the curtain with any background you like.

If you spend most of your time seated, you don't need a full sized curtain. A green flat sheet for a king-size bed may be big enough and cheaper. You may have one already. Even a green beach towel may work for a very tight scene, for example, hanging 50cm behind your chair.

Instead of using a full size stand, you may be able to hang it over a broom stick, a curtain track, an extensible pole for shower curtains or one of these clothes hanging frames. They are a lot cheaper than the gear from a photographic shop.

clothes frame

You can also find pop-up green screens, used as a backdrop for exhibitions and market stalls. Some of these are extremely fast to open up and pack away again.


Lighting

The critical secret is to get lights with a Colour Rendering Index (CRI) above 90. This is not the same as colour temperature, which is measured in Kelvin.

Almost all studio lights from a photographic store will have a suitable CRI but they are not cheap. Many are now out of stock too.

Nonetheless, you can find many standalone Hi-CRI bulbs online (example: Hi-CRI corn bulbs on Amazon) for very cheap prices, mount them in a lamp or clamp reflector and use aluminium foil or another surface to reflect and control the light.

corn bulbs clamp

Try to arrange them in a 3-point lighting setup (highly recommended demo video) to ensure your head is illuminated optimally. As the video shows, three cheap bulbs arranged correctly will give a much better effect than one expensive bulb used alone.

To soften the light from the Hi-CRI corn bulbs, you can either use an aluminium foil reflector or create some kind of diffuser. There are plenty of online recipes for diffusers. For example, this one is made from a shower curtain.


Software

OBS Studio is completely free to download, install and use. It allows you to swap backgrounds, do picture-in-picture and other effects very quickly.

Share your experiences

Please join the Free-RTC discussion list to share any feedback you have about this topic.

Recovering Audio off an Old Tape Using Audacity

Posted by Adam Young on October 14, 2020 05:30 PM

One of my friends wrote a bunch of music back in high school. The only remaining recordings are on a cassette tape that he produced. Time has not been kind to the recordings, but they are audible… barely. He has a device that produces MP3s from the tape. My job has been to clean them up so that we can understand them well enough to recover the original songs.

I have the combined recording on a single MP3. I’ve gone through and noted the times where each song starts and stops. I am going to go through the steps I’ve been using to go from that single long MP3 to an individual recording.

<figure class="wp-block-image"></figure>

Cutting from the big file

First, I pull up the original recording in Audacity. Today I am working on the recording that starts at 51 minutes and a few seconds into the track. Visually, I can see that the last song ends at 51:23 and this one starts at 51:26. I highlight from the majority of the silence that precedes the song through to the end. I select copy and start a new project, then select Tracks->Add New->Stereo Track and paste in the selection from the longer file.

I’d suggest listening to the recording prior to and after each step, so you can hear the differences.

Removing the Tape Hiss

The reason I keep the silence from the beginning of the song is to use it as a noise profile for removing the hiss from the rest of the recording.

<figure class="wp-block-image"></figure>
  • Highlight a small section of the empty segment.
  • Select Effect->Noise Reduction.
  • Click Noise Profile. The dialog will close.
  • Select the entire clip (Ctrl+A).
  • Select Effect->Noise Reduction again.
  • Select OK to accept the defaults.


Bringing Up the Vocals

The vocals are pretty hard to hear in the original. I use the graphic equalizer to try to pull the vocal frequencies up out of the piano background.

  • Select the whole clip
  • Select Effect->Compressor. Take the defaults
  • Select Effect->Normalize. Take the defaults
  • Select Effect->Graphic Equalizer. I use a shape like this:
<figure class="wp-block-image"></figure>

I tend to undo and redo this a few times until I can best hear the vocals.

Pitch Correction

The tape is slightly sped up. I need to take it down just shy of one whole step. Again, select the whole clip. Then select Effect->Change Pitch. This is what worked for one of the tunes:

<figure class="wp-block-image"></figure>

I suspect that I should drop that “7” in the last fields to a more reasonable octave, probably the 3 or 4 in the middle of the range. I’ll play with that in the future.
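For reference, the percent-change value in Audacity's Change Pitch dialog follows from the equal-tempered semitone ratio 2^(n/12); a quick sketch of the arithmetic:

```python
# Equal temperament: each semitone multiplies the frequency by 2**(1/12),
# so n semitones corresponds to a (2**(n/12) - 1) * 100 percent change.
def percent_change(semitones):
    return (2 ** (semitones / 12) - 1) * 100

# A whole step down is -2 semitones, roughly an 11% drop;
# "just shy of one whole step" is a little less than that.
print(round(percent_change(-2), 2))  # -10.91
```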

Again, I undo and redo this multiple times, playing the chords on a keyboard along with the recording, until I feel the pitch matches.

Close enough, anyway.

Once this is done, I save the project, export to MP3, and share it with the other players.

Sandboxing inside the sandbox: No rogue thumbnailers inside Flatpak

Posted by Bastien Nocera on October 14, 2020 02:43 PM

 A couple of years ago, we sandboxed thumbnailers using bubblewrap to avoid drive-by downloads taking advantage of thumbnailers with security issues.

 It's a great tool, and it's a tool that Flatpak relies upon to create its own sandboxes. But that also meant that we couldn't use it inside the Flatpak sandboxes themselves, and those aren't always as closed as they could be, to support legacy applications.

 We've finally implemented support for sandboxing thumbnailers within Flatpak, using the Spawn D-Bus interface (indirectly).

This should all land in GNOME 40, though it should already be possible to integrate it into your Flatpaks. Make sure to use the latest gnome-desktop development version, and that the flatpak-spawn utility is new enough in the runtime you're targeting (it's been updated in the freedesktop.org runtimes #1, #2, #3, but it takes time to trickle down to GNOME versions). Example JSON snippets:

{
    "name": "flatpak-xdg-utils",
    "buildsystem": "meson",
    "sources": [
        {
            "type": "git",
            "url": "https://github.com/flatpak/flatpak-xdg-utils.git",
            "tag": "1.0.4"
        }
    ]
},
{
    "name": "gnome-desktop",
    "buildsystem": "meson",
    "config-opts": ["-Ddebug_tools=true", "-Dudev=disabled"],
    "sources": [
        {
            "type": "git",
            "url": "https://gitlab.gnome.org/GNOME/gnome-desktop.git"
        }
    ]
}

(We also sped up GStreamer-based thumbnailers by allowing them to use a cache, and added profiling information to the thumbnail test tools, which could prove useful if you want to investigate performance or bugs in that area)

Email Delivery series

Posted by Truong Anh Tuan on October 14, 2020 11:20 AM
It's been a while since I let this blog gather dust :D Since I'm feeling inspired at the moment, I will write a series of posts about email delivery for mail sent from your own systems (corporate email, marketing email, automation email…), and about how to keep things under control so that 100% of your email always lands in the recipient's inbox.

Web of Trust, Part 1: Concept

Posted by Fedora Magazine on October 14, 2020 08:00 AM

Every day we rely on technologies that nobody can fully understand. Since well before the industrial revolution, complex and challenging tasks have required an approach that breaks the work into smaller tasks. Each results in specialized knowledge that covers some parts of our lives, while for the other parts we trust in the skills that others have learned. This shared-knowledge approach also applies to software. Even the most avid readers of this magazine will likely not compile and validate every piece of code they run. This is simply because the world of computers is itself too big for one person to grasp.

Still, even though it is nearly impossible to understand everything that happens within your PC when you are using it, that does not leave you blind and unprotected. FLOSS software shares trust, giving protection to all users, even if individual users can’t grasp all parts in the system. This multi-part article will discuss how this ‘Web of Trust’ works and how you can get involved.

But first we’ll have to take a step back and discuss the basic concepts before we can delve into the details and the web itself. Also, a note before we start: security is not just about viruses and malware. Security also includes your privacy, your economic stability and your technological independence.

One-Way System

By design, computers can only work and function at the most rudimentary level of logic: true or false, AND or OR. This Boolean logic is not readily accessible to humans, so we must do something special. We write applications in code that we can (reasonably) comprehend: human-readable code. Once completed, we turn this human-readable code into code that the computer can comprehend: machine code.

The conversion step is called compilation and/or building, and it’s a one-way process. Compiled code (machine code) is not really understandable by humans, and it takes special tools to study it in detail. You can understand small chunks, but on the whole, an entire application becomes a black box.

This subtle difference shifts power. Power, in this case, being the influence of one person over another. The person who has written the human-readable version of an application, and then releases it as compiled code for others to use, knows everything about what the code does, while the end user knows only a very limited scope. When using software in compiled form, it is impossible to know for certain what an application is intended to do, unless the original human-readable code can be viewed.

The Nature of Power

Spearheaded by Richard Stallman, this shift of power became a point of concern. This discussion started in the 1980s, for this was the time that computers left the world of academia and research, and entered the world of commerce and consumers. Suddenly, that power became a source of control and exploitation.

One way to combat this imbalance of power was the concept of FLOSS software. FLOSS software is built on the 4 Freedoms, which give you a wide array of other ‘affiliated’ rights and guarantees. In essence, FLOSS software uses copyright licensing as a form of moral contract that forbids software developers from leveraging this one-way power against their users. The principal way of doing this is with the GNU General Public Licenses, which Richard Stallman created and has been promoting ever since.

One of those guarantees is that you can see the code that should be running on your device. When you get a device using FLOSS software, the manufacturer should provide you the code that the device is using, as well as all the instructions you need to compile that code yourself. Then you can replace the code on the device with a version you compiled yourself. Even better: if you compare your version with the version on the device, you can see whether the device manufacturer tried to cheat you or other customers.
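That comparison boils down to checksums, assuming the build is reproducible (which takes extra care in practice); a minimal sketch:

```python
import hashlib

def digest(data: bytes) -> str:
    """SHA-256 hex digest of a build artifact's bytes."""
    return hashlib.sha256(data).hexdigest()

# Stand-ins for the vendor's shipped binary and your own rebuild.
vendor_build = b"\x7fELF...machine code..."
my_rebuild = b"\x7fELF...machine code..."

# Identical digests mean the two artifacts are byte-for-byte the same.
print(digest(vendor_build) == digest(my_rebuild))  # True
```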

This is where the web of Trust comes back into the picture. The Web of Trust implies that even if the vast majority of people can’t validate the workings of a device, that others can do so on their behalf. Journalists, security analysts and hobbyists, can do the work that others might be unable to do. And if they find something, they have the power to share their findings.

Security by Blind Trust

This is, of course, only the case if the application and all the components underneath it are FLOSS. Proprietary software, or even software which is merely Open Source, has compiled versions that nobody can recreate and validate. Thus, you can never truly know whether that software is secure. It might have a backdoor, it might sell your personal data, or it might be pushing a closed ecosystem to create vendor lock-in. With closed-source software, your security is only as good as the trustworthiness of the company making it.

For companies and developers, this actually creates another snare. While you might still care about your users and their security, you are also a liability: if a criminal can get to your official builds or supply chain, then there is no way for anybody to discover that afterwards. An increasing number of attacks do not target users directly, but instead try to get in by exploiting the trust that companies and developers have carefully grown.

You should also not underestimate pressure from the outside: governments can ask you to ignore a vulnerability, or they might even demand cooperation. Investment firms or shareholders may also insist that you create vendor lock-in for future use. The blind trust that you demand of your users can be used against you.

Security by a Web of Trust

If you are a user, FLOSS software is good because others can warn you when they find suspicious elements. You can use any FLOSS device with minimal economic risk, and there are many FLOSS developers who care for your privacy. Even if the details are beyond you, there are rules in place to facilitate trust.

If you are a tinkerer, FLOSS is good because, with a little extra work, you can check the promises of others. You can warn people when something goes wrong, and you can validate the warnings of others. You’re also able to check individual parts in a larger picture. The libraries used by FLOSS applications are also open for review: it’s “trust all the way down”.

For companies and developers, FLOSS is also a great reassurance that the trust placed in you can’t easily be subverted. If malicious actors wish to attack your users, any irregularity can quickly be spotted. Last but not least, since you also stand to defend your customers’ economic well-being and privacy, you can use that as an important selling point to customers who care about their own security.

Fedora’s case

Fedora embraces the concept of FLOSS and stands strong to defend it. There are comprehensive legal guidelines, and Fedora’s founding principles, the Four Foundations, directly echo the spirit of the 4 Freedoms: Freedom, Friends, Features, and First.

<figure class="aligncenter size-large is-resized">Fedora's Foundation logo, with Freedom highlighted. Illustrative.</figure>

To this end, entire systems have been set up to facilitate this kind of security. Fedora works completely in the open, and any user can check the official servers. Koji is the name of the Fedora build system, and you can see every application and its build logs there. For added security, there is also Bodhi, which orchestrates the deployment of an application. Multiple people must approve it before the application can become available.

This creates the Web of Trust on which you can rely. Every package in the repository goes through the same process, and at every point somebody can intervene. There are also escalation systems in place to report issues, so that they can be tackled quickly when they occur. Individual contributors also know that they can be reviewed at any time, which in itself is enough of a precaution to dissuade mischievous thoughts.

You don’t have to trust Fedora (implicitly); you can get something better: trust in users like you.

Release of osbuild 22

Posted by OSBuild Project on October 14, 2020 12:00 AM

We are happy to announce version 22 of osbuild. This release improves the internal error-handling as well as error-reporting on the command-line.

Below you can find the official changelog from the osbuild-22 sources. All users are recommended to upgrade!

  • runners: support for RHEL 8.4 was added

  • A new internal API was added that can be used to communicate exceptions from runners, stages and assemblers in a more structured way and, thus, make it possible to include them in the final result in a machine readable way. Use that new API in the runners.

  • Improvements to the CI, including the integration of codespell to check for spelling mistakes.

Contributions from: Chloe Kaubisch, Christian Kellner, Jacob Kozol, Lars Karlitski, Major Hayden

— Berlin, 2020-10-08

GitLab blocked Iranians’ access.

Posted by Ahmad Haghighi on October 14, 2020 12:00 AM

On 3rd Oct. 2020, GitLab blocked Iranians’ access (based on IP) without any prior notice! Five days later (8th Oct.), my friend’s account was blocked, and he still doesn’t have any access to his projects, even after creating a ticket and asking for temporary access just to export them; GitLab refused to unblock him (screenshot in appendix). My friend is not the only one blocked by GitLab; with a simple search on the web you can find a growing list of blocked accounts.
So I decided to move away from GitLab and from EVERY piece of Free Software based, hosted or managed in the USA.

When it comes to USA policies, Free Software is a Joke :)

GitLab is not the only actor in this discrimination against Persian/Iranian people; we are also blocked by GitHub, Docker, NPM, Google Developer, Android, AWS, Go, Kubernetes, etc.

But I believe in Freedom and Free Software. I believe Free Software is a SOCIAL MOVEMENT and I hope …


[Screenshots: gitlab-403, my-tweet01, my-tweet02, gitlab-tweet-response, gitlab-ticket-response, google-developer-403, docker-403, golang-403, kubernetes-403]

Update 01: ‘youtube-dl’ removed from GitHub

New badge: Fedora Zine Contributor !

Posted by Fedora Badges on October 13, 2020 11:01 AM
Fedora Zine Contributor: You contributed to the Fedora Zine!

Kiwi TCMS is partnering with MLH Fellowship program

Posted by Kiwi TCMS on October 13, 2020 09:50 AM

We are happy to announce that Kiwi TCMS is going to partner with the MLH Fellowship open source program, a 12-week internship alternative for students interested in becoming software engineers.

Major League Hacking (MLH) is a mission-driven B-Corp focused on empowering our next generation of technologists. Every year, more than 100,000 developers, designers, and makers join the MLH community to gain hands-on experience and build their professional networks. They are headquartered in the Greater New York Area, USA but operate world-wide.

Fellowship program

The Fall 2020 cohort runs between October 5 - December 21 and we're already into the Contributing phase of the program. Fellow students will be working on open issues from our backlog with a focus on tasks from our open source bounty program but they can really work on any open task!

Once a pull request has been made, it will undergo a first round of code review by Cory Massaro, the dedicated MLH Python mentor. Once Cory gives the +1, each pull request will be reviewed by a member of the Kiwi TCMS core team as usual.

To minimize the risk of conflicts between contributors we are going to apply the following rules:

  • Each MLH fellow would add a comment on the issue they are interested in (applies to other contributors too);
  • The issue will be assigned to them (new);
  • The issue will be labeled with the MLH Fellowship label (new);
  • The following comment will be added: This issue/bounty has been assigned to an MLH fellow who is currently working on it. Pull requests towards the same issue from other contributors will not be considered at this time. Please pick something else to work on!

Upon successful completion of tasks all MLH fellows will be eligible to claim their bounties via our Open Collective page. This is the same process other contributors go through, there is no difference.

Kiwi TCMS commits 2 core-team members who will hold office hours on the MLH Fellowship Discord server for a couple of hours per week in order to answer questions and keep the ball rolling. We also commit to bi-weekly meetings with the MLH mentor(s) for the duration of the program.

Kiwi TCMS would also like to thank our friend Eddie Jaoude for putting us in touch with MLH and helping bring this partnership to reality. Thank you Eddie!

Help us do more

If you like what we're doing and how Kiwi TCMS supports various communities please help us!

Thanks for reading and happy testing!

Updates from Johnnycanencrypt development in last few weeks

Posted by Kushal Das on October 13, 2020 04:32 AM

In July this year, I wrote a very initial Python module in Rust for OpenPGP, Johnnycanencrypt aka jce. It had very basic encryption, decryption, signing, verification, creation of new keys available. It uses https://sequoia-pgp.org library for the actual implementation.

I wanted to see if I can use such a Python module (which does not call out to the gpg2 executable) in the SecureDrop codebase.

First try (2 weeks ago)

Two weeks ago, on the Friday when I sat down to see if I could start using the module, within a few minutes I understood it was not possible. The module was missing basic key management and more refined control over key creation and expiration dates.

On that weekend, I wrote a KeyStore using file-based keys as backend and added most of the required functions to try again.

The last Friday

I sat down again; this time, I had a few friends (including Saptak and Nabarun) on video along with me, and together we tried to plug jce into the SecureDrop container for Focal. After around 4 hours, we had around 5 failing tests (out of 32) among the crypto-related tests. Most of the basic functionality was working, but we were stuck on the last few tests. As I was using the file system to store the keys (in simple .sec or .pub files), it was difficult to keep track of the existing keys when multiple processes were creating and deleting keys in the same KeyStore.

Next try via a SQLite based KeyStore

Next, I replaced the KeyStore with an SQLite-based backend. Now multiple processes can access the keys properly. With a few other updates, I now have only 1 failing test (where I have to modify the test itself) in that SecureDrop Focal patch.
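As an illustration of why SQLite helps here (this sketch is mine, not jce's actual schema), a minimal keystore is just one table, and SQLite serializes concurrent writers to it:

```python
import sqlite3

def open_store(path=":memory:"):
    # One table mapping fingerprint -> key type and armored key material.
    # ":memory:" keeps the demo self-contained; pass a file path to get
    # a store that multiple processes can share safely.
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS keys ("
        "fingerprint TEXT PRIMARY KEY, keytype TEXT, armor TEXT)"
    )
    return conn

conn = open_store()
conn.execute(
    "INSERT INTO keys VALUES (?, ?, ?)",
    ("F4C9...", "public", "-----BEGIN PGP PUBLIC KEY BLOCK-----..."),
)
conn.commit()

row = conn.execute(
    "SELECT keytype FROM keys WHERE fingerprint = ?", ("F4C9...",)
).fetchone()
print(row[0])  # public
```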

While doing this experiment, I again found the benefits of writing the documentation of the library as I developed. Most of the time, I had to double-check against it to make sure that I am doing the right calls. I also added one example where one can verify the latest (10.0) Tor Browser download via Python.

In case you already use OpenPGP encryption in your tool/application, or you want to try it, please give jce a try. It works on Python 3.7+. I tested on Linux and macOS, and it should work on Windows too. I have an issue open on that, and if you know how to do it, please feel free to submit a PR.

Hacktoberfest 2020 with TeleIRC

Posted by Justin W. Flory on October 12, 2020 08:15 AM

The post Hacktoberfest 2020 with TeleIRC appeared first on Justin W. Flory's blog.

Justin W. Flory's blog - Free Software, music, travel, and life reflections

October is here! If you contribute to Open Source projects, you might know that October is the month of Hacktoberfest. DigitalOcean teams up with different partners each year to send a t-shirt (or plant a tree on your behalf) for anyone who makes four GitHub Pull Requests in October. And guess what? TeleIRC is a participating project for you to get your Hacktoberfest t-shirt or tree!

This post identifies specific tasks the TeleIRC team identified as “good first issues” for Hacktoberfest hackers. They are in order of least difficult to most difficult. Golang developers especially are encouraged to participate!

Why work on TeleIRC for Hacktoberfest?

Before sharing how you can contribute for Hacktoberfest, what about why you should contribute?

TeleIRC originally launched in 2016. Since then, we have built up a community of users around the world. TeleIRC is also used in other larger Open Source projects like the Fedora Project and LibreOffice! Of course, it is still used in the Rochester Institute of Technology community where it was first developed.

Working on TeleIRC means you can contribute to a project that is actually used in the real world. Hundreds of user communities, some even the size of thousands of people, use TeleIRC. Your improvements and changes will help the many downstream users of our project. (P.S. – See the full list of who uses TeleIRC in our docs!)

With that out of the way… let’s talk about what there is to do!

#1: Large messages go to a pastebin

This corresponds to RITlug/teleirc#56.

  • Goal: When a Telegram user writes a single line that exceeds the maximum number of characters for an IRC message (512 characters, per RFC 1459, section 2.3), send the string to a pastebin service.
  • Success criteria: Any line greater than 512 characters is sent to a pastebin-like service.
  • What we think: Note the difference between “lines” and “messages”. Telegram users can add line breaks to messages. TeleIRC should respect those line breaks as new IRC messages. So, only a single line that exceeds the maximum should go to a pastebin-like service.
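The line-versus-message distinction above can be sketched in a few lines. This is illustrative Python rather than TeleIRC's actual Go code, and `pastebin_upload` is a hypothetical stand-in for whichever pastebin client an implementation picks:

```python
MAX_IRC_MESSAGE = 512  # RFC 1459 sec. 2.3; the real budget is smaller once
                       # the prefix, command, and CRLF are accounted for

def route_lines(message, pastebin_upload):
    """Split a Telegram message on its own line breaks, then route each line.

    Each line becomes its own IRC message; only a single line longer than
    the maximum is replaced by a pastebin URL.
    """
    out = []
    for line in message.split("\n"):
        if len(line) > MAX_IRC_MESSAGE:
            out.append(pastebin_upload(line))  # oversized line -> paste URL
        else:
            out.append(line)                   # normal line -> relayed as-is
    return out
```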

#2: Telegram Poll handler

This corresponds to RITlug/teleirc#267.

  • Goal: Send text representations of Telegram Polls to IRC. Currently, Polls are ignored by TeleIRC and do not appear in any way on IRC.
  • Success criteria: If a Telegram user sends a Poll to a group, a text representation should appear in IRC.
  • What we think: IRC users will not be able to participate in Polls. This is a platform limitation. However, IRC users should get some context about what a Poll includes, e.g. what the question is and what answer choices are available.
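One possible text rendering of a Poll, again sketched in Python with made-up names (the real handler would live in TeleIRC's Go codebase and read from Telegram's Poll update type):

```python
def format_poll(question, options):
    """Render a Telegram Poll as plain text for IRC (illustrative only)."""
    lines = ["[POLL] " + question]
    for i, option in enumerate(options, start=1):
        lines.append("  {}. {}".format(i, option))
    return "\n".join(lines)

print(format_poll("Best meeting time?", ["Sunday 15:00 UTC", "Sunday 17:00 UTC"]))
```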

#3: Support more encoding types (e.g. CP1251)

This corresponds to RITlug/teleirc#332.

  • Goal: Support more string encoding types than UTF-8.
  • Success criteria: If a Telegram user writes a message in Cyrillic script, it should appear in Cyrillic script on IRC (if the server supports it, e.g. CP1251).
  • What we think: This is one of the toughest issues we have and requires knowledge about string encoding methods. The current core developers are native English speakers and we do not use other languages that have non-Latin script. The GitHub issue has more info, but it will need additional research or knowledge about string encoding.
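To make the expected behavior concrete, here is the round trip in Python, whose standard library happens to ship a cp1251 codec (TeleIRC itself is written in Go and would need a transcoding package; that choice is left open by the issue):

```python
text = "Привет"  # Cyrillic "hello", arrives from Telegram as UTF-8
encoded = text.encode("cp1251")          # what a CP1251 IRC server expects
assert encoded == b"\xcf\xf0\xe8\xe2\xe5\xf2"
assert encoded.decode("cp1251") == text  # lossless for Cyrillic script
```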

Need Hacktoberfest help? Come talk to us!

Want to work on any of these? Add a new comment to the GitHub Issue and let us know you are interested in working on it.

Have questions about the project or getting started? Come talk with the TeleIRC team! Of course, you can find us both on IRC (#rit-lug-teleirc on Freenode IRC) and Telegram (@teleirc).

Additionally, the TeleIRC team meets virtually every Sunday at 11:00 U.S. EDT / 15:00 UTC. Ask us for a calendar invite in our team chat if you would like one!

Episode 219 – Chat with Larry Cashdollar

Posted by Josh Bressers on October 12, 2020 12:01 AM

Josh and Kurt have a chat with Larry Cashdollar. The three of us go way back. Larry has done some amazing things and he tells us all about it!

<audio class="wp-audio-shortcode" controls="controls" id="audio-2010-2" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_219_Chat_with_Larry_Cashdollar.mp3?_=2" type="audio/mpeg">https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_219_Chat_with_Larry_Cashdollar.mp3</audio>

Show Notes

Progress update on open source hardware for black-box testing

Posted by Kiwi TCMS on October 10, 2020 01:35 PM

Hello testers, as you know our friends at Pionir are working on physical hardware which can be used for interactive training and explanation of the Black-box testing technique. The inspiration comes from James Lyndsay’s Black Box Puzzles and Claudiu Draghia.

We have the source code of 3 boxes already published at https://github.com/kiwitcms/black-boxes, but we are still missing the bill of materials, design files for 3D printing, and some basic instructions on how to put everything together. There was a delay in the delivery of some components, but most of the work is close to completion. You may subscribe to the issues for each box to follow the progress! Here are some images & videos from the development process.

Wason 2-4-6 box

Wason 246 box

<iframe allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen="allowfullscreen" frameborder="0" height="562" src="https://www.youtube.com/embed/UGfgcTS66KY" width="1000"> </iframe>

Peltzman effect box

Peltzman effect box

<iframe allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen="allowfullscreen" frameborder="0" height="562" src="https://www.youtube.com/embed/9wAr--X4YGM" width="1000"> </iframe>

Salience bias box

Salience bias box

<iframe allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen="allowfullscreen" frameborder="0" height="562" src="https://www.youtube.com/embed/pgADatxcZBg" width="1000"> </iframe>

In Pionir's own words: “We are testing the salience box and trying to cause the emotional effect of an element. Vuk made a mistake and now he can't turn it off :D”. How's that for dogfooding?

Help us do more

If you like what we're doing and how Kiwi TCMS supports the various communities around us please nominate us as GitHub Stars!

Thanks for reading and happy testing!

Fedora - KDE development journey (Qt5X11Extras)

Posted by Robbi Nespu on October 10, 2020 01:10 PM

I tried to build kwindowsystem, but it seems I am missing some package dependencies:

rnm@b0x (~/kde/src/kdesrc-build)
$ kdesrc-build kwindowsystem
Updating sysadmin-repo-metadata (to branch master)
        Stashing local changes if any...

Building extra-cmake-modules from frameworks (1/2)
        Updating extra-cmake-modules (to branch master)
        Stashing local changes if any...
        No changes to extra-cmake-modules source, proceeding to build.
        Compiling... succeeded (after 0 seconds)
        Installing.. succeeded (after 0 seconds)

Building kwindowsystem from frameworks (2/2)
        No source update, but the last configure failed
        Updating kwindowsystem (to branch master)
        Stashing local changes if any...
        Source update complete for kwindowsystem: no files affected
        Preparing build system for kwindowsystem.
        Removing files in build directory for kwindowsystem
        Old build system cleaned, starting new build system.
        Running cmake targeting Unix Makefiles...
        Unable to configure kwindowsystem with CMake!
        Unable to configure kwindowsystem with KDE

Removing 1 out of 3 old log directories...

kwindowsystem - ~/kde/src/log/2020-10-10-01/kwindowsystem/cmake.log

Important notification for kwindowsystem:
    kwindowsystem has failed to build 5 times.
    You can check https://build.kde.org/search/?q=kwindowsystem to see if this is expected.

Your logs are saved in file:///home/rnm/kde/src/log/2020-10-10-01

rnm@b0x (~/kde/src/kdesrc-build)
$ cat ~/kde/src/log/2020-10-10-01/kwindowsystem/cmake.log
# kdesrc-build running: 'cmake' '/home/rnm/kde/src/kwindowsystem' '-G' 'Unix Makefiles' '-DCMAKE_BUILD_TYPE=RelWithDebInfo' '-DBUILD_TESTING=TRUE' '-DCMAKE_CXX_FLAGS:STRING=-pipe' '-DCMAKE_INSTALL_PREFIX=/home/rnm/kde/usr'
# from directory: /home/rnm/kde/build/kwindowsystem
-- The C compiler identification is GNU 10.2.1
-- The CXX compiler identification is GNU 10.2.1
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done

Installing in /home/rnm/kde/usr. Run /home/rnm/kde/build/kwindowsystem/prefix.sh to set the environment for KWindowSystem.
-- Looking for __GLIBC__
-- Looking for __GLIBC__ - found
-- Performing Test _OFFT_IS_64BIT
-- Performing Test _OFFT_IS_64BIT - Success
-- Performing Test HAVE_DATE_TIME
-- Performing Test HAVE_DATE_TIME - Success
-- Found X11: /usr/include   
-- Looking for XOpenDisplay in /usr/lib64/libX11.so
-- Looking for XOpenDisplay in /usr/lib64/libX11.so - found
-- Looking for gethostbyname
-- Looking for gethostbyname - found
-- Looking for connect
-- Looking for connect - found
-- Looking for remove
-- Looking for remove - found
-- Looking for shmat
-- Looking for shmat - found
-- Looking for IceConnectionNumber in ICE
-- Looking for IceConnectionNumber in ICE - found
-- Found PkgConfig: /usr/bin/pkg-config (found version "1.7.3") 
-- Found XCB_XCB: /usr/lib64/libxcb.so (found version "1.13.1") 
-- Found XCB_KEYSYMS: /usr/lib64/libxcb-keysyms.so (found version "0.4.0") 
-- Found XCB_RES: /usr/lib64/libxcb-res.so (found version "1.13.1") 
-- Found XCB: /usr/lib64/libxcb.so;/usr/lib64/libxcb-keysyms.so;/usr/lib64/libxcb-res.so (found version "1.13.1") found components: XCB KEYSYMS RES 
CMake Error at /usr/lib64/cmake/Qt5/Qt5Config.cmake:28 (find_package):
  Could not find a package configuration file provided by "Qt5X11Extras" with
  any of the following names:


  Add the installation prefix of "Qt5X11Extras" to CMAKE_PREFIX_PATH or set
  "Qt5X11Extras_DIR" to a directory containing one of the above files.  If
  "Qt5X11Extras" provides a separate development package or SDK, be sure it
  has been installed.
Call Stack (most recent call first):
  CMakeLists.txt:69 (find_package)

-- Configuring incomplete, errors occurred!
See also "/home/rnm/kde/build/kwindowsystem/CMakeFiles/CMakeOutput.log".

As a solution, I needed to install the qt5-qtx11extras-devel package from Fedora (sudo dnf install qt5-qtx11extras-devel) so that CMake could find the required dependency.

Fedora - KDE development journey (Qt5UiPlugin)

Posted by Robbi Nespu on October 09, 2020 01:12 AM

I am getting a compile error when trying to build kwidgetsaddons. The error log says:

# kdesrc-build running: 'cmake' '/home/rnm/kde/src/kwidgetsaddons' '-G' 'Unix Makefiles' '-DCMAKE_BUILD_TYPE=RelWithDebInfo' '-DBUILD_TESTING=TRUE' '-DCMAKE_CXX_FLAGS:STRING=-pipe' '-DCMAKE_INSTALL_PREFIX=/home/rnm/kde/usr'
# from directory: /home/rnm/kde/build/kwidgetsaddons
-- The C compiler identification is GNU 10.2.1
-- The CXX compiler identification is GNU 10.2.1
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done

Installing in /home/rnm/kde/usr. Run /home/rnm/kde/build/kwidgetsaddons/prefix.sh to set the environment for KWidgetsAddons.
-- Looking for __GLIBC__
-- Looking for __GLIBC__ - found
-- Performing Test _OFFT_IS_64BIT
-- Performing Test _OFFT_IS_64BIT - Success
-- Performing Test HAVE_DATE_TIME
-- Performing Test HAVE_DATE_TIME - Success
-- Performing Test COMPILER_HAS_DEPRECATED_ATTR - Success
-- At least one python version must be available to use PythonModuleGeneration.
-- The following features have been enabled:

 * DESIGNERPLUGIN, Build plugin for Qt Designer

-- The following REQUIRED packages have been found:

 * ECM (required version >= 5.75.0), Extra CMake Modules., <https://commits.kde.org/extra-cmake-modules>
 * Qt5Gui (required version >= 5.15.1)
 * Qt5Widgets
 * Qt5Test
 * Qt5 (required version >= 5.12.0)

-- The following features have been disabled:

 * QCH, API documentation in QCH format (for e.g. Qt Assistant, Qt Creator & KDevelop)

-- The following OPTIONAL packages have not been found:

 * PythonModuleGeneration

-- The following REQUIRED packages have not been found:

 * Qt5UiPlugin
   Required to build Qt Designer plugins

CMake Error at /usr/share/cmake/Modules/FeatureSummary.cmake:457 (message):
  feature_summary() Error: REQUIRED package(s) are missing, aborting CMake
Call Stack (most recent call first):
  CMakeLists.txt:89 (feature_summary)

-- Configuring incomplete, errors occurred!
See also "/home/rnm/kde/build/kwidgetsaddons/CMakeFiles/CMakeOutput.log".

It says we need the qt5-qttools-devel package and its dependencies to be installed.

$ sudo dnf install qt5-qttools-devel
Last metadata expiration check: 0:08:13 ago on Fri 09 Oct 2020 05:18:53 PM +08.
Dependencies resolved.
 Package                                                    Architecture                  Version                               Repository                      Size
 qt5-qttools-devel                                          x86_64                        5.15.1-1.fc34                         rawhide                        183 k
Installing dependencies:
 qt5-designer                                               x86_64                        5.15.1-1.fc34                         rawhide                        165 k
 qt5-doctools                                               x86_64                        5.15.1-1.fc34                         rawhide                        690 k
 qt5-linguist                                               x86_64                        5.15.1-1.fc34                         rawhide                        872 k
 qt5-qttools-libs-designercomponents                        x86_64                        5.15.1-1.fc34                         rawhide                        796 k
 qt5-qttools-libs-help                                      x86_64                        5.15.1-1.fc34                         rawhide                        155 k

Transaction Summary
Install  6 Packages

Total download size: 2.8 M
Installed size: 8.9 M
Is this ok [y/N]: 

Let's try to build now:

[rnm@b0x kdesrc-build]$ kdesrc-build kwidgetsaddons
Updating sysadmin-repo-metadata (to branch master)
        Stashing local changes if any...

Building extra-cmake-modules from kf5-build-support (1/2)
        Updating extra-cmake-modules (to branch master)
        Stashing local changes if any...
        No changes to extra-cmake-modules source, proceeding to build.
        Running cmake targeting Unix Makefiles...
        Compiling... succeeded (after 0 seconds)
        Installing.. succeeded (after 0 seconds)

Building kwidgetsaddons from frameworks (2/2)
        No source update, but the last configure failed
        Updating kwidgetsaddons (to branch master)
        Stashing local changes if any...
        Source update complete for kwidgetsaddons: no files affected
        Preparing build system for kwidgetsaddons.
        Removing files in build directory for kwidgetsaddons
        Old build system cleaned, starting new build system.
        Running cmake targeting Unix Makefiles...
        Compiling... succeeded (after 6 minutes, and 13 seconds)
        Installing.. succeeded (after 0 seconds)

Removing 1 out of 4 old log directories...


Separate INFO and DEBUG logs

Posted by Linux System Roles on October 08, 2020 12:00 PM


Before the logging refactoring, the network module collected all logging statements and returned them at the end as “warnings”, so that they would be shown by Ansible. Obviously, these are not really warnings, but rather debug information.

How to reproduce

We can reproduce this network module bug by running the qemu test:

TEST_SUBJECTS=CentOS-8-GenericCloud-8.1.1911-20200113.3.x86_64.qcow2 ansible-playbook -vv -i /usr/share/ansible/inventory/standard-inventory-qcow2 ./tests/playbooks/tests_ethernet.yml

How to resolve it

The logging messages should be returned in a different JSON field that is ignored by Ansible. Then, tasks/main.yml should have a follow-up debug task that prints the returned variable. In the failure case, the network_connections task must run with errors ignored so that execution reaches the debug statement; a follow-up task should then check whether the network_connections task failed and abort if it did.
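A minimal sketch of that tasks/main.yml pattern, assuming the role's usual names (only `network_connections` and `__network_connections_result` come from the role itself; the task wording is illustrative):

```yaml
- name: Configure networking connection profiles
  network_connections:
    provider: "{{ network_provider }}"
    connections: "{{ network_connections }}"
  register: __network_connections_result
  ignore_errors: true  # keep going so the debug task below always runs

- name: Show debug messages from the network_connections module
  debug:
    var: __network_connections_result

- name: Re-raise any failure from the network_connections task
  fail:
    msg: "{{ __network_connections_result }}"
  when: __network_connections_result is failed
```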

What is the result

After the bug is fixed, we can use the same qemu test to compare the results.

Additional test cases

Beyond that, we also have some assertions to confirm that we indeed separate INFO and DEBUG logs. In ./tests/tests_default.yml, we have the following testing code to assert that there are no warnings in __network_connections_result.

- name: Test executing the role with default parameters
  hosts: all
  roles:
    - linux-system-roles.network
  tasks:
    - name: Test warning and info logs
      assert:
        that:
          - "'warnings' not in __network_connections_result"
        msg: "There are warnings"

In ./tests/tasks/assert_output_in_stderr_without_warnings.yml, we assert that there are no warnings in __network_connections_result, and that there is stderr output in __network_connections_result.

- name: "Assert that warnings is empty"
  assert:
    that:
      - "'warnings' not in __network_connections_result"
    msg: "There are unexpected warnings"

- name: "Assert that there is output in stderr"
  assert:
    that:
      - "'stderr' in __network_connections_result"
    msg: "There are no messages in stderr"

The Ansible logs shown in the demo video were extracted from the same qemu test run after the bug was fixed.

Demo video

I made a demo video to show the bug, the refactored logging of the network module after the fix, and the additional test cases in action.

Separate INFO and DEBUG logs


  1. Refactor logging of network module, https://github.com/linux-system-roles/network/issues/29
  2. Separate debug and info logs from warnings, https://github.com/linux-system-roles/network/pull/207

Home Gym

Posted by Adam Young on October 07, 2020 06:39 PM

Concrete weighs 5 pounds per quart.
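That round number checks out; as a sanity check, assuming a typical cured-concrete density of about 2.4 kg/L (an assumption — real mixes vary):

```python
QUART_IN_LITERS = 0.946      # one US liquid quart
DENSITY_KG_PER_LITER = 2.4   # typical concrete; assumption, mixes vary
KG_PER_POUND = 0.4536

pounds_per_quart = QUART_IN_LITERS * DENSITY_KG_PER_LITER / KG_PER_POUND
assert round(pounds_per_quart) == 5  # matches the 5 lb/quart figure
```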

<figure class="wp-block-image size-large"><figcaption>Concrete Weights</figcaption></figure>

For benching I needed something bigger. The big ones are 60 pounds.

<figure class="wp-block-image size-large"><figcaption>60 pounds Weights for bench press</figcaption></figure>

Stretching and calisthenics… and kicking. Great stress relief.

<figure class="wp-block-image size-large"><figcaption>Half a ring</figcaption></figure>

Use the rubber bands for lat work and hips.

<figure class="wp-block-image size-large"></figure>

New badge: Fedora IOT WG member !

Posted by Fedora Badges on October 07, 2020 02:54 PM
Fedora IOT WG member: You are a member of the Fedora IOT Working Group!

New badge: fedora-iot-wg !

Posted by Fedora Badges on October 07, 2020 01:58 PM
fedora-iot-wg: You are a member of the Fedora IOT Working Group!

Notes to self on frama-c

Posted by Richard W.M. Jones on October 07, 2020 12:04 PM

Frama-C is a giant modular system for writing formal proofs of C code. For months I’ve been on-and-off trying to see if we could use it to do useful proofs for any parts of the projects we write, like qemu, libvirt, libguestfs, nbdkit etc. I got side-tracked at first by this frama-c tutorial, which is fine, but I got stuck trying to make the GUI work.

Yesterday I discovered this set of 3 short command-line based tutorials: https://maniagnosis.crsr.net/2017/06/AFL-brute-force-search.html https://maniagnosis.crsr.net/2017/06/AFL-bug-in-quicksearch.html https://maniagnosis.crsr.net/2017/07/AFL-correctness-of-quicksearch.html

I thought I’d start by trying to apply this to a small section of qemu code, the fairly self-contained range functions.

The first problem is how to invoke frama-c:

frama-c -wp -wp-rte -wp-print util/range.c -cpp-extra-args=" -I include -I build -I /usr/include -DQEMU_WARN_UNUSED_RESULT= "

You have to give all the include directories and define out some qemu-isms.

The first time you run it, this won’t work for “reasons”. You have to initialize the why3 verifier using:

why3 config --full-config

Really frama-c should just do this for you, or at least tell you what you need to do in the obscure error message it prints.

This still won’t work because util/range.c includes glib headers which use GCC attributes and builtins and frama-c simply cannot parse any of that. So I ended up hacking on the source to replace the headers with standard C headers and remove the one glib-based function in the file.

At this point it does compile and the Frama-C WP plugin runs. Of course, without having added any annotations, it simply produces a long list of problems. Also it takes a fair bit of time to run, which is interesting. I wonder if it will get faster with annotations?

That’s as far as I’ve got for the moment. I’ll come back later and try to add annotations.

How to be a spooky ARMv8m hardware debugger

Posted by Laura Abbott on October 07, 2020 11:00 AM

Debuggers are one of many tools available to assist developers in figuring out problems. Many of the ARM Cortex-M boards support a standard called CMSIS-DAP for hardware debugging. This is designed to let board makers provide a dedicated chip to facilitate communication between a debugger chip and a host. The debugger chip then communicates with the actual CPU being debugged via other signals. Like all standards, implementations can be incomplete and buggy, but if a board says it has CMSIS-DAP support, there’s a good chance it will “just work” for debugging. You could leave all the details to debuggers, but it also turns out you can do many of these steps with CMSIS-DAP yourself. Being a debugger is also a great Halloween costume because you can do mysterious things to your device and also stay home. There is no candy involved unfortunately, but knowledge is pretty sweet.

ARM has fairly detailed documentation on their website about how this works behind the scenes. At a very high level, you can write to the Debug Port and some number of Access Ports to affect the state of the chip. The actual detail of what’s implemented is given by ROM tables. A fairly common setup is a debug port and then a Memory Access Port (MEM-AP) per CPU.

There’s a Rust package called probe-rs to handle the communication and transport. It has abstractions to do things like read/write from memory but you can also implement the commands yourself at the DP/AP level. One of the first tasks everyone wants to do as a debugger is read memory. The CMSIS-DAP page even gives the commands you need to run to set it up:

use probe_rs::architecture::arm::{
    ap::{AddressIncrement, DataSize, CSW},
    ArmCommunicationInterface, ArmCommunicationInterfaceState, PortType,
};
use probe_rs::{DebugProbeError, Probe};

fn main() {
    let probes = Probe::list_all();
    let mut state = ArmCommunicationInterfaceState::new();
    let mut probe = probes[0].open().unwrap();

    // The ArmCommunicationInterface doesn't give DAP access but it's also
    // the only real way to enter debug mode so cheat and set it up but
    // then ignore it
    let _fake_face = ArmCommunicationInterface::new(&mut probe, &mut state);

    // Grab raw DAP register access from the probe
    let interface = probe
        .get_interface_dap_mut()
        .ok_or_else(|| DebugProbeError::InterfaceNotAvailable("ARM"))
        .unwrap();

    // Select AP #0, AP register bank #0 via the DP SELECT register
    interface
        .write_register(PortType::DebugPort, 0x8, 0x00000000)
        .unwrap();

    let old_csw_val = interface
        .read_register(PortType::AccessPort(0x0), 0x0)
        .unwrap();

    let old_csw = CSW::from(old_csw_val);
    let mut new_csw = CSW::default();

    if old_csw.SPIDEN != 0 {
        new_csw.PROT = 0b010;
    } else {
        new_csw.PROT = 0b110;
    }

    new_csw.CACHE = 0b11;
    new_csw.AddrInc = AddressIncrement::Single;
    new_csw.SIZE = DataSize::U32;

    // CSW at 0x0 controls the access, TAR at 0x4 holds the address,
    // and DRW at 0xC returns the data
    interface
        .write_register(PortType::AccessPort(0x0), 0x0, new_csw.into())
        .unwrap();
    interface
        .write_register(PortType::AccessPort(0x0), 0x4, 0x0)
        .unwrap();

    let result = interface.read_register(PortType::AccessPort(0x0), 0xc);

    println!("read value {:x?}", result);
}

Breaking this down a little, we do some initial setup to get interface to actually let us write to the registers. This is pretty hacky because probe-rs doesn’t expose any of these ports directly but it’s good enough for experimentation purposes.

The first step is to make sure we’re accessing the correct access port with a write to the DebugPort SELECT register at address 0x8. This is described in detail in the manual but the trick here is that we’re selecting access port #0 and bank #0 of AP registers. Once that is written, we can actually access the AP registers.
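The SELECT layout (APSEL in bits [31:24], APBANKSEL in bits [7:4], per ADIv5) makes that value easy to compute; a quick Python helper, just for illustration:

```python
def dp_select(apsel, ap_bank, dp_bank=0):
    """Compose an ADIv5 DP SELECT value:
    APSEL[31:24], APBANKSEL[7:4], DPBANKSEL[3:0]."""
    return (apsel << 24) | (ap_bank << 4) | dp_bank

# AP #0, AP register bank #0 -- the 0x00000000 written above
assert dp_select(0, 0) == 0x00000000
# AP #1, AP register bank #2 would instead be:
assert dp_select(1, 2) == 0x01000020
```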

The read consists of 3 steps: write the CSW at 0x0 to control our read request, write the TAR at 0x4 for what address we want to read and then read the result from the DRW register at 0xC. The CSW is the most interesting register for controlling how the access goes out on the bus.

Naturally, after reading, the next thing you would want to do is write. This is as easy as writing to DRW. A great example of something to write is the DHCSR, the register responsible (among other uses) for halting the processor. Writes to it must carry the 0xA05F debug key in the upper halfword, with the control bits in the lower halfword. So we can change the code to:

    interface
        .write_register(PortType::AccessPort(0x0), 0x4, 0xE000EDF0)
        .unwrap();

    let result = interface.write_register(PortType::AccessPort(0x0), 0xc, 0xA05F0003);

and the processor halts. Spooky!
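That 0xA05F0003 constant unpacks into the DHCSR fields described above; checking the arithmetic (in Python, for brevity):

```python
DBGKEY = 0xA05F << 16  # required in DHCSR[31:16] for a write to take effect
C_DEBUGEN = 1 << 0     # enable halting debug
C_HALT = 1 << 1        # request a halt

assert DBGKEY | C_HALT | C_DEBUGEN == 0xA05F0003
```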

“Okay, but what about setting registers? Can you really do that with just read/write?” Yes, there is a register for that too! The DCRSR is designed to do exactly that. You’ll get some odd behavior if you don’t halt the processor first, but using that code plus the DCRDR you can read/write the CPU registers:

    // Put 0x1234 in the DCRDR (0xE000EDF8)
    interface
        .write_register(PortType::AccessPort(0x0), 0x0, new_csw.into())
        .unwrap();
    interface
        .write_register(PortType::AccessPort(0x0), 0x4, 0xE000EDF8)
        .unwrap();
    interface
        .write_register(PortType::AccessPort(0x0), 0xc, 0x1234)
        .unwrap();

    // Write to r11: DCRSR (0xE000EDF4) with REGWnR set and REGSEL = 11
    interface
        .write_register(PortType::AccessPort(0x0), 0x0, new_csw.into())
        .unwrap();
    interface
        .write_register(PortType::AccessPort(0x0), 0x4, 0xE000EDF4)
        .unwrap();
    interface
        .write_register(PortType::AccessPort(0x0), 0xc, 0x1000b)
        .unwrap();

You can cross check it with either a read or attach another debugger that doesn’t reset the state and see the change you made.

This method of interacting with the debugger is probably not a good idea to use for anything other than toy code, given that other abstractions like probe-rs exist. It’s still very useful for learning what exactly is happening behind the scenes. Most of the code I gave should be common across multiple Cortex-M devices, but always double-check the manual for your particular board/processor.

Happy haunting of your spooky devices!