Fedora People

Contribute at the Fedora Test Week for Kernel 5.7

Posted by Fedora Magazine on June 19, 2020 05:50 PM

The kernel team is working on final integration for kernel 5.7. This version was just recently released, and will arrive soon in Fedora. As a result, the Fedora kernel and QA teams have organized a test week from Monday, June 22, 2020 through Monday, June 29, 2020. Refer to the wiki page for links to the test images you’ll need to participate. Read below for details.

How does a test week work?

A test week is an event where anyone can help make sure changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.

To contribute, you only need to be able to do the following things:

  • Download test materials, which include some large files
  • Read and follow directions step by step

The wiki page for the kernel test day has a lot of good information on what to test and how to test it. After you’ve done some testing, you can log your results in the test day web application. If you’re available on or around the day of the event, please do some testing and report your results. We also have a document that walks through all of the steps.

Happy testing, and we hope to see you on test day.

Contribute at the Fedora Test Week for Kernel 5.7

Posted by Fedora Community Blog on June 19, 2020 05:18 PM
Fedora 33 Kernel 5.7 Test Day

The kernel team is working on final integration for kernel 5.7. This version was just recently released, and will arrive soon in Fedora. As a result, the Fedora kernel and QA teams have organized a test week from Monday, June 22, 2020 through Monday, June 29, 2020. Refer to the wiki page for links to […]

The post Contribute at the Fedora Test Week for Kernel 5.7 appeared first on Fedora Community Blog.

There are scheduled downtimes in progress

Posted by Fedora Infrastructure Status on June 19, 2020 02:57 PM
New status scheduled: Move to new datacentre for services: Ipsilon, Badges, Blockerbugs, Package Updates Manager, Fedora Infrastructure Cloud, COPR Build System, Documentation website, Fedora elections, Account System, Fedora Messaging Bus, Fedora Calendar, Fedora pastebin service, The Koji Buildsystem, Koschei Continuous Integration, Kerberos, Mailing Lists, Module Build Service, Mirror List, Mirror Manager, Fedora Packages App, Pagure, Fedora People, Package maintainers git repositories, Fedora Container Registry, ABRT Server, Fedora websites, Fedora Wiki, Zodbot IRC bot

Major service disruption

Posted by Fedora Infrastructure Status on June 19, 2020 12:55 PM
New status major: The IAD2 datacentre has a major network issue for services: Ipsilon, Badges, Blockerbugs, Package Updates Manager, Fedora Infrastructure Cloud, COPR Build System, Documentation website, Fedora elections, Account System, Fedora Messaging Bus, Fedora Calendar, Fedora pastebin service, The Koji Buildsystem, Koschei Continuous Integration, Kerberos, Mailing Lists, Module Build Service, Mirror List, Mirror Manager, Fedora Packages App, Pagure, Fedora People, Package maintainers git repositories, Fedora Container Registry, ABRT Server, Fedora websites, Fedora Wiki, Zodbot IRC bot

On git master, main and inclusion

Posted by Lukas "lzap" Zapletal on June 19, 2020 12:00 AM

On git master, main and inclusion

All I can say is that I find some words like whitelist, blacklist and master/slave awful. I do realize that the context in git might not be racist; however, I am trying to be a nice-by-default person. I won’t comment more on the situation as this blog is strictly non-political. So let’s keep things technical:

If one of your projects renames master to main and you are having a hard time checking out the mainline branch, I have a tip for you:

$ alias gcm='git checkout master || git checkout main || git checkout gh-pages'
$ alias gll='(git checkout master || git checkout main || git checkout gh-pages) && git pull'

If you prefer git subcommand aliases, git cm for example, then add this to your git configuration:

$ cat ~/.gitconfig
[alias]
# ... gazillion of my other aliases ...
cm = !sh -c 'git checkout master || git checkout main || git checkout gh-pages'

Checking out the mainline branch is now easy; you never need to think about which project you are working on:

$ gcm
error: pathspec 'master' did not match any file(s) known to git
Switched to branch 'main'

You can ignore the error when git is unable to find the “master” branch; that’s expected. I did not bother to solve this, but you can probably redirect the standard error if you find it annoying. Let me know on my twitter @lzap if this helped you, or if you have a better solution or a similar tip.
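If the error output bothers you, one option (my suggestion, not from the original post) is to silence stderr on the checkouts that are expected to fail:

```shell
# Try each candidate branch quietly; only the one that exists succeeds
alias gcm='git checkout master 2>/dev/null || git checkout main 2>/dev/null || git checkout gh-pages'
```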

I have been using this for years; our project Foreman actually uses a different term for the mainline branch. We call it the “develop” branch.

Oh, by the way, Steve Jobs style: it even works for GitHub Pages repositories.

The final release of the CentOS Linux 8.2.2004 distribution is out

Posted by Fedora fans on June 18, 2020 02:15 PM

The final release of CentOS 8.2, which is based on the Red Hat Enterprise Linux source code, is out. This release, tagged 2004, is the newest release in the CentOS 8 series and is built from the Red Hat Enterprise Linux 8.2 sources.

CentOS 8.2 is available for the aarch64, ppc64le and x86_64 architectures, and users can download and use it now. Users already running the CentOS 8 series only need to update their system, which is as simple as running the "dnf update" command.

For more information about CentOS 8.2, you can read its release notes and release announcement:

https://lists.centos.org/pipermail/centos-announce/2020-June/035756.html

https://wiki.centos.org/Manuals/ReleaseNotes/CentOS8.2004

To download CentOS 8.2, visit its official web site:

https://centos.org/download

Release 5.4.0

Posted by Bodhi on June 18, 2020 12:00 PM

v5.4.0

This is a minor release.

Server upgrade instructions

This release contains database migrations. To apply them, run:

$ sudo -u apache /usr/bin/alembic -c /etc/bodhi/alembic.ini upgrade head

Summary of the migrations:

  • Migrate the relationship between TestCase and Package to TestCase and Build. The migration script will take care of migrating existing data to the new relationship.
  • The user_id column in the comments table has been made not nullable.
  • The notes column in the buildroot_overrides table has been converted to UnicodeText (from Unicode).

Bug fixes

  • Associate TestCase to Build instead of Package, allowing old test cases to be removed from updates (#1794).
  • Replace koji krb_login with gssapi_login (#4029).
  • Make sure that builds of side tag updates for normal releases are marked as signed (#4032).
  • Handle Cornice 5.0 JSON error handling (#4033).
  • Cap buildroot override notes to a maximum of 2k characters and convert the database field to UnicodeText (#4044).

Development improvements

  • The user_id field in the comments table has been made not nullable. Some database joins have been tweaked for better performance (#4046).
  • Always use koji.multiCall for untag/unpush to better handle updates with a lot of builds (#4052).

Contributors

The following developers contributed to this release of Bodhi:

  • Clement Verna
  • Karma Dolkar
  • Mattia Verga
  • Miro Hrončok
  • Sebastian Wojciechowski

Outreachy with Fedora’s Bodhi Project

Posted by Fedora Community Blog on June 18, 2020 07:00 AM
Fedora Project and Outreachy internship program for underrepresented groups in open source

Here, I express my hitherto educative, fun, and exciting experience as an Outreachy intern. About the Project The aim of the project is to provide a /graphql endpoint to Bodhi, Fedora’s update gating system. This would allow users to query through Bodhi’s resources using GraphQL. Progress at a Glance The task for the first week […]

The post Outreachy with Fedora’s Bodhi Project appeared first on Fedora Community Blog.

EBBR on RockPro64

Posted by Marcin 'hrw' Juszkiewicz on June 17, 2020 03:53 PM

SBBR or GTFO

Me.

But the Arm world no longer ends at “SBBR compliant or complete mess”. For over a year there has been a new specification called EBBR (Embedded Base Boot Requirements).

WTH is EBBR?

In short, it is a kind of SBBR for devices which cannot comply. You still need some subset of the UEFI Boot/Runtime Services, but they can be provided by whatever bootloader you use. So U-Boot is fine as long as its EFI implementation is enabled.

ACPI is not required but may be present. DeviceTree is perfectly fine. You may provide both or one of them.

Firmware can be stored wherever you wish. Even MBR partitioning is allowed if really needed.

Doing it the nice way

The RockPro64 has 16 MB of SPI flash on board. This is far more than needed for storing firmware (I remember a time when that was enough for palmtop Linux).

During the last month I sent a bunch of patches to U-Boot to make this board as comfortable to use as possible, including storing all the firmware parts in the on-board SPI flash.

To get U-Boot there, you need to fetch two files: idbloader.img and u-boot.itb.

Their sha256 sums:

3985f2ec63c2d31dc14a08bd19ed2766b9421f6c04294265d484413c33c6dccc  idbloader.img
35ec30c40164f00261ac058067f0a900ce749720b5772a759e66e401be336677  u-boot.itb
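Before flashing, it is worth checking the downloads against those sums (standard sha256sum usage; run this in the directory where you saved the files):

```shell
sha256sum -c <<'EOF'
3985f2ec63c2d31dc14a08bd19ed2766b9421f6c04294265d484413c33c6dccc  idbloader.img
35ec30c40164f00261ac058067f0a900ce749720b5772a759e66e401be336677  u-boot.itb
EOF
```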

Store them on a USB pen drive and plug it into any of the RockPro64’s USB ports. Then reboot into U-Boot.

Next, run this set of commands to update U-Boot:

Hit any key to stop autoboot:  0 
=> usb start

=> ls usb 0:1
   163807   idbloader.img
   867908   u-boot.itb

2 file(s), 0 dir(s)

=> sf probe
SF: Detected gd25q128 with page size 256 Bytes, erase size 4 KiB, total 16 MiB

=> load usb 0:1 ${fdt_addr_r} idbloader.img
163807 bytes read in 16 ms (9.8 MiB/s)

=> sf update ${fdt_addr_r} 0 ${filesize}
device 0 offset 0x0, size 0x27fdf
163807 bytes written, 0 bytes skipped in 2.93s, speed 80066 B/s

=> load usb 0:1 ${fdt_addr_r} u-boot.itb
867908 bytes read in 53 ms (15.6 MiB/s)

=> sf update ${fdt_addr_r} 60000 ${filesize}
device 0 offset 0x60000, size 0xd3e44
863812 bytes written, 4096 bytes skipped in 11.476s, speed 77429 B/s

Then reboot the board.

After this, your RockPro64 will have its firmware stored in the on-board SPI flash. No more wondering which offsets to use when storing it on an SD card.

Booting installation media

The nicest part is that you no longer need to mess with installation media. Fetch a Debian/Fedora installer ISO, write it to a USB pen drive, plug it into a port and reboot the board.

It should work with any generic AArch64 installation media. Of course, the kernel on the media needs to support the RockPro64 board. I played with Debian ‘testing’, Fedora 32 and Fedora rawhide, and they all booted fine.

My setup

My board boots to either Fedora rawhide or Debian ‘testing’ (two separate pen drives).

The Fedora Fans website turns 9

Posted by Fedora fans on June 17, 2020 11:45 AM

Hello dear friends,

I hope you are healthy and well. Another year of the Fedora Fans website’s activity has passed, and we are proud to announce its ninth birthday.

We hope that in these nine years we have made a contribution, however small, to the free software community. I personally thank all the friends who have helped us along this road.

Stay healthy and stay Fedora,

hos7ein

Internet connection sharing with NetworkManager

Posted by Fedora Magazine on June 17, 2020 07:00 AM

NetworkManager is the network configuration daemon used on Fedora and many other distributions. It provides a consistent way to configure network interfaces and other network-related aspects on a Linux machine. Among many other features, it provides an Internet connection sharing functionality that can be very useful in different situations.

For example, suppose you are in a place without Wi-Fi and want to share your laptop’s mobile data connection with friends. Or maybe you have a laptop with broken Wi-Fi and want to connect it via Ethernet cable to another laptop; in this way the first laptop becomes able to reach the Internet and perhaps download new Wi-Fi drivers.

In cases like these it is useful to share Internet connectivity with other devices. On smartphones this feature is called “Tethering” and allows sharing a cellular connection via Wi-Fi, Bluetooth or a USB cable.

This article shows how the connection sharing mode offered by NetworkManager can be set up easily; in addition, it explains how to configure some more advanced features for power users.

How connection sharing works

The basic idea behind connection sharing is that there is an upstream interface with Internet access and a downstream interface that needs connectivity. These interfaces can be of a different type—for example, Wi-Fi and Ethernet.

If the upstream interface is connected to a LAN, it is possible to configure our computer to act as a bridge; a bridge is the software version of an Ethernet switch. In this way, you “extend” the LAN to the downstream network. However this solution doesn’t always play well with all interface types; moreover, it works only if the upstream network uses private addresses.

A more general approach consists of assigning a private IPv4 subnet to the downstream network and turning on routing between the two interfaces. In this case, NAT (Network Address Translation) is also necessary. The purpose of NAT is to modify the source of packets coming from the downstream network so that they look as if they originate from your computer.

It would be inconvenient to manually configure all the devices in the downstream network. Therefore, you need a DHCP server to assign addresses automatically and to configure hosts to route all traffic through your computer. In addition, if the sharing happens through Wi-Fi, the wireless network adapter must be configured as an access point.

There are many tutorials out there explaining how to achieve this, with different degrees of difficulty. NetworkManager hides all this complexity and provides a shared mode that makes this configuration quick and convenient.

Configuring connection sharing

The configuration paradigm of NetworkManager is based on the concept of connection (or connection profile). A connection is a group of settings to apply on a network interface.

This article shows how to create and modify such connections using nmcli, the NetworkManager command line utility, and the GTK connection editor. If you prefer, other tools are available such as nmtui (a text-based user interface), GNOME control center or the KDE network applet.

A reasonable prerequisite to share Internet access is to have it available in the first place; this implies that there is already a NetworkManager connection active. If you are reading this, you probably already have a working Internet connection. If not, see this article for a more comprehensive introduction to NetworkManager.

The rest of this article assumes you already have a Wi-Fi connection profile configured and that connectivity must be shared over an Ethernet interface enp1s0.

To enable sharing, create a connection for interface enp1s0 and set the ipv4.method property to shared instead of the usual auto:

$ nmcli connection add type ethernet ifname enp1s0 ipv4.method shared con-name local

The shared IPv4 method does multiple things:

  • enables IP forwarding for the interface;
  • adds firewall rules and enables masquerading;
  • starts dnsmasq as a DHCP and DNS server.
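For the curious, the combined effect is roughly what a manual setup along these lines would achieve (an illustrative sketch only; NetworkManager manages its own rules, and the interface names are the ones assumed in this article’s example):

```shell
# Allow packets to be routed between interfaces
sysctl -w net.ipv4.ip_forward=1

# Masquerade traffic leaving through the upstream Wi-Fi interface
iptables -t nat -A POSTROUTING -o wlp4s0 -j MASQUERADE

# Run a DHCP/DNS server on the downstream interface
dnsmasq --interface=enp1s0 --dhcp-range=10.42.0.10,10.42.0.100
```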

NetworkManager connection profiles, unless configured otherwise, are activated automatically. The new connection you have added should already be active in the device status:

$ nmcli device
DEVICE         TYPE      STATE         CONNECTION
enp1s0         ethernet  connected     local
wlp4s0         wifi      connected     home-wifi

If that is not the case, activate the profile manually with nmcli connection up local.

Changing the shared IP range

Now look at how NetworkManager configured the downstream interface enp1s0:

$ ip -o addr show enp1s0
8: enp1s0 inet 10.42.0.1/24 brd 10.42.0.255 ...

10.42.0.1/24 is the default address set by NetworkManager for a device in shared mode. Addresses in this range are also distributed via DHCP to other computers. If the range conflicts with other private networks in your environment, change it by modifying the ipv4.addresses property:

$ nmcli connection modify local ipv4.addresses 192.168.42.1/24

Remember to activate the connection profile again after any change, so the new values are applied:

$ nmcli connection up local

$ ip -o addr show enp1s0
8: enp1s0 inet 192.168.42.1/24 brd 192.168.42.255 ...

If you prefer using a graphical tool to edit connections, install the nm-connection-editor package. Launch the program and open the connection to edit; then select the Shared to other computers method in the IPv4 Settings tab. Finally, if you want to use a specific IP subnet, click Add and insert an address and a netmask.
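Before settling on a new range, you can quickly verify that it doesn’t overlap a subnet you already use. This has nothing to do with NetworkManager itself; it’s just a convenience check with Python’s ipaddress module (the subnets below are example values):

```python
import ipaddress

# The subnet we plan to assign to the shared connection
shared = ipaddress.ip_network("192.168.42.0/24")

# Subnets already present in the environment (example values)
existing = [ipaddress.ip_network("10.42.0.0/24"),
            ipaddress.ip_network("192.168.1.0/24")]

# overlaps() reports whether two networks share any addresses
conflict = any(shared.overlaps(net) for net in existing)
print("conflict" if conflict else "safe to use")  # safe to use
```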


Adding custom dnsmasq options

In case you want to further extend the dnsmasq configuration, you can add new configuration snippets in /etc/NetworkManager/dnsmasq-shared.d/. For example, the following configuration:

dhcp-option=option:ntp-server,192.168.42.1
dhcp-host=52:54:00:a4:65:c8,192.168.42.170

tells dnsmasq to advertise an NTP server via DHCP. In addition, it assigns a static IP to a client with a given MAC address.

There are many other useful options in the dnsmasq manual page. However, remember that some of them may conflict with the rest of the configuration; so please use custom options only if you know what you are doing.

Other useful tricks

If you want to set up sharing via Wi-Fi, you could create a connection in Access Point mode, manually configure the security, and then enable connection sharing. Actually, there is a quicker way: the hotspot mode:

$ nmcli device wifi hotspot [ifname $dev] [password $pw]

This does everything needed to create a functional access point with connection sharing. The interface and password options are optional; if they are not specified, nmcli chooses the first Wi-Fi device available and generates a random password. Use the nmcli device wifi show-password command to display information for the active hotspot; the output includes the password and a text-based QR code that you can scan with a phone.


What about IPv6?

Until now this article discussed sharing IPv4 connectivity. NetworkManager also supports sharing IPv6 connectivity through DHCP prefix delegation. Using prefix delegation, a computer can request additional IPv6 prefixes from the DHCP server. Those public routable addresses are assigned to local networks via Router Advertisements. Again, NetworkManager makes all this easier through the shared IPv6 mode:

$ nmcli connection modify local ipv6.method shared

Note that IPv6 sharing requires support from the Internet Service Provider, which should give out prefix delegations through DHCP. If the ISP doesn’t provide delegations, IPv6 sharing will not work; in that case NetworkManager will report in the journal that no prefixes are available:

policy: ipv6-pd: none of 0 prefixes of wlp1s0 can be shared on enp1s0

Also, note that the Wi-Fi hotspot command described above only enables IPv4 sharing; if you want to also use IPv6 sharing you must edit the connection manually.

Conclusion

Remember, the next time you need to share your Internet connection, NetworkManager will make it easy for you.

If you have suggestions on how to improve this feature or any other feedback, please reach out to the NM community using the mailing list, the issue tracker, or by joining the #nm IRC channel on freenode.

It's templates all the way down - part 2

Posted by Peter Hutterer on June 17, 2020 04:37 AM

In Part 1 I showed you how to create your own distribution image using the freedesktop.org CI templates. In Part 2, we'll go a bit further by truly embracing nested images.

Our assumption here is that we have two projects (or jobs), with the second one relying heavily on the first one. For example, the base project and a plugin, or a base project and its language bindings. What we'll get out of this blog post is a setup where we have

  • a base image in the base project
  • an image extending that base image in a different project
  • automatic rebuilds of that extended image when the base image changes
And none of your contributors have to care about this. It's all handled automatically, and filing an MR against a project will build against the right image. So let's get started.

Our base project has CI that pushes an image to its registry. The .gitlab-ci.yml contains something like this:


.fedora32:
  variables:
    FDO_DISTRIBUTION_VERSION: '32'
    FDO_DISTRIBUTION_TAG: 'base.0'

build-img:
  extends:
    - .fedora32
    - .fdo.container-build@fedora
  variables:
    FDO_DISTRIBUTION_PACKAGES: "curl wget"

This will build a fedora/32:base.0 image in the project's container registry. That image is built once and then re-used by any job extending .fdo.distribution-image@fedora. So far, so Part 1.

Now, the second project needs to test things on top of this base image, for example language bindings for rust. You want to use the same image that the base project uses (and has successfully completed its CI on) but you need some extra packages or setup. This is where the FDO_BASE_IMAGE comes in. In our dependent project, we have this:


.fedora32:
  variables:
    FDO_DISTRIBUTION_VERSION: '32'
    FDO_DISTRIBUTION_TAG: 'rust.0'

build-rust-image:
  extends:
    - .fedora32
    - .fdo.container-build@fedora
  variables:
    FDO_BASE_IMAGE: "registry.freedesktop.org/baseproject/name/fedora/32:base.0"
    # extra packages we want to install and things we need to set up
    FDO_DISTRIBUTION_PACKAGES: "rust cargo"
    FDO_DISTRIBUTION_EXEC: "bash -x some-setup-script.sh"

test-rust:
  extends:
    - .fedora32
    - .fdo.distribution-image@fedora
  script:
    - cargo build myproject-bindings

And voila, you now have two images: the base image with curl and wget in the base project and an extra image with rust and cargo in the dependent project. And all that is required is to reference the FDO_BASE_IMAGE, everything else is the same. Note how the FDO_BASE_IMAGE is a full path in this example since we assume it's in a different project. For dependent images within the same project, you can just use the image path without the host.

The dependency only matters while the image is built, after that the dependent image is just another standalone image. So even if the base project removes the base image, you still have yours to test on.

But eventually you will need to change the base image and you want the dependent image to update as well. The best solution here is to have a CI job as part of the base repo that pokes the dependent repo's CI whenever the base image updates. The CI templates add the pipeline id as label to an image when it is built. In your base project, you can thus have a job like this:


poke-dependents:
  extends:
    - .fedora32
    - .fdo.distribution-image@fedora
  image: something-with-skopeo-and-jq
  script:
    # FDO_DISTRIBUTION_IMAGE still has indirections
    - DISTRO_IMAGE=$(eval echo ${FDO_DISTRIBUTION_IMAGE})
    # retrieve info from the registry and extract the pipeline id
    - JSON_IMAGE=$(skopeo inspect docker://$DISTRO_IMAGE)
    - IMAGE_PIPELINE_ID=$(echo $JSON_IMAGE | jq -r '.Labels["fdo.pipeline_id"]')
    - |
      if [[ x"$IMAGE_PIPELINE_ID" == x"$CI_PIPELINE_ID" ]]; then
        curl -X POST \
          -F "token=$AUTH_TOKEN_VALUE" \
          -F "ref=master" \
          -F "variables[SOMEVARIABLE]=somevalue" \
          https://gitlab.freedesktop.org/api/v4/projects/dependent${SLASH}project/trigger/pipeline
      fi
  variables:
    SLASH: "%2F"

Let's dissect this: First, we use the .fdo.distribution-image@fedora template to get access to FDO_DISTRIBUTION_IMAGE. We don't need to use the actual image though, anything with skopeo and jq will do. Then we fetch the pipeline id label from the image and compare it to the current pipeline ID. If it is the same, our image was rebuilt as part of the pipeline and we poke the other project's pipeline with a SOMEVARIABLE set to somevalue. The auth token is a standard GitLab token you need to create to allow triggering the pipeline in the dependent project.

In that dependent project you can have a job like this:


rebuild-extra-image:
  extends: build-rust-image
  rules:
    - if: '$SOMEVARIABLE == "somevalue"'
  variables:
    FDO_FORCE_REBUILD: 1

This job is only triggered when the variable is set, and it will force a rebuild of the container image. If you want custom rebuilds of images, set the variables accordingly.

So, as promised above, we now have a base image and a separate image building on that, together with auto-rebuild hooks. The gstreamer-plugins-rs project uses this approach. The base image is built by gstreamer-rs during its CI run which then pokes gstreamer-plugins-rs to rebuild selected dependent images.

The above is most efficient when the base project knows of the dependent projects. Where this is not the case, the dependent project will need a scheduled pipeline to poll the base project and extract the image IDs from that, possibly using creation dates and whatnot. We'll figure that out when we have a use-case for it.

On inclusive language: an extended metaphor involving parties because why not

Posted by Adam Williamson on June 17, 2020 12:57 AM

So there's been some discussion within Red Hat about inclusive language lately, obviously related to current events and the worldwide protests against racism, especially anti-Black racism. I don't want to get into any internal details, but in one case we got into some general debate about the validity of efforts to use more inclusive language. I thought up this florid party metaphor, and I figured instead of throwing it at an internal list, I'd put it up here instead. If you have constructive thoughts on it, go ahead and mail me or start a twitter thread or something. If you have non-constructive thoughts on it, keep 'em to yourself!

Before we get into my pontificating, though, here's some useful practical resources if you just want to read up on how you can make the language in your projects and docs more inclusive:

To provide a bit of context: I was thinking about a suggestion that people promoting the use of more inclusive language are "trying to be offended". And here's where my mind went!

Imagine you are throwing a party. You send out the invites, order in some hors d'ouevres (did I spell that right? I never spell that right), queue up some Billie Eilish (everyone loves Billie Eilish, it's a scientific fact), set out the drinks, and wait for folks to arrive. In they all come, the room's buzzing, everyone seems to be having a good time, it's going great!

But then you notice (or maybe someone else notices, and tells you) that most of the people at your party seem to be straight white dudes and their wives and girlfriends. That's weird, you think, I'm an open minded modern guy, I'd be happy to see some Black folks and maybe a cute gay couple or something! What gives? I don't want people to think I'm some kind of racist or sexist or homophobe or something!

So you go and ask some non-white folks and some non-straight folks and some non-male folks what's going on. What is it? Is it me? What did I do wrong?

Well, they say, look, it's a hugely complex issue, I mean, we could be here all night talking about it. And yes, fine, that broken pipeline outside your house might have something to do with it (IN-JOKE ALERT). But since you ask, look, let us break this one part of it down for you.

You know how you've got a bouncer outside, and every time someone rolls up to the party he looks them up and down and says "well hi there! What's your name? Is it on the BLACKLIST or the WHITELIST?" Well...I mean...that might put some folks off a bit. And you know how you made the theme of the party "masters and slaves"? You know, that might have something to do with it too. And, yeah, you see how you sent all the invites to men and wrote "if your wife wants to come too, just put her name in your reply"? I mean, you know, that might speak to some people more than others, you hear what I'm saying?

Now...this could go one of two ways. On the Good Ending, you might say "hey, you know what? I didn't think about that. Thanks for letting me know. I guess next time I'll maybe change those things up a bit and maybe it'll help. Hey thanks! I appreciate it!"

and that would be great. But unfortunately, you might instead opt for the Bad Ending. In the Bad Ending, you say something like this:

"Wow. I mean, just wow. I feel so attacked here. It's not like I called it a 'blacklist' because I'm racist or something. I don't have a racist bone in my body, why do you have to read it that way? You know blacklist doesn't even MEAN that, right? And jeez, look, the whole 'masters and slaves' thing was just a bit of fun, it's not like we made all the Black people the slaves or something! And besides that whole thing was so long ago! And I mean look, most people are straight, right? It's just easier to go with what's accurate for most people. It's so inconvenient to have to think about EVERYBODY all the time. It's not like I'm homophobic or anything. If gay people would just write back and say 'actually I have a husband' or whatever they'd be TOTALLY welcome, I'm all cool with that. God, why do you have to be so EASILY OFFENDED? Why do you want to make me feel so guilty?"

So, I mean. Out of Bad Ending Person and Good Ending Person...whose next party do we think is gonna be more inclusive?

So obviously, in this metaphor, Party Throwing Person is Red Hat, or Google, or Microsoft, or pretty much any company that says "hey, we accept this industry has a problem with inclusion and we're trying to do better", and the party is our software and communities and events and so on. If you are looking at your communities and wondering why they seem to be pretty white and male and straight, and you ask folks for ideas on how to improve that, and they give you some ideas...just listen. And try to take them on board. You asked. They're trying to help. They are not saying you are a BAD PERSON who has done BAD THINGS and OFFENDED them and you must feel GUILTY for that. They're just trying to help you make a positive change that will help more folks feel more welcome in your communities.

You know, in a weird way, if our Party Throwing Person wasn't quite Good Ending Person or Bad Ending person but instead said "hey, you know what, I don't care about women or Black people or gays or whatever, this is a STRAIGHT WHITE GUY PARTY! WOOOOO! SOMEONE TAP THAT KEG!"...that's almost not as bad. At least you know where you stand with that. You don't feel like you're getting gaslit. You can just write that idiot and their party off and try and find another. The kind of Bad Ending Person who keeps insisting they're not racist or sexist or homophobic and they totally want more minorities to show up at their party but they just can't figure out why they all seem to be so awkward and easily offended and why they want to make poor Bad Ending Person feel so guilty...you know...that gets pretty tiring to deal with sometimes.

F32-20200615 updated live iso released

Posted by Ben Williams on June 16, 2020 03:14 PM

The Fedora Respins SIG is pleased to announce the latest release of the updated F32-20200615-Live ISOs, carrying the 5.6.18-300 kernel.

This set of updated ISOs will save a considerable amount of updates after a new install. (New installs of Workstation have about 900 MB of updates.)

A huge thank you goes out to IRC nicks dowdle, Southern-Gentleman and vdamewood for testing these ISOs.

This set also includes i3LXDE and i3XFCE for testing for the Fedora i3 SIG. And as always, our ISOs can be found at http://tinyurl.com/Live-respins

I think the final version of relayd functionality written in C (arp & dhcp)

Posted by Jon Chiappetta on June 16, 2020 03:12 PM

So it took me a while to update and rewrite the dhcp relayd functionality that I made in Python previously. This new C file can relay and rebroadcast both ARP and DHCP packets via raw sockets. For DHCP relaying, it has to insert the bridge's IP address into the request so that the server replies back to us and we can then forward the reply on (this lets us run only one DHCP server in total on the network). The last interface specified in the list is designated as the DHCP server interface, to which incoming requests are sent.
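The trick described here, where the relay writes its own address into the request so the server replies to it, can be sketched in a few lines of Python. This is an illustrative sketch based on the standard BOOTP field layout, not the actual C code linked below:

```python
import socket

# In the BOOTP/DHCP header, giaddr starts at byte offset 24:
# op(1) + htype(1) + hlen(1) + hops(1) + xid(4) + secs(2) + flags(2)
# + ciaddr(4) + yiaddr(4) + siaddr(4) = 24 bytes before giaddr.
GIADDR_OFFSET = 24

def set_giaddr(packet: bytes, relay_ip: str) -> bytes:
    """Return a copy of a BOOTP/DHCP packet with the relay (gateway)
    address filled in, so the server knows where to send its reply."""
    buf = bytearray(packet)
    buf[GIADDR_OFFSET:GIADDR_OFFSET + 4] = socket.inet_aton(relay_ip)
    return bytes(buf)

# Minimal BOOTPREQUEST header: op=1 (request), everything else zeroed.
request = bytes([1]) + bytes(239)
relayed = set_giaddr(request, "192.168.1.1")
print(socket.inet_ntoa(relayed[GIADDR_OFFSET:GIADDR_OFFSET + 4]))  # prints 192.168.1.1
```

On the return path a real relay clears giaddr again and rebroadcasts the reply on the client-facing interface.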

I also tried to reduce the number of system calls made by reading the ARP table and routing table proc files instead (only one syscall is needed to replace host route entries on the bridge router).

It’s been a while since I used select with multiple sockets, but this compiled version uses less memory while running than the Python version, although the Python version is easier to read and maintain (it depends on your needs).

https://github.com/stoops/arprb/blob/master/dhcprb.c?ts=4

I’ll run this for a while and see how it performs!

Jon C

Onion service v2 deprecation timeline

Posted by Kushal Das on June 16, 2020 02:26 PM

On Monday, June 15, the developers of the Tor Project announced the initial plan for the deprecation of Onion services v2. You can identify v2 addresses easily, as they are only 16 characters long, whereas v3 addresses are 56 characters long.
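That length check is easy to script. Here is a tiny Python sketch; the dummy addresses are just placeholders for real base32 labels:

```python
def onion_version(address: str) -> int:
    """Guess the onion service version from the address length:
    v2 labels are 16 base32 characters, v3 labels are 56."""
    label = address.rsplit(".onion", 1)[0]
    if len(label) == 16:
        return 2
    if len(label) == 56:
        return 3
    raise ValueError(f"not a recognized onion address: {address}")

print(onion_version("a" * 16 + ".onion"))  # prints 2
print(onion_version("a" * 56 + ".onion"))  # prints 3
```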

Why?

The v2 services use RSA-1024, whereas v3 uses Ed25519, which means better cryptography. We can also have offline keys for the onion service. You can read about all the other benefits in the v3 spec.

Timeline

According to the email to the list, the following is the current timeline:

  • On 2020-09-15 with 0.4.4.x release Tor will start informing v2 onion service operators that v2 is deprecated.
  • On 2021-07-15 with 0.4.6.x release Tor will stop supporting v2 onion addresses, and all related source code will be removed.
  • On 2021-10-15 there will be a new stable version release which will disable using v2 onion services on the Tor network.

How can you prepare as an Onion service provider?

If you are using/providing any v2 onion service, you should enable a v3 service for the same service. This will let you test your v3 configuration while keeping v2 on, and then you can retire your v2 address. If you need help setting up an authenticated v3 service, you can follow this blog post. I wrote another post which explains how you can generate the keys using the Python cryptography module.
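As a taste of what that post covers, generating an Ed25519 key with the third-party cryptography package looks roughly like this. This is a minimal sketch; deriving the actual onion address from the key takes a few more steps:

```python
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Generate a fresh Ed25519 key pair; v3 onion addresses are derived
# from the 32-byte raw public key.
private_key = Ed25519PrivateKey.generate()
public_bytes = private_key.public_key().public_bytes(
    encoding=serialization.Encoding.Raw,
    format=serialization.PublicFormat.Raw,
)
print(len(public_bytes))  # prints 32
```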

Read the full announcement in the list archive.

Simplifying CA handling in syslog-ng TLS connections

Posted by Peter Czanik on June 16, 2020 12:42 PM

When talking to users about TLS-encrypted message transfer, almost everyone immediately complains about configuring a certificate authority (CA) in syslog-ng. You needed to create a hash of the CA file and create a symbolic link to it based on that hash. Not anymore. While this old method is still available, there is now a much easier way: the new ca-file() option.

Before you begin

To be able to use the new ca-file() option, you need syslog-ng 3.27 or later. Some distros already have it in their official repositories, others carry older versions. For some of the most popular Linux distributions, you can use 3rd party repositories to install an up-to-date syslog-ng. Check https://syslog-ng.com/3rd-party-binaries for details.

TLS encryption, and thus the ca-dir() option, was added to syslog-ng even before I joined Balabit, so well over a decade ago. On the other hand, it took another few years before syslog-ng was compiled with TLS support by default. If you recall the good old times: syslog-ng was in /sbin while OpenSSL was installed under /usr, which for historic reasons could reside on a different partition. This restriction was eventually lifted in distros, and soon after, OpenSSL became a mandatory dependency, around syslog-ng 3.8. To make a long story short, TLS and the old ca-dir() method might work with syslog-ng 3.0+ and are definitely available in syslog-ng 3.8+.

Using ca-dir()

The old way of referring to a CA file from the syslog-ng configuration was to use ca-dir(). In this case, you did not refer to the file containing the CA information directly, but rather to its directory. And referring to the directory was not enough: as mentioned earlier, you needed to create a hash of the CA file and a symbolic link to the CA file based on that hash. If the file was called /etc/syslog-ng/ca.d/cacert.pem, the following commands were necessary:

# cd /etc/syslog-ng/ca.d/
# openssl x509 -noout -hash -in cacert.pem
2704bf71
# ln -s cacert.pem 2704bf71.0

Note that the hash is different for each file and when you create the link, you need to append a .0 to the end of the file name.

While this is well documented and not terribly difficult, it still led to many problems and confusion.

Using ca-file()

Starting with syslog-ng version 3.27, you can use the ca-file() option for TLS connections instead of ca-dir(). In this case, you provide the full path to the CA file instead of the directory.

ca-file("/etc/syslog-ng/ca.d/cacert.pem")

And you are done, no other external commands are necessary.

If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or even chat with us. For a list of possibilities, check our GitHub page under the “Community” section at https://github.com/syslog-ng/syslog-ng. On Twitter, I am available as @PCzanik.

Ecto embeds have IDs by default

Posted by Josef Strzibny on June 16, 2020 12:00 AM

When you use Ecto’s embeds_one to embed another schema backed by JSONB, Ecto will assign an ID and complain if you try to replace this embedded record without it.

Imagine the following embedded EmbeddedData within Record:

defmodule Record do
  use Ecto.Schema

  schema "records" do
    embeds_one :data, EmbeddedData
  end
end

defmodule EmbeddedData do
  use Ecto.Schema

  embedded_schema do
    field :field, :string
    field :another_field, :string
  end
end

Although not directly specified in the embedded_schema macro call, Ecto will add an ID attribute to the embedded data, which will also be populated automatically. This is the default behaviour.

As the Ecto docs put it:

The embedded may or may not have a primary key. Ecto uses the primary keys to detect if an embed is being updated or not. If a primary key is not present, :on_replace should be set to either :update or :delete if there is a desire to either update or delete the current embed when a new one is set.

And later:

Primary keys are automatically set up for embedded schemas as well, defaulting to {:id, :binary_id, autogenerate: true}

This tells us to expect the ID to be named id, to be autogenerated, and to be a binary ID (a string in Elixir).

If we wanted to create our Record together with its embedded data, we would most likely use cast_embed in the parent changeset as follows:

defmodule Record do
  ...
  def changeset(%Record{} = record, attrs \\ %{}) do
    record
    |> cast(attrs, [])
    |> cast_embed(:data)
  end
  ...

Once saved using such a changeset, the JSONB would look something like:

{
  "id": "generated-id-....",
  "field": "value",
  "another_field": "another value"
}

The embedded data field of the Record ends up including an id. But why is that important?

If we work further with our Record struct and try to update it by providing only our original data struct without an ID, Ecto won’t know how to continue:

...
By default it is not possible to replace or delete embeds and
associations during `cast`. Therefore Ecto requires all existing
data to be given on update. Failing to do so results in this
error message.

If you want to replace data or automatically delete any data
not sent to `cast`, please set the appropriate `:on_replace`
option when defining the relation. The docs for `Ecto.Changeset`
covers the supported options in the "Associations, embeds and on
replace" section.

However, if you don't want to allow data to be replaced or
deleted, only updated, make sure that:

 * If you are attempting to update an existing entry, you
   are including the entry primary key (ID) in the data.

 * If you have a relationship with many children, at least
   the same N children must be given on update.

This happens when you try to assign the data field as if it were a new object.

And as José Valim puts it in one of the relevant GitHub issues:

The reason why we require the ID is to know when you are replacing or keeping the same embed. In any case, it is not something we can change, as it would be a breaking change. But the ID should be generated by default.

So if we are using the primary key for our embed (the default behaviour) and we just want to update this JSONB, we need to preserve its ID.

To demonstrate, the following changeset will preserve this ID automatically if the original record is passed to it:

  ...
  def changeset(%Record{} = record, attrs \\ %{}) do
    id =
      if record.data do
        record.data.id
      else
        nil
      end

    data = attrs["data"] || %{}
    new_data = Map.merge(%{id: id}, data)
    attrs = Map.merge(attrs, %{data: new_data})

    record
    |> cast(attrs, [])
    |> cast_embed(:data)
  end
  ...

[Howto] My own mail & groupware server, part 3: Git server

Posted by Roland Wolters on June 15, 2020 01:23 PM
<figure class="alignright size-thumbnail"></figure>

Running your own mail and groupware server can be challenging. I recently had to re-create my own setup from the ground up and am describing the steps in a blog post series. This post is #3 of the series and covers the integration of an additional Git server.

This post is all about setting up an additional Git server to the existing mail server. Read about the background to this setup in the first post, part 1: what, why, how?, and all about the mail server setup itself in the second post, part 2: initial mail server setup.

Gitea as Git server

I use Git heavily, all the time. And enough of that information is sensitive that I feel more comfortable hosting it only on infrastructure I control. So for the new setup I also wanted to have a Git server, as fast as possible.

In the past I used GitLab as my Git server, but it is very resource intensive and just overkill for my use cases. Thus, years ago I replaced GitLab with Gitea – a lightweight, painless self-hosted Git service. It is quickly set up, simple to use, nevertheless offers all relevant Git features, and simply does its job. Gitea itself is a fork of Gogs, whose development was not really community friendly. These days Gitea is a far more active and prospering project than Gogs.

Background: Nginx as reverse proxy in Mailu

So how do you “attach” Gitea to a running Mailu infrastructure? Mailu itself comes with a set of defined services, and that’s it; there is no plugin or module system to extend it. However, the project does offer special “overrides” directories where additional configuration can be placed – and this applies to Nginx as well! That way, a service can be placed right next to the other Mailu services, behind the same reverse proxy, and benefit from the already existing setup, for example the certificate regeneration. Also, there is no conflict with the already used ports 80 and 443, etc.

Overrides can be placed in /data/mailu/overrides/nginx. They are basically just snippets of Nginx configuration. Note though that they are included within the main server block! That means they can only work on locations, not on server names. This is somewhat unfortunate since I used to address all my old services via sub-domains: git.bayz.de, nc.bayz.de, etc. With the new setup and the limitation to locations this is not an option anymore; everything has to work on different paths: bayz.de/git, bayz.de/nc, etc.

This is somewhat unfortunate, because that also meant that I had to reconfigure clients, and also ask others to reconfigure their clients when using my infrastructure. I would be happy to get back to a pure sub-domain based addressing, but I don’t see how this could be possible without changing the actual Nginx image.

Adding Gitea entry to Nginx Override

Having said that, to add Gitea to Nginx, create this file: /data/mailu/overrides/nginx/git.conf

location /gitea/ {
  proxy_pass http://git:3000/;
}

And that’s it already. No further configuration is needed, since Mailu already configures Nginx with reasonable defaults.

This also gives a first hint that it is pretty easy to add further services – I will cover more examples in this ongoing blog post series.

Additional entry to docker compose

To start Gitea itself, add it to Mailu’s docker-compose.yml:

  # gitea

  git:
    image: gitea/gitea:latest
    restart: always
    env_file: mailu.env
    depends_on:
      - resolver
    dns:
      - 192.168.203.254
    volumes:
      - /data/gitea:/data
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    ports:
      - "22:22"

Note the shared volumes: that way, the Gitea configuration file will be written to your storage at /data/gitea/gitea/conf/app.ini.

Also, we want to set the UID and GID for Gitea via environment variables. Set this in your mailu.env:

###################################
# Gitea settings
###################################
USER_UID=1000
USER_GID=1000

Setting up Gitea basic configuration

Now let’s configure Gitea. It is possible to pre-create a full Gitea configuration and start the container with it. However, documentation on that is sparse, and in my tests there were always problems.

So in my case, I just started and stopped the container (docker-compose up and down) a few times, edited some configuration, registered an admin user once via the GUI, and was done. While this worked, I can only recommend closely tracking the logs during that time to ensure that no one else is accessing the container and doing mischief!

So, the first step is to start the new Docker compose service. This will write a first vanilla configuration for Gitea. Afterwards, add the correct domain information in the [server] section of the Gitea configuration file app.ini:
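The exact values depend on your setup; a minimal sketch of that [server] section, assuming the lisa.bayz.de host and the /gitea location used elsewhere in this post, could look like this:

```
[server]
DOMAIN   = lisa.bayz.de
ROOT_URL = https://lisa.bayz.de/gitea/
```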

Note that the ROOT_URL value ensures the required rewrite of all requests and links so that the setup works flawlessly with the above mentioned Nginx configuration!

Next, bring the service down and up again (docker-compose down and up), log in to your new service (here: git.bayz.de/gitea) and register a new admin user. Note that here you also have to pick the database option. For small systems with only very few concurrent connections, SQLite is fine. If you will serve more users or automated access, pick PostgreSQL. However, to make that work you need to bring up another PostgreSQL container. One of the next posts will introduce one, so you might want to re-think your setup then.

Directly after this admin registration is done, change the value of DISABLE_REGISTRATION to true in the Gitea configuration file app.ini. Stop and start the service again, and no new (external) users can register anymore.

But how do we register new users now?

Central services authentication in Mailu: mail

One of the major hassles with my last setup was authentication. I started with a full-blown OpenLDAP years ago, which was already a pain to manage and maintain. Moving over to FreeIPA meant that I had better interfaces and even a UI, but it was still a complex, tricky service. Also, while almost every service out there can be connected to LDAP, doing so is not always easy or pleasant. And given that I only have a few users on my system, it is hardly worth the trouble.

Mailu offers an interesting approach here: users are stored in a DB, and external services are asked to authenticate against the email services (IMAP, SMTP). I was surprised to learn that indeed many services out there support this or have plugins for that.

Gitea can authenticate against SMTP sources, and I decided to go that route:

  • In Gitea, access “Site Administration”
  • Click on “Authentication Sources”
  • Pick the blue button “Add Authentication Source”
  • As “Authentication Type“, choose SMTP
  • Give it a name
  • As “SMTP Authentication Type“, enter LOGIN
  • As “SMTP Host“, provide the external host name (more on that further down below): lisa.bayz.de
  • Pick the right “SMTP Port“, 587
  • And limit the “Allowed Domains” if you want, in my case to bayz.de
  • Of course, tick the check box “Enable TLS Encryption” and also the check box “This Authentication Source is Activated”

After this is done, log out of Gitea and log in with an existing mail user. It should just work! And all that without any trace of LDAP! Awesome, right?

A word about the SMTP host in the above configuration: do not try to enter the SMTP docker compose service directly here. This will not work: port 587 is handled by the Nginx proxy, which acts as the mail proxy and redirects auth mail requests to the admin portal. The internal SMTP container does not even listen on port 587.

What’s next?

With my private Git server back to life, I felt slightly better again. Now that I had the infrastructure at hand, I needed to tackle the cloud/file-sharing part of it all, to also lay the foundations for the groupware pieces: Nextcloud.

More about that in the next post.

Featured image by Myriam Zilles from Pixabay


Compressed RAM disks

Posted by Richard W.M. Jones on June 15, 2020 10:32 AM

There was a big discussion last week about whether zram swap should be the default in a future version of Fedora.

This led me to think about the RAM disk implementation in nbdkit. nbdkit up to 1.20 supports giant virtual disks, up to 8 exabytes, using a sparse array implemented with a 2-level page table. However, it’s still a RAM disk, and so you can’t actually store more real data in these disks than you have available RAM (plus swap).

But what if we compressed the data? There are some fine, very fast compression libraries around nowadays — I’m using Facebook’s Zstandard — so the overhead of compression can be quite small, and this lets you make limited RAM go further.

So I implemented allocators for nbdkit ≥ 1.22, including:

$ nbdkit memory 1T allocator=zstd

Compression ratios can be really good. I tested this by creating a RAM disk and filling it with a filesystem containing text and source files, and was getting 10:1 compression. (Note that filesystems start with very regular, easily compressible metadata, so you’d expect this ratio to quickly drop if you filled the filesystem up with a lot of files).
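You can get a feel for why such regular data compresses so well with a quick experiment. This sketch uses Python's stdlib zlib as a stand-in for Zstandard, which achieves similar or better ratios on this kind of input:

```python
import zlib

# Highly regular records, loosely resembling fresh filesystem metadata,
# compress extremely well; random data would not.
regular = b"inode entry: size=4096 mode=0644 owner=root\n" * 1000
compressed = zlib.compress(regular)
ratio = len(regular) / len(compressed)
print(ratio > 10)  # prints True
```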

The compression overhead is small, although the current nbdkit-memory-plugin isn’t very smart about locking so it has rather poor performance under multi-threaded loads anyway. (A fun little project to fix that for someone who loves pthread and C.)

I also implemented allocator=malloc which is a non-sparse direct-mapped RAM disk. This is simpler and a bit faster, but has rather obvious limitations compared to using the sparse allocator.

LaTeX Typesetting – Part 1 (Lists)

Posted by Fedora Magazine on June 15, 2020 07:00 AM

This series builds on the previous articles: Typeset your docs with LaTeX and TeXstudio on Fedora and LaTeX 101 for beginners. This first part of the series is about LaTeX lists.

Types of lists

LaTeX lists are enclosed environments, and each item in the list can take a line of text to a full paragraph. There are three types of lists available in LaTeX. They are:

  • Itemized: unordered or bullet
  • Enumerated: ordered
  • Description: descriptive

Creating lists

To create a list, prefix each list item with the \item command. Precede and follow the list of items with the \begin{<type>} and \end{<type>} commands respectively where <type> is substituted with the type of the list as illustrated in the following examples.

Itemized list

\begin{itemize}
    \item Fedora
    \item Fedora Spin
    \item Fedora Silverblue
\end{itemize}
<figure class="wp-block-image size-large"></figure>

Enumerated list

\begin{enumerate}
    \item Fedora CoreOS
    \item Fedora Silverblue
    \item Fedora Spin
\end{enumerate}
<figure class="wp-block-image size-large"></figure>

Descriptive list

\begin{description}
    \item[Fedora 6] Code name Zod
    \item[Fedora 8] Code name Werewolf
\end{description}
<figure class="wp-block-image size-large"></figure>

Spacing list items

The default spacing can be customized by adding \usepackage{enumitem} to the preamble. The enumitem package provides the noitemsep option and the \itemsep command, which you can use on your lists as illustrated below.

Using the noitemsep option

Enclose the noitemsep option in square brackets and place it on the \begin command as shown below. This option removes the default spacing.

\begin{itemize}[noitemsep]
    \item Fedora
    \item Fedora Spin
    \item Fedora Silverblue
\end{itemize}
<figure class="wp-block-image size-large"></figure>

Using the \itemsep command

The \itemsep command must be suffixed with a length (for example, 0.75pt) to indicate how much space there should be between the list items.

\begin{itemize} \itemsep0.75pt
    \item Fedora Silverblue
    \item Fedora CoreOS
\end{itemize}
<figure class="wp-block-image size-large"></figure>

Nesting lists

LaTeX supports nested lists up to four levels deep as illustrated below.

Nested itemized lists

\begin{itemize}[noitemsep]
    \item Fedora Versions
    \begin{itemize}
        \item Fedora 8
        \item Fedora 9
        \begin{itemize}
            \item Werewolf
            \item Sulphur
            \begin{itemize}
                \item 2007-05-31
                \item 2008-05-13
            \end{itemize}
        \end{itemize}
    \end{itemize}
    \item Fedora Spin
    \item Fedora Silverblue
\end{itemize}
<figure class="wp-block-image size-large"></figure>

Nested enumerated lists

\begin{enumerate}[noitemsep]
    \item Fedora Versions
    \begin{enumerate}
        \item Fedora 8
        \item Fedora 9
        \begin{enumerate}
            \item Werewolf
            \item Sulphur
            \begin{enumerate}
                \item 2007-05-31
                \item 2008-05-13 
            \end{enumerate}
        \end{enumerate}
    \end{enumerate}
    \item Fedora Spin
    \item Fedora Silverblue
\end{enumerate}
<figure class="wp-block-image size-large"></figure>

List style names for each list type

<figure class="wp-block-table is-style-stripes">
Enumerated | Itemized
\alph*     | $\bullet$
\Alph*     | $\cdot$
\arabic*   | $\diamond$
\roman*    | $\ast$
\Roman*    | $\circ$
           | $-$
</figure>

Default style by list depth

<figure class="wp-block-table is-style-stripes">
Level | Enumerated         | Itemized
1     | Number             | Bullet
2     | Lowercase alphabet | Dash
3     | Roman numerals     | Asterisk
4     | Uppercase alphabet | Period
</figure>

Setting list styles

The below example illustrates each of the different itemized list styles.

% Itemize style
\begin{itemize}
    \item[$\ast$] Asterisk 
    \item[$\diamond$] Diamond 
    \item[$\circ$] Circle 
    \item[$\cdot$] Period
    \item[$\bullet$] Bullet (default)
    \item[--] Dash
    \item[$-$] Another dash
\end{itemize}
<figure class="wp-block-image size-large"></figure>

There are three methods of setting list styles, illustrated below. They are listed by priority, highest first. A higher-priority style will override a lower-priority one if more than one is defined for a list item.

List styling method 1 – per item

Enclose the name of the desired style in square brackets and place it on the \item command as demonstrated below.

% First method
\begin{itemize}
    \item[$\ast$] Asterisk 
    \item[$\diamond$] Diamond 
    \item[$\circ$] Circle 
    \item[$\cdot$] period
    \item[$\bullet$] Bullet (default)
    \item[--] Dash
    \item[$-$] Another dash
\end{itemize}

List styling method 2 – on the list

Prefix the name of the desired style with label=. Place the parameter, including the label= prefix, in square brackets on the \begin command as demonstrated below.

% Second method
\begin{enumerate}[label=\Alph*.]
    \item Fedora 32
    \item Fedora 31
    \item Fedora 30
\end{enumerate}

List styling method 3 – on the document

This method changes the default style for the entire document. Use \renewcommand to set the values of the \labelitem commands. There is a different \labelitem command for each of the four list depths, as demonstrated below.

% Third method
\renewcommand{\labelitemi}{$\ast$}
\renewcommand{\labelitemii}{$\diamond$}
\renewcommand{\labelitemiii}{$\bullet$}
\renewcommand{\labelitemiv}{$-$}

Summary

LaTeX supports three types of lists. The style and spacing of each of the list types can be customized. More LaTeX elements will be explained in future posts.

Additional reading about LaTeX lists can be found here: LaTeX List Structures

Curious case of image based email signatures and Kmail

Posted by Kushal Das on June 15, 2020 03:02 AM

We have already talked about why HTML emails are bad, but they are the default in most email services. HTML email means some code is getting executed and rendered on your system, maybe in a browser, or in a desktop email client.

Many people do not use any HTML tags in their emails, but they still have fancy email signatures. A lot of the time these are fancy images generated on a website, and the generated image URL is used as the signature. This means that every time someone opens the email (with HTML rendering on), the third-party company is able to track that usage. We don't know what happens next with all of this tracking information.
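Spotting this kind of remote-image signature in a message body is straightforward. Here is an illustrative Python sketch using the stdlib HTML parser; the signature HTML and domain are made up:

```python
from html.parser import HTMLParser

class RemoteImageFinder(HTMLParser):
    """Collect src attributes of <img> tags pointing at remote hosts,
    the vector used by image-based signatures to track opens."""
    def __init__(self):
        super().__init__()
        self.sources = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            for name, value in attrs:
                if name == "src" and value and value.startswith(("http://", "https://")):
                    self.sources.append(value)

signature = '<p>Regards,<br><img src="https://signatures.example.com/render?u=123"></p>'
finder = RemoteImageFinder()
finder.feed(signature)
print(finder.sources)  # prints ['https://signatures.example.com/render?u=123']
```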

Last week I was trying out the various desktop email clients available on Fedora 32 and noticed a strange thing in Kmail/Kontact, the email client of KDE. I run my Unoon tool to monitor all processes on the system for network connections, and suddenly it popped up a notification about Kmail connecting to mysignatures.io. I was surprised for a second, as Kmail disables loading of any remote resource (say, images) and does not render HTML email by default.

Screenshot of Unoon

Then I figured out that if I click on the reply button (opening the compose window), Kmail fetches the image from the signature (or any <img> tag). This means the HTML is getting rendered somehow, even if it is not shown to the user. After I filed a bug upstream, I also pinged my friend ADE. He helped reproduce it and find more details. Now we are waiting for a fix. I hope this does not involve JS execution during that internal rendering.

I also checked for the same behavior in Thunderbird, and it does not render the signature in a similar way.

Episode 201 – We broke CVSSv3, now how do we fix it?

Posted by Josh Bressers on June 15, 2020 12:01 AM

Josh and Kurt talk about CVSSv3 and how it’s broken. We started with a blog post to explain why the NVD CVSS scores are so wrong, and we ended up researching CVSSv3 and found out it’s far more broken than any of us expected in ways we didn’t expect. NVD isn’t broken, CVSSv3 is. How did we get here? Are there any options that work today? Where should we go next?

<audio class="wp-audio-shortcode" controls="controls" id="audio-1770-1" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_201_We_broke_CVSSv3_now_how_do_we_fix_it.mp3?_=1" type="audio/mpeg">https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_201_We_broke_CVSSv3_now_how_do_we_fix_it.mp3</audio>

Show Notes

Announce: Entangle “Potassium“ release 3.0 – an app for tethered camera control & capture

Posted by Daniel Berrange on June 14, 2020 09:45 PM
I am pleased to announce that a new release, 3.0, of Entangle is available for download from the usual location:
  https://entangle-photo.org/download/
This release has a mixture of new features and bug fixes, as well as improved translations:
  • Ensure picture directory is an absolute path
  • Add ability to flip image during preview
  • Fix warnings about string length
  • Convert appdata file to metainfo and add missing info
  • Another attempt to fix build deps on enum headers
  • Display remaining shot count & ETA in repeat shooter plugin
  • Remove use of deprecated GObject macros
  • Remove use of deprecated GLib APIs
  • Ensure glib min version checks at build time
  • Convert to use GObject macros for declaring boilerplate
  • Bulk reformat code using clang-format rules
  • Force a default filename pattern if setting is invalid
  • Validate filename pattern when updating preferences
  • Rename desktop file and icon file to match application ID
  • Add ability to render text messages over image
  • Add a countdown timer for photobox plugin capture
  • Disable context menu in image browser when photobox plugin is active
  • Run live view when counting down to capture in photobox plugin
  • Fix crash releasing popup windows
  • Add context menu option for opening popup image window
  • Drop drag support for opening popup image window
  • Display greeting when opening popup image window
  • Display greeting when entering photobox plugin
  • Disable view finder after capturing image in preview mode
Thanks are due to all who have contributed to this new release whether through bug reports, feature requests, translations or code patches.

I’m (not) Here in Facebook

Posted by Adam Young on June 13, 2020 10:06 PM

(To the tune of Payphone by Maroon 5)

I'm here on Facebook I came for a Quick Look
I didn't mean to spend the entire day
On Pictures My Dad took of Brisket my Mom Cooked
I should not be wasting my whole life this way

Yeah now I'm forced to remember
The People in seventh grade
Cuz one just posted a picture
A Jazz band concert we played

Now its too late to make it
To the grocery store for supplies
All that time that I wasted
All of my Fridge has been emptied out

I've Wasted my nights
Should have turned out the lights
But I was paralyzed
My nieces replies
My brother recorded from
a conversation recorded off of FaceTime.

I came to Facebook my evening's forsook
All of my hours I wasted on here
Video Mom took of Water in the Brook
It runs really fast this time of year

If I can find some vittles after this
I might still get to sleep by six
When I'm at  my desk I will be sick
One more stupid link I have to click

I made a mess of tomorrow
I was on Facebook today
And that excuse sounds so hollow
Cept You messed up the same way

You can't expect me to like
Political Postings you share
I know we agreed before
But these stupid arguments need shooting down

I've Wasted my nights
Should have turned out the lights
But I was paralyzed
My nephew's replies
My sister recorded from
Stories that she read on line

I came to Facebook my Weekends caput
Both of my Days off I wasted on posts
Flags up at half mast, Stories of Dumbass
Trying to count who responds the most

If I can break my trend here after all
I might still get to sleep by Fall
When I'm at  my desk I will sink
And give in to the urge to click that link

Yeah yeah no way to hang up
So I can't find a way in my head
Though my wife is dragging me to bed
Too many messages left unread

Man, stop that crap
Asking for some birthday money
to a cause that just sounds funny
Why it wasn't you who made up that format
of the meme that you use to torment
At least the wave features turned off
Click bait a false cause
And all of your friends got spammed from some button
Course you should have known better
Some juvenile porno-grafitti
I must remove from my time line
You talked a good game until you took that last shot
and the candidate you backed just got picked up for pot
While we once were friends but now its over tops
Stupid racists remarks got me slamming on blocks
Bone going off and away, this ain't what I'm looking for
Now Its me that is gonna be free from this place
Taking my friend list with me

I'm not on FaceBook, that site is caput
Not gonna spend no more time on you
One day it was what it took, removed the whole book
My time on FaceBook is here by though

I've got so much more time in my day
Now that I threw my account away
Cuz you know that site is full of it
One more stupid posting I'd be sick

I'm off of Facebook....

Already Met

Posted by Adam Young on June 13, 2020 09:19 PM
<figure class="wp-block-embed-youtube wp-block-embed is-type-video is-provider-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio">
<iframe allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen="allowfullscreen" frameborder="0" height="329" src="https://www.youtube.com/embed/q49MBQWf3HY?feature=oembed" width="584"></iframe>
</figure>

Lyrics By Debbie Grant and Adam Young
Music By Adam Young

Copyright 2020

I’m staying in my room tonight to embrace my outer curmudgeon
I Do not Want to be put on Display or put up with your judgin
I hate making conversation I have no desire to go
And I’ve Already met everybody I want to know

I now deny all requests for me to step out side my home.
It hereby is my deepest wish just to be left all alone
I require no conspirators, I have nothing to overthrow
And I’ve Already met everybody I want to know

There’s too much stuff in my poor brain
Far more than it can contain
They’re piled on my bed and spilling on the floor
If I do not contain them they’ll be pushing down the door
I need to put some memories into long term store
Until then I do not have the space for anymore

My view of humanity is dialed up to myopic
my antisocial tendencies are set on misanthropic
There is too much here already, I’m on humanity overload
And I’ve Already met everybody I want to know

Access to my person space shall here on not be Granted.
Here inside my person space shall I here on thus be planted
Like a Mushroom in the dark That Is how I want to grow
And I’ve Already met everybody I want to know

I cannot bear the thought of any place that is too loud
For me one makes company more than that defines a crowd
So you go on with out me I remain here stowed
For I’ve Already met everybody I want to know

Here are the chords:

Verse:

D- A- Bb F
G- D- G- A7
D- A- Bb C7
G- C7 F

Bridge

G- D-
G- D-
A- C7
A- C7
G- C7
A- D7

To anyone worrying about me based on the tone of this song: I was just channeling. I’m an extrovert. Debbie is the one who came up with the line that drives the song.

There are scheduled downtimes in progress

Posted by Fedora Infrastructure Status on June 13, 2020 04:39 PM
New status scheduled: We are running from new datacentre with limited capacity for services: Ipsilon, Badges, Blockerbugs, Package Updates Manager, Fedora Infrastructure Cloud, COPR Build System, Documentation website, Fedora elections, Account System, Fedora Messaging Bus, Fedora Calendar, Fedora pastebin service, The Koji Buildsystem, Koschei Continuous Integration, Kerberos, Mailing Lists, Module Build Service, Mirror List, Mirror Manager, Fedora Packages App, Pagure, Fedora People, Package maintainers git repositories, Fedora Container Registry, ABRT Server, Fedora websites, Fedora Wiki, Zodbot IRC bot

Fedora Update Weeks 10--24

Posted by Elliott Sales de Andrade on June 13, 2020 04:01 AM
It’s been quite a long while since the last packaging update. If you’ve not followed me on Twitter, you may not have seen that I started working on Matplotlib. While I have not stopped packaging, I have had somewhat less time for some of the extras, like writing up these posts. One major impetus for putting this one together is that Fedora 30 has reached End-of-Life. I did fall behind slightly with R packages again, so that led to a few batches of many updates at once.

Fedora election results 06/20

Posted by Charles-Antoine Couret on June 12, 2020 08:16 PM

As I reported recently, Fedora organized elections to partially renew its FESCo, Mindshare, and Council bodies.

As always, the ballot is a range vote. Each candidate can be given a number of points, with a maximum equal to the number of candidates and a minimum of 0. This makes it possible to express approval of one candidate and disapproval of another without ambiguity, and nothing prevents giving two candidates the same value.

The results for the Council are:

  # votes |  names
  --------+----------------------
      654 | Aleksandra Fedorova
  --------+----------------------
      591 | Till Maas
      314 | James Cassell
      303 | Alberto Rodriguez Sanchez

For reference, the maximum possible score was 267 * 4 votes, i.e. 1068.
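The maximum-score arithmetic quoted with each table can be sketched in plain shell, using the Council figures above:

```shell
# Range voting: each ballot can award a candidate up to N points,
# where N is the number of candidates in that race.
candidates=4      # Council race: 4 candidates
ballots=267       # ballots cast
max_score=$((ballots * candidates))
echo "$max_score" # prints: 1068
```

The same formula gives 2730 for FESCo (273 ballots, 10 candidates) and 880 for Mindshare (220 ballots, 4 candidates).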

The results for FESCo are (only the first four are elected):

  # votes |  names
  --------+----------------------
     1507 | Neal Gompa (ngompa)
     1450 | Stephen Gallagher (sgallagh)
     1372 | Igor Raits (ignatenkobrain)
     1148 | Clément Verna (cverna)
  --------+----------------------
     1124 | Justin Forbes (jforbes)
      997 | Chris Murphy (chrismurphy)
      937 | Petr Šabata (psabata)
      904 | Frantisek Zatloukal (frantisekz)
      755 | James Cassell (cyberpear)
      730 | Michal Novotný (clime)

For reference, the maximum possible score was 273 * 10, i.e. 2730.

The results for Mindshare are:

  # votes |  names
  --------+----------------------
      586 | Maria Leandro
  --------+----------------------
      420 | Sumantro Mukherjee
      288 | Alessio Ciregia
      188 | Daniel Lara

For reference, the maximum possible score was 220 * 4, i.e. 880.

Overall, the number of voters in each contested election was around 250-275, close to the previous edition. The scores are also fairly spread out.

Congratulations to the candidates and to those elected, and all the best for the Fedora Project.

Facilitation, collaboration, and webcams: A story about Principles of Authentic Participation

Posted by Justin W. Flory on June 12, 2020 08:10 PM
First-ever overnight hackathon in Albania for sustainable goals

The post Facilitation, collaboration, and webcams: A story about Principles of Authentic Participation appeared first on Justin W. Flory's blog.

Justin W. Flory's blog - Free Software, music, travel, and life reflections

This is the story about the facilitation of the Principles of Authentic Participation.

This post does not describe what the Principles are (click that link to learn more about them). This post describes the story behind the Principles, and how our Sustain Working Group worked together over three months of virtual facilitation during the COVID-19 crisis to build these Principles.

Overview

This blog post is a story, or perhaps open source lore. So, here is the abridged summary:

  1. The Sticky Idea: How did a discussion topic at a one-day open source sustainability conference evolve into a three-month extended collaboration?
  2. Facilitation, Roosevelt-style: The people are here. How do you facilitate a conversation with no scope and few bounds?
  3. Is there a next chapter to this story?: The Working Group is winding down. What happens to the Principles next?

If you are hooked, read on.

The Sticky Idea

How does a discussion topic at a one-day conference evolve into an inter-organizational, international collaboration that spans three months?

When the accountability and transparency discussion groups formed at Sustain Summit 2020, none of us knew what would come after the event. Not to mention, there were several different sustainability topics explored at the Summit.

So, the conversation about corporate accountability was about the same as every other conversation during that morning: someone was motivated enough to step up and say, “I’ll do it – I’ll facilitate this conversation!”

Open Source Accountability Goals

Duane O’Brien volunteered to lead facilitation on defining goals for open source accountability. Duane proposed four goals to iterate on in the Summit break-out groups:

  1. Set and publish a goal for open source contribution relative to value capture
  2. Adhere to principles of authentic participation
  3. Publish documentation of open source policies, processes, and project governance
  4. Well defined reporting process that is publicly available

The morning discussions broadly focused on these goals. After the ice was broken and conversation was flowing, themes and patterns emerged in the stories we shared with each other. Later that day, Allen Gunn asked me if I would lead an afternoon discussion session. The second goal, these principles of authentic participation, was personally interesting to me, and the morning group was engaged too. So I said, “Yes, I’ll do it!”, even though I did not really have any idea what I was going to do yet.

Facilitation of Authentic Participation discussion

After lunch, I gathered folks for the discussion group to discuss what authentic participation means. If we could propose a basic set of principles that we agree on, could this be a useful tool for the pain points of stories shared in the morning session?

The afternoon discussion was insightful, but lacked firm conclusions. We had great ideas and lots of stories, but nothing to tie them together. I collected email addresses of folks who wanted to continue engaging on the Principles of Authentic Participation. However, I wasn’t sure what the next step would be at the time.

At the Summit, I committed to facilitation of a public Discourse forum discussion, but some attendees voiced that Discourse was not accessible for them. To compromise without exhausting myself across too many platforms, I promised to host a few online discussions for folks to gather and talk about these things again later.

The embers were hot on this discussion at the Sustain Summit. But it was still just embers. How do we get these embers to “spark” into something bigger? Enter the Fireside Chats.

Facilitation, Roosevelt-style

So, skip ahead a couple weeks. I was ready to push the conversation forward. The time was right for the first follow-up email to the discussion group participants. As promised, I opened a Discourse discussion that summarized our notes from the conference and asked open-ended questions. Later on, I announced the first of four Fireside Chats. The Fireside Chats became the primary vehicle of collaboration for the working group.

Text-based communications are my preference. But video?? I would have to swallow my introverted shyness if I was going to lead this. I had never facilitated an online discussion group before, and there were not many public examples to learn from, either. The style I took to the Fireside Chats was mostly my own: I relied on my past experience of facilitating open source project meetings and development, and I borrowed a little inspiration from former American president Franklin D. Roosevelt’s fireside chats of the 1930s and 1940s.

For the first Fireside Chat on 2020 February 28th, I had no idea what I was doing. I prepared a loose agenda, but I left it broad so people could bring their own interests and passions into the conversation. I figured doing this would allow people to bring their own needs, desires, and wants to the conversation. It was unrealistic to expect a collaboration driven by my own motivations.

A successful collaboration requires all participants to have an opportunity to satisfy their own personal motivations for showing up in the first place. So, my approach centered our collaborative work on the group and not just myself, to avoid a high initial interest that dwindles down over time.

How did facilitation start?

The first Fireside Chat was exploratory. It was our first time talking about the Principles since the Sustain Summit. We caught back up on where we left off, detailed what we wanted to get out of this collaboration, and began scoping out what we thought we could accomplish together.

Although the first chat was mostly unstructured, it was essential for identifying themes and ideas that led to more focused, structured discussions in the next three Fireside Chats. The Discourse thread was also useful as an accessory to the Fireside Chats. I published notes from each Fireside Chat on the Discourse thread, and there was some asynchronous discussion between Fireside Chats.

Beyond the first Fireside Chat, the agendas became easier for me to write and the feedback became more focused. Fortunately, most of this work happened in public on the Discourse thread. So, if you are curious for more details on how the final three Fireside Chats went, take a look at the discussion thread.

Is there a next chapter to this story?

For now, the Principles of Authentic Participation Working Group is going dormant. We met our original goal of drafting some basic principles.

So, now what happens? Let’s try to predict the future! (That can’t be that hard, right?)

My hope is that the Principles of Authentic Participation lead to more story-telling about what it means to authentically contribute to open source, whether you are an individual or an organization. To help curate the stories, I created a template to encourage folks to share them with us. The template provides questions that make it easy for a maintainer to copy and paste the story into our published Principles of Authentic Participation website.

Whether this hope comes true or not, we will see. But the Principles have a life of their own now. It doesn’t mean the Working Group will never meet again, or that we won’t revisit these ideas over time. But these Principles are now the “property” of the community to continue building. I will continue to participate where I can to curate stories about the Principles.

Closing thoughts

My hope in sharing this story is to help other facilitators and activists in the open source world approach digital-only organizing. Digital facilitation and organization is a skill we are all learning, for better or worse, in a COVID-19 world. But it isn’t a new skill. Lots of folks have been doing this for a long time, especially in the digital-first world of open source.

So, I hope this paints a picture of how we pulled off the Principles of Authentic Participation and how others can take what we did and improve on our processes.

It is possible to work collaboratively with new people on digital initiatives across different backgrounds and sectors. Remote facilitation starts with someone being brave enough to step up and lead, even if they have no idea what they are doing. After all… isn’t that what many other white American men like me do anyways? So can you.

Fedora 32: Simple Local File-Sharing with Samba

Posted by Fedora Magazine on June 12, 2020 02:19 PM

Sharing files with Fedora 32 using Samba is cross-platform, convenient, reliable, and performant.

What is ‘Samba’?

Samba is a high-quality implementation of the Server Message Block (SMB) protocol. Originally developed by Microsoft for connecting Windows computers together via local area networks, it is now extensively used for internal network communications.

Apple used to maintain its own independent file-sharing protocol called the “Apple Filing Protocol (AFP)”; however, in recent times, it too has switched to SMB.

In this guide we provide the minimal instructions to enable:

  • Public Folder Sharing (Both Read Only and Read Write)
  • User Home Folder Access
Note about this guide: The convention '~]$' will be used for a local user command prompt, and '~]#' for a superuser prompt.

Public Sharing Folder

Having a shared public place where authenticated users on an internal network can access files, or even modify and change files if they are given permission, can be very convenient. This part of the guide walks through the process of setting up a shared folder, ready for sharing with Samba.

Please Note: This guide assumes the public sharing folder is on a modern Linux filesystem; other filesystems such as NTFS or FAT32 will not work, because Samba uses POSIX Access Control Lists (ACLs).

For those who wish to learn more about Access Control Lists, please consider reading the documentation: "Red Hat Enterprise Linux 7: System Administrator's Guide: Chapter 5. Access Control Lists", as it likewise applies to Fedora 32.

In general, this is only an issue for anyone who wishes to share a drive or filesystem that was created outside of the normal Fedora installation process (such as an external hard drive).

It is possible for Samba to share filesystem paths that do not support POSIX ACLs, however this is out of the scope of this guide.

Create Folder

For this guide the /srv/public/ folder for sharing will be used.

The /srv/ directory contains site-specific data served by a Red Hat Enterprise Linux system. This directory gives users the location of data files for a particular service, such as FTP, WWW, or CVS. Data that only pertains to a specific user should go in the /home/ directory.

Red Hat Enterprise Linux 7, Storage Administration Guide: Chapter 2. File System Structure and Maintenance: 2.1.1.8. The /srv/ Directory
Make the folder (this will produce an error if the folder already exists):
~]# mkdir --verbose /srv/public

Verify folder exists:
~]$ ls --directory /srv/public

Expected Output:
/srv/public

Set Filesystem Security Context

To have read and write access to the public folder the public_content_rw_t security context will be used for this guide. Those wanting read only may use: public_content_t.

Label files and directories that have been created with the public_content_rw_t type to share them with read and write permissions through vsftpd. Other services, such as Apache HTTP Server, Samba, and NFS, also have access to files labeled with this type. Remember that booleans for each service must be enabled before they can write to files labeled with this type.

Red Hat Enterprise Linux 7, SELinux User’s and Administrator’s Guide: Chapter 16. File Transfer Protocol: 16.1. Types: public_content_rw_t

Add /srv/public as “public_content_rw_t” in the system’s local filesystem security context customization registry:

Add the new filesystem security context:
~]# semanage fcontext --add --type public_content_rw_t "/srv/public(/.*)?"

Verify the new filesystem security context:
~]# semanage fcontext --locallist --list

Expected Output: (should include)
/srv/public(/.*)? all files system_u:object_r:public_content_rw_t:s0

Now that the folder has been added to the local system’s filesystem security context registry, the restorecon command can be used to ‘restore’ the context to the folder:

Restore security context to the /srv/public folder:
~]# restorecon -Rv /srv/public

Verify security context was correctly applied:
~]$ ls --directory --context /srv/public/

Expected Output:
unconfined_u:object_r:public_content_rw_t:s0 /srv/public/

User Permissions

Creating the Sharing Groups

To allow a user either read-only or read-and-write access to the public share folder, create two new groups that govern these privileges: public_readonly and public_readwrite.

User accounts can be granted read-only or read-and-write access by adding them to the respective group (and allowing login via Samba by creating an SMB password). This process is demonstrated in the section: “Test Public Sharing (localhost)”.

Create the public_readonly and public_readwrite groups:
~]# groupadd public_readonly
~]# groupadd public_readwrite

Verify successful creation of groups:
~]$ getent group public_readonly public_readwrite

Expected Output: (Note: the x:1...: numbers will probably differ on your system)
public_readonly:x:1009:
public_readwrite:x:1010:

Set Permissions

Now set the appropriate user permissions to the public shared folder:

Set User and Group Permissions for Folder:
~]# chmod --verbose 2700 /srv/public
~]# setfacl -m group:public_readonly:r-x /srv/public
~]# setfacl -m default:group:public_readonly:r-x /srv/public
~]# setfacl -m group:public_readwrite:rwx /srv/public
~]# setfacl -m default:group:public_readwrite:rwx /srv/public

Verify user permissions have been correctly applied:
~]$ getfacl --absolute-names /srv/public

Expected Output:
# file: /srv/public
# owner: root
# group: root
# flags: -s-
user::rwx
group::---
group:public_readonly:r-x
group:public_readwrite:rwx
mask::rwx
other::---
default:user::rwx
default:group::---
default:group:public_readonly:r-x
default:group:public_readwrite:rwx
default:mask::rwx
default:other::---
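As a quick illustration of what the 2700 mode above does, the following sketch is safe to run anywhere: it applies the same mode to a throwaway directory (instead of /srv/public) and shows the setgid bit appearing as S in the permission string (stat -c is the GNU coreutils form, as shipped on Fedora):

```shell
# Create a throwaway directory and apply the same mode as the guide.
demo_dir=$(mktemp -d)
chmod 2700 "$demo_dir"       # setgid + rwx for owner, nothing for group/other
stat -c '%A' "$demo_dir"     # prints: drwx--S---
rmdir "$demo_dir"
```

The setgid bit (the 2 in 2700) makes new files and subdirectories inherit the directory's group, which is what lets the default ACL entries above govern access consistently.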

Samba

Installation

~]# dnf install samba

Hostname (systemwide)

Samba will use the name of the computer when sharing files; it is good to set a hostname so that the computer can be found easily on the local network.

View Your Current Hostname:
~]$ hostnamectl status

If you wish to change your hostname to something more descriptive, use the command:

Modify your system's hostname (example):
~]# hostnamectl set-hostname "simple-samba-server"
For a more complete overview of the hostnamectl command, please read the previous Fedora Magazine Article: "How to set the hostname on Fedora".

Firewall

Configuring your firewall is a complex and involved task; this guide performs only the minimal firewall changes needed to allow Samba through.

For those who are interested in learning more about configuring firewalls; please consider reading the documentation: "Red Hat Enterprise Linux 8: Securing networks: Chapter 5. Using and configuring firewall", as it generally applies to Fedora 32 as well.
Allow Samba access through the firewall:
~]# firewall-cmd --add-service=samba --permanent
~]# firewall-cmd --reload

Verify Samba is included in your active firewall:
~]$ firewall-cmd --list-services

Output (should include):
samba

Configuration

Remove Default Configuration

The stock configuration included with Fedora 32 is not required for this simple guide; in particular, it includes support for sharing printers with Samba.

For this guide make a backup of the default configuration and create a new configuration file from scratch.

Create a backup copy of the existing Samba Configuration:
~]# cp --verbose --no-clobber /etc/samba/smb.conf /etc/samba/smb.conf.fedora0

Empty the configuration file:
~]# > /etc/samba/smb.conf

Samba Configuration

Please Note: This configuration file does not contain any global definitions; the defaults provided by Samba are good for the purposes of this guide.
Edit the Samba Configuration File with Vim:
~]# vim /etc/samba/smb.conf

Add the following to /etc/samba/smb.conf file:

# smb.conf - Samba Configuration File

# The name of the share is in square brackets [],
#   this will be shared as //hostname/sharename

# There are three exceptions:
#   the [global] section;
#   the [homes] section, that is dynamically set to the username;
#   the [printers] section, same as [homes], but for printers.

# path: the physical filesystem path (or device)
# comment: a label on the share, seen on the network.
# read only: disable writing, defaults to true.

# For a full list of configuration options,
#   please read the manual: "man smb.conf".

[global]

[public]
path = /srv/public
comment = Public Folder
read only = No
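If you prefer to script this step rather than edit the file interactively, the minimal configuration above can be written out with a heredoc; the sketch below uses a temporary file and sanity-checks the share entry with grep (on a real system you would write to /etc/samba/smb.conf and validate it with Samba's testparm tool):

```shell
# Write the minimal configuration to a temporary file.
conf=$(mktemp)
cat > "$conf" <<'EOF'
[global]

[public]
path = /srv/public
comment = Public Folder
read only = No
EOF

# Sanity-check that the share section and its options are present.
grep -q '^\[public\]' "$conf" && echo "share section found"
grep -q '^read only = No' "$conf" && echo "writable"
rm -f "$conf"
```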

Write Permission

By default, Samba is not granted permission to modify any file on the system. Modify the system’s security configuration to allow Samba to modify any filesystem path that has the security context public_content_rw_t.

For convenience, Fedora has a built-in SELinux boolean for this purpose called smbd_anon_write; setting it to true enables Samba to write to any filesystem path that has been set to the security context public_content_rw_t.

For those who wish Samba to have only read-only access to their public sharing folder, skip this step and do not set this boolean.

There are many more SELinux booleans available for Samba. For those who are interested, please read the documentation: "Red Hat Enterprise Linux 7: SELinux User's and Administrator's Guide: 15.3. Samba Booleans"; it also applies to Fedora 32 without any adaptation.
Set SELinux Boolean allowing Samba to write to filesystem paths set with the security context public_content_rw_t:
~]# setsebool -P smbd_anon_write=1

Verify the boolean has been correctly set:
~]$ getsebool smbd_anon_write

Expected Output:
smbd_anon_write --> on

Samba Services

The Samba service is divided into two parts that we need to start.

Samba ‘smb’ Service

The Samba “Server Message Block” (SMB) service is for sharing files and printers over the local network.

Manual: “smbd – server to provide SMB/CIFS services to clients

Enable and Start Services

For those who are interested in learning more about configuring, enabling, disabling, and managing services, please consider studying the documentation: "Red Hat Enterprise Linux 7: System Administrator's Guide: 10.2. Managing System Services".
Enable and start the smb service:
~]# systemctl enable smb.service
~]# systemctl start smb.service

Verify smb service:
~]# systemctl status smb.service

Test Public Sharing (localhost)

To demonstrate allowing and removing access to the public shared folder, create a new user called samba_test_user. This user will be granted permission first to read the public folder, and then to read and write the public folder.

The same process demonstrated here can be used to grant access to your public shared folder to other users of your computer.

The samba_test_user will be created as a locked user account, disallowing normal login to the computer.

Create 'samba_test_user', and lock the account.
~]# useradd samba_test_user
~]# passwd --lock samba_test_user

Set a Samba Password for this Test User (such as 'test'):
~]# smbpasswd -a samba_test_user

Test Read Only access to the Public Share:

Add samba_test_user to the public_readonly group:
~]# gpasswd --add samba_test_user public_readonly

Login to the local Samba Service (public folder):
~]$ smbclient --user=samba_test_user //localhost/public

First, the ls command should succeed,
Second, the mkdir command should not work,
and finally, exit:
smb: \> ls
smb: \> mkdir error
smb: \> exit

Remove samba_test_user from the public_readonly group:
~]# gpasswd --delete samba_test_user public_readonly

Test Read and Write access to the Public Share:

Add samba_test_user to the public_readwrite group:
~]# gpasswd --add samba_test_user public_readwrite

Login to the local Samba Service (public folder):
~]$ smbclient --user=samba_test_user //localhost/public

First, the ls command should succeed,
Second, the mkdir command should work,
Third, the rmdir command should work,
and finally, exit:
smb: \> ls
smb: \> mkdir success
smb: \> rmdir success
smb: \> exit

Remove samba_test_user from the public_readwrite group:
~]# gpasswd --delete samba_test_user public_readwrite

After testing is completed, for security, disable the samba_test_user‘s ability to log in via Samba.

Disable samba_test_user login via samba:
~]# smbpasswd -d samba_test_user

Home Folder Sharing

In this last section of the guide, Samba will be configured to share a user’s home folder.

For example: if the user bob has been registered with smbpasswd, bob’s home directory, /home/bob, would become the share //server-name/bob.

This share will only be available to bob, and to no other users.

This is a very convenient way of accessing your own local files; however, it naturally carries a security risk.

Setup Home Folder Sharing

Give Samba Permission for Home Folder Sharing

Set SELinux Boolean allowing Samba to read and write to home folders:
~]# setsebool -P samba_enable_home_dirs=1

Verify the boolean has been correctly set:
~]$ getsebool samba_enable_home_dirs

Expected Output:
samba_enable_home_dirs --> on

Add Home Sharing to the Samba Configuration

Append the following to the system's smb.conf file:

# The home folder dynamically links to the user home.

# If 'bob' user uses Samba:
# The homes section is used as the template for a new virtual share:

# [homes]
# ...   (various options)

# A virtual section for 'bob' is made:
# Share is modified: [homes] -> [bob]
# Path is added: path = /home/bob
# Any option within the [homes] section is appended.

# [bob]
#       path = /home/bob
# ...   (copy of various options)


# here is our share,
# same as is included in the Fedora default configuration.

[homes]
        comment = Home Directories
        valid users = %S, %D%w%S
        browseable = No
        read only = No
        inherit acls = Yes
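The comments above describe how Samba expands the [homes] template into a per-user virtual share; purely as an illustration, that substitution amounts to:

```shell
# Illustrative only: how Samba derives the per-user virtual share
# from the [homes] template (the real expansion happens inside smbd).
user=bob
printf '[%s]\n        path = /home/%s\n' "$user" "$user"
# prints:
# [bob]
#         path = /home/bob
```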

Reload Samba Configuration

Tell Samba to reload its configuration:
~]# smbcontrol all reload-config

Test Home Directory Sharing

Switch to samba_test_user and create a folder in its home directory:
~]# su samba_test_user
samba_test_user:~]$ cd ~
samba_test_user:~]$ mkdir --verbose test_folder
samba_test_user:~]$ exit

Enable samba_test_user to login via Samba:
~]# smbpasswd -e samba_test_user

Login to the local Samba Service (samba_test_user home folder):
~]$ smbclient --user=samba_test_user //localhost/samba_test_user

Test (all commands should complete without error):
smb: \> ls
smb: \> ls test_folder
smb: \> rmdir test_folder
smb: \> mkdir home_success
smb: \> rmdir home_success
smb: \> exit

Disable samba_test_user from logging in via Samba:
~]# smbpasswd -d samba_test_user

Fedora program update: 2020-24

Posted by Fedora Community Blog on June 12, 2020 12:00 PM
Fedora Program Manager weekly report on Fedora Project development and progress

Here’s your report of what has happened in Fedora this week. Congratulations to the winners of the Fedora 32 elections. I have weekly office hours in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else. Announcements Help wanted Upcoming meetings Releases CPE update Announcements Orphaned packages […]

The post Fedora program update: 2020-24 appeared first on Fedora Community Blog.

Installing Gentoo with LVM, LUKS, and SELinux

Posted by Alvaro Castillo on June 12, 2020 06:10 AM

We have talked about Gentoo once before in a previous post, so in this post I will get straight to the point about how I managed to install it and how to help you if you run into doubts along the way. If you do not have a second screen where you can consult the information in this post, I recommend using a live medium that lets you work through all of these steps, because it is very important not to get lost while following this post. It is also important to HAVE TIME for this installation; do not expect the process to be done in a couple of hours. A lot of source code will be compiled, and this process IS EXPENSIVE in every respect, so I recommend that you DO NOT run many compilations at once.

Considerations

  • Configure your machine's BIOS to boot in UEFI mode with Secure Boot disabled.
  • Have plenty of free time, and think things through before executing them.
  • Keep a mug and a huge pot of coffee at hand.
  • Use a chair with good back support that is comfortable; no stools from 100 montaditos.

Hardware I used

These are some of the hardware specifications of the machine I used for the installation:

Creating a live image

In my case, since I prefer to watch videos, take breaks, and read the documentation properly and clearly, I prefer to carry out this process from a live image with a desktop environment such as Calculate Linux, rather than using the image Gentoo releases for installation, which is entirely terminal-based. However, if you want, you can use the following commands, which I used to create the Calculate Linux USB drive, to create your own USB drive with the Gentoo image.

Following the Calculate Linux documentation, we insert the USB drive. Running dmesg, we can see which drive the system recognized:

$ dmesg
[ 6451.987041] usb 1-11: new high-speed USB device number 6 using xhci_hcd
[ 6452.167412] usb 1-11: New USB device found, idVendor=0951, idProduct=1666, bcdDevice= 1.00
[ 6452.167417] usb 1-11: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[ 6452.167421] usb 1-11: Product: DataTraveler 3.0
[ 6452.167424] usb 1-11: Manufacturer: Kingston
[ 6452.167426] usb 1-11: SerialNumber: 002618A369C3B1A0B7AC022F
[ 6452.168867] usb-storage 1-11:1.0: USB Mass Storage device detected
[ 6452.169331] scsi host3: usb-storage 1-11:1.0
[ 6453.203673] scsi 3:0:0:0: Direct-Access     Kingston DataTraveler 3.0 PMAP PQ: 0 ANSI: 6
[ 6453.204285] sd 3:0:0:0: Attached scsi generic sg1 type 0
[ 6453.204565] sd 3:0:0:0: [sdb] 121110528 512-byte logical blocks: (62.0 GB/57.8 GiB)
[ 6453.205397] sd 3:0:0:0: [sdb] Write Protect is off
[ 6453.205403] sd 3:0:0:0: [sdb] Mode Sense:...
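Once dmesg has identified the stick (here /dev/sdb), writing the live image is a single dd invocation. The block below demonstrates the invocation on throwaway files so it is safe to copy; the filenames are made up for illustration, and on a real system the input would be the downloaded ISO and the output the USB device.

```shell
# Demonstration of the dd invocation on throwaway files. On a real
# system, `if=` would be the downloaded live ISO and `of=` the USB
# device identified via dmesg (e.g. /dev/sdb); double-check the
# device name first, since dd overwrites it without asking.
dd if=/dev/urandom of=demo.iso bs=1M count=4 2>/dev/null
dd if=demo.iso of=demo-stick.img bs=4M conv=fsync 2>/dev/null
cmp demo.iso demo-stick.img && echo "images match"
rm -f demo.iso demo-stick.img
```

The conv=fsync flag makes dd flush writes before exiting, so the stick can be removed safely once the command returns.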

Fedora 32 elections results

Posted by Fedora Community Blog on June 12, 2020 01:13 AM

The Fedora 32 election cycle has concluded. Here are the results for each election. Congratulations to the winning candidates, and thank you to all the candidates for running in this election! Results Council One Council seat was open this election. A total of 267 ballots were cast, meaning a candidate could accumulate up to 1068 votes (267 […]

The post Fedora 32 elections results appeared first on Fedora Community Blog.

Council policy proposal: withdrawing support from events

Posted by Fedora Community Blog on June 11, 2020 10:27 PM

The Fedora Council is considering a change in policy that better defines how the Council will handle withdrawing from sponsored events. The policy, as proposed by Ben Cotton with edits from the Mindshare Committee, is: The Fedora Council may choose to withdraw Fedora’s support from events or other activities that involve fiscal sponsorship or use […]

The post Council policy proposal: withdrawing support from events appeared first on Fedora Community Blog.

PXE and Kickstart: repos

Posted by Adam Young on June 11, 2020 07:41 PM

My last PXE boot attempt got into the Kickstart stage and then failed due to the repo setup. The VM screen looked like this.

<figure class="wp-block-image"></figure>

I made the following changes to /var/www/html/kickstart/kickstart.conf

# Use graphical install
 cmdline
....
#repo --name="AppStream" --baseurl=file:///run/install/repo/AppStream
repo --name="ISO" --baseurl=http://192.168.122.1/rhel8.1/AppStream

This let me skip the graphical install and also set up the repo to be the one from the mounted ISO. I restarted the VM and it seems to have worked. As of now, the install is progressing.

PXE Setup: Debugging Kickstart

Posted by Adam Young on June 11, 2020 05:24 PM

Once I re-enabled DHCPD and TFTP, my Machines got through the basics of PXE, but then failed Kickstart. Here’s the debugging:

<figure class="wp-block-image"><figcaption>Error messages in the VM console, pointing at the Kickstart failure causes</figcaption></figure>

The two messages that seem to be relevant point to missing files: install.img and squashfs.img. I am fairly certain those are on the install medium. Let's look:

ls /var/www/html/rhel8.1/

Bupkis (nothing). The mount was not redone after reboot. It needs to be in the fstab to make that happen. I added the following line.

/var/lib/libvirt/images/rhel-8.1-x86_64-dvd.iso /var/www/html/rhel8.1/ iso9660 loop,ro 0 0

Now I can mount using the command mount /var/www/html/rhel8.1/ and it works without producing any output. This should happen at reboot time too…which I will test once I fix everything.

Now Kickstart proceeds, but I see the install screen with errors on it. Debugging that is next.

PXE Setup Part the First

Posted by Adam Young on June 11, 2020 05:05 PM

PXE is a conglomeration of tools used to get a new operating system onto a computer. It is based on two protocols: DHCP and TFTP. I used PXE a long time ago at Penguin and have always wanted to set it up for my home personal use. I'm doing that now for my lab. My goal is to first be able to provision virtual machines, and then to provision physical boxes. I need to do a full install of RHEL 7 and RHEL 8, which means I also need Kickstart to automate the install process. I had it working, but after rebooting the NUC it is running on, it broke. Here's my debugging.

Start by booting up the VM. It fails looking like this:

<figure class="wp-block-image"><figcaption>DHCP request was not serviced.</figcaption></figure>

So, I jumped to a conclusion and thought it was the TFTP server. It wasn't, but that needed to be restarted anyway:

# systemctl status tftp
● tftp.service - Tftp Server
   Loaded: loaded (/usr/lib/systemd/system/tftp.service; indirect; vendor prese>
   Active: inactive (dead)
     Docs: man:in.tftpd
[root@nuzleaf opt]# systemctl enable tftp
[root@nuzleaf opt]# systemctl start tftp

But the real issue was the dhcpd service:

# systemctl status dhcpd
● dhcpd.service - DHCPv4 Server Daemon
   Loaded: loaded (/usr/lib/systemd/system/dhcpd.service; enabled; vendor prese>
   Active: failed (Result: exit-code) since Sat 2020-06-06 14:39:16 EDT; 4 days>
     Docs: man:dhcpd(8)
           man:dhcpd.conf(5)
 Main PID: 1414 (code=exited, status=1/FAILURE)

Jun 06 14:39:16 nuzleaf.home.younglogic.net dhcpd[1414]: have been made to the >
Jun 06 14:39:16 nuzleaf.home.younglogic.net dhcpd[1414]: it work better with th>
Jun 06 14:39:16 nuzleaf.home.younglogic.net dhcpd[1414]: 
Jun 06 14:39:16 nuzleaf.home.younglogic.net dhcpd[1414]: Please report issues w>
Jun 06 14:39:16 nuzleaf.home.younglogic.net dhcpd[1414]: https://bugzilla.redha>
Jun 06 14:39:16 nuzleaf.home.younglogic.net dhcpd[1414]: 
Jun 06 14:39:16 nuzleaf.home.younglogic.net dhcpd[1414]: exiting.
Jun 06 14:39:16 nuzleaf.home.younglogic.net systemd[1]: dhcpd.service: Main pro>
Jun 06 14:39:16 nuzleaf.home.younglogic.net systemd[1]: dhcpd.service: Failed w>
Jun 06 14:39:16 nuzleaf.home.younglogic.net systemd[1]: Failed to start DHCPv4 >
lines 1-17/17 (END)

So I started that. But the fact that it failed above is going to come back to haunt me.

 systemctl status dhcpd
● dhcpd.service - DHCPv4 Server Daemon
   Loaded: loaded (/usr/lib/systemd/system/dhcpd.service; enabled; vendor prese>
   Active: active (running) since Thu 2020-06-11 13:01:43 EDT; 1s ago
     Docs: man:dhcpd(8)
           man:dhcpd.conf(5)
 Main PID: 80498 (dhcpd)
   Status: "Dispatching packets..."
    Tasks: 1 (limit: 48048)
   Memory: 4.9M
   CGroup: /system.slice/dhcpd.service
           └─80498 /usr/sbin/dhcpd -f -cf /etc/dhcp/dhcpd.conf -user dhcpd -gro>

Jun 11 13:01:43 nuzleaf.home.younglogic.net dhcpd[80498]: 
Jun 11 13:01:43 nuzleaf.home.younglogic.net dhcpd[80498]: No subnet declaration>
Jun 11 13:01:43 nuzleaf.home.younglogic.net dhcpd[80498]: ** Ignoring requests >
Jun 11 13:01:43 nuzleaf.home.younglogic.net dhcpd[80498]:    you want, please w>
Jun 11 13:01:43 nuzleaf.home.younglogic.net dhcpd[80498]:    in your dhcpd.conf>
Jun 11 13:01:43 nuzleaf.home.younglogic.net dhcpd[80498]:    to which interface>
Jun 11 13:01:43 nuzleaf.home.younglogic.net systemd[1]: Started DHCPv4 Server D>
Jun 11 13:01:43 nuzleaf.home.younglogic.net dhcpd[80498]: 
Jun 11 13:01:43 nuzleaf.home.younglogic.net dhcpd[80498]: Sending on   Socket/f>
Jun 11 13:01:43 nuzleaf.home.younglogic.net dhcpd[80498]: Server starting servi>
lines 1-22/22 (END)
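The "No subnet declaration" warning in the log above is worth a note: dhcpd refuses to serve an interface whose network has no matching subnet block. A minimal PXE-capable declaration looks something like the sketch below; every address, range, and filename in it is an illustrative assumption, not the actual configuration from this setup.

```
# /etc/dhcp/dhcpd.conf sketch for PXE booting; addresses, range,
# and filename are examples, not the author's actual config.
subnet 192.168.122.0 netmask 255.255.255.0 {
  range 192.168.122.100 192.168.122.200;
  option routers 192.168.122.1;
  next-server 192.168.122.1;    # host running the TFTP service
  filename "pxelinux.0";        # boot loader fetched over TFTP
}
```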

With those two changes, my VM boots again…until the Kickstart fails. Next up…debugging Kickstart.

Participate in the AWS Summit Online event

Posted by Fedora fans on June 11, 2020 03:15 PM

Amazon is one of the largest cloud providers and hosts an event called AWS Summit. This event is held online and is a good opportunity to become more familiar with AWS services and to improve your technical knowledge of cloud services.

Participation in the AWS Summit Online event is free, and all sessions are in English, with subtitles available in Spanish, French, Italian, and German.

The AWS Summit Online event will be held on June 17, 2020. For more information about the event, and to register, visit the link below:

https://aws.amazon.com/events/summits/online/emea/?trk=ep_card

 

PHP version 7.3.19 and 7.4.7

Posted by Remi Collet on June 11, 2020 02:30 PM

RPMs of PHP version 7.4.7 are available in the remi repository for Fedora 32 and in the remi-php74 repository for Fedora 30-31 and Enterprise Linux 7 (RHEL, CentOS).

RPMs of PHP version 7.3.19 are available in the remi repository for Fedora 30-31 and in the remi-php73 repository for Enterprise Linux 6 (RHEL, CentOS).

No security fix this month, so no update for version 7.2.31.

PHP version 7.1 has reached its end of life and is no longer maintained by the PHP project.

These versions are also available as Software Collections in the remi-safe repository and as module for Fedora 30-32 and EL-8.

Version announcements:

Installation: use the Configuration Wizard and choose your version and installation mode.

Replacement of default PHP by version 7.4 installation (simplest):

yum-config-manager --enable remi-php74
yum update

or, the modular way (Fedora and EL 8):

dnf module enable php:remi-7.4
dnf update php\*

Parallel installation of version 7.4 as Software Collection

yum install php74

Replacement of default PHP by version 7.3 installation (simplest):

yum-config-manager --enable remi-php73
yum update php\*

or, the modular way (Fedora and EL 8):

dnf module enable php:remi-7.3
dnf update php\*

Parallel installation of version 7.3 as Software Collection

yum install php73

And soon in the official updates:

To be noted:

  • EL-8 RPMs are built using RHEL-8.1
  • EL-7 RPMs are built using RHEL-7.8
  • EL-6 RPMs are built using RHEL-6.10
  • EL-7 builds now use libicu62 (version 62.1)
  • EL builds now use oniguruma5php (version 6.9.5, instead of the outdated system library)
  • the oci8 extension now uses Oracle Client version 19.6 (except on EL-6)
  • a lot of new extensions are also available, see the PECL extension RPM status page

Information:

Base packages (php)

Software Collections (php72 / php73 / php74)

Insider 2020-06: edge; log management layer; WSL;

Posted by Peter Czanik on June 11, 2020 11:01 AM

Dear syslog-ng users,


This is the 82nd issue of syslog-ng Insider, a monthly newsletter that brings you syslog-ng-related news.


NEWS

Syslog-ng on the edge

After many years of pushing all computing from on-site to the cloud or to huge data centers, there is a new trend: edge computing. There can be many reasons, legal or practical, why data should be processed locally instead of being sent to a central location as soon as it is created. Edge computing was a central theme of the recently held Red Hat Summit. Luckily, syslog-ng has been well prepared for this use case from the beginning. While most people only know that syslog-ng can act as a client or a server, it can also collect, process and forward log messages. In syslog-ng terminology this is called a relay, but on the edge you might want to combine server and relay functionality into one.

https://www.syslog-ng.com/community/b/blog/posts/syslog-ng-on-the-edge
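The combined server-plus-relay role described above can be sketched in a few lines of syslog-ng configuration: receive on the network, keep a local copy, and forward upstream. The hostnames, ports, and file paths below are illustrative placeholders, not part of any real deployment.

```
# Sketch of a combined server/relay syslog-ng.conf fragment.
# Hostnames, ports, and paths are placeholders.
source s_net {
  network(ip("0.0.0.0") port(514) transport("udp"));
};
destination d_local { file("/var/log/edge.log"); };
destination d_central {
  network("central.example.com" port(601) transport("tcp"));
};
log {
  source(s_net);
  destination(d_local);
  destination(d_central);
};
```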

Creating a dedicated log management layer

Event logging is a central source of information both for IT security and operations, but different teams use different tools to collect and analyze log messages. The same log message is often collected by multiple applications. Having each team using different tools is complex, inefficient and makes systems less secure. Using a single application to create a dedicated log management layer independent of analytics instead, however, has multiple benefits.

Using syslog-ng is a lot more flexible than most log aggregation tools provided by log analytics vendors. This is one of the reasons why my talks and blogs focused on how to make your life easier using its technical advantages. Of course, I am aware of the direct financial benefits as well. If you are interested in that part, talk to my colleagues on the business side. They can help you to calculate how much you can save on your SIEM licenses when syslog-ng collects log messages and ensures that only relevant messages reach your SIEM, and only at a predictably low message rate. You can learn more about this use case on our Optimizing SIEM page.

In this blog, I will focus on a third aspect: simplifying complexity. This was the focus of many of my conference discussions before the COVID-19 pandemic. If we think a bit more about it, we can see that this is not really a third aspect, but a combination of the previous two instead. Using the flexibility of syslog-ng, we create a dedicated log management layer in front of different log analytics solutions. By reducing complexity, we can save in many ways: on computing and human resources, and on licensing when using commercial tools for log analysis as well.

https://www.syslog-ng.com/community/b/blog/posts/creating-a-dedicated-log-management-layer

I will cover this topic more in depth in my upcoming talk at the Pass the SALT conference: https://pass-the-salt.org/

Using syslog-ng in WSL

Windows Subsystem for Linux (WSL) is an optional feature of Windows 10 for developers who want the power of Linux (especially the Linux shell) on their Windows desktops. Of course, it is more than just a shell: you can easily install and run any command line applications (but not GUI). As a Linux desktop user, I do not need WSL to access a Linux shell, but as I am often asked how syslog-ng runs in WSL, I finally gave it a try.

The recurring questions are whether syslog-ng runs at all in WSL and how its performance compares to syslog-ng installed on Linux. As I run openSUSE Leap 15.1 as my main operating system on my laptop, I used that in WSL as well. I tested not just WSL 1, which has been generally available for years, but also the upcoming WSL 2, which brings tons of performance improvements. As WSL 2 involves virtualization, I also tested syslog-ng in VMware Workstation running on Windows. In all cases, I used the latest syslog-ng 3.26 from my unofficial syslog-ng repository for openSUSE and a minimally modified syslog-ng.conf to enable the network source. Benchmarking was done both from localhost and from a small Xeon server on the local network, attached through Gigabit Ethernet.

https://www.syslog-ng.com/community/b/blog/posts/using-syslog-ng-in-wsl

CONFERENCES

WEBINARS


Your feedback and news, or tips about the next issue are welcome. To read this newsletter online, visit: https://syslog-ng.com/blog/

There are scheduled downtimes in progress

Posted by Fedora Infrastructure Status on June 11, 2020 06:48 AM
Service 'COPR Build System' now has status: good: Everything seems to be working.


Getting off Facebook.

Posted by Adam Young on June 11, 2020 01:03 AM

Leave a comment here if you are a Facebook friend that wants to stay in touch.

[Howto] Create your own cloud gaming server to stream games to Fedora

Posted by Roland Wolters on June 10, 2020 02:37 PM
<figure class="alignright size-medium is-resized"></figure>

A few months back I wanted to give a game a try which only runs on Windows and requires a dedicated GPU. Since I have neither of those, I decided to set up my own Windows cloud gaming server to stream the game to my Linux machine.

Dozens of years ago there was one game I played day and night. For weeks, months, maybe even years. Till today I can still remember the distinct soundtrack which makes the hair stand up on the back of my neck: UFO: Enemy Unknown. I loved the game! A few years ago I also played one of the open source games inspired by UFO for quite some time, UFO: AI. That was fun.

Two sequels to the original game were released over the last couple of years. But they never really were an option since they required Windows (or so I thought) and, above all, time. However, a few months ago I first realized that one of the sequels, XCOM: Enemy Unknown, was available for Android. Since I have a brand new flagship Android tablet I gave it a shot – and it was great! But since the Android version was seriously limited, I played it again on Linux. That barely worked with my limited Intel GPU. But it was playable, and I had fun.

I was infected with the urge to play the game more – and when a third sequel was announced, I at least wanted to play the second one, XCOM 2. But how? My GPU was too limited, and eGPUs are expensive and often involve a lot of hassle – even if I were willing to buy a Windows license. So I checked whether cloud gaming could do the trick.

Cloud Gaming Services

The idea of cloud gaming is that heavy machines in the data center do the rendering, and the client machine only displays the end result. That shifts the burden of the powerful GPU to the data center, and the client only needs simple graphics to show a stream of images. This does, however, require a rather responsive broadband connection between the client and the data center.

This principle is not new, but got new attention recently when Google announced their cloud gaming offer Stadia. I checked if any cloud gaming services offered my game of choice – and were available on Linux. Unfortunately, the results were disappointing:

  • Stadia: no XCOM2; Linux client available via the Chrome browser (thanks to zesoup)
  • GeForce Now: no XCOM2, no Linux client
  • Playstation Now: XCOM2 available, but no Linux client
  • Vortex: no XCOM2, no Linux client

Some of the above can be used on Linux with the help of Lutris, which uses Wine in the background. But for me that would only count as a last resort. I was not that desperate yet.

However, not all was lost yet: some services are not tied to a certain game catalog, but instead offer a generic server and client onto which you can install your games. At first, the research results were promising: shadow.tech offers machines for just that and a working Linux client! However, they are not available in my region.

The solution: Parsec

So with all ready-to-consume options out of the picture, I was almost willing to give up (or give Lutris and Playstation Now a chance, or even buy an eGPU). But then I stumbled upon something interesting: Parsec, a client for interactive game streaming.

Parsec is a high performance, low latency 60 FPS remote access product connecting you to your computer from anywhere.

Parsec features

That itself didn’t solve my problem. But it opened a window to a new solution: in the past, the company offered cloud hosted game servers on their own. Players could connect to it with their Parsec client and play games on them together – or on their own. The Parsec promise is that their client is fast enough for a reasonable good experience.

The server offer was canceled some time ago – but there was nothing stopping me from launching my own server and connecting the Parsec client to it. And that is what I did. Read on to learn how to do that yourself.

Step 1: Getting a Windows cloud server with a reasonable GPU

What is needed is a cloud hosted Windows machine with a reasonable GPU. Ideally, the data center hosting the machine should not be on the other side of the planet. AWS, Azure, GCP and others have such offers. But there is an even better route: during my research I found Paperspace, a company specialized in providing access to GPU or AI cloud platforms. That is perfect for this use case!

Paperspace does not really advertise their support for gaming platforms. But after I signed up and looked at what was needed to create my first cloud server, I found a Parsec template:

<figure class="wp-block-image size-large"></figure>

That makes the entire process very easy!

  • Sign up with Paperspace, get billing sorted out (yes, this stuff costs money)
  • Get to Core -> Compute -> Machines, create a new machine
  • From Public Templates, get the Parsec cloud gaming template
  • Pick the right size for your games; for me a P4000 was enough.
  • Make sure to add a public IP and enough storage. Many of today's games easily consume dozens of GB
  • Set the auto-shutdown timer. No need to waste money.
  • Start the machine.

And that’s it already. Once the machine starts, you will notice a Parsec icon on the home screen. Time to get that working.

Step 2: Get Parsec

Parsec has clients for Linux-based operating systems such as Ubuntu and Raspberry Pi. There is even an AppImage or a Snap – unfortunately not a Flatpak yet. However, if you are not willing to use AppImage or Snap for whatever reason, you can download the Ubuntu deb and create an RPM out of it. There is even a handy script for that. Anyway, get it installed.

Sign up to Parsec, start the client, log in, and you are almost there:

<figure class="wp-block-image size-large"></figure>

Step 3: Play

After Parsec is all set, just start the cloud server, start Parsec there (maybe log in to your Parsec account), connect to the session on your client – and you are good to go: You can start playing!

For a first test I just watched some Youtube videos and was surprised by the quality. Next I logged in to my Steam account, got my XCOM2 installed and played along happily!

Performance and user experience

But how good is the performance? Well, that depends mostly on one factor: network. Due to unfortunate circumstances I was “able” to test this setup with three very distinct networks in a short time frame:

  • A rather slowish, unstable WiFi with a lot of jitter
  • A LTE connection, provided to me via WiFi hotspot
  • A top-notch, high performance mesh WiFi

When you have slow pings (anything above 25 ms) and/or a lot of jitter, I cannot recommend that you go down this path. Otherwise it can be a serious option!

The first network I was on was horribly slow, and the experience was horrible. XCOM2 has basically permanent background music, and the constant interruptions in the music and audio sequences were in fact the worst for me.

The LTE based network was slightly better, but still far from a native feeling. I was able to get a good experience out of this and have fun, but that was about it.

However, the third option, WiFi of almost wired quality, was so good that at times I forgot that I was not playing the game natively. There was no visible lag, the graphics were crystal clear, the music was never interrupted, etc. I was impressed – and had great sessions that way!

I can only recommend always keeping an eye on the connection quality reported in the Parsec overlay:

<figure class="aligncenter size-large"></figure>

As Parsec mentions:

At 60 frames per second, 1 frame is around 16ms. By combining decode, encode and network, you’ll have the amount of frames the client lags behind.

Parsec on lag and latency

Having this in mind, the above screenshot shows a connection with an unfortunate lag, leading to a not-that-good experience.
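Parsec's rule of thumb above turns into a one-line calculation. The sketch below shows it in Python; the decode, encode, and network figures are made-up example values, not measurements from my sessions.

```python
# At 60 FPS one frame lasts 1000/60 ≈ 16.7 ms; total per-frame
# overhead divided by the frame time gives how many frames the
# client trails behind the server.
FRAME_MS = 1000 / 60

def frames_behind(decode_ms: float, encode_ms: float, network_ms: float) -> float:
    return (decode_ms + encode_ms + network_ms) / FRAME_MS

# Example values: 4 ms decode, 6 ms encode, 30 ms network latency.
print(round(frames_behind(4, 6, 30), 1))  # → 2.4
```

So with those example numbers the client runs a bit over two frames behind, which is already noticeable in a fast-paced game.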

Recap

If you don’t have the hardware and/or software to play your favorite game, cloud gaming can be a solution for your problem. And if there is no proper offering out there, it is possible to get this working on your own.

Running your own cloud gaming server is surprisingly easy and not too expensive. It does feel somewhat weird in the beginning, especially if you usually only use clouds for your professional work. But it is a fun experience, and the results can be staggering – if your network is up for the job!

Featured image by Martin Str from Pixabay

Release of jquery-async-gravatar v1.1.2

Posted by Guillaume Kulakowski on June 10, 2020 06:34 AM

I have just updated my jQuery plugin for asynchronously loading Gravatars. On the agenda: migration from grunt-qunit to Karma. The reason is simple: grunt-qunit offered no code coverage! And when you are close to 95%, it is a shame not to display it! On the CI/CD side, I am continuing the […]

The post Release of jquery-async-gravatar v1.1.2 appeared first on Guillaume Kulakowski's blog.

Cockpit 221

Posted by Cockpit Project on June 10, 2020 12:00 AM

Cockpit is the modern Linux admin interface. We release regularly. Here are the release notes from version 221.

Support for Cross-Origin-Resource-Policy

Cockpit’s web server now sets the Cross-Origin-Resource-Policy header to same-origin. Web browsers use this header to prevent other web sites from loading individual HTML, JavaScript, or image resources from Cockpit via <img>, <script>, or similar tags. This is mostly a precaution – currently there are no known vulnerabilities with including resources that way, but as this is not a legitimate use case, there is no reason to allow it.
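To see what the header looks like on the wire, here is a minimal Python sketch of a server that sends the same header. This only illustrates the header itself; it is not how Cockpit's own web server implements it.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello"
        self.send_response(200)
        # The same header Cockpit's web server now sets: browsers refuse
        # to load such a resource cross-origin via <img>/<script> tags.
        self.send_header("Cross-Origin-Resource-Policy", "same-origin")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo output quiet

# Bind to an ephemeral port and serve in a background thread.
server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

with urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/") as resp:
    corp = resp.headers["Cross-Origin-Resource-Policy"]

server.shutdown()
print(corp)  # → same-origin
```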

Accounts: Hide some buttons when access is limited

Instead of disabling the appropriate buttons when the logged in user doesn’t have administrative access, they will now simply disappear. This is a general theme that we want to roll out all over Cockpit. The Overview, Storage, and Network pages have already been changed like this in earlier releases.

Developers: Importing “base1/patternfly.css” is deprecated

This pre-compiled stylesheet will be dropped in the future in favor of projects shipping their own CSS. This API is not maintainable, as Cockpit cannot offer a PatternFly 3 API forever, and PatternFly 4 also changes quickly enough that one style sheet for all projects is not robust enough.

The Cockpit plugins that are using only PatternFly 4 should follow the example from starter-kit on how to import PatternFly 4 stylesheets. The Cockpit plugins which are still relying on PatternFly 3 should follow the migration from the deprecated API to the new PatternFly stylesheet import approach as implemented in this cockpit-podman commit.

Try it out

Cockpit 221 is available now:

Bug Hunting in Python

Posted by Jeremy Cline on June 09, 2020 01:23 PM

Having given the blog a good four years to really settle in and get comfortable, I thought it was about time I wrote a post. I don’t want to strain myself, so it’ll be short and incomplete (don’t worry, I’ll pad it with lots of debugging output so you can give that page key a workout).

There are a myriad of ways to debug Python applications. My first stop is typically pdb. It's simple and ships with Python.

When the Python process in question makes pdb difficult to use, like when it's being run by systemd, rpdb makes it very easy to remotely debug the program. rpdb even supports SIGTRAP, which is very helpful in situations where the program is hanging and you want to quickly find out where: just install the signal handler with import rpdb; rpdb.handle_trap(), run the process until it seems to get stuck, send the process SIGTRAP (consult man 7 signal for the signal number on your platform, it's likely 5) and connect to the debugger with nc 127.0.0.1 4444.
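The handle_trap() mechanism can be approximated with the standard library alone. The sketch below only records a stack trace on SIGTRAP instead of opening rpdb's remote pdb session on port 4444, so treat it as an illustration of the signal-handler trick rather than rpdb itself (Linux/macOS only, since Windows lacks SIGTRAP).

```python
import os
import signal
import traceback

captured = []

def handle_trap(signum, frame):
    # Record where the program currently is; rpdb.handle_trap() instead
    # opens a remote pdb session listening on 127.0.0.1:4444.
    captured.append("".join(traceback.format_stack(frame)))

signal.signal(signal.SIGTRAP, handle_trap)

# Simulate `kill -TRAP <pid>` being sent from another terminal.
os.kill(os.getpid(), signal.SIGTRAP)
print("stack traces captured:", len(captured))  # → stack traces captured: 1
```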

Occasionally, though, (r)pdb isn’t enough. This could be because the bug involves a C extension or Python itself. For example, during Fedora’s rebase to Python 3.9b1, a package failed to rebuild. Adam Williamson diagnosed the build failure and isolated the package test that resulted in the process hanging forever. Unfortunately, rpdb was insufficient in this case. Upon sending the process a TRAP signal, we’re not rewarded with a remote pdb session:

dev in packaging/python-stompest/stompest/src/twisted on  master [?] via 🐍 v3.9.0b1 (stompest)
❯ python -m pytest -vvvvv -s --ignore stompest/twisted/tests/async_client_integration_test.py

platform linux -- Python 3.9.0b1, pytest-5.4.3, py-1.8.1, pluggy-0.13.1 -- /home/jcline/.virtualenvs/stompest/bin/python3
cachedir: .pytest_cache
rootdir: /home/jcline/packaging/python-stompest/stompest/src/twisted
collected 14 items

stompest/twisted/tests/async_client_test.py::AsyncClientConnectTimeoutTestCase::test_connected_timeout <- ../../../stompest-715f358b8503320ea42bd4de6682916339943edc/src/twisted/stompest/twisted/tests/async_client_test.py INFO:twisted:--> stompest.twisted.tests.async_client_test.AsyncClientConnectTimeoutTestCase.test_connected_timeout <--
INFO:twisted:Factory starting on 44893
INFO:twisted:Starting factory <twisted.internet.protocol.Factory object at 0x7f037f652130>
INFO:stompest.twisted.protocol:Connecting to localhost:44893 ...
INFO:twisted:Starting factory <stompest.twisted.protocol.StompFactory object at 0x7f037f669be0>
DEBUG:stompest.twisted.tests.broker_simulator:Connection made
DEBUG:stompest.twisted.protocol:Sending CONNECT frame [version=1.0]
DEBUG:stompest.twisted.tests.broker_simulator:Received CONNECT frame [version=1.0]
ERROR:stompest.twisted.listener:Disconnect because of failure: STOMP broker did not answer on time [timeout=1.0]
INFO:stompest.twisted.listener:Disconnecting ... [reason=STOMP broker did not answer on time [timeout=1.0]]
INFO:stompest.twisted.listener:Disconnected: Connection was closed cleanly.
DEBUG:stompest.twisted.listener:Calling disconnected errback: STOMP broker did not answer on time [timeout=1.0]



Trace/breakpoint trap (core dumped)

Not great. Never fear, though, for gdb is here to save us. The first thing we can do is take a look at that core dump:

# In this fantasy there's only one core dump
dev in packaging/python-stompest/stompest/src/twisted on  master [?] via 🐍 v3.9.0b1 (stompest) 
❯ coredumpctl list
TIME                            PID   UID   GID SIG COREFILE  EXE
Tue 2020-06-09 09:55:44 EDT   37144  1000  1000   5 present   /usr/bin/python3.9

dev in packaging/python-stompest/stompest/src/twisted on  master [?] via 🐍 v3.9.0b1 (stompest) 
❯ coredumpctl debug python3
           PID: 37144 (python3)
           UID: 1000 (jcline)
           GID: 1000 (jcline)
        Signal: 5 (TRAP)
     Timestamp: Tue 2020-06-09 09:55:43 EDT (1h 0min ago)
  Command Line: python3 -m pytest -vvvvv -s --ignore stompest/twisted/tests/async_client_integration_test.py
    Executable: /usr/bin/python3.9
       Storage: /var/lib/systemd/coredump/core.python3.1000.a5660049602e418c9ed5d7737a890e5c.37144.1591710943000000000000.lz4
       Message: Process 37144 (python3) of user 1000 dumped core.
                
                Stack trace of thread 37144:
                #0  0x00007f038e37157e PyException_GetContext (libpython3.9.so.1.0 + 0x18d57e)
                #1  0x00007f038e2ffa38 _PyErr_SetObject (libpython3.9.so.1.0 + 0x11ba38)
                #2  0x00007f038e30895d gen_send_ex (libpython3.9.so.1.0 + 0x12495d)
                #3  0x00007f038e3c4a81 _gen_throw (libpython3.9.so.1.0 + 0x1e0a81)
                #4  0x00007f038e3c497a gen_throw (libpython3.9.so.1.0 + 0x1e097a)
                #5  0x00007f038e30a180 method_vectorcall_VARARGS (libpython3.9.so.1.0 + 0x126180)
                #6  0x00007f038e2f44aa call_function (libpython3.9.so.1.0 + 0x1104aa)
                #7  0x00007f038e2ece9e _PyEval_EvalFrameDefault (libpython3.9.so.1.0 + 0x108e9e)
                #8  0x00007f038e2f9a6b function_code_fastcall (libpython3.9.so.1.0 + 0x115a6b)
                #9  0x00007f038e2f44aa call_function (libpython3.9.so.1.0 + 0x1104aa)
                #10 0x00007f038e2ece9e _PyEval_EvalFrameDefault (libpython3.9.so.1.0 + 0x108e9e)
                #11 0x00007f038e2ebf59 _PyEval_EvalCode (libpython3.9.so.1.0 + 0x107f59)
                #12 0x00007f038e2f97b6 _PyFunction_Vectorcall (libpython3.9.so.1.0 + 0x1157b6)
                #13 0x00007f038e2f44aa call_function (libpython3.9.so.1.0 + 0x1104aa)
                #14 0x00007f038e2eccc3 _PyEval_EvalFrameDefault (libpython3.9.so.1.0 + 0x108cc3)
                #15 0x00007f038e2ebf59 _PyEval_EvalCode (libpython3.9.so.1.0 + 0x107f59)
                #16 0x00007f038e2f97b6 _PyFunction_Vectorcall (libpython3.9.so.1.0 + 0x1157b6)
                #17 0x00007f038e2ef9fa _PyEval_EvalFrameDefault (libpython3.9.so.1.0 + 0x10b9fa)
                #18 0x00007f038e2f9a6b function_code_fastcall (libpython3.9.so.1.0 + 0x115a6b)
                #19 0x00007f038e2f44aa call_function (libpython3.9.so.1.0 + 0x1104aa)
                #20 0x00007f038e2ece9e _PyEval_EvalFrameDefault (libpython3.9.so.1.0 + 0x108e9e)
                #21 0x00007f038e2f9a6b function_code_fastcall (libpython3.9.so.1.0 + 0x115a6b)
                #22 0x00007f038e2f44aa call_function (libpython3.9.so.1.0 + 0x1104aa)
                #23 0x00007f038e2ece9e _PyEval_EvalFrameDefault (libpython3.9.so.1.0 + 0x108e9e)
                #24 0x00007f038e2eb9a1 _PyEval_EvalCode (libpython3.9.so.1.0 + 0x1079a1)
                #25 0x00007f038e2f97b6 _PyFunction_Vectorcall (libpython3.9.so.1.0 + 0x1157b6)
                #26 0x00007f038e2f44aa call_function (libpython3.9.so.1.0 + 0x1104aa)
                #27 0x00007f038e2ece9e _PyEval_EvalFrameDefault (libpython3.9.so.1.0 + 0x108e9e)
                #28 0x00007f038e2ebf59 _PyEval_EvalCode (libpython3.9.so.1.0 + 0x107f59)
                #29 0x00007f038e2f97b6 _PyFunction_Vectorcall (libpython3.9.so.1.0 + 0x1157b6)
                #30 0x00007f038e2f44aa call_function (libpython3.9.so.1.0 + 0x1104aa)
                #31 0x00007f038e2eccc3 _PyEval_EvalFrameDefault (libpython3.9.so.1.0 + 0x108cc3)
                #32 0x00007f038e2ebf59 _PyEval_EvalCode (libpython3.9.so.1.0 + 0x107f59)
                #33 0x00007f038e2f97b6 _PyFunction_Vectorcall (libpython3.9.so.1.0 + 0x1157b6)
                #34 0x00007f038e2ef9fa _PyEval_EvalFrameDefault (libpython3.9.so.1.0 + 0x10b9fa)
                #35 0x00007f038e2f9a6b function_code_fastcall (libpython3.9.so.1.0 + 0x115a6b)
                #36 0x00007f038e2f44aa call_function (libpython3.9.so.1.0 + 0x1104aa)
                #37 0x00007f038e2ece9e _PyEval_EvalFrameDefault (libpython3.9.so.1.0 + 0x108e9e)
                #38 0x00007f038e2f9a6b function_code_fastcall (libpython3.9.so.1.0 + 0x115a6b)
                #39 0x00007f038e2f44aa call_function (libpython3.9.so.1.0 + 0x1104aa)
                #40 0x00007f038e2ece9e _PyEval_EvalFrameDefault (libpython3.9.so.1.0 + 0x108e9e)
                #41 0x00007f038e2eb9a1 _PyEval_EvalCode (libpython3.9.so.1.0 + 0x1079a1)
                #42 0x00007f038e2f97b6 _PyFunction_Vectorcall (libpython3.9.so.1.0 + 0x1157b6)
                #43 0x00007f038e2f44aa call_function (libpython3.9.so.1.0 + 0x1104aa)
                #44 0x00007f038e2ece9e _PyEval_EvalFrameDefault (libpython3.9.so.1.0 + 0x108e9e)
                #45 0x00007f038e2f9a6b function_code_fastcall (libpython3.9.so.1.0 + 0x115a6b)
                #46 0x00007f038e2f44aa call_function (libpython3.9.so.1.0 + 0x1104aa)
                #47 0x00007f038e2ece9e _PyEval_EvalFrameDefault (libpython3.9.so.1.0 + 0x108e9e)
                #48 0x00007f038e2ebf59 _PyEval_EvalCode (libpython3.9.so.1.0 + 0x107f59)
                #49 0x00007f038e2f97b6 _PyFunction_Vectorcall (libpython3.9.so.1.0 + 0x1157b6)
                #50 0x00007f038e2f44aa call_function (libpython3.9.so.1.0 + 0x1104aa)
                #51 0x00007f038e2eccc3 _PyEval_EvalFrameDefault (libpython3.9.so.1.0 + 0x108cc3)
                #52 0x00007f038e3088da gen_send_ex (libpython3.9.so.1.0 + 0x1248da)
                #53 0x00007f038e30157e method_vectorcall_O (libpython3.9.so.1.0 + 0x11d57e)
                #54 0x00007f038e2f44aa call_function (libpython3.9.so.1.0 + 0x1104aa)
                #55 0x00007f038e2ece9e _PyEval_EvalFrameDefault (libpython3.9.so.1.0 + 0x108e9e)
                #56 0x00007f038e2ebf59 _PyEval_EvalCode (libpython3.9.so.1.0 + 0x107f59)
                #57 0x00007f038e2f97b6 _PyFunction_Vectorcall (libpython3.9.so.1.0 + 0x1157b6)
                #58 0x00007f038e2f44aa call_function (libpython3.9.so.1.0 + 0x1104aa)
                #59 0x00007f038e2eccc3 _PyEval_EvalFrameDefault (libpython3.9.so.1.0 + 0x108cc3)
                #60 0x00007f038e2ebf59 _PyEval_EvalCode (libpython3.9.so.1.0 + 0x107f59)
                #61 0x00007f038e2f97b6 _PyFunction_Vectorcall (libpython3.9.so.1.0 + 0x1157b6)
                #62 0x00007f038e2f44aa call_function (libpython3.9.so.1.0 + 0x1104aa)
                #63 0x00007f038e2eccc3 _PyEval_EvalFrameDefault (libpython3.9.so.1.0 + 0x108cc3)
                
                Stack trace of thread 37145:
                #0  0x00007f038e1d4a24 futex_abstimed_wait_cancelable (libpthread.so.0 + 0x12a24)
                #1  0x00007f038e1d4b28 __new_sem_wait_slow (libpthread.so.0 + 0x12b28)
                #2  0x00007f038e35da5e PyThread_acquire_lock_timed (libpython3.9.so.1.0 + 0x179a5e)
                #3  0x00007f038e36c1c1 acquire_timed (libpython3.9.so.1.0 + 0x1881c1)
                #4  0x00007f038e36bfcb lock_PyThread_acquire_lock (libpython3.9.so.1.0 + 0x187fcb)
                #5  0x00007f038e2fa892 method_vectorcall_VARARGS_KEYWORDS (libpython3.9.so.1.0 + 0x116892)
                #6  0x00007f038e2f44aa call_function (libpython3.9.so.1.0 + 0x1104aa)
                #7  0x00007f038e2ece9e _PyEval_EvalFrameDefault (libpython3.9.so.1.0 + 0x108e9e)
                #8  0x00007f038e2eb9a1 _PyEval_EvalCode (libpython3.9.so.1.0 + 0x1079a1)
                #9  0x00007f038e2f97b6 _PyFunction_Vectorcall (libpython3.9.so.1.0 + 0x1157b6)
                #10 0x00007f038e2f44aa call_function (libpython3.9.so.1.0 + 0x1104aa)
                #11 0x00007f038e2ece9e _PyEval_EvalFrameDefault (libpython3.9.so.1.0 + 0x108e9e)
                #12 0x00007f038e2eb9a1 _PyEval_EvalCode (libpython3.9.so.1.0 + 0x1079a1)
                #13 0x00007f038e2f97b6 _PyFunction_Vectorcall (libpython3.9.so.1.0 + 0x1157b6)
                #14 0x00007f038e302380 method_vectorcall (libpython3.9.so.1.0 + 0x11e380)
                #15 0x00007f038e356d0c _PyObject_CallNoArg.lto_priv.17 (libpython3.9.so.1.0 + 0x172d0c)
                #16 0x00007f038e3cf9e9 calliter_iternext (libpython3.9.so.1.0 + 0x1eb9e9)
                #17 0x00007f038e2ed165 _PyEval_EvalFrameDefault (libpython3.9.so.1.0 + 0x109165)
                #18 0x00007f038e2ebd6f _PyEval_EvalCode (libpython3.9.so.1.0 + 0x107d6f)
                #19 0x00007f038e2f97b6 _PyFunction_Vectorcall (libpython3.9.so.1.0 + 0x1157b6)
                #20 0x00007f038e2ef9fa _PyEval_EvalFrameDefault (libpython3.9.so.1.0 + 0x10b9fa)
                #21 0x00007f038e2f9a6b function_code_fastcall (libpython3.9.so.1.0 + 0x115a6b)
                #22 0x00007f038e2f44aa call_function (libpython3.9.so.1.0 + 0x1104aa)
                #23 0x00007f038e2ece9e _PyEval_EvalFrameDefault (libpython3.9.so.1.0 + 0x108e9e)
                #24 0x00007f038e2f9a6b function_code_fastcall (libpython3.9.so.1.0 + 0x115a6b)
                #25 0x00007f038e2f44aa call_function (libpython3.9.so.1.0 + 0x1104aa)
                #26 0x00007f038e2ece9e _PyEval_EvalFrameDefault (libpython3.9.so.1.0 + 0x108e9e)
                #27 0x00007f038e2f9a6b function_code_fastcall (libpython3.9.so.1.0 + 0x115a6b)
                #28 0x00007f038e302380 method_vectorcall (libpython3.9.so.1.0 + 0x11e380)
                #29 0x00007f038e3b1346 t_bootstrap (libpython3.9.so.1.0 + 0x1cd346)
                #30 0x00007f038e3b12f4 pythread_wrapper (libpython3.9.so.1.0 + 0x1cd2f4)
                #31 0x00007f038e1cb432 start_thread (libpthread.so.0 + 0x9432)
                #32 0x00007f038e63a9d3 __clone (libc.so.6 + 0x1019d3)

Hmm. If we wanted to work hard, we could perhaps do some more investigation on the core dump, but let’s play with another one of gdb’s features. gdb is a general-purpose debugger that can, among other things, attach to running processes. Running gdb --pid=<pytest pid> attaches us to the running process, where we can poke around. gdb’s default interface is rather… minimal, but it can be customized with projects like gdb-dashboard, which is what I’m using in the output below.

If you’re running on Fedora, you’ll likely be presented with a warning about missing debuginfo when you attach to the process and should install it using the provided command before continuing. As you can see here, I’ve already installed the debuginfo and debugsource packages for Python 3.9.
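For reference, the workflow looks roughly like this on Fedora (the process pattern and package name here are illustrative; gdb prints the exact dnf command to run when debuginfo is missing):

```shell
# If gdb complains about missing debuginfo, install it first, e.g.:
sudo dnf debuginfo-install python3
# Then find the hung test process and attach gdb to it:
gdb --pid "$(pgrep -f pytest | head -n 1)"
```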

─── Output/messages ────────────────────────────────────────────────────────────
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
PyException_GetContext (self=0x7fd61fe53580) at /usr/src/debug/python39-3.9.0~b1-1.fc32.x86_64/Objects/exceptions.c:350
350	    PyObject *context = _PyBaseExceptionObject_cast(self)->context;
─── Assembly ───────────────────────────────────────────────────────────────────
~
~
~
~
~
 0x00007fd62eb87570  PyException_GetContext+0  mov    0x28(%rdi),%rax
 0x00007fd62eb87574  PyException_GetContext+4  test   %rax,%rax
 0x00007fd62eb87577  PyException_GetContext+7  jne    0x7fd62eb8757a <PyException_GetContext+10>
 0x00007fd62eb87579  PyException_GetContext+9  retq   
 0x00007fd62eb8757a  PyException_GetContext+10 addq   $0x1,(%rax)
─── Breakpoints ────────────────────────────────────────────────────────────────
─── Expressions ────────────────────────────────────────────────────────────────
─── History ────────────────────────────────────────────────────────────────────
─── Memory ─────────────────────────────────────────────────────────────────────
─── Registers ──────────────────────────────────────────────────────────────────
   rax 0x00007fd61fe53580   rbx 0x00007fd61fe53580      rcx 0x0000000000000000
   rdx 0x00007fd61fe53a00   rsi 0x00007fd62ed18a20      rdi 0x00007fd61fe53580
   rbp 0x00007fd62ed18a20   rsp 0x00007ffdc884cd58       r8 0x0000000000000004
    r9 0x00005609a47be5f0   r10 0x00007fd61fe53580      r11 0x0000000000000050
   r12 0x00007fd61fe53a00   r13 0x00007fd61fe53580      r14 0x00005609a47bd420
   r15 0x00007fd62ed25540   rip 0x00007fd62eb87570   eflags [ IF ]            
    cs 0x00000033            ss 0x0000002b               ds 0x00000000        
    es 0x00000000            fs 0x00000000               gs 0x00000000        
─── Source ─────────────────────────────────────────────────────────────────────
 345  }
 346  
 347  PyObject *
 348  PyException_GetContext(PyObject *self)
 349  {
 350>     PyObject *context = _PyBaseExceptionObject_cast(self)->context;
 351      Py_XINCREF(context);
 352      return context;
 353  }
 354  
─── Stack ──────────────────────────────────────────────────────────────────────
[0] from 0x00007fd62eb87570 in PyException_GetContext+0 at /usr/src/debug/python39-3.9.0~b1-1.fc32.x86_64/Objects/exceptions.c:350
[1] from 0x00007fd62eb15a38 in _PyErr_SetObject+456 at /usr/src/debug/python39-3.9.0~b1-1.fc32.x86_64/Python/errors.c:145
[2] from 0x00007fd62eb1e95d in _PyErr_SetNone+17 at /usr/src/debug/python39-3.9.0~b1-1.fc32.x86_64/Python/errors.c:199
[3] from 0x00007fd62eb1e95d in PyErr_SetNone+17 at /usr/src/debug/python39-3.9.0~b1-1.fc32.x86_64/Python/errors.c:199
[4] from 0x00007fd62eb1e95d in gen_send_ex+509 at /usr/src/debug/python39-3.9.0~b1-1.fc32.x86_64/Objects/genobject.c:241
[5] from 0x00007fd62ebdaa81 in _gen_throw+225 at /usr/src/debug/python39-3.9.0~b1-1.fc32.x86_64/Objects/genobject.c:513
[6] from 0x00007fd62ebda97a in gen_throw+122 at /usr/src/debug/python39-3.9.0~b1-1.fc32.x86_64/Objects/genobject.c:535
[7] from 0x00007fd62eb20180 in method_vectorcall_VARARGS+176 at /usr/src/debug/python39-3.9.0~b1-1.fc32.x86_64/Objects/descrobject.c:313
[8] from 0x00007fd62eb0a4aa in _PyObject_VectorcallTstate+46 at /usr/src/debug/python39-3.9.0~b1-1.fc32.x86_64/Include/cpython/abstract.h:118
[9] from 0x00007fd62eb0a4aa in PyObject_Vectorcall+80 at /usr/src/debug/python39-3.9.0~b1-1.fc32.x86_64/Include/cpython/abstract.h:127
[+]
─── Threads ────────────────────────────────────────────────────────────────────
[2] id 37996 name python3 from 0x00007fd62e9eaa24 in futex_abstimed_wait_cancelable+42 at ../sysdeps/nptl/futex-internal.h:320
[1] id 37995 name python3 from 0x00007fd62eb87570 in PyException_GetContext+0 at /usr/src/debug/python39-3.9.0~b1-1.fc32.x86_64/Objects/exceptions.c:350
─── Variables ──────────────────────────────────────────────────────────────────
arg self = 0x7fd61fe53580: {ob_refcnt = 19,ob_type = 0x5609a4c818d0}
loc context = <optimized out>
────────────────────────────────────────────────────────────────────────────────
Missing separate debuginfos, use: dnf debuginfo-install expat-2.2.8-2.fc32.x86_64
>>> 

At a glance, we see two operating system threads are running. Let’s start by looking at what thread 2 is up to:

>>> thread 2
[Switching to thread 2 (Thread 0x7fd61fe19700 (LWP 37996))]
#0  futex_abstimed_wait_cancelable (private=0, abstime=0x0, clockid=0, expected=0, futex_word=0x7fd618001450) at ../sysdeps/nptl/futex-internal.h:320
320	  int err = lll_futex_clock_wait_bitset (futex_word, expected,
>>> bt
#0  futex_abstimed_wait_cancelable (private=0, abstime=0x0, clockid=0, expected=0, futex_word=0x7fd618001450) at ../sysdeps/nptl/futex-internal.h:320
#1  do_futex_wait (sem=sem@entry=0x7fd618001450, abstime=0x0, clockid=0) at sem_waitcommon.c:112
#2  0x00007fd62e9eab28 in __new_sem_wait_slow (sem=sem@entry=0x7fd618001450, abstime=0x0, clockid=0) at sem_waitcommon.c:184
#3  0x00007fd62e9eaba1 in __new_sem_wait (sem=sem@entry=0x7fd618001450) at sem_wait.c:42
#4  0x00007fd62eb73a5e in PyThread_acquire_lock_timed (lock=0x7fd618001450, microseconds=<optimized out>, intr_flag=1) at /usr/src/debug/python39-3.9.0~b1-1.fc32.x86_64/Python/thread_pthread.h:463
#5  0x00007fd62eb821c1 in acquire_timed (lock=0x7fd618001450, timeout=<optimized out>) at /usr/src/debug/python39-3.9.0~b1-1.fc32.x86_64/Modules/_threadmodule.c:63
...

This looks a bit intimidating if you’re not familiar with C. Fortunately, gdb knows about Python:

>>> py-bt
Traceback (most recent call first):
  File "/usr/lib64/python3.9/threading.py", line 312, in wait
    waiter.acquire()
  File "/usr/lib64/python3.9/queue.py", line 171, in get
    self.not_empty.wait()
  File "/home/jcline/.virtualenvs/stompest/lib64/python3.9/site-packages/twisted/_threads/_threadworker.py", line 45, in work
    for task in iter(queue.get, _stop):
  File "/usr/lib64/python3.9/threading.py", line 888, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/lib64/python3.9/threading.py", line 950, in _bootstrap_inner
    self.run()
  File "/usr/lib64/python3.9/threading.py", line 908, in _bootstrap
    self._bootstrap_inner()

One thing to note here is that the call stack is “upside down” when compared to the Python tracebacks you’re likely used to, with the most recent call at the top of the stack. Perhaps unsurprisingly given the C stack, we see this thread is waiting for something. This doesn’t seem particularly unusual, so let’s see what the other thread is up to:

>>> thread 1
[Switching to thread 1 (Thread 0x7fd62e881740 (LWP 37995))]
#0  PyException_GetContext (self=<StompCancelledError at remote 0x7fd61fe53580>) at /usr/src/debug/python39-3.9.0~b1-1.fc32.x86_64/Objects/exceptions.c:350
350	    PyObject *context = _PyBaseExceptionObject_cast(self)->context;
>>> py-bt
Traceback (most recent call first):
  File "/home/jcline/.virtualenvs/stompest/lib64/python3.9/site-packages/twisted/python/failure.py", line 512, in throwExceptionIntoGenerator
    return g.throw(self.type, self.value, self.tb)
  File "/home/jcline/.virtualenvs/stompest/lib64/python3.9/site-packages/twisted/internet/defer.py", line 1672, in _inlineCallbacks
    return succeed(False)
  File "/home/jcline/.virtualenvs/stompest/lib64/python3.9/site-packages/twisted/internet/defer.py", line 1475, in gotResult
    _inlineCallbacks(r, g, status)
  File "/home/jcline/.virtualenvs/stompest/lib64/python3.9/site-packages/twisted/internet/defer.py", line 910, in _runCallbacks
    if iscoroutine(coro) or isinstance(coro, GeneratorType):
  File "/home/jcline/.virtualenvs/stompest/lib64/python3.9/site-packages/twisted/internet/defer.py", line 568, in _startRunCallbacks
    self._runCallbacks()
  File "/home/jcline/.virtualenvs/stompest/lib64/python3.9/site-packages/twisted/internet/defer.py", line 501, in errback
    self._startRunCallbacks(fail)
  File "/home/jcline/.virtualenvs/stompest/lib64/python3.9/site-packages/twisted/internet/defer.py", line 2488, in _inlineCallbacks
  File "/home/jcline/.virtualenvs/stompest/lib64/python3.9/site-packages/twisted/internet/defer.py", line 1475, in gotResult
    _inlineCallbacks(r, g, status)
  File "/home/jcline/.virtualenvs/stompest/lib64/python3.9/site-packages/twisted/internet/defer.py", line 910, in _runCallbacks
    if iscoroutine(coro) or isinstance(coro, GeneratorType):
  File "/home/jcline/.virtualenvs/stompest/lib64/python3.9/site-packages/twisted/internet/defer.py", line 568, in _startRunCallbacks
    self._runCallbacks()
  File "/home/jcline/.virtualenvs/stompest/lib64/python3.9/site-packages/twisted/internet/defer.py", line 501, in errback
    self._startRunCallbacks(fail)
  File "/home/jcline/packaging/python-stompest/stompest/src/twisted/stompest/twisted/listener.py", line 111, in onConnectionLost
    connection.disconnected.errback(self._disconnectReason)
  File "/home/jcline/packaging/python-stompest/stompest/src/twisted/stompest/twisted/client.py", line 347, in <lambda>
    yield self._notify(lambda l: l.onConnectionLost(self, reason))
  File "/home/jcline/packaging/python-stompest/stompest/src/twisted/stompest/twisted/client.py", line 337, in _notify
    yield notify(listener)
  File "/home/jcline/.virtualenvs/stompest/lib64/python3.9/site-packages/twisted/internet/defer.py", line 1674, in _inlineCallbacks
  ...

This call stack is very deep due to pytest, pluggy, trial, and twisted, so I have truncated it. The most interesting call is the top one in any case. It appears we’re in the process of throwing an exception, which as a general rule shouldn’t hang, so this thread looks a lot more suspicious than the other one. However, it’s not clear why it’s hanging, so it’s time to roll up our sleeves and dig into what CPython is actually up to:

>>> bt
#0  PyException_GetContext (self=<StompCancelledError at remote 0x7fd61fe53580>) at /usr/src/debug/python39-3.9.0~b1-1.fc32.x86_64/Objects/exceptions.c:350
#1  0x00007fd62eb15a38 in _PyErr_SetObject (tstate=0x5609a47bd420, exception=<type at remote 0x7fd62ed18a20>, value=StopIteration()) at /usr/src/debug/python39-3.9.0~b1-1.fc32.x86_64/Python/errors.c:145
#2  0x00007fd62eb1e95d in _PyErr_SetNone (exception=<optimized out>, tstate=<optimized out>) at /usr/src/debug/python39-3.9.0~b1-1.fc32.x86_64/Python/errors.c:199
#3  PyErr_SetNone (exception=<optimized out>) at /usr/src/debug/python39-3.9.0~b1-1.fc32.x86_64/Python/errors.c:199
#4  gen_send_ex (gen=0x7fd61fe82c10, arg=<optimized out>, exc=<optimized out>, closing=<optimized out>) at /usr/src/debug/python39-3.9.0~b1-1.fc32.x86_64/Objects/genobject.c:241
#5  0x00007fd62ebdaa81 in _gen_throw (gen=0x7fd61fe82c10, close_on_genexit=<optimized out>, typ=<optimized out>, val=<optimized out>, tb=<optimized out>) at /usr/src/debug/python39-3.9.0~b1-1.fc32.x86_64/Objects/genobject.c:513
#6  0x00007fd62ebda97a in gen_throw (gen=0x7fd61fe82c10, args=args@entry=(<type at remote 0x5609a4c818d0>, <StompCancelledError at remote 0x7fd61fe53580>, <traceback at remote 0x7fd61f5f18c0>)) at /usr/src/debug/python39-3.9.0~b1-1.fc32.x86_64/Objects/genobject.c:535

Okay, so it looks like we’re in the exception-handling portion of CPython (not shocking given our look at the Python stack), in a function getting some “context”. It doesn’t appear that anything is intentionally waiting on an event, so let’s step through the program a bit to see what we’re busy doing. It turns out PyException_GetContext doesn’t do much and returns to the caller just fine:

>>> n
>>> n
>>> n
─── Output/messages ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
147	                if (context == value) {
─── Assembly ─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
 0x00007fd62eb15a7e  _PyErr_SetObject+526 mov    %rax,%r15
 0x00007fd62eb15a81  _PyErr_SetObject+529 jmp    0x7fd62eb15a0d <_PyErr_SetObject+413>
 0x00007fd62eb15a83  _PyErr_SetObject+531 mov    %rax,%rdi
 0x00007fd62eb15a86  _PyErr_SetObject+534 mov    %rax,0x8(%rsp)
 0x00007fd62eb15a8b  _PyErr_SetObject+539 callq  0x7fd62eaf6e20 <_Py_DECREF>
 0x00007fd62eb15a90  _PyErr_SetObject+544 mov    0x8(%rsp),%r10
 0x00007fd62eb15a95  _PyErr_SetObject+549 cmp    %r10,%r12
 0x00007fd62eb15a98  _PyErr_SetObject+552 jne    0x7fd62eb15a2d <_PyErr_SetObject+445>
 0x00007fd62eb15a9a  _PyErr_SetObject+554 jmpq   0x7fd62ea65487 <_PyErr_SetObject-721897>
 0x00007fd62eb15a9f  _PyErr_SetObject+559 xor    %edx,%edx
─── Breakpoints ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─── Expressions ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─── History ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─── Memory ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─── Registers ────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
    rax 0x00007fd61fe53580     rbx 0x00007fd61fe53580    rcx 0x0000000000000000       rdx 0x00007fd61fe53a00    rsi 0x00007fd62ed18a20    rdi 0x00007fd61fe53580    rbp 0x00007fd62ed18a20
    rsp 0x00007ffdc884cd60      r8 0x0000000000000004     r9 0x00005609a47be5f0       r10 0x00007fd61fe53580    r11 0x0000000000000050    r12 0x00007fd61fe53a00    r13 0x00007fd61fe53580
    r14 0x00005609a47bd420     r15 0x00007fd62ed25540    rip 0x00007fd62eb15a90    eflags [ IF ]                 cs 0x00000033             ss 0x0000002b             ds 0x00000000        
     es 0x00000000              fs 0x00000000             gs 0x00000000        
─── Source ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
 142             to inline the call to PyException_GetContext. */
 143          if (exc_value != value) {
 144              PyObject *o = exc_value, *context;
 145              while ((context = PyException_GetContext(o))) {
 146                  Py_DECREF(context);
 147>                 if (context == value) {
 148                      PyException_SetContext(o, NULL);
 149                      break;
 150                  }
 151                  o = context;
 152              }
─── Stack ────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
[0] from 0x00007fd62eb15a90 in _PyErr_SetObject+544 at /usr/src/debug/python39-3.9.0~b1-1.fc32.x86_64/Python/errors.c:147
[1] from 0x00007fd62eb1e95d in _PyErr_SetNone+17 at /usr/src/debug/python39-3.9.0~b1-1.fc32.x86_64/Python/errors.c:199
[2] from 0x00007fd62eb1e95d in PyErr_SetNone+17 at /usr/src/debug/python39-3.9.0~b1-1.fc32.x86_64/Python/errors.c:199
[3] from 0x00007fd62eb1e95d in gen_send_ex+509 at /usr/src/debug/python39-3.9.0~b1-1.fc32.x86_64/Objects/genobject.c:241
[4] from 0x00007fd62ebdaa81 in _gen_throw+225 at /usr/src/debug/python39-3.9.0~b1-1.fc32.x86_64/Objects/genobject.c:513
[5] from 0x00007fd62ebda97a in gen_throw+122 at /usr/src/debug/python39-3.9.0~b1-1.fc32.x86_64/Objects/genobject.c:535
[6] from 0x00007fd62eb20180 in method_vectorcall_VARARGS+176 at /usr/src/debug/python39-3.9.0~b1-1.fc32.x86_64/Objects/descrobject.c:313
[7] from 0x00007fd62eb0a4aa in _PyObject_VectorcallTstate+46 at /usr/src/debug/python39-3.9.0~b1-1.fc32.x86_64/Include/cpython/abstract.h:118
[8] from 0x00007fd62eb0a4aa in PyObject_Vectorcall+80 at /usr/src/debug/python39-3.9.0~b1-1.fc32.x86_64/Include/cpython/abstract.h:127
[9] from 0x00007fd62eb0a4aa in call_function+154 at /usr/src/debug/python39-3.9.0~b1-1.fc32.x86_64/Python/ceval.c:5099
[+]
─── Threads ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
[2] id 37996 name python3 from 0x00007fd62e9eaa24 in futex_abstimed_wait_cancelable+42 at ../sysdeps/nptl/futex-internal.h:320
[1] id 37995 name python3 from 0x00007fd62eb15a90 in _PyErr_SetObject+544 at /usr/src/debug/python39-3.9.0~b1-1.fc32.x86_64/Python/errors.c:147
─── Variables ────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
arg tstate = 0x5609a47bd420: {prev = 0x5609a49fd300,next = 0x0,interp = 0x5609a47be370,frame = Frame 0…, exception = <type at remote 0x7fd62ed18a20>: {ob_refcnt = 7,ob_type = 0x7fd62ed237c0 <PyType_Type>}, value = StopIteration(): {ob_refcnt = 1,ob_type = 0x7fd62ed18a20 <_PyExc_StopIteration>}
loc o = <StompCancelledError at remote 0x7fd61fe53580>: {ob_refcnt = 19,ob_type = 0x5609a4c818d0}, context = <StompCancelledError at remote 0x7fd61fe53580>: {ob_refcnt = 19,ob_type = 0x5609a4c818d0}, exc_value = <StompCancelledError at remote 0x7fd61fe53580>: {ob_refcnt = 19,ob_type = 0x5609a4c818d0}, tb = 0x0: Cannot access memory at address 0x0
──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
>>> 

Now here’s something a little more interesting. That while ((context = PyException_GetContext(o))) loop is just the sort of thing that could hang forever. We can see that the loop retrieves a context from o, checks it against value, and then updates o to be context. Those variables are visible in the variables section above, but for clarity:

>>> p o
$1 = <StompCancelledError at remote 0x7fd61fe53580>
>>> p context
$2 = <StompCancelledError at remote 0x7fd61fe53580>
>>> p value
$3 = StopIteration()
>>>

Well, well, well. Since o is exactly the same object as context, the update at the end of the loop never makes progress, and the exit condition it’s presumably supposed to reach never occurs.
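To make the failure mode concrete, here is a minimal Python sketch (my own illustration, not CPython’s actual C code) of the context-chain walk from errors.c, with a step limit added so the sketch itself can’t hang:

```python
# A minimal sketch (illustration only, not CPython's C code) of the loop in
# _PyErr_SetObject that walks an exception's __context__ chain looking for
# `value`, so it can unlink it and avoid creating a cycle.

def find_and_unlink_context(exc_value, value, max_steps=100):
    """Mimic the C loop; the step limit exists only to keep this sketch finite."""
    o = exc_value
    steps = 0
    while (context := o.__context__) is not None:
        steps += 1
        if steps > max_steps:
            return "hang: the context chain contains a cycle"
        if context is value:
            o.__context__ = None   # unlink and stop, as the C code does
            return f"unlinked after {steps} step(s)"
        o = context                # advance down the chain
    return "no matching context found"

# Reproduce the pathological state from the debug session: an exception
# whose __context__ points back at itself.
e = RuntimeError("StompCancelledError stand-in")
e.__context__ = e
print(find_and_unlink_context(e, StopIteration()))  # the walk never advances
```

With e.__context__ pointing back at e, o never advances past e, and without the step limit the real C loop spins forever.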

Now that we know what’s wrong, it’s a very good time to do some hunting to determine if this bug has already been found. This is a process I can give very little advice on since it appears to be a rule of the universe that all bug reports are duplicates. What I did in this case was:

dev in ~/devel/c/cpython on  3.9 via 🐍 v3.8.3
❯ git log --grep="SetObject"  # Maybe the function name is in a commit fixing it

commit 7f77ac463cff219e0c8afef2611cad5080cc9df1
Author: Miss Islington (bot) <31488909+miss-islington@users.noreply.github.com>
Date:   Fri May 22 14:35:22 2020 -0700

    bpo-40696: Fix a hang that can arise after gen.throw() (GH-20287)
    
    This updates _PyErr_ChainStackItem() to use _PyErr_SetObject()
    instead of _PyErr_ChainExceptions(). This prevents a hang in
    certain circumstances because _PyErr_SetObject() performs checks
    to prevent cycles in the exception context chain while
    _PyErr_ChainExceptions() doesn't.
    (cherry picked from commit 7c30d12bd5359b0f66c4fbc98aa055398bcc8a7e)
    
    Co-authored-by: Chris Jerdonek <chris.jerdonek@gmail.com>

Ding ding ding. We can check out the GitHub pull request and Python issue to see that all this debugging has happened before (and all this debugging will happen again).

The quality of commit messages varies from project to project, so this won’t always work, but it’s a good start. If nothing obvious shows up in the commit log, try testing with the development branch of whatever is causing you pain. Maybe it’s fixed. If that fails, skim through the issue tracker (if the project has an issue tracker) and hope for the best.
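One more trick worth knowing for this kind of hunt: git’s “pickaxe” search finds commits whose diff adds or removes a given string, which works even when the commit message never mentions the function name. The snippet below demonstrates it against a throwaway repo (file contents and commit messages are made up; in practice you’d run the final git log line inside your cpython checkout):

```shell
# Toy demonstration of git's pickaxe search in a throwaway repository.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=you@example.com -c user.name=you commit -q --allow-empty -m "initial commit"
printf 'while ((context = PyException_GetContext(o)))\n' > errors.c
git add errors.c
git -c user.email=you@example.com -c user.name=you commit -q -m "fix: break a cycle in the exception context chain"
# -S lists commits whose diff adds or removes the given string:
git log -S "PyException_GetContext" --oneline
```

git log -G takes a regex instead of a literal string and also matches lines that merely changed, which casts a slightly wider net.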

There are scheduled downtimes in progress

Posted by Fedora Infrastructure Status on June 09, 2020 12:35 PM
New status scheduled: Move to new datacentre for services: Ipsilon, Badges, Blockerbugs, Package Updates Manager, Fedora Infrastructure Cloud, COPR Build System, Documentation website, Fedora elections, Account System, Fedora Messaging Bus, Fedora Calendar, Fedora pastebin service, The Koji Buildsystem, Koschei Continuous Integration, Kerberos, Mailing Lists, Module Build Service, Mirror List, Mirror Manager, Fedora Packages App, Pagure, Fedora People, Package maintainers git repositories, Fedora Container Registry, ABRT Server, Fedora websites, Fedora Wiki, Zodbot IRC bot
