Fedora People

Harassment, expulsion and rejecting the Code of Conduct

Posted by Daniel Pocock on September 24, 2020 07:20 PM

The Fedora Council is currently considering a proposal on Community Publishing Platforms which appears to apply Fedora's Code of Conduct in more places.

Consider the following quote from the Code of Conduct:

Members of the Fedora community should be respectful when dealing with other contributors as well as with people outside the Fedora community and with users of Fedora.

Animal Farm

It seems quite reasonable. Think again: George Orwell's Animal Farm was seen as incredibly disrespectful to friends in the Soviet Union, comparing Stalin's totalitarian regime to a bunch of pigs. Russia was a British ally. Most publishers didn't want to touch the book for this very reason. It should be clear that there is no way Animal Farm's pig metaphor is compatible with the Code of Conduct as it is currently written.

Red Hat owns the Fedora trademark and any trademark owner has a right to control the trademark. At the same time, when somebody does work and contributes intellectual property to any free software project, they have a right to assert that they are the Developer and this will inevitably involve using the trademark too. With at least one organization, Debian, refusing to recognise all Developers equally, it is important to scrutinize this issue. After all, not fully and respectfully crediting every Developer is as unpleasant as plagiarism.

There are many signs that Fedora and Red Hat have no intention of going to such extremes. Nonetheless, it is always a good idea to learn from the mistakes of other organizations that are trying to re-invent intellectual property rights to beat people over the head.

Oddly enough, both Red Hat's trademark rights and the Developers' authorship rights can be traced back to the same place, Article 27 of the Universal Declaration of Human Rights.

  1. Everyone has the right freely to participate in the cultural life of the community, to enjoy the arts and to share in scientific advancement and its benefits.
  2. Everyone has the right to the protection of the moral and material interests resulting from any scientific, literary or artistic production of which he is the author.

Last time I cited this article in a blog post, the blog was immediately censored from Planet Debian and Planet Mozilla. I feel that some free software organizations take a one-sided view of these rights: they insist on the freedoms to use free software while showing contempt for any other rights, such as volunteers' right to speak or vote.

It is an interesting moment to look at the stated goals of some of these organizations. Fedora's Mission page defines the Four Foundations. The first foundation is Freedom, which includes:

We are dedicated to free software and content. Advancing software and content freedom is a central community goal, which we accomplish through the software and content we promote.

Mozilla's Manifesto states:

We are committed to an internet that promotes civil discourse, human dignity, and individual expression. We are committed to an internet that elevates critical thinking, reasoned argument, shared knowledge, and verifiable facts.

When that blog post encouraged critical thinking about the rights of volunteers, Mozilla burned its own manifesto by censoring the post from Planet Mozilla.

Fedora's Community Publishing Platforms proposal on its own doesn't appear very controversial until you think through all of this context. To understand why there is a problem, we need to consider how Codes of Conduct have been misused.

The term Code of Conduct has been used to give smears an appearance of credibility. If a smear against a volunteer is based on the Code of Conduct then it suggests there may have been some misconduct. To suggest misconduct, with the weight of the organization's reputation, is an attempt to harm somebody.

Whenever I see a reference to a Code of Conduct in a free software community, the first thing that comes to mind is that I resigned from some of my voluntary roles after the loss of a family member, and the people who were oblivious to that immediately started the CoC-insult routine. Each time I see another volunteer being insulted and attacked with this weapon, in any free software community, it also reminds me of the same attacks on my family and me.

Looking through the history of our industry, before Code of Conduct mania, we can find many examples of people resigning or forking projects without any public acrimony. One example is Theo de Raadt leaving NetBSD to start OpenBSD; Wikipedia dedicates only a few words to the personality clashes behind the split. When Paul Allen left active operations at Microsoft in 1983, there was no public shaming on account of his medical condition. He continued to support the company in various ways. Since we have Codes of Conduct, each time there is a resignation or somebody forks a project, a narrative of wrongdoing can be fabricated.

Codes of Conduct have created a form of extremism. People can't resign gracefully and move on with their life: some organisations retrospectively change a resignation into an expulsion. This is illogical; you can't expel or fire somebody after they resign. Alternatively, if somebody resigns and a vindictive leader feels cheated because he can't indulge himself by expelling the person, the organization uses the Code of Conduct to publicly declare they have been banned.

Think of teenagers breaking up, both parties rushing to their social media accounts to publicly claim who-dumped-who first. Do we really want free software projects to be governed with the mentality of adolescent romance?

Leaving a voluntary role in a free software organization has become akin to leaving North Korea: any officials who want to resign have to run across the border to the south while getting shot at.

In 2011, immediately after the arrest of Dominique Strauss-Kahn (DSK) on false rape charges, the police sought to publicly shame and humiliate him by taking him, the head of the IMF, on a perp walk, handcuffed, in front of the TV cameras. Yet after it became obvious that the police had been fooled, they came off looking like Keystone Cops.

Most countries don't indulge in these perp walks. Some countries, like Germany and Switzerland, give significant privacy protections to anybody accused of a crime up until a trial has been completed. Proponents of Codes of Conduct have sought to usher in the practice of perp walks as if these things automatically go together.

The most prominent examples of this have been the attacks on Jacob Appelbaum and Richard Stallman. In the Appelbaum case, I recently documented how the existence of first-hand accounts was falsified and amplified across multiple free software communities. Appelbaum was accused of using his membership of free software projects to achieve privilege escalation in other organizations, yet the only evidence of privilege escalation is those using their Codes of Conduct to propagate a one-sided story. The effect of Codes of Conduct in that instance was to induce ongoing harassment of Appelbaum, his friends and neighbours. Using Codes of Conduct and the associated processes induced organizations as far away as Australia to take sides and jump to conclusions. The Code of Conduct was inferior in every way to the proper authorities for handling such complaints; this underlines how important it is to reject these inferior Codes of Conduct.

In parallel, the Fedora community is currently investigating a Code of Ethics and that may provide a credible way forward. Many professional organizations have a Code of Ethics and these codes typically have safety mechanisms encouraging dispute resolution rather than vendettas. I feel the Code of Ethics initiative is far more important than the proposed Community Publishing Platforms policy. I strongly believe these documents need to be worked on together, rather than rushing through the Community Publishing Platforms policy in two weeks.

The difference between a Code of Conduct, as we know it in free, open source software and a Code of Ethics in a professional body is much like the difference between adolescent romances and a well organized professional sporting team.

The ACM's Code of Ethics provides interesting reading. Consider the paragraph on Conflicts of Interest:

Computing professionals should be forthright about any circumstances that might lead to either real or perceived conflicts of interest or otherwise tend to undermine the independence of their judgment.

This has far reaching consequences. If we go back to the Appelbaum example, we can see that people who made decisions on behalf of the Tor organization had been personally and romantically involved in the case. These people were therefore ineligible to use Tor's name to make a public statement. They had the option to make statements on a personal basis but not in their official capacity.

One of the questions that arises whenever a Code of Conduct is introduced is the question of who will interpret it. What does this mean, interpreting the Code of Conduct? Most Codes of Conduct are incredibly short and overgeneralised so you can actually read anything you like into them. When a free software organization appoints some person or group to interpret their Code of Conduct, they are giving them a weapon that they can use at will, the smear of suggesting misconduct whenever such a conclusion is politically useful, even after somebody has resigned.

You can have your say on the Community Publishing Platforms proposal by subscribing to the Fedora Council discussion list.

8 years of my work on AArch64

Posted by Marcin 'hrw' Juszkiewicz on September 23, 2020 03:33 PM

Back in 2012, AArch64 was something new and still unknown. There was no toolchain support (so no gcc, binutils or glibc), and I got assigned to get some stuff running on it.

OpenEmbedded

As there was no hardware, cross compilation was the only way. That meant OpenEmbedded, as we wanted to have a wide selection of software available.

I learnt how to use modern OE (with OE Core and layers) by building images for ARMv7 and checking them on some boards I had floating around my desk.

Non-public toolchain work

Some time later the first non-public patches for binutils and gcc arrived in my inbox, then the eglibc ones. So I started building, and on 12th September 2012 I was able to build helloworld:

12:38 hrw@puchatek:aarch64-oe-linux$ ./aarch64-oe-linux-gcc ~/devel/sources/hello.c -o hello
12:38 hrw@puchatek:aarch64-oe-linux$ file hello
hello: ELF 64-bit LSB executable, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.39, not stripped
12:39 hrw@puchatek:aarch64-oe-linux$ objdump -f hello

hello:     file format elf64-littleaarch64
architecture: aarch64, flags 0x00000112: 
EXEC_P, HAS_SYMS, D_PAGED 
start address 0x00000000004003e0

Then images followed. Several people at Linaro (and outside) used those images to test misc things.

At that moment we ran ARMv8 Fast Models (a quite slow system emulator from Arm). There was a joke that Arm developers formed a queue for single-core 10 GHz x86-64 CPUs to get AArch64 running faster.

Toolchain became public

Then 1st October 2012 came. I entered the Linaro office in Cambridge for an AArch64 meeting and was greeted with the news that the glibc patches had gone to the public mailing list. So I rebased my OpenEmbedded repository, updated the patches, removed any traces of the non-public ones and published the whole work.

Building on AArch64

My work above added support for AArch64 as a target architecture. But can it be used as a host? One day I decided to check and ran OpenEmbedded on AArch64.

After one small patch it worked fine.

X11 anyone?

As I had access to the Arm Fast Model I was able to play with graphics. So one day in January 2013 I did a build and started Xorg. Through the next years I had fun when people wrote that they got X11 running on their AArch64 devices ;D

Two years later I had an Applied Micro Mustang at home (I still have it). Once it had working PCI Express support I added a graphics card and started X11 on real hardware.

Then I went debugging why Xorg required a configuration file, and one day, with help from Dave Airlie, Mark Salter and Matthew Garrett, I got two solutions for the problem. I do not remember whether any of them went upstream, but some time later the problem was solved.

A few years later I met Dave Airlie at Linux Plumbers. We introduced ourselves to each other and he said “ah, you are the ‘arm64 + radeon guy’” ;D

AArch64 Desktop week

One day in September 2015 I had an idea. PCIe worked, USB too. So I did an AArch64 desktop week: I connected monitors, keyboard, mouse and speakers and used the Mustang instead of my x86-64 desktop.

It was fun.

Distributions

First we had nothing. Then I added the AArch64 target to OpenEmbedded.

The same month Arm released the Foundation model, so anyone was able to play with an AArch64 system. No screen, just storage, serial and network, but it was enough for some to even start building whole distributions like Debian, Fedora, OpenSUSE and Ubuntu.

At that moment several patches were shared by all distributions, as that was faster than waiting for upstreams. I saw multiple versions of some of them during my journey of fixing packages in some distributions.

Debian and Ubuntu

In February 2013 the Debian/Ubuntu team presented their AArch64 port. It was their first architecture bootstrapped without using external toolchains. The work was done in Ubuntu due to its different approach to development compared with Debian. All the work was merged back, so some time later Debian also had an AArch64 port.

Fedora

The Fedora team started early — October 2012, right after the toolchain became public. They used Fedora 17 packages and switched to Fedora 19 during the work.

When I joined Red Hat in September 2013 one of my duties was fixing packages in Fedora to get them built on AArch64.

OpenSUSE

In January 2014 the first versions of QEMU support arrived and people moved away from the Foundation model. In March/April the OpenSUSE team did a massive amount of builds to get their distribution built that way.

RHEL

The Fedora bootstrap also meant a RHEL 7 bootstrap. When I joined Red Hat there were images ready to use in models. My work was testing them and fixing packages. Multiple times an AArch64 fix also helped builds on the ppc64le and s390x architectures.

Hardware I played with

The first Linux-capable hardware was announced in June 2013. I got access to it at Red Hat. Building and debugging was much faster than using Fast Models ;D

Applied Micro Mustang

Soon Applied Micro Mustangs were everywhere. Distributions used them to build packages etc., even without support for half of the hardware (no PCI Express, no USB).

I got one in June 2014, running UEFI firmware out of the box. In the first months I had a feeling that the firmware was developed at Red Hat, as we often had fresh versions right after the first patches for missing hardware functionality were written. In reality it was maintained by Applied Micro, and we had access to the sources, so there were some internal changes in testing (that’s why I had firmware versions like ‘0.12-rh’).

I collected all those graphics cards to test how PCI Express worked, tested USB before support was even merged into the mainline Linux kernel, and used virtualization for developing armhf build fixes (8 cores, 12 gigabytes of RAM and plenty of storage beat all the ARMv7 hardware I had).

I stopped using Mustang around 2018. It is still under my desk.

For those who still use one: make sure you have the 3.06.25 firmware.

96boards

In February 2015 Linaro announced the 96boards initiative. The plan was to make small, unified SBCs with different Arm chips, both 32- and 64-bit ones.

The first ones were the ‘Consumer Edition’: small and limited to basic connectivity. Now there are tens of them, 32-bit, 64-bit, FPGA etc. Choose your poison ;D

The second ones were the ‘Enterprise Edition’. A few attempts existed; most of them did not survive the prototype phase. There was a joke that the full-length PCI Express slot and two-USB-port requirements were there because I wanted to have an AArch64 desktop ;D

Too bad that nothing worth using came from EE spec.

Servers

As a Linaro assignee I have access to several servers from Linaro members. Some are mass-market ones, some never made it to market. We had over a hundred X-Gene1 based systems (mostly as m400 cartridges in HPE Moonshot chassis) and shut them down in 2018 as they were getting more and more obsolete.

The main system I use for development is one of those ‘never went to mass-market’ ones. 46 CPU cores and 96 GB of RAM make it a nice machine for building container images and Debian packages or running virtual machines in OpenStack.

Desktop

For some time I was waiting for desktop-class hardware, to have a development box more up to date than the Mustang. Months turned into years. I no longer wait, as it looks like there will be no such thing.

SolidRun has made some attempts in this area, first with the Macchiatobin and later with the Honeycomb. I have not used either of them.

Cloud

When I (re)joined Linaro in 2016 I became part of a team working on getting OpenStack running on AArch64 hardware. We used the Liberty, Mitaka and Newton releases, then changed the way we work and started contributing more and more: Kolla, Nova, Dib and other projects. We added aarch64 nodes to the OpenDev CI.

The result was the Linaro Developer Cloud, used by hundreds of projects to speed up their aarch64 porting, with tens of projects hosting their CI systems there, etc.

Two years later Amazon started offering aarch64 nodes in AWS.

Summary

I have spent half of my life with Arm on AArch64. I had great moments, like building helloworld as one of the first people outside of Arm Ltd. I got involved in far more projects than I ever thought I would, met new friends, and visited several places in the world I would probably never have gone otherwise.

I also got grumpy and complained far too many times that the AArch64 market is ‘cheap but limited SBCs or fast but expensive servers and nearly nothing in between’. I wrote some posts about the missing systems targeting software developers and lost hope that such a thing will happen.

NOTE: This is 8 years of my work on AArch64. I have worked with Arm since 2004.

Fedora 32 : Testing the Bookworm software.

Posted by mythcat on September 22, 2020 08:49 PM

The current version of Bookworm (v1.1.2) supports eBooks in the following file formats: EPUB, PDF, MOBI, FB2, CBR, CBZ.

First, I install this software with dnf tool:

[root@desk mythcat]# dnf install bookworm.x86_64 
...
Installed:
bookworm-1.1.3-0.1.20200414git.c7c3643.fc32.x86_64

Complete!

I tested it with some old EPUB and PDF files and I'm not very happy with the formatting of certain texts on the page, like programming source code.

Bookworm is a simple reader that does one thing and does it well.

You can help this project on GitHub.

Talos II quickstart

Posted by Daniel Pocock on September 22, 2020 02:30 PM

The Talos II and Blackbird motherboards, for the POWER9 chipset, have attracted increasing attention recently due to a variety of factors including FSF's Respect Your Freedom certification and the increasing unease about vulnerabilities or deliberate backdoors in Intel ME.

There are some simply cool things going on in the OpenPOWER space, like Microwatt, an implementation of the POWER instruction set that runs on an FPGA and boots Linux. If you don't trust the chips from IBM, Microwatt is a really interesting alternative.

Do you need a workstation class computer?

If you don't actually need a workstation class computer then any of the systems mentioned here are going to look quite expensive. If you do need a workstation then there are ways to build the Talos II or Blackbird and ensure you get value for money.

As an example, for a small IT support team of 2 to 4 people, it is possible to build a multi-seat configuration (example for Fedora), connecting all four users directly to the same Talos II computer. The cost of the computer is split up to 4 ways but any one user can exploit the power of the system when needed. The upcoming AMD Radeon Big Navi GPUs, which were leaked this week, are rumoured to have 16GB of video RAM, easily enough to attach four 4k displays.

Image: cat on HP Z800 workstation

Does everything work as well as on x86?

Some users have expressed frustration that they could install a free operating system like Fedora, Debian or FreeBSD but a small handful of applications wouldn't work. One of the reasons for this is that Fedora and Debian build their ppc64el kernels with 64k pages enabled, while some applications only work with the more typical 4k page size. Simply recompiling my kernel packages without 64k pages was enough to resolve those issues for me.

Blackbird or Talos II Lite?

There has been a lot of attention on the low-cost Blackbird motherboards. The Talos II Lite is only a little bit more expensive.

Don't let the size of the Talos II Lite deter you: for most people, I feel it should be the default choice. Talos II and Talos II Lite only fit in the largest cases but this is an advantage: large cases are often quieter because they have larger fans. Large fans run at lower speeds to move the same volume of air and you are less likely to hear them.

I feel the Blackbird is an alternative for people who want a portable system or people who are extremely concerned about cost or space while Talos II Lite is the default.

Here is a comparison:

  • CPUs: 4 or 8 core on the Blackbird; any, up to 22 core, on the Talos II Lite
  • RAM speed: 2 channels on the Blackbird; 4 channels on the Talos II Lite
  • Cost: the Talos II Lite adds about 10%
  • GPU width: max 2 slots on the Blackbird (or use a riser with vertical mounting); not limited on the Talos II Lite
  • Size: the Blackbird is Micro ATX; the Talos II Lite needs an EATX full tower case

How much time does it take choosing parts?

Some people enjoy building their own computers while other people prefer to buy a ready-made system from HP or Lenovo.

US users have the option to simply buy a whole workstation, fully assembled, from Raptor. Users in other countries may prefer to buy the motherboard and CPU from Raptor while sourcing everything else from a local supplier. This reduces shipping costs and provides access to local warranty support.

Raptor's wiki lists many components that people have tested in various ways. If you don't have time to read all that and potentially try different bits and pieces, here is a list of suggestions based on my own build:

  • Motherboard: Talos II Lite (only a little bit more expensive than the Blackbird motherboard, but gives you 4 memory channels and a wider space for a GPU)
  • CPU: IBM POWER9, 4 or 8 core, DD2.3 / v2 (the v2 chips have the fix for Spectre)
  • Cooling: when available, water cooling from Vikings or Raptor's High Speed Fan (HSF)
  • Case: Fractal Design Define 7 XL (everything just fits in this case; the holes in the bottom are in the right locations for mounting the Talos II motherboards)
  • PSU: Seasonic Prime TX fanless 700W or similar (you could also choose the Prime TX 650 or 850; the fan only comes on when load is over 40%)
  • RAM: 4 ECC DIMMs, for example Samsung 16GB ECC RDIMM M393A2K40CB2-CTD (benefit from 4 memory channels; anything you don't use is automatically used for caching)
  • GPU: any AMD Radeon; many people reported success with the Sapphire Pulse Radeon RX 580 (the AMD Radeon cards provide the more open amdgpu driver)

The storage system, including the type of disks and the controller, is largely at your discretion and may depend on your existing disks. If your GPU uses the 16x slot then you will need a storage controller that fits in the remaining 8x slot. An OCuLink HBA, M.2 NVMe HBA or a simple PCIe SSD could be a good choice.

If you don't regularly build PCs then I strongly recommend buying a computer toolkit including a flexible screwdriver extension. The extension allows you to reach into corners of the case and easily insert or remove the supports for the motherboard. Use of an anti-static wrist strap is essential when assembling expensive computer components.

Operating system installation

I've personally tested Debian, Fedora and OpenBSD installers. Any of these can simply be placed on a USB stick and they work in the Talos II just like other computers.

As noted earlier, it may be a good idea to try recompiling the kernel. You can do that after the installation.

Be careful if you are creating a Btrfs root filesystem as it can only be mounted by kernels with the same page size as the kernel that created it.

Summary

Moving from Intel-based computing to a POWER-based architecture is actually a lot less hassle than some of the other challenges we have to deal with as free software professionals. The recent publication of emails at Techrights has got people talking about what things were like in the GNU/Linux halcyon days, when the Open Source definition was being thrashed out in a communications channel that is anything but open. When I started out with free software in the mid-nineties, before the kernel reached version 1.0, it was a lot more challenging than getting started on POWER today. By comparison, the Talos II experience could be compared to a 5-star hotel on the edge of a safari park.

Assembly effort can be minimized by using a list of parts somebody else tested, such as my list above.

If you feel uncomfortable about Intel and AMD taking customers for granted and putting more and more concealed artifacts into each generation of their products, then the only alternative is for users to divide our energy between different platforms. In the workstation category right now, OpenPOWER is the only credible competitor for x86; RISC-V still appears to be a long way off.

The upfront expense of this system can often be justified if you have ways to fully utilize it, for example with a multi-seat configuration for colleagues or family members. After all, there was once a time when IBM believed there was only a market for five computers that everybody would share, or maybe it was fake news.

New badge: Fedora 33 i18n Test Day !

Posted by Fedora Badges on September 22, 2020 02:11 PM
Fedora 33 i18n Test Day: You helped to test Fedora 33 i18n features.

Conversion to Collection - YAML roundtrip with ruamel

Posted by Linux System Roles on September 22, 2020 12:00 PM

The System Roles team is working on making the roles available as a collection. One of the challenges is that we have to continue to support the old style roles for the foreseeable future due to customers using older versions of Ansible. So rather than just create a github repository for the collection and do a one-time conversion of all of the roles to collection format, we have decided to keep the existing github role structure, and instead use a script to build the collection for publishing in Galaxy.

Using the collections: keyword

One strategy is to use the collections: keyword in the play. For example:

- name: Apply the kernel_settings role
  hosts: all
  roles:
    - kernel_settings
  tasks:
    - name: use the kernel_settings module
      kernel_settings:
        ...

To use this role from a collection fedora.system_roles, you could use the collections: keyword:

- name: Apply the kernel_settings role
  hosts: all
  collections:
    - fedora.system_roles
  roles:
    - kernel_settings
  tasks:
    - name: use the kernel_settings module
      kernel_settings:
        ...

However, the guidance we have received from the Ansible team is that we should use FQRN (Fully Qualified Role Name) and FQCN (Fully Qualified Collection Name) to avoid any naming collisions or ambiguity, and not to rely on the collections: keyword. This means we have a lot of conversion to do. For Ansible YAML files, the two main items are:

  • convert references to role ROLENAME and linux-system-roles.ROLENAME to fedora.system_roles.ROLENAME
  • convert references to modules to use the FQCN e.g. some_module: to fedora.system_roles.some_module:

Using regular expressions to search/replace strings

One solution is to use a regular expression match - just look for references to linux-system-roles.ROLENAME and convert them to fedora.system_roles.ROLENAME. This works pretty well, but there is no guarantee that some odd use of linux-system-roles.ROLENAME unrelated to a role keyword won't be rewritten as well. It would be much better and safer if we could only change those places where the role name is used in the semantic context of an Ansible role reference. For modules, it is quite tricky to do this search/replace using a regexp. To complicate matters, in the network role, the module name network_connections is also used as a role variable name. I’m not sure how one would write a regexp that could detect the semantic context and only replace the string network_connections with fedora.system_roles.network_connections in the context of usage as an Ansible module.
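
As an illustration, here is a minimal sketch of that regex approach in Python (the role names and prefix are hypothetical); the point is precisely that it has no awareness of the YAML semantics around each match:

    import re

    ROLE_NAMES = ["kernel_settings", "network"]  # hypothetical list of role names
    PREFIX = "fedora.system_roles."

    def naive_convert(text):
        # Blindly rewrite every "linux-system-roles.ROLENAME" occurrence in the
        # file text; an unrelated occurrence of the same string is rewritten too.
        for role in ROLE_NAMES:
            text = re.sub(r"linux-system-roles\." + re.escape(role), PREFIX + role, text)
        return text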

Using the Ansible parser

The next solution was to use the Ansible parser (ansible.parsing.dataloader.DataLoader) to read in the files with the full semantic information. We took inspiration from the ansible-lint code for this, and used similar heuristics to determine the file and node types:

  • file location - files in the vars/ and defaults/ directories are not tasks/ files
  • Ansible type - a tasks file has type AnsibleSequence not AnsibleMapping
  • node type - a play has one of the play keywords like gather_facts, tasks, etc.

For task nodes, we then use ansible.parsing.mod_args.ModuleArgsParser to parse out the module name (as is done in ansible-lint).
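
As a rough sketch of that step, assuming the same usage pattern as ansible-lint (the wrapper function here is made up for illustration):

    from ansible.parsing.mod_args import ModuleArgsParser

    def parse_task(task):
        # task is a single task node (a dict) loaded by the Ansible DataLoader.
        # parse() returns the action (module) name, its arguments and delegate_to,
        # e.g. ("include_role", {"name": "kernel_settings"}, None).
        action, args, delegate_to = ModuleArgsParser(task_ds=task).parse()
        return action, args, delegate_to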

For role references, we look for

  • a task with a module include_role or import_role with a name parameter
  • a play with a roles keyword
  • a meta with a dependencies keyword

A role in a roles or dependencies may be referenced as

roles/dependencies:
  - ROLENAME
# OR
  - name: ROLENAME
    vars: ...
# OR
  - role: ROLENAME
    vars: ...

This allowed us to easily identify where the ROLENAME was referenced as a role rather than something else, and to identify where the role modules were used.
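
For illustration, a small helper along these lines (the function name is hypothetical) is enough to normalize the three forms shown above:

    def get_role_name(entry):
        # An entry under roles: or dependencies: is either a plain string or a
        # mapping that uses the "name" or "role" keyword.
        if isinstance(entry, dict):
            return entry.get("role") or entry.get("name")
        return entry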

The next problem - how to write out these converted files? Just using a plain YAML dump, even if nicely formatted, does not preserve all of our pre/post YAML doc, comments, formatting, etc. We thought it was important to keep this as much as possible:

  • keep license headers in files
  • helps visually determine if the collection conversion was successful
  • when bugs come from customers using the collection, we can much better debug and fix the source role if the line numbers and formatting match
  • we’ll use this code when we eventually convert our repos in github to use the collection format

Using Ansible and ruamel

The ruamel.yaml package has the ability to “round-trip” YAML files, preserving comments, quoting, formatting, etc. We borrowed another technique from ansible-lint which parses and iterates Ansible files using both the Ansible parser and the ruamel parser “in parallel” (ansible-lint is also comment aware). This is an excerpt from the role file parser class:

    def __init__(self, filepath, rolename):
        self.filepath = filepath
        dl = DataLoader()
        self.ans_data = dl.load_from_file(filepath)
        if self.ans_data is None:
            raise LSRException(f"file is empty {filepath}")
        self.file_type = get_file_type(self.ans_data)
        self.rolename = rolename
        self.ruamel_yaml = YAML(typ="rt")
        self.ruamel_yaml.default_flow_style = False
        self.ruamel_yaml.preserve_quotes = True
        self.ruamel_yaml.width = None
        buf = open(filepath).read()
        self.ruamel_data = self.ruamel_yaml.load(buf)
        self.ruamel_yaml.indent(mapping=2, sequence=4, offset=2)
        self.outputfile = None
        self.outputstream = sys.stdout

The class uses ans_data for looking at the data using Ansible semantics, and uses ruamel_data for doing the modification and writing.

    def run(self):
        if self.file_type == "vars":
            self.handle_vars(self.ans_data, self.ruamel_data)
        elif self.file_type == "meta":
            self.handle_meta(self.ans_data, self.ruamel_data)
        else:
            for a_item, ru_item in zip(self.ans_data, self.ruamel_data):
                self.handle_item(a_item, ru_item)

    def write(self):
        def xform(thing):
            if self.file_type == "tasks":
                thing = re.sub(LSRFileTransformerBase.INDENT_RE, "", thing)
            return thing
        if self.outputfile:
            outstrm = open(self.outputfile, "w")
        else:
            outstrm = self.outputstream
        self.ruamel_yaml.dump(self.ruamel_data, outstrm, transform=xform)

    def handle_item(self, a_item, ru_item):
        """handle any type of item - call the appropriate handlers"""
        ans_type = get_item_type(a_item)
        self.handle_vars(a_item, ru_item)
        self.handle_other(a_item, ru_item)
        if ans_type == "task":
            self.handle_task(a_item, ru_item)
        self.handle_task_list(a_item, ru_item)

    def handle_task_list(self, a_item, ru_item):
        """item has one or more fields which hold a list of Task objects"""
        for kw in TASK_LIST_KWS:
            if kw in a_item:
                for a_task, ru_task in zip(a_item[kw], ru_item[kw]):
                    self.handle_item(a_task, ru_task)

The concrete class that uses this code provides callbacks for tasks, vars, meta, and other, and the callback can change the data. a_task is the task node from the Ansible parser, and ru_task is the task node from the ruamel parser. role_modules is a set of names of the modules provided by the role. prefix is e.g. fedora.system_roles.

    def task_cb(self, a_task, ru_task, module_name, module_args, delegate_to):
        if module_name == "include_role" or module_name == "import_role":
            rolename = ru_task[module_name]["name"]
            lsr_rolename = "linux-system-roles." + self.rolename
            if rolename == self.rolename or rolename == lsr_rolename:
                ru_task[module_name]["name"] = prefix + self.rolename
        elif module_name in role_modules:
            # assumes ru_task is an ordereddict
            idx = tuple(ru_task).index(module_name)
            val = ru_task.pop(module_name)
            ru_task.insert(idx, prefix + module_name, val)

This produces an output file that is very close to the input - but not quite.

Problems with this approach

  • We can’t make ruamel do proper indentation of lists without having it do the indentation at the first level. For example:
- name: first level
  block:
    - name: second level
      something: something

comes out as

  - name: first level
    block:
      - name: second level
        something: something

This is why we have the xform hack in the write method.

  • Even with the hack, comments are not indented correctly
- name: first level
  # comment here
  block:
    # comment here
    - name: second level
      something: something

comes out as

  - name: first level
  # comment here
    block:
    # comment here
      - name: second level
        something: something

One approach would be to have xform skip the removal of the two extra spaces at the beginning of the line if the first non-space character in the line is #. However, if you have shell scripts or embedded config files with comments in them, these will then not be indented correctly, leading to problems. So for now, we just live with improperly indented Ansible comments.
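
A possible variant of xform along those lines might look like the sketch below; it assumes the first-level indentation to strip is exactly two leading spaces, and it is shown only to illustrate the trade-off, not as the fix we use:

    import re

    def xform_keep_comments(thing):
        # Strip the two extra leading spaces that ruamel adds at the first level,
        # except on lines whose first non-space character is "#", so Ansible
        # comments keep their indentation. Comments inside embedded shell scripts
        # or config files would then be mis-indented instead.
        out = []
        for line in thing.splitlines(keepends=True):
            if line.lstrip().startswith("#"):
                out.append(line)
            else:
                out.append(re.sub(r"^  ", "", line))
        return "".join(out)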

  • Line wrapping is not preserved

We use yamllint and have had to use some creative wrapping/folding to abide by the line length restriction e.g.

    - "{{ ansible_facts['distribution'] }}_\
        {{ ansible_facts['distribution_version'] }}.yml"
    - "{{ ansible_facts['distribution'] }}_\
        {{ ansible_facts['distribution_major_version'] }}.yml"

is converted to

    - "{{ ansible_facts['distribution'] }}_{{ ansible_facts['distribution_version']\
        \ }}.yml"
    - "{{ ansible_facts['distribution'] }}_{{ ansible_facts['distribution_major_version']\
        \ }}.yml"

that is, ruamel imposes its own line length and wrapping convention.

We also didn’t have to worry about how to handle usage of plugins inside of lookup functions, which would seem to be a much more difficult problem.

Foreman development setup with libvirt 2020

Posted by Lukas "lzap" Zapletal on September 22, 2020 12:00 AM

Thanks to the dnsmasq DHCP Foreman plugin, a development setup for provisioning can be a little bit easier. After a git checkout of Foreman core, Smart Proxy and the Smart Proxy DHCP dnsmasq plugin, create the initial directory structure:

# DEVELOPER=lzap
# mkdir /var/lib/libvirt/dnsmasq/foreman-default
# chown $DEVELOPER:dnsmasq /var/lib/libvirt/dnsmasq/foreman-default
# touch /var/lib/dnsmasq/foreman-default.leases
# chown $DEVELOPER:dnsmasq /var/lib/dnsmasq/foreman-default.leases
# mkdir /var/lib/dnsmasq/tftp
# chown $DEVELOPER:dnsmasq /var/lib/dnsmasq/tftp/
# setfacl -m u:$DEVELOPER:r-- /var/lib/libvirt/dnsmasq/default.conf

Copy the PXELinux files; this will also work for Grub2 or iPXE:

# cp /usr/share/syslinux/*.{bin,c32,0} /var/lib/dnsmasq/tftp

Finally, change the libvirt “default” network configuration in the following way. The differences from the default configuration are the following elements and attributes:

  • tftp
  • bootp
  • dnsmasq:options
  • xmlns:dnsmasq

    <network xmlns:dnsmasq="http://libvirt.org/schemas/network/dnsmasq/1.0">
      <name>default</name>
      <uuid>25fd4c6e-4d9e-45a6-b448-57900c3315f2</uuid>
      <forward mode="nat">
        <nat>
          <port start="1024" end="65535"/>
        </nat>
      </forward>
      <bridge name="virbr0" stp="on" delay="0" zone="trusted"/>
      <mac address="52:54:00:dd:b0:55"/>
      <ip address="192.168.122.1" netmask="255.255.255.0">
        <tftp root="/var/lib/dnsmasq/tftp"/>
        <dhcp>
          <range start="192.168.122.2" end="192.168.122.254"/>
          <bootp file="pxelinux.0"/>
        </dhcp>
      </ip>
      <dnsmasq:options>
        <dnsmasq:option value="dhcp-optsfile=/var/lib/libvirt/dnsmasq/foreman-default/dhcpopts.conf"/>
        <dnsmasq:option value="dhcp-hostsfile=/var/lib/libvirt/dnsmasq/foreman-default/dhcphosts"/>
        <dnsmasq:option value="dhcp-leasefile=/var/lib/dnsmasq/foreman-default.leases"/>
      </dnsmasq:options>
    </network>

Restart the libvirt network named “default” and you are good to go. Note that in this setup I haven’t configured DNS, therefore unattended_url must be set to something like http://192.168.122.1:3000.

Creating Windows 10 bootable USB drive

Posted by Lukas "lzap" Zapletal on September 22, 2020 12:00 AM

I had to update the firmware of my super-ultra-wide LG monitor; however, the utility only works in Windows and requires a USB-C PC or laptop. I have one, but it’s Fedora only. Luckily, there is a way to install Windows 10 to a USB HDD (or flash drive).

First off, download the Windows 10 ISO from Microsoft’s website. I used the MediaCreationTool.exe utility, which prepared the ISO for me in about half an hour.

Next up is this wonderful freeware tool called Rufus. By default it only shows USB flash drives, but after pressing Ctrl-Alt-F it also shows external USB HDDs. Pick a drive, browse to the ISO and, when the tool asks for the type of installation, select “Windows 10 To Go”. That’s all, really. Rufus will format the drive and copy the necessary files.

After the initial reboot Windows 10 sets itself up and allows the trial period without activation. I was able to download the firmware update tool and perform what I needed to do. Everything was a little bit too slow on my USB 3.0 SATA HDD, but it was fine for the job.

I hope that helps someone else using Linux!

PHP extensions status with upcoming PHP 8.0

Posted by Remi Collet on September 21, 2020 06:13 AM

With PHP 8.0 entering its stabilization phase, it is time to check the status of the most commonly used PHP extensions (at least, the ones available in my repository).

Here is the (mostly) exhaustive list.

1. Compatible

The last published version is compatible

# Name Version RPM State
  apcu_bc 1.0.5 OK
  apfd 1.0.2 OK
  ast 1.0.10 OK
  base58 0.1.4 OK
  bitset 3.0.1 OK
  brotli 0.11.1 OK
  dio 0.1.0 OK
  http 4.0.0beta1 OK
  igbinary 3.1.5 OK
  json_post 1.0.1 OK
  krb5 1.1.5 OK
  lzf 1.6.8 OK
  mailparse 3.1.1 OK
  maxminddb 1.7.0 OK
  mysqlnd_azure 1.1.1 OK
  oauth 2.0.7 OK
  pdlib 1.0.2 OK
  phpiredis 1.0.1 OK
  pq 2.1.8 OK
  raphf 2.0.1 OK
  realpath_turbo 2.0.0 OK
  selinux 0.5.0 OK
  snappy 0.1.12 OK
  solr 2.5.1 OK
  uuid 1.1.0 OK
  vips 1.0.12 OK
  xattr 1.4.0 OK
  xhprof 2.2.1 OK
  xlswriter 1.3.6 OK
  yaml 2.2.0b2 OK
  zstd 0.9.0 OK

 

2. Work in progress

These extensions have been fixed upstream (or PR are available) but no official release.

# Name Version RPM State
  amqp 1.10.21 Fixed by PR #383
  apcu 5.1.18 Fixed upstream
  dbase 7.0.1   Fixed upstream by rev 350634, rev 350637 rev 350638 rev 350639
  env 0.2.1 Fixed by PR # 10
  event 2.5.6   Fixed upstream
  fann 1.1.1 Fixed by PR #42
  geos 1.0.0 See #20, #24, #25
  http_message 0.2.2 Fixed by PR #3
  imagick 3.4.4 Fixed upstream and by PR #346, PR #347, PR #348
  inotify 2.0.0 Fixed upstream
  ip2location 8.0.1 Fixed by PR #9
  leveldb 0.2.1 Fixed by PR #40
  lz4 0.3.6 Fixed upstream and by PR #26
  msgpack 2.1.1 Fixed by PR #148 and by upstream
  psr 1.0.0 Fixed by PR #77
  rdkafka 4.0.3   WIP in PR #383
  redis 5.3.1 Fixed upstream
  rpminfo 0.5.0 Fixed upstream
  rrd 2.0.1 Fixed upstream by rev 350618
  scrypt 1.4.2   Fixed by PR #56
  smbclient 1.0.0 Fixed by PR #73
  ssdeep 1.1.0 Fixed by PR #2
  ssh2 1.2 Fixed by PR #44
  stomp 2.0.5   Fixed by PR #14
  swoole 4.5.4 Fixed upstream
  varnish 1.2.4   Fixed upstream
  xmldiff 1.1.2 Fixed upstream
  xmlrpc 1.0.0-dev Dropped from 8.0, no release planned
  yac 2.2.1 Fixed by PR #115
  yaconf 1.1.0 Test suite fixed by PR #63
  yaz 1.2.3   Fixed by PR #11
  zip 1.19.0 Fixed upstream
  zmq 1.1.3 Fixed by PR #216

 

3. Not compatible for now (only from 7.4 compatible extensions)

# Name Version State
  datadog_trace 0.48.2  
  ioncube_loader 10.3.4 Not supported
  memcache 4.0.5.2  
  memcached 3.1.5  
  propro 2.1.0 Not supported, ZE API has been removed in 8

 

4. Not tested yet

# Name Version State
  ahocorasick 0.0.6  
  cassandra 1.3.2  
  cmark 1.2.0  
  componere 3.1.1  
  couchbase 2.6.2  
  crypto 0.3.1  
  csv 0.3.1  
  decimal 2.0.0  
  druid 1.0.0  
  ds 1.2.9  
  eio 2.0.4  
  ev 1.0.8  
  gearman 2.0.6  
  gender 1.1.0  
  geoip 1.1.1  
  geospatial 0.2.1  
  gmagick 2.0.5RC1  
  gnupg 1.4.0  
  grpc 1.32.0  
  handlebars 0.9.1  
  hdr_histogram 0.3.0  
  horde_lz4 1.0.10  
  hprose 1.8.0  
  hrtime 0.6.0  
  ice 1.6.2  
  interbase 1.0.0-dev  
  libvirt 0.5.5  
  lua 2.0.7  
  luasandbox 3.0.3  
  mcrypt 1.0.3  
  memprof 2.1.0  
  mogilefs 0.9.3.1  
  mongodb 1.8.0  
  mosquitto 0.4.0  
  mustache 0.9.1  
  mysql 1.0.0-dev Dropped from 7.0, never released
  mysql_xdevapi 8.0.21  
  nsq 3.5.0  
  opencensus 0.2.2  
  parallel 1.1.4  
  parle 0.8.1  
  pcov 1.0.6  
  pcs 1.3.7  
  phalcon 4.0.6  
  php_trie 0.1.0  
  pinba 1.1.2  
  pdflib 4.1.2  
  protobuf 3.13.0  
  radius 1.4.0b1  
  recode 1.0.0-dev Dropped from 7.4, no release planned
  rar 4.0.0  
  runkit7 3.0.1a1  
  sandbox 0.1.3  
  seasclick 0.1.0  
  seaslog 2.1.0  
  scoutapm 1.1.1  
  skywalking 3.3.2  
  snuffleupagus 0.5.0  
  sphinx 1.4.0-dev still pending for 7.0
  stats 2.0.3  
  sqlsrv 5.8.1  
  svm 0.2.3  
  svn 2.0.3  
  sync 1.1.1  
  taint 2.1.0  
  tcpwrap 1.2.0  
  termbox 0.1.3  
  timecop 1.2.10 Some failed tests since 7.2 (related to timelib changes), dead project?
  trader 0.5.0  
  translit 0.7.0  
  uopz 6.1.2  
  uploadprogress 1.1.3  
  uv 0.2.4  
  vld 0.17.0  
  wddx 1.0.0-dev Dropped from 7.4, no release planned
  xdebug 2.9.7 Version 3 planned
  xdiff 2.0.1  
  xxtea 1.0.11  
  yaf 3.2.5  
  yar 2.1.2  
  zephir_parser 1.3.4  
  zookeeper 0.7.2  

5. Conclusion

  • Sept 21st: beta4 is released and has a nearly final API, so it is really time to start fixing extensions

Please ping me by mail or on twitter for missing/outdated information.

 

Last updated on September 25th 2020

Episode 216 – Security didn’t find life on Venus

Posted by Josh Bressers on September 21, 2020 12:01 AM

Josh and Kurt talk about how we talk about what we do in the context of life on Venus. We didn’t really discover life on Venus, we discovered a gas that could be created by life on Venus. The world didn’t hear that though. We have a similar communication problem in security. How often are your words misunderstood?

Audio: https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_216_Security_didnt_find_life_on_Venus.mp3

Show Notes

Fedora 32 : Can be better? part 011.

Posted by mythcat on September 20, 2020 09:17 PM

Four days ago, the well-known Gnome environment came out with a new release.

I guess it will be available in the Fedora distro soon.

Video: https://www.youtube.com/embed/DZ_P5W9r2JY

Fedora 32 : Can be better? part 010.

Posted by mythcat on September 20, 2020 10:14 AM

In this tutorial I will show you, with a simple example, how you can easily make your Fedora distro better with SELinux.

SELinux uses a policy store to keep track of its loaded policy modules and related settings. 

You can see my active policy store name is MLS.

[root@desk mythcat]# sestatus | grep Loaded
Loaded policy name: mls

I want to create a policy in the easiest way possible to deny executable memory mappings.

There are many ways to do that or to find the right setting in SELinux.

If you want to deny user domain applications the ability to map a memory region as both executable and writable, you can use the deny_execmem boolean.

Such mappings are dangerous and the offending executable should be reported in Bugzilla; the boolean is disabled by default.

You must turn on the deny_execmem boolean.

setsebool -P deny_execmem 1
Let's use it:
[root@desk mythcat]# setsebool -P deny_execmem 1
[root@desk mythcat]# ausearch -c 'Web Content' --raw | audit2allow -M my-WebContent
******************** IMPORTANT ***********************
To make this policy package active, execute:

semodule -i my-WebContent.pp

[root@desk mythcat]# semodule -X 300 -i my-WebContent.pp
Let's see if this SELinux module is currently loaded:
[root@desk mythcat]# semodule -l | grep Web
my-WebContent

Okular 20.08 — redesigned annotation tools

Posted by Rajeesh K Nambiar on September 20, 2020 08:27 AM

Last year I wrote about some enhancements made to Okular’s annotation tool and in one of those, Simone Gaiarin commented that he was working on redesigning the Annotation toolbar altogether. I was quite interested and was also thinking of ‘modernizing’ the tool — only, I had no idea how much work it would be.

The existing annotation tool works, but it had some quirks and many advanced options which were documented pretty well in the Handbook but not obvious to a casual user. For instance, if the user would like to highlight some part of the text, she selects (single-clicks) the highlighter tool and applies it to a block of text. When another part of the text is to be highlighted, you’d expect the highlighter tool to apply directly; but it didn’t ‘stick’: the tool was unselected after highlighting the first block of text. There is an easy way to make the annotation tool ‘stick’: instead of single-clicking to select the tool, simply double-click, and it persists. Another instance is the ‘Strikeout’ annotation, which is not displayed by default but can be added to the tools list.

Simone, with lots of inputs, testing and reviews from David Hurka, Nate Graham and Albert Astals Cid et al., has pulled off a magnificent rewrite of Okular’s annotation toolbar. To get an idea of the amount of work that went into this, see this phabricator task and this invent code review. The result of many months of hard work is truly modern, easy-to-explore-and-use annotation support. I am not aware of any other libre PDF reader with such good annotation features.

Image: Annotation toolbar in Okular 20.08.

Starting from the left, default tools are: Highlight (brush icon), Underline (straight line) and Squiggle (wobbly line), Strike out, Insert text (Typewriter), Inline note, Popup note, Freehand drawing and Shapes (arrows, lines, rectangles etc.). The line thickness, colour, opacity and font of the tools can be customized easily from the drawer. Oh, and the selected annotation tool ‘sticks’ by default (see the ‘pin’ icon at the right end of toolbar).

Images: Line width and colour of ‘Arrow’ tool.

When upgrading to okular-20.08 from a previous version, it will preserve the customized annotation tools created by the user and make them available under ‘Quick annotations’, where they can be quickly applied using Alt+n (Alt-1, Alt-2 etc.) shortcuts. It did reset my custom shortcut keys for navigation (I use the Vim keys gg to go to the first page and G to go to the last page), which can be manually added back.

Image: Custom tools (Quick annotations) can be applied with shortcuts.

Here is the new toolbar in action.

Video: Okular 20.08 new annotation toolbar (https://diode.zone/videos/embed/d6e94708-53cb-4629-ba56-65dfdf800e6a)

PHP version 7.3.23RC1 and 7.4.11RC1

Posted by Remi Collet on September 18, 2020 08:34 AM

Release Candidate versions are available in the testing repository for Fedora and Enterprise Linux (RHEL / CentOS) to allow more people to test them. They are available as Software Collections, for a parallel installation, which is the perfect solution for such tests, and also as base packages.

RPMs of PHP version 7.4.11RC1 are available as an SCL in the remi-test repository, and as base packages in the remi-test repository for Fedora 32-33 or in the remi-php74-test repository for Fedora 31 and Enterprise Linux 7-8.

RPMs of PHP version 7.3.23RC1 are available as an SCL in the remi-test repository, and as base packages in the remi-test repository for Fedora 31 or in the remi-php73-test repository for Enterprise Linux.

PHP version 7.2 is now in security-only mode, so no more RCs will be released.

Installation: read the Repository configuration and choose your version.

Parallel installation of version 7.4 as Software Collection:

yum --enablerepo=remi-test install php74

Parallel installation of version 7.3 as Software Collection:

yum --enablerepo=remi-test install php73

Update of system version 7.4:

yum --enablerepo=remi-php74,remi-php74-test update php\*

or, the modular way (Fedora and RHEL 8):

dnf module reset php
dnf module enable php:remi-7.4
dnf --enablerepo=remi-modular-test update php\*

Update of system version 7.3:

yum --enablerepo=remi-php73,remi-php73-test update php\*

or, the modular way (Fedora and RHEL 8):

dnf module reset php
dnf module enable php:remi-7.3
dnf --enablerepo=remi-modular-test update php\*

Notice: version 7.4.11RC1 is also in Fedora rawhide for QA.

x86_64 builds now use Oracle Client version 19.8.

EL-8 packages are built using RHEL-8.2.

EL-7 packages are built using RHEL-7.8.

The RC version is usually the same as the final version (no changes accepted after the RC, except for security fixes).

Version 8.0.0beta4 is also available as Software Collections.

Software Collections (php73, php74)

Base packages (php)

A reflection: Gabriele Trombini (mailga)

Posted by Justin W. Flory on September 18, 2020 02:17 AM

Trigger warning: Grief, death.

Two years have passed since we last met in Bolzano. I remember you traveled in for a day to join the 2018 Fedora Mindshare FAD. You came many hours from your home to see us and share your experiences and wisdom from both the global and Italian Fedora communities. And this week, I learned that you, Gabriele Trombini, passed away from a heart attack. To act like the news didn’t affect me would deny my humanity. In 2020, a year that feels like it has taken away so much already, we are greeted by another heart-breaking loss.

But to succumb to the despair and sadness of this year would deny the warm, happy memories we shared together. We shared goals of supporting the Fedora Project but also learning from each other.

So, this post is a brief reflection of your life as I knew you. A final celebration of the great memories we shared together, that I only wish I could have shared with you while you were still here.

Ciao!

We had a unique privilege of meeting first in person before meeting online. At Flock 2015, of course I remember coming to your Fedora-Join session. This was my first introduction to the volunteer-supported mentorship community that exists in Fedora. Even though there was one particularly disruptive audience member, I remember learning from you and noting your long-time experience in the Fedora Community.

After that, we would come to know each other better. As I began a new chapter of my life at my university, we would become frequent collaborators. The Fedora Marketing team was always interesting to me, as part of the group of people who helped our community talk about and share the Fedora Project with others. Underneath your gentle mentorship, I learned the focus areas and history of the Fedora Marketing team.

At some point in 2015 or 2016, you asked me if I would like to chair a Marketing Team meeting. Thus began an early step in my journey from a participant to a facilitator. In a tragically ironic way, it strikes me how I did not see your guidance as mentorship at the time. I always saw our conversations as two friends discussing a shared hobby or interest. Such is the subtle art of teaching and mentorship.

Your many contributions

You were a cornerstone community member of Fedora for many years. Since our connection was from Fedora, it is worth noting the many contributions you made over the years. Long before Fedora or Linux were anything I knew about.

You and Robert Mayr co-authored a book together about Fedora 9, I think for the Italian Linux community. You were a one-time steward of the Fedora Join and Marketing teams. You were an influential member in shaping what Mindshare is today, from the days of the Fedora Outreach Steering Committee, the Fedora Ambassador Steering Committee before that, and grassroots community organizing in Italy even before that.

Beyond the source

But perhaps the memories I treasure most are the ones that don’t have much to do with Fedora at all. I remember learning that “in real life” you were a co-owner of a heating and air conditioning business in Italy. For many years, my family ran a heating and air conditioning company of our own. This was an experience I could always understand. I remember the times when you would go offline for some time. Then I would hear from you eventually, and you would tell me how the busy season kept you away from helping out in Fedora. And in a few words in IRC private messages, I simply knew and smiled.

We would meet at Flock events, but I find Flock is usually tough to get 1×1 time with others. I remember the day you came up and joined us in Bolzano for the 2018 Mindshare FAD. On a weekend day in March, you came and sat in a wine cellar converted to a conference room, where we spent the day recounting pain points and how Mindshare would address them.

And then, our small group went out for dinner. The food we ate and words we said are now faded memories, but the experience lives warmly in my heart as I think about what your life meant to me.

I was saddened to find no photographs or pictures of us together. But I went looking for our last conversations and found these final messages on IRC:

**** BEGIN LOGGING AT Sun Dec  4 17:49:56 2016

Dec 04 17:49:56 <jflory7>   That would be fantastic... I'll definitely let you know if I have plans to visit Italy. :)

Dec 05 07:00:32 <mailga>    jflory7 hope it happens. :)

**** ENDING LOGGING AT Wed Dec  7 00:28:51 2016

I never got to take you up on your offer to visit your home and meet your family. But I am happy that I had the opportunity to partially fulfill that old promise of meeting together in Italy.

Why write this?

I didn’t write this post with an outline, or a template. These words came to me while sitting with my own emotions and feelings. I am writing this because it is an effective coping mechanism for me to process what is lost, and also to work out how to move forward from the loss.

The Fedora Project has given me a lot over the last five years. I have met many wonderful people and contributed to things that matter a great deal to me. But Fedora has also taught me about loss. There are many lessons in life that have nothing to do with work, code, software, or engineering, but have everything to do with how we look at the world.

In the wake of losing you, I think of the kind words and memories we shared that I did not tell you were important to me. I think of how the opportunity is permanently missed for me to share my appreciation of your kindness and friendship. The tragedy of youth is perhaps that I failed to fully appreciate our connection until after you passed.

When writing this, I came to realize something for me. And this will be different for everyone. But I like to think that for Gabriele and me, Fedora was never just about building an operating system. It was about collaborating with other people, human beings, on a digital infrastructure project that mattered, and to share kindness unto others — especially beginners and newcomers.

Rest in peace, amico.

Untitled Post

Posted by Zach Oglesby on September 17, 2020 04:04 AM

Just got around to watching the last two episodes of Clone Wars. There can be no doubt that Ahsoka is the real hero of the whole “Skywalker” story arc, and maybe more.

Epiphany 3.38 and WebKitGTK 2.30

Posted by Michael Catanzaro on September 16, 2020 05:00 PM

It’s that time of year again: a new GNOME release, and with it, a new Epiphany. The pace of Epiphany development has increased significantly over the last few years thanks to an increase in the number of active contributors. Most notably, Jan-Michael Brummer has solved dozens of bugs and landed many new enhancements, Alexander Mikhaylenko has polished numerous rough edges throughout the browser, and Andrei Lisita has landed several significant improvements to various Epiphany dialogs. That doesn’t count the work that Igalia is doing to maintain WebKitGTK, the WPE graphics stack, and libsoup, all of which is essential to delivering quality Epiphany releases, nor the work of the GNOME localization teams to translate it to your native language. Even if Epiphany itself is only the topmost layer of this technology stack, having more developers working on Epiphany itself allows us to deliver increased polish throughout the user interface layer, and I’m pretty happy with the result. Let’s take a look at what’s new.

Intelligent Tracking Prevention

Intelligent Tracking Prevention (ITP) is the headline feature of this release. Safari has had ITP for several years now, so if you’re familiar with how ITP works to prevent cross-site tracking on macOS or iOS, then you already know what to expect here.  If you’re more familiar with Firefox’s Enhanced Tracking Protection, or Chrome’s nothing (crickets: chirp, chirp!), then WebKit’s ITP is a little different from what you’re used to. ITP relies on heuristics that apply the same to all domains, so there are no blocklists of naughty domains that should be targeted for content blocking like you see in Firefox. Instead, a set of innovative restrictions is applied globally to all web content, and a separate set of stricter restrictions is applied to domains classified as “prevalent” based on your browsing history. Domains are classified as prevalent if ITP decides the domain is capable of tracking your browsing across the web, or non-prevalent otherwise. (The public-friendly terminology for this is “Classification as Having Cross-Site Tracking Capabilities,” but that is a mouthful, so I’ll stick with “prevalent.” It makes sense: domains that are common across many websites can track you across many websites, and domains that are not common cannot.)

ITP is enabled by default in Epiphany 3.38, as it has been for several years now in Safari, because otherwise only a small minority of users would turn it on. ITP protections are designed to be effective without breaking too many websites, so it’s fairly safe to enable by default. (You may encounter a few broken websites that have not been updated to use the Storage Access API to store third-party cookies. If so, you can choose to turn off ITP in the preferences dialog.)

For a detailed discussion covering ITP’s tracking mitigations, see Tracking Prevention in WebKit. I’m not an expert myself, but the short version is this: full third-party cookie blocking across all websites (to store a third-party cookie, websites must use the Storage Access API to prompt the user for permission); cookie-blocking latch mode (“once a request is blocked from using cookies, all redirects of that request are also blocked from using cookies”); downgraded third-party referrers (“all third-party referrers are downgraded to their origins by default”) to avoid exposing the path component of the URL in the referrer; blocked third-party HSTS (“HSTS […] can only be set by the first-party website […]”) to stop abuse by tracker scripts; detection of cross-site tracking via link decoration and 24-hour expiration time for all cookies created by JavaScript on the landing page when detected; a 7-day expiration time for all other cookies created by JavaScript (yes, this applies to first-party cookies); and a 7-day extendable lifetime for all other script-writable storage, extended whenever the user interacts with the website (necessary because tracking companies began using first-party scripts to evade the above restrictions). Additionally, for prevalent domains only, domains engaging in bounce tracking may have cookies forced to SameSite=strict, and Verified Partitioned Cache is enabled (cached resources are re-downloaded after seven days and deleted if they fail certain privacy tests). Whew!

WebKit has many additional privacy protections not tied to the ITP setting and therefore not discussed here — did you know that cached resources are partitioned based on the first-party domain? — and there’s more that’s not very well documented which I don’t understand and haven’t mentioned (tracker collusion!), but that should give you the general idea of how sophisticated this is relative to, say, Chrome (chirp!). Thanks to John Wilander from Apple for his work developing and maintaining ITP, and to Carlos Garcia for getting it working on Linux. If you’re interested in the full history of how ITP has evolved over the years to respond to the changing threat landscape (e.g. tracking prevention tracking), see John’s WebKit blog posts. You might also be interested in WebKit’s Tracking Prevention Policy, which I believe is the strictest anti-tracking stance of any major web engine. TL;DR: “we treat circumvention of shipping anti-tracking measures with the same seriousness as exploitation of security vulnerabilities. If a party attempts to circumvent our tracking prevention methods, we may add additional restrictions without prior notice.” No exceptions.

Updated Website Data Preferences

As part of the work on ITP, you’ll notice that Epiphany’s cookie storage preferences have changed a bit. Since ITP enforces full third-party cookie blocking, it no longer makes sense to have a separate cookie storage preference for that, so I replaced the old tri-state cookie storage setting (always accept cookies, block third-party cookies, block all cookies) with two switches: one to toggle ITP, and one to toggle all website data storage.

Previously, it was only possible to block cookies, but this new setting will additionally block localStorage and IndexedDB, web features that allow websites to store arbitrary data in your browser, similar to cookies. It doesn’t really make much sense to block cookies but allow other types of data storage, so the new preferences should better enforce the user’s intent behind disabling cookies. (This preference does not yet block media keys, service workers, or legacy offline web application cache, but it probably should.) I don’t really recommend disabling website data storage, since it will cause compatibility issues on many websites, but this option is there for those who want it. Disabling ITP is also not something I want to recommend, but it might be necessary to access certain broken websites that have not yet been updated to use the Storage Access API.

Accordingly, Andrei has removed the old cookies dialog and moved cookie management into the Clear Personal Data dialog, which is a better place because anyone clearing cookies for a particular website is likely to also want to clear other personal data. (If you want to delete a website’s cookies, then you probably don’t want to leave its SQL databases intact, right?) He had to remove the ability to clear data from a particular point in time, because WebKit doesn’t support this operation for cookies, but that function is probably rarely used and I think the benefit of the change should outweigh the cost. (We could bring it back in the future if somebody wants to try implementing that feature in WebKit, but I suspect not many users will notice.) Treating cookies as separate and different from other forms of website data storage no longer makes sense in 2020, and it’s good to have finally moved on from that antiquated practice.

New HTML Theme

Carlos Garcia has added a new Adwaita-based HTML theme to WebKitGTK 2.30, and removed support for rendering HTML elements using the GTK theme (except for scrollbars). Trying to use the GTK theme to render web content was fragile and caused many web compatibility problems that nobody ever managed to solve. The GTK developers were never very fond of us doing this in the first place, and the foreign drawing API required to do so has been removed from GTK 4, so this was also good preparation for getting WebKitGTK ready for GTK 4. Carlos’s new theme is similar to Adwaita, but gradients have been toned down or removed in order to give a flatter, neutral look that should blend in nicely with all pages while still feeling modern.

This should be a fairly minor style change for Adwaita users, but a very large change for anyone using custom themes. I don’t expect everyone will be happy, but please trust that this will at least result in better web compatibility and fewer tricky theme-related bug reports.

[Screenshot: Left: Adwaita GTK theme controls rendered by WebKitGTK 2.28. Right: hardcoded Adwaita-based HTML theme with toned down gradients.]

Although scrollbars will still use the GTK theme as of WebKitGTK 2.30, that will no longer be possible to do in GTK 4, so themed scrollbars are almost certain to be removed in the future. That will be a noticeable disappointment in every app that uses WebKitGTK, but I don’t see any likely solution to this.

Media Permissions

Jan-Michael added new API in WebKitGTK 2.30 to allow muting individual browser tabs, and hooked it up in Epiphany. This is good when you want to silence just one annoying tab without silencing everything.

Meanwhile, Charlie Turner added WebKitGTK API for managing autoplay policies. Videos with sound are now blocked from autoplaying by default, while videos with no sound are still allowed. Charlie hooked this up to Epiphany’s existing permission manager popover, so you can change the behavior for websites you care about without affecting other websites.

[Screenshot: new media autoplay permission settings. Configure your preferred media autoplay policy for a website near you today!]

Improved Dialogs

In addition to his work on the Clear Data dialog, Andrei has also implemented many improvements and squashed bugs throughout each view of the preferences dialog, the passwords dialog, and the history dialog, and refactored the code to be much more maintainable. Head over to his blog to learn more about his accomplishments. (Thanks to Google for sponsoring Andrei’s work via Google Summer of Code, and to Alexander for help mentoring.)

Additionally, Adrien Plazas has ported the preferences dialog to use HdyPreferencesWindow, bringing a pretty major design change to the view switcher:

[Screenshot: Left: Epiphany 3.36 preferences dialog. Right: Epiphany 3.38. Note the download settings are present in the left screenshot but missing from the right screenshot because the right window is using flatpak, and the download settings are unavailable in flatpak.]

User Scripts

User scripts (like Greasemonkey) allow you to run custom JavaScript on websites. WebKit has long offered user script functionality alongside user CSS, but previous versions of Epiphany only exposed user CSS. Jan-Michael has added the ability to configure a user script as well. To enable, visit the Appearance tab in the preferences dialog (a somewhat odd place, but it really needs to be located next to user CSS due to the tight relationship there). Besides allowing you to do, well, basically anything, this also significantly enhances the usability of user CSS, since now you can apply certain styles only to particular websites. The UI is a little primitive — your script (like your CSS) has to be one file that will be run on every website, so don’t try to design a complex codebase using your user script — but you can use conditional statements to limit execution to specific websites as you please, so it should work fairly well for anyone who has need of it. I fully expect 99.9% of users will never touch user scripts or user styles, but it’s nice for power users to have these features available if needed.

HTTP Authentication Password Storage

Jan-Michael and Carlos Garcia have worked to ensure HTTP authentication passwords are now stored in Epiphany’s password manager rather than by WebKit, so they can now be viewed and deleted from Epiphany, which required some new WebKitGTK API to do properly. Unfortunately, WebKitGTK saves network passwords using the default network secret schema, meaning its passwords (saved by older versions of Epiphany) are all mixed in with other applications’ passwords: we have no way to know which application owns those passwords, so we don’t have any way to know which passwords were stored by WebKit and which can be safely managed by Epiphany going forward. Accordingly, all previously-stored HTTP authentication passwords are no longer accessible; you’ll have to use seahorse to look them up manually if you need to recover them. HTTP authentication is not very commonly-used nowadays except for internal corporate domains, so hopefully this one-time migration snafu will not be a major inconvenience to most users.

New Tab Animation

Jan-Michael has added a new animation when you open a new tab. If the newly-created tab is not visible in the tab bar, then the right arrow will flash to indicate success, letting you know that you actually managed to open the page. Opening tabs out of view happens too often currently, but at least it’s a nice improvement over not knowing whether you actually managed to open the tab or not. This will be improved further next year, because Alexander is working on a completely new tab widget to replace GtkNotebook.

[Video: https://blogs.gnome.org/mcatanzaro/files/2020/09/Screencast-from-09-15-2020-082111-PM.webm]

New View Source Theme

Jim Mason changed view source mode to use a highlight.js theme designed to mimic Firefox’s syntax highlighting, and added dark mode support.

[Screenshot: dark mode support in view source mode. Embrace the dark.]

And More…

  • WebKitGTK 2.30 now supports video formats in image elements, thanks to Philippe Normand. You’ll notice that short GIF-style videos will now work on several major websites where they previously didn’t.
  • I added a new WebKitGTK 2.30 API to expose the paste as plaintext editor command, which was previously internal but fully-functional. I’ve hooked it up in Epiphany’s context menu as “Paste Text Only.” This is nice when you want to discard markup when pasting into a rich text editor (such as the WordPress editor I’m using to write this post).
  • Jan-Michael has implemented support for reordering pinned tabs. You can now drag to reorder pinned tabs any way you please, subject to the constraint that all pinned tabs stay left of all unpinned tabs.
  • Jan-Michael added a new import/export menu, and the bookmarks import/export features have moved there. He also added a new feature to import passwords from Chrome. Meanwhile, ignapk added support for importing bookmarks from HTML (compatible with Firefox).
  • Jan-Michael added a new preference to web apps to allow running them in the background. When enabled, closing the window will only hide the window: everything will continue running. This is useful for mail apps, music players, and similar applications.
  • Continuing Jan-Michael’s list of accomplishments, he removed Epiphany’s previous hidden setting to set a mobile user agent header after discovering that it did not work properly, and replaced it by adding support in WebKitGTK 2.30 for automatically setting a mobile user agent header depending on the chassis type detected by logind. This results in a major user experience improvement when using Epiphany as a mobile browser. Beware: this functionality currently does not work in flatpak because it requires the creation of a new desktop portal.
  • Stephan Verbücheln has landed multiple fixes to improve display of favicons on hidpi displays.
  • Zach Harbort fixed a rounding error that caused the zoom level to display oddly when changing zoom levels.
  • Vanadiae landed some improvements to the search engine configuration dialog (with more to come) and helped investigate a crash that occurs when using the “Set as Wallpaper” function under Flatpak. The crash is pretty tricky, so we wound up disabling that function under Flatpak for now. He also updated screenshots throughout the user help.
  • Sabri Ünal continued his effort to document and standardize keyboard shortcuts throughout GNOME, adding a few missing shortcuts to the keyboard shortcuts dialog.

Epiphany 3.38 will be the final Epiphany 3 release, concluding a decade of releases that start with 3. We will match GNOME in following a new version scheme going forward, dropping the leading 3 and the confusing even/odd versioning. Onward to Epiphany 40!

From a diary of AArch64 porter — drive-by coding

Posted by Marcin 'hrw' Juszkiewicz on September 16, 2020 04:53 PM

Working on AArch64 often means changing code in some project or another. I have done that so many times that I am unable to say which projects carry my commits. Such a thing even got a name: drive-by coding.

Definition

Drive-by coding is a situation where you show up in some software project, make some changes, get them merged, and then disappear, never to be seen again.

Let’s build something

It all starts from a simple thing: I have to (or want to) build some software, but for some reason it does not cooperate. Sometimes a simple architecture check is missing, sometimes atomic operations are not present, intrinsics are missing, or something else goes wrong.

First checks

Then comes the moment of looking at the build errors and trying to work out a solution. Have I seen this bug before? Does it look familiar?

If it is something new, then a quick Google search for the error message follows, along with checking the bug reports/issues on the project’s website or repository. There may be ready-to-use patches, information on how to fix it, or even some ideas about why it happens.

If it is a system call failure in some tests, then I check my syscalls table to see whether those calls are handled on aarch64, and try to change the code if they are not (legacy ones like open, symlink, rename).

Simple fixes

When I started working with AArch64 (in 2012) there were moments when many projects were easy to fix. If atomics were the issue, then copying them from the Linux kernel was usually the solution (if the license allowed it).

Then there were architecture checks with a pile of #ifdef __X86_64__ or similar, trying to decide simple things like “32/64-bit” or “little/big endian”. Nowadays such cases do not happen as often as they used to.

SIMD intrinsics can be a problem. All those vst1q_f32_x2(), vld1q_f32_x2() and similar. I do not have to understand them to know that it usually means the C compiler lacks some backports, as those functions were already added to gcc and llvm (as happened with PyTorch recently).

Complex stuff

There are moments when getting software to build needs something more complicated. As I wrote above, I usually start by searching for the error message and checking whether it was an issue in some other projects, and how it got solved. If I am lucky, then a patch can be written in a short time and sent upstream for review (once it builds and passes tests).

Sometimes all I can do is report the issue upstream and hope that someone will care enough to respond. Usually it ends with at least a discussion on potential ways to fix it, sometimes hints or even patches to test.

Projects response

Projects usually accept patches, review them and merge them. In several cases it took longer than expected; sometimes there was a larger amount of them, so the maintainers remember me (at least for some time). It helps when I have something for those projects again months or years later.

There are projects I prefer to forget exist. Complicated contribution rules, crazy CI setups, weird build systems (ever heard about ‘bazel’?). Or comments in the ‘we do not give a shit about non-x86’ style (with a bit more polished language). Been there, fixed something to get stuff working, and do not want to go back.

Summary

Drive-by coding reminds me of going abroad for conferences. People think that you saw interesting places, when in reality you spent most of your time inside the hotel and/or the conference centre.

It is similar with code. I was in several projects, usually with no idea what they do or how they work. I came, looked around shortly, fixed something and went back home.

Parsing PAN-OS logs using syslog-ng

Posted by Peter Czanik on September 16, 2020 11:48 AM

Version 3.29 of syslog-ng was released recently, including a user-contributed feature: the panos-parser(). It parses log messages from PAN-OS (Palo Alto Networks Operating System). Unlike some other networking devices, the message headers of PAN-OS syslog messages are standards-compliant. However, if you want to act on your messages (filtering, alerting), you still need to parse the message part. The panos-parser() helps you create name-value pairs from the message part of the logs.

From this blog you can learn why it is useful to parse PAN-OS log messages and how to use the panos-parser().

Before you begin

In order to use the panos-parser(), you need to install syslog-ng 3.29 or later. Most Linux distributions feature earlier versions, but the https://www.syslog-ng.com/3rd-party-binaries page of the syslog-ng website has some pointers to 3rd party repositories featuring up-to-date binaries.
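
If you are not sure which version your distribution shipped, a quick way to check is to ask the binary itself:

syslog-ng --version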

You also need some PAN-OS log messages. If you are reading this blog, you most likely have some Palo Alto Networks devices at hand and your ultimate goal is to collect logs from those devices. In this blog I will use the sample log messages found in the configuration snippet implementing the panos-parser(). Even if you have “real” PAN-OS logs, it is easier to get started configuring and testing syslog-ng this way.

Why is it useful?

As mentioned earlier, PAN-OS devices send completely valid syslog messages. You are not required to do any additional parsing on them. Syslog-ng can interpret the message headers without any additional configuration and save the logs properly:

Apr 14 16:48:54 localhost 1,2020/04/14 16:48:54,unknown,SYSTEM,auth,0,2020/04/14 16:48:54,,auth-fail,,0,0,general,medium,failed authentication for user \'admin\'. Reason: Invalid username/password. From: 10.0.10.55.,1718,0x0,0,0,0,0,,paloalto
Apr 14 16:54:18 localhost 1,2020/04/14 16:54:18,unknown,CONFIG,0,0,2020/04/14 16:54:18,10.0.10.55,,set,admin,Web,Succeeded, deviceconfig system,127,0x0,0,0,0,0,,paloalto

If you look at these logs, you can see that the message part is a list of comma-separated values. That should be easy for the csv-parser(), but the two lists above have a different set of fields and there are a few more message types not included here. You can find the parsers describing these message types in /usr/share/syslog-ng/include/scl/paloalto/panos.conf or in the same directory under /usr/local on most Linux distributions. The panos-parser() can detect what type of log it is and create name-value pairs accordingly. If none of the types match, the parser drops the log. Once you have name-value pairs, it is much easier to create alerts (filters) in syslog-ng or reports in Kibana (if you use the Elasticsearch destination of syslog-ng).

Configuring syslog-ng

In most Linux distributions, syslog-ng is configured in a way so that you can extend it by dropping a configuration file with a .conf extension into the /etc/syslog-ng/conf.d/ directory. In other cases, simply append the below configuration to syslog-ng.conf.

source s_regular { tcp(port(5141)); };
source s_net {
    default-network-drivers(flags(store-raw-message));
};
source s_panosonly { tcp(port(5140) flags(no-parse,store-raw-message)); };

template t_jsonfile {
    template("$(format-json --scope rfc5424 --scope dot-nv-pairs
        --rekey .* --shift 1 --scope nv-pairs --key ISODATE)\n\n");
};
parser p_panos { panos-parser(); };
destination d_frompanos {
    file("/var/log/frompanos" template(t_jsonfile));
};
destination d_other {
    file("/var/log/other");
};
destination d_raw {
    file("/var/log/raw" template("${RAWMSG}\n"));
};
log {
    source(s_regular);
    destination(d_other);
};
log {
    source(s_net);
    destination(d_raw);
    if ("${.app.name}" eq "panos") {
        destination(d_frompanos);
    } else {
        destination(d_other);
    };
};
log {
    source(s_panosonly);
    destination(d_raw);
    parser(p_panos);
    destination(d_frompanos);
};
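
Before reloading syslog-ng with a new configuration, it is worth validating it first. A quick check I like to run (the service name may differ on your distribution) is:

syslog-ng --syntax-only && sudo systemctl restart syslog-ng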

If you follow my blogs, the above configuration will look familiar: it is based on the configuration I prepared for Cisco in one of my recent blogs; I just replaced the Cisco-specific parts. Some of the text below is also reused, but there are some obvious differences, as the purpose of the parsers is different.

In the following section we will go over this configuration in detail, in the order the statements appear in the configuration. To make your life easier, I copied the relevant snippets below before explaining them.

Sources

source s_regular { tcp(port(5141)); };

This is a pretty regular TCP legacy syslog source on a random high port that I used to create the logs in the “Why is it useful” section. By default, the tcp() source handles all incoming log messages as if they were legacy (RFC3164) formatted and in case of PAN-OS logs it results in properly formatted logs.

source s_net {
    default-network-drivers(flags(store-raw-message));
};

The default-network-drivers() source driver is a kind of a wild card. It opens different UDP and TCP ports using both the legacy and the new RFC5424 syslog protocols. Instead of just expecting everything to be regular syslog, it attempts a number of different parsers on incoming logs, including the panos-parser(). Of course, the extra parsing creates some overhead, but it is not a problem unless you have a very high message rate.

The store-raw-message flag means that syslog-ng preserves the original log message as is. It might be useful for debugging or if a log analysis tool expects unmodified PAN-OS messages.

source s_panosonly { tcp(port(5140) flags(no-parse,store-raw-message)); };

The third source, as its name also implies, is only for PAN-OS log messages. Use it when you have a high message rate, you only send PAN-OS log messages at the given port and you are sure that the panos-parser() of syslog-ng can process all of your PAN-OS logs correctly. The no-parse flag means that incoming messages are not parsed automatically as they arrive. As you will see later, panos-parser() parses the incoming messages.

Templates

template t_jsonfile {
    template("$(format-json --scope rfc5424 --scope dot-nv-pairs
        --rekey .* --shift 1 --scope nv-pairs --key ISODATE)\n\n");
};

This is the template, which is often used together with Elasticsearch (without the line breaks at the end). This blog does not cover Elasticsearch, as there are many other blogs covering the topic. However, this template using the JSON template function is also useful here because it shows the name-value pairs parsed from PAN-OS log messages. These JSON formatted logs include all syslog-related fields, name-value pairs parsed from the message, and the date using the ISO standardized format. There is one little trick that might confuse you: the rekey and shift part removes the dots from the front of the name-value pair names. Syslog-ng uses the dot for name-value pairs created by parsers in the syslog-ng configuration library (SCL).

Parsers

parser p_panos { panos-parser(); };

This parser can extract name-value pairs from the log messages of PAN-OS devices. By default, these are stored in name-value pairs starting with .panos., but you can change the prefix using the prefix() parameter. Note that the panos-parser() drops the message if it does not match the rules. This can be a problem when you use it directly instead of using the default-network-drivers() where messages go through a long list of parsers.
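
For example, if you prefer a different prefix (the value below is only an illustration), the parser could be declared like this:

parser p_panos { panos-parser(prefix(".paloalto.")); };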

Destinations

destination d_frompanos {
    file("/var/log/frompanos" template(t_jsonfile));
};

This is a file destination where logs are JSON-formatted using the template from above. This way you can see all the name-value pairs syslog-ng creates. We use it for PAN-OS log messages.

destination d_other {
    file("/var/log/other");
};

This is a flat file destination using regular syslog formatting. We use it to store non-PAN-OS log messages.

destination d_raw {
    file("/var/log/raw" template("${RAWMSG}\n"));
};

This is yet another file destination. The difference here is that it uses a special template defined in-line. Using the RAWMSG macro, syslog-ng stores the log message without any modifications. This possibility is enabled by utilizing the store-raw-message flag on the source side and it is useful for debugging or when a SIEM or any other analysis software needs the original log message.

Log statements

Previously we defined the building blocks of the configuration. Using the log statements below, we are going to connect these building blocks together. The different building blocks can be used multiple times in the same configuration.

log {
    source(s_regular);
    destination(d_other);
};

This is the simplest log statement: it connects the regular tcp() source with a flat file destination. You can see the results from this in the “Why is it useful” section. The logs look OK, but on closer inspection you can also see that they contain tons of structured information. That is where the panos-parser() comes in handy.

log {
    source(s_net);
    destination(d_raw);
    if ("${.app.name}" eq "panos") {
        destination(d_frompanos);
    } else {
        destination(d_other);
    };
};

Unless you have a high message rate (above 50,000 to 200,000 events per second (EPS)) or not enough hardware capacity, the recommended way of receiving PAN-OS messages is using the default-network-drivers(). It uses slightly more resources, but if a message does not match the expectations of the panos-parser(), it is still kept. The above log statement receives the message and then stores it immediately in raw format. It can be used for debugging and disabled later.

The if statement checks whether the message is recognized as a PAN-OS log message. If it is, the log is saved as a JSON formatted file. If not, it is saved as a flat file.

Note that the name-value pair is originally called .app.name, but in the output it appears as app.name, as the template removes the dot in front.

log {
    source(s_panosonly);
    destination(d_raw);
    parser(p_panos);
    destination(d_frompanos);
};

If you have a high message rate and you are sure that the panos-parser() detects all of your logs, you can use this solution to collect logs from your PAN-OS devices. For safety and debugging purposes I inserted the raw file destination in front of the parser. This way you can compare the number of lines in the two different file destinations. If they are not equal, check the logs that the panos-parser() discarded.

Testing

For testing I used a few sample log messages from the syslog-ng configuration snippet containing the panos-parser() and saved these messages into a file in /root/panoslogs.txt. Logger generated the regular syslog messages and netcat submitted the PAN-OS messages. Here is the content of /root/panoslogs.txt:

<12>Apr 14 16:48:54 paloalto.test.net 1,2020/04/14 16:48:54,unknown,SYSTEM,auth,0,2020/04/14 16:48:54,,auth-fail,,0,0,general,medium,failed authentication for user \'admin\'. Reason: Invalid username/password. From: 10.0.10.55.,1718,0x0,0,0,0,0,,paloalto
<14>Apr 14 16:54:18 paloalto.test.net 1,2020/04/14 16:54:18,unknown,CONFIG,0,0,2020/04/14 16:54:18,10.0.10.55,,set,admin,Web,Succeeded, deviceconfig system,127,0x0,0,0,0,0,,paloalto

You will get different results depending on which port you send the logs to. You have already seen what happens when you send logs to port 5141. It looks great, but you can also see that a bit of parsing could do wonders for it.

Here is an example of sending logs to port 514:

logger -T --rfc3164 -n 127.0.0.1 -P 514 this is a regular syslog message
cat /root/panoslogs.txt | netcat -4 -n -N -v 127.0.0.1 514

In this case you should see in the files:

localhost:/etc/syslog-ng/conf.d # cat /var/log/other
Sep 14 15:25:54 localhost root: this is a regular syslog message
localhost:/etc/syslog-ng/conf.d # cat /var/log/frompanos
{"panos":{"vsys_name":"","vsys":"","type":"SYSTEM","time_generated":"2020/04/14 16:48:54","subtype":"auth","severity":"medium","serial":"unknown","seqno":"1718","receive_time":"2020/04/14 16:48:54","opaque":"failed authentication for user \\'admin\\'. Reason: Invalid username/password. From: 10.0.10.55.","object":"","module":"general","future_use4":"0","future_use3":"0","future_use2":"0","future_use1":"1","eventid":"auth-fail","dg_hier_level_4":"0","dg_hier_level_3":"0","dg_hier_level_2":"0","dg_hier_level_1":"0","device_name":"paloalto","actionflags":"0x0"},"app":{"name":"panos"},"SOURCE":"s_net","RAWMSG":"<12>Apr 14 16:48:54 paloalto.test.net 1,2020/04/14 16:48:54,unknown,SYSTEM,auth,0,2020/04/14 16:48:54,,auth-fail,,0,0,general,medium,failed authentication for user \\'admin\\'. Reason: Invalid username/password. From: 10.0.10.55.,1718,0x0,0,0,0,0,,paloalto","PROGRAM":"paloalto_panos","PRIORITY":"warning","MESSAGE":"1,2020/04/14 16:48:54,unknown,SYSTEM,auth,0,2020/04/14 16:48:54,,auth-fail,,0,0,general,medium,failed authentication for user \\'admin\\'. Reason: Invalid username/password. From: 10.0.10.55.,1718,0x0,0,0,0,0,,paloalto","ISODATE":"2020-04-14T16:48:54+02:00","HOST_FROM":"localhost","HOST":"paloalto.test.net","FACILITY":"user","DATE":"Apr 14 16:48:54"}

{"panos":{"vsys_name":"","vsys":"","type":"CONFIG","time_generated":"2020/04/14 16:54:18","subtype":"0","serial":"unknown","seqno":"127","result":"Succeeded","receive_time":"2020/04/14 16:54:18","path":" deviceconfig system","host":"10.0.10.55","future_use2":"0","future_use1":"1","dg_hier_level_4":"0","dg_hier_level_3":"0","dg_hier_level_2":"0","dg_hier_level_1":"0","device_name":"paloalto","cmd":"set","client":"Web","admin":"admin","actionflags":"0x0"},"app":{"name":"panos"},"SOURCE":"s_net","RAWMSG":"<14>Apr 14 16:54:18 paloalto.test.net 1,2020/04/14 16:54:18,unknown,CONFIG,0,0,2020/04/14 16:54:18,10.0.10.55,,set,admin,Web,Succeeded, deviceconfig system,127,0x0,0,0,0,0,,paloalto","PROGRAM":"paloalto_panos","PRIORITY":"info","MESSAGE":"1,2020/04/14 16:54:18,unknown,CONFIG,0,0,2020/04/14 16:54:18,10.0.10.55,,set,admin,Web,Succeeded, deviceconfig system,127,0x0,0,0,0,0,,paloalto","ISODATE":"2020-04-14T16:54:18+02:00","HOST_FROM":"localhost","HOST":"paloalto.test.net","FACILITY":"user","DATE":"Apr 14 16:54:18"}

localhost:/etc/syslog-ng/conf.d # cat /var/log/raw
<13>Sep 14 15:25:54 localhost root: this is a regular syslog message
<12>Apr 14 16:48:54 paloalto.test.net 1,2020/04/14 16:48:54,unknown,SYSTEM,auth,0,2020/04/14 16:48:54,,auth-fail,,0,0,general,medium,failed authentication for user \'admin\'. Reason: Invalid username/password. From: 10.0.10.55.,1718,0x0,0,0,0,0,,paloalto
<14>Apr 14 16:54:18 paloalto.test.net 1,2020/04/14 16:54:18,unknown,CONFIG,0,0,2020/04/14 16:54:18,10.0.10.55,,set,admin,Web,Succeeded, deviceconfig system,127,0x0,0,0,0,0,,paloalto

When you send logs to port 5140 where the panos-parser() is the only parser, the results should be pretty similar. The only difference is that the regular syslog message is only saved to the raw file used for debugging, as it is discarded by the panos-parser().
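
To try that out, the same netcat command from above can simply be pointed at port 5140:

cat /root/panoslogs.txt | netcat -4 -n -N -v 127.0.0.1 5140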

If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or even chat with us. For contact information, check our GitHub page under the “Community” section at https://github.com/syslog-ng/syslog-ng. On Twitter, I am available as @PCzanik.

resolved and FallbackDNS

Posted by alciregi on September 16, 2020 07:42 AM

Fedora 33 will switch to systemd-resolved for name resolution.

Resolved has a bundled list of DNS servers that is used in case of network settings misconfiguration, i.e. when DHCP does not provide a DNS address, and probably in other cases too, for instance when you have not intentionally set a DNS address in the network configuration.
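
A minimal sketch of how to inspect and override this, assuming a stock /etc/systemd/resolved.conf (the server addresses below are only placeholders):

resolvectl status                         # show which DNS servers are currently in use

# /etc/systemd/resolved.conf
[Resolve]
FallbackDNS=1.1.1.1 9.9.9.9

sudo systemctl restart systemd-resolved   # apply the change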

Cockpit 228

Posted by Cockpit Project on September 16, 2020 12:00 AM

Cockpit is the modern Linux admin interface. We release regularly. Here are the release notes from Cockpit version 228.

Accounts: Allow setting weak passwords

Cockpit now allows users to set up a weak password. The user gets notified that the password is weak, but clicking the submit button again will use the selected password anyway. This is similar to how weak passwords are handled in the Anaconda installer.

[Image: Accounts weak password]

Changes to remote host logins

Cockpit used to try to log into remote hosts with your initial login password when “Reuse my password for remote connections” was selected during login. This option has been removed, and Cockpit does not promote password reuse across accounts anymore. Instead, Cockpit now helps with setting up SSH keys for users that want automatic and password-less login to remote hosts.

[Image: Remote host logins]

There is also an earlier demo video that shows what it looks like in action.
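
For reference, what this automates is roughly the usual manual key setup (the user and host names below are placeholders):

ssh-keygen -t ed25519
ssh-copy-id admin@remote-host.example.com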

Machines: Add support for reverting to and deleting VM snapshots

One can now restore a VM guest to a previously created snapshot. Existing snapshots can be deleted as well. Deleting a single snapshot preserves the current state of the VM and does not affect any other snapshot.

[Image: VM snapshot actions]

Drop cockpit-docker code

As announced half a year ago in Cockpit 215, cockpit-docker is deprecated in favor of cockpit-podman. None of the currently supported operating systems builds the cockpit-docker package any more, thus it was finally removed upstream as well. If you still use it on another distribution or from upstream builds, please migrate now, or build it from an older upstream release.

Try it out

Cockpit 228 is available now:

Fedora 32 : Can be better? part 008.

Posted by mythcat on September 15, 2020 07:31 PM
Fedora development is not very active when it comes to the latest programming language releases.
The main reason is the work needed to build new packages and put them in the repository.
I think this can be improved with a good tool that solves all dependencies and links everything into a good package.
Today I tested the new Python version 3.5.10, released on September 5th, 2020.
I downloaded and unpacked the archive, and I used these commands to build this Python version:
[mythcat@desk ~]$ cd Python-3.5.10/
[mythcat@desk Python-3.5.10]$ ./configure
checking build system type... x86_64-pc-linux-gnu
checking host system type... x86_64-pc-linux-gnu
checking for python3.5... no
checking for python3... python3
checking for --enable-universalsdk... no
...
The next command is make:
[mythcat@desk Python-3.5.10]$ make
gcc -pthread -c -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes
-Werror=declaration-after-statement -I. -I./Include -DPy_BUILD_CORE -o Programs/python.o
./Programs/python.c
...
# On Darwin, always use the python version of the script, the shell
# version doesn't use the compiler customizations that are provided
# in python (_osx_support.py).
if test `uname -s` = Darwin; then \
cp python-config.py python-config; \
fi
Then I used make test.
[mythcat@desk Python-3.5.10]$ make test
running build
running build_ext
INFO: Can't locate Tcl/Tk libs and/or headers

Python build finished successfully!
...
For the last part I used this command:
[mythcat@desk Python-3.5.10]$ sudo make install
...
The result of this is ...
[mythcat@desk Python-3.5.10]$ ls
aclocal.m4 config.sub Include Mac Modules platform python README
build configure install-sh Makefile Objects Programs Python setup.py
config.guess configure.ac Lib Makefile.pre Parser pybuilddir.txt python-config Tools
config.log Doc libpython3.5m.a Makefile.pre.in PC pyconfig.h python-config.py
config.status Grammar LICENSE Misc PCbuild pyconfig.h.in python-gdb.py
[mythcat@desk Python-3.5.10]$ ./python
Python 3.5.10 (default, Sep 6 2020, 22:32:07)
[GCC 10.2.1 20200723 (Red Hat 10.2.1-1)] on linux
Type "help", "copyright", "credits" or "license" for more information.
...
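One general note on installing a hand-built Python (not a step I ran above): sudo make altinstall installs only the versioned python3.5 binary and leaves the distribution's python3 untouched, so it is usually the safer choice:
sudo make altinstall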

Fedora 32 : Can be better? part 009.

Posted by mythcat on September 15, 2020 07:25 PM
The Fedora distro would be better if the development team came up with useful, accurate, and up-to-date information. A very simple example is C and C++ programming, and more precisely how to build programs and packages. Let's take a simple example of creating interfaces with GTK, which requires some knowledge of the GCC compiler. First I install the gtk3-devel package:
dnf install gtk3-devel 
The Fedora team also provides a group install with many features:
#dnf -y groupinstall "Development Tools"
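A quick way to check that the GTK development files are actually in place before compiling is pkg-config:
pkg-config --modversion gtk+-3.0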
I test with these examples:
#include <gtk/gtk.h>

int main(int argc,
char *argv[])
{
GtkWidget *window;

gtk_init (&argc, &argv);

window = gtk_window_new (GTK_WINDOW_TOPLEVEL);
gtk_window_set_title (GTK_WINDOW (window), "Hello World");
gtk_widget_show (window);

gtk_main ();

return 0;
}
This creates a simple window with the title “Hello World”.
#include <gtk/gtk.h>

static void on_window_closed(GtkWidget * widget, gpointer data)
{
gtk_main_quit();
}

int main(int argc, char * argv[])
{
GtkWidget * window, * label;

gtk_init(&argc, &argv);

window = gtk_window_new(GTK_WINDOW_TOPLEVEL);

g_signal_connect( window, "destroy", G_CALLBACK(on_window_closed), NULL);

label = gtk_label_new("Hello, World!");

gtk_container_add(GTK_CONTAINER(window), label);

gtk_widget_show(label);
gtk_widget_show(window);

gtk_main();

return 0;
}
This is the same example, but you will see a label with the text “Hello, World!”.
The last example is more complex and involves the use of signals attached to the close button and the OK button.
The main window contains three labels with my name and an edit box in which you have to enter my nickname, mythcat, or something else.
#include <gtk/gtk.h>
#include <stdio.h>  /* printf */
#include <string.h> /* strcmp */

const char *password = "mythcat";

// close the window application
void closeApp(GtkWidget *widget, gpointer data)
{
gtk_main_quit();
}

// show text when you click on button
void button_clicked(GtkWidget *button, gpointer data)
{
const char *password_text = gtk_entry_get_text(GTK_ENTRY((GtkWidget *)data));

if(strcmp(password_text, password) == 0)
printf("Access granted for user: \"%s\"\n",password);
else
printf("Access denied!\n");

}

int main( int argc, char *argv[])
{
GtkWidget *window;
GtkWidget *label1, *label2, *label3;
GtkWidget *hbox;
GtkWidget *vbox;
GtkWidget *ok_button;
GtkWidget *password_entry;

gtk_init(&argc, &argv);

window = gtk_window_new(GTK_WINDOW_TOPLEVEL);

gtk_window_set_title(GTK_WINDOW(window), "Labels, password with one button and layout");
gtk_window_set_position(GTK_WINDOW(window), GTK_WIN_POS_CENTER);
gtk_window_set_default_size(GTK_WINDOW(window), 300, 200);

g_signal_connect(G_OBJECT(window), "destroy", G_CALLBACK(closeApp), NULL);

label1 = gtk_label_new("Catalin");
label2 = gtk_label_new("George");
label3 = gtk_label_new("Festila");

password_entry = gtk_entry_new();
gtk_entry_set_visibility(GTK_ENTRY(password_entry), FALSE);
ok_button = gtk_button_new_with_label("OK");
g_signal_connect(G_OBJECT(ok_button), "clicked", G_CALLBACK(button_clicked),password_entry);

hbox = gtk_box_new(FALSE, 1);
vbox = gtk_box_new(TRUE, 2);

gtk_box_pack_start(GTK_BOX(vbox), label1, TRUE, FALSE, 5);
gtk_box_pack_start(GTK_BOX(vbox), label2, TRUE, FALSE, 5);
gtk_box_pack_start(GTK_BOX(hbox), vbox, FALSE, TRUE, 5);
gtk_box_pack_start(GTK_BOX(hbox), label3, FALSE, FALSE, 5);
gtk_box_pack_start(GTK_BOX(vbox), ok_button, FALSE, FALSE, 5);
gtk_box_pack_start(GTK_BOX(hbox), password_entry, TRUE, FALSE, 5);
gtk_container_add(GTK_CONTAINER(window), hbox);

gtk_widget_show_all(window);

gtk_main();

return 0;
}
The result can be seen in the following image:

I put the source code for the last example in a test.c file and compiled it like this:
[mythcat@desk ~]$ gcc test.c $(pkg-config --cflags --libs gtk+-3.0) -o test
[mythcat@desk ~]$ ./test

Trying to block all possible web connections to facebook (with the Chrome browser)

Posted by Jon Chiappetta on September 15, 2020 05:33 AM

The first extension I always install in Chrome is “uBlock Origin”, of course, to try and prevent as many wasteful ads as possible. However, it doesn’t specifically target entire web properties, such as all of facebook’s subdomains that exist out there (for example, if someone puts a fb image or like button on their site and your browser loads that content, it’s another signal they can use with your information, even though I don’t have a fb account).

I found a cool extension for Chrome called “Domain Blocker” which lets you specify wildcard subdomain names in a simple list to block any web requests at the browser level directly (no messy /etc/hosts file setups or maintenance needed). For example, you can grab a master list of facebook domain names and run it through some basic text processing to produce a nice short list to block automatically:

$ curl -sL 'https://raw.githubusercontent.com/jmdugan/blocklists/master/corporations/facebook/all' | tr '.' ' ' | awk '{ print $(NF-1)"."$NF }' | sort | uniq | awk '{ print $1 ; print "*."$1 }'

facebook.com
*.facebook.com
facebook.de
*.facebook.de
facebook.fr
*.facebook.fr
facebook.net
*.facebook.net
fb.com
*.fb.com
fb.me
*.fb.me
fbcdn.com
*.fbcdn.com
fbcdn.net
*.fbcdn.net
fbsbx.com
*.fbsbx.com
fburl.com
*.fburl.com
foursquare.com
*.foursquare.com
freebasics.com
*.freebasics.com
hootsuite.com
*.hootsuite.com
instagram.com
*.instagram.com
internet.org
*.internet.org
m.me
*.m.me
messenger.com
*.messenger.com
online-metrix.net
*.online-metrix.net
tfbnw.net
*.tfbnw.net
thefacebook.com
*.thefacebook.com
wechat.com
*.wechat.com
whatsapp.com
*.whatsapp.com
whatsapp.net
*.whatsapp.net

This will now produce a blocked message if your browser tries to load any content from any of those domains (links, imgs, scripts, frames, etc.):

www.facebook.com is blocked
Requests to the server have been blocked by an extension.
Try disabling your extensions.
ERR_BLOCKED_BY_CLIENT

Reminder: also make sure to block third-party cookies in Chrome’s settings; it will help a lot to keep things clean along the way!

Privacy-oriented alternatives to Google Analytics for 2020

Posted by Josef Strzibny on September 15, 2020 12:00 AM

Google Analytics is perhaps the analytics platform of our time. But should it be? Its many features and the free plan are what made it popular, but its invasion of user privacy should not be overlooked. Here are some good alternatives for 2020.

First, I want to mention privacy-oriented self-hosted solutions. Their Open Source nature gives you the option to host them yourself instead of sending the data to someone else. Second, we look at some of the viable closed-source alternatives.

Open Source

Plausible

I learned about Plausible just recently, but they deserve to be on top of this list for me. Their platform is completely Open Source on GitHub under the MIT license. I personally also like that it’s written in Elixir.

As they say themselves:

Google Analytics collects a lot of personal data and it is a potential liability for your site. Many website owners fail to do this, but you should and must disclose to your visitors your use of Google Analytics to track them.

Plausible has you covered if you need to comply with GDPR, CCPA, and PECR.

Their cheapest plan starts at $6 a month (just $4 if paid yearly) and for $12 a month you get 100k pageviews across unlimited sites. They also publish a roadmap to keep you up-to-date with what’s coming.

If I needed JavaScript-based analytics today, Plausible would be my first choice.

Update: I switched to hosted Plausible. It’s the cheapest open source solution giving me what I need.

GoatCounter

GoatCounter is another great option for privacy-oriented analytics that does not track users’ personal data. Like Plausible, it is fully Open Source, licensed under the EUPL. The platform is written in Go.

A nice touch is a free plan with 100k pageviews/month for non-commercial use. A commercial plan with unlimited sites, a custom domain, and a 500k pageview limit sets you back $15.

Open Web Analytics

Open Web Analytics is an Open Source PHP-based web analytics software licensed under GPL. OWA has one strong advantage that puts it ahead of the others. Apart from implementing simple analytics, it does give you heatmaps as well.

On the other hand, the author does not offer a hosted variant, so you are on your own. If you are serving your static sites from platforms such as GitHub Pages, you probably won’t be setting up a separate server just to run OWA.

Offen

Offen brands itself as “fair web analytics” and, unlike the rest of the services on this list, users themselves are in complete control of their data thanks to its “opt-in only data collection”.

With Offen your users can see all the data that you have about them. All user data is also only stored for 6 months and then automatically deleted. This design is probably why there is no option for exporting any of this data.

Offen is only available as self-hosted with code on GitHub. Written in Go.

Matomo

Matomo, known to many as Piwik (original project name), is probably the closest self-hosted Google Analytics alternative. Unlike the alternatives mentioned above Matomo comes packed with features like heatmaps, session recordings, A/B testing, visitor profiles, and more. If you cannot live without these advanced features, Matomo is a good choice.

Matomo is a PHP-based analytics platform with a hosted cloud offering starting at €19 a month for 50k pageviews and 3 websites. Some advanced features, such as search engine keyword performance, are available for the self-hosted variant at an additional cost. WordPress integration will be a plus for many.

Kindmetrics

Kindmetrics is another new addition to the self-hosted analytics list, currently in beta and a bit off the radar. Interestingly enough, it is written in the Crystal language on the backend and uses the Stimulus.js framework on the frontend. The main differentiator is keeping all third-party services EU-based.

Kindmetrics is free during beta but will later offer a 14-day free trial and plans starting at €6 a month for 50k pageviews.

Update: Kindmetrics is now out of beta.

Hosted

Fathom Analytics

Fathom started as an Open Source project and the original v1 version is still available on GitHub as Fathom Lite. The main issue of Fathom Lite is the use of cookies, which requires user consent under GDPR (update: Fathom Lite is actually GDPR compliant). This most likely leaves you to look into the hosted Fathom v2.

With a starting price of $14 a month you not only get web analytics for 100k pageviews but also the recently added uptime monitoring with SMS, email, Telegram and Slack notifications. All plans include unlimited websites.

Another specialty of Fathom is a very generous affiliate program. You’ll get a 25% lifetime commission on all payments from the customers you bring in. I paid for Fathom before switching to Plausible and can recommend it.

Simple Analytics

Simple Analytics was my original second contender for the analytics of this blog. The $19 a month starting plan with 100k pageviews is on the more expensive side, but their yearly deal gets you a better price than Fathom at just $9 a month.

What I also really liked about Simple Analytics is the non-aggressive “fair use” policy. They would only upgrade your plan if you exceed your monthly limit for 2 months in a row.

Beampipe

Beampipe is the only one on the list that will give you 10k pageviews completely for free for 5 domains. Beampipe tracking is compliant with GDPR, PECR, and CCPA.

The cheapest paid plan starts at $10 a month for 100k page views for 20 domains and Slack integration. There is a GraphQL API for data access, as well as the ability to track goals and conversions.

Metrical

Metrical is a new kid in the space of simple privacy-oriented analytics.

Metrical is currently free and in beta at the time of writing:

We are currently in Beta testing, and you can register for free and have up to 2 websites and 50.000 visits every month. If you want more, you can upgrade to 7$/Month or 50$/Year.

Their paid plan will give you 50k pageviews for $7 a month, which is an awesome deal for people starting up and trying to get their projects up and running, as 50k gets you far enough.

Episode 215 – Real security is boring

Posted by Josh Bressers on September 14, 2020 12:00 AM

Josh and Kurt talk about attacking open source. How serious is the threat of developers being targeted or a git repo being watched for secret security fixes? The reality of it all is there are many layers in a security journey, the most important things you can do are also the least exciting.

[Audio: https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_215_Real_security_is_boring.mp3]

Show Notes

Interpreting DHCP packets

Posted by Adam Young on September 13, 2020 02:55 PM

To capture DHCP packets I ran:

 tcpdump port 67 -i vnet0 -vvvv  -w /tmp/packets.bin

That gave me a binary file 940 bytes long. It actually contains 2 packets (two copies of the DHCP Discover request, sent a few seconds apart), each with the IP header, the UDP header, and the DHCP packet payload in it.

First, let's let tcpdump interpret the whole thing for us, and then we can start picking it apart:

#  tcpdump -r /tmp/packets.bin -vvvv
reading from file /tmp/packets.bin, link-type EN10MB (Ethernet)
19:42:46.742937 IP (tos 0x0, ttl 64, id 278, offset 0, flags [none], proto UDP (17), length 428)
    0.0.0.0.bootpc > 255.255.255.255.bootps: [udp sum ok] BOOTP/DHCP, Request from 52:54:00:94:9e:f2 (oui Unknown), length 400, xid 0xff77df41, secs 4, Flags [none] (0x0000)
	  Client-Ethernet-Address 52:54:00:94:9e:f2 (oui Unknown)
	  Vendor-rfc1048 Extensions
	    Magic Cookie 0x63825363
	    DHCP-Message Option 53, length 1: Discover
	    MSZ Option 57, length 2: 1472
	    ARCH Option 93, length 2: 0
	    NDI Option 94, length 3: 1.2.1
	    Vendor-Class Option 60, length 32: "PXEClient:Arch:00000:UNDI:002001"
	    User-Class Option 77, length 4: 
	      instance#1: ERROR: invalid option
	    Parameter-Request Option 55, length 23: 
	      Subnet-Mask, Default-Gateway, Domain-Name-Server, LOG
	      Hostname, Domain-Name, RP, MTU
	      Vendor-Option, Vendor-Class, TFTP, BF
	      Option 119, Option 128, Option 129, Option 130
	      Option 131, Option 132, Option 133, Option 134
	      Option 135, Option 175, Option 203
	    T175 Option 175, length 48: 2969895296,2249199339,50397184,385941794,16847617,17891585,654377241,16846849,35717377,352387352,16852481,17957121
	    Client-ID Option 61, length 7: ether 52:54:00:94:9e:f2
	    GUID Option 97, length 17: 0.178.35.76.56.225.195.173.69.183.151.210.221.34.14.27.157
	    END Option 255, length 0
19:42:50.798338 IP (tos 0x0, ttl 64, id 666, offset 0, flags [none], proto UDP (17), length 428)
    0.0.0.0.bootpc > 255.255.255.255.bootps: [udp sum ok] BOOTP/DHCP, Request from 52:54:00:94:9e:f2 (oui Unknown), length 400, xid 0xff77df41, secs 10, Flags [none] (0x0000)
	  Client-Ethernet-Address 52:54:00:94:9e:f2 (oui Unknown)
	  Vendor-rfc1048 Extensions
	    Magic Cookie 0x63825363
	    DHCP-Message Option 53, length 1: Discover
	    MSZ Option 57, length 2: 1472
	    ARCH Option 93, length 2: 0
	    NDI Option 94, length 3: 1.2.1
	    Vendor-Class Option 60, length 32: "PXEClient:Arch:00000:UNDI:002001"
	    User-Class Option 77, length 4: 
	      instance#1: ERROR: invalid option
	    Parameter-Request Option 55, length 23: 
	      Subnet-Mask, Default-Gateway, Domain-Name-Server, LOG
	      Hostname, Domain-Name, RP, MTU
	      Vendor-Option, Vendor-Class, TFTP, BF
	      Option 119, Option 128, Option 129, Option 130
	      Option 131, Option 132, Option 133, Option 134
	      Option 135, Option 175, Option 203
	    T175 Option 175, length 48: 2969895296,2249199339,50397184,385941794,16847617,17891585,654377241,16846849,35717377,352387352,16852481,17957121
	    Client-ID Option 61, length 7: ether 52:54:00:94:9e:f2
	    GUID Option 97, length 17: 0.178.35.76.56.225.195.173.69.183.151.210.221.34.14.27.157
	    END Option 255, length 0

My last post was a one-liner I use to look at the packets with hexdump. This is the first couple of lines of output from the packets I captured.

00000000  212 195 178 161 002 000 004 000  000 000 000 000 000 000 000 000  |................|
00000016  000 000 004 000 001 000 000 000  246 092 093 095 025 086 011 000  |.........\]_.V..|

Another way to look at the packets is using emacs and hexl-mode. Both have their uses. Most valuable is the ability to run the file through tcpdump with various flags to see what it gives.

For example, I can run:

#  tcpdump -r /tmp/packets.bin -X
reading from file /tmp/packets.bin, link-type EN10MB (Ethernet)
19:42:46.742937 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from 52:54:00:94:9e:f2 (oui Unknown), length 400
	0x0000:  4500 01ac 0116 0000 4011 782c 0000 0000  E.......@.x,....

This shows the first two bytes of the packet data as 4500 (hex). The same is true of the second packet:

19:42:50.798338 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from 52:54:00:94:9e:f2 (oui Unknown), length 400
	0x0000:  4500 01ac 029a 0000 4011 76a8 0000 0000  E.......@.v.....

Hex 45 is Decimal 69. Looking with Hexdump:

[root@nuzleaf ~]# hexdump  /tmp/packets.bin  -e'"%07.8_ad  " 8/1 "%03d " "  " 8/1 "%03d " "  |"' -e'16/1  "%_p"  "|\n"' -v | grep 69
00000048  000 148 158 242 008 000 069 000  001 172 001 022 000 000 064 017  |......E.......@.|

We see 069 000. That matches the 4500.

Next in the decimal sequence is 001 172. In the hex we have 01ac, which matches. This offset is 54. We know the whole file is 940 bytes long. That means each packet is probably 470 bytes. 470 minus 54 is 416, which is still longer than the 300 that we know is the length of the DHCP packet.

tcpdump shows us that the length is 400, which is, I think, the length of the IP packet.

Possibly the best pattern to match is the MAC address, which we know to be

82, 84, 0, 148, 158, 242 (converted from hex 0x52, 0x54, 0x00, 0x94, 0x9E, 0xF2)

The MAC address is supposed to be put in the packet at an offset of 28 bytes in. The length is 6. We see this pattern on the line that starts with offset 00000096 and continues onto the next line:

00000096  000 000 000 000 000 000 000 000  000 000 000 000 000 000 082 084  |..............RT|
00000112  000 148 158 242 000 000 000 000  000 000 000 000 000 000 000 000  |................|

Working backwards, now, we should be able to pick out the pattern for the start of the packet: 01 for the Opcode, 01 for the Hardware type, and 06 for the HW length. We see that here:

00000080  077 216 001 001 006 000 255 119  223 065 000 004 000 000 000 000  |M......w.A......|

So our DHCP payload starts at offset 82. It should continue for 300 bytes.

00000368  050 048 048 049 077 004 105 080  088 069 055 023 001 003 006 007  |2001M.iPXE7.....|
00000384  012 015 017 026 043 060 066 067  119 128 129 130 131 132 133 134  |....+<BCw.......|

That actually lines up with the error reported in the tcpdump:

instance#1: ERROR: invalid option Parameter-Request Option 55, length 23:

The 55 is followed by the length 23, the two values 001 and 003...and that is the end of the packet. 006 007 012 and so on might be part of the packet, but the BOOTP protocol hardcodes the length at 300 and so they get ignored. If we could find the length of the data section of the UDP header, we might know better.

Note that the checksum is reported as well. That value should be in the 2 bytes right before the start of the actual UDP data, and the two bytes prior to that are the UDP data length. Looking at the dump above, we see that the line at offset 00000080 starts with 077 216 before going into the 001 001 006 of the boot packet. This is the checksum. The line before that ends with the two values 001 152. In hex that is 0198 (I looked in hexl-mode), which in decimal is 408.

Our DHCP packet, which is supposed to be 300 Bytes long, is 408 bytes long. Is this legal?

Apparently, yes: RFC 1531 extended the BOOTP definition (https://tools.ietf.org/html/rfc1531#section-2). DHCP extended the BOOTP packet size by renaming the vendor-specific area to options and giving it a size of 312.

 The options field is now variable length, with the minimum extended
   to 312 octets.

This seems to mean that tcpdump is working with the older format, which is no longer valid. I also need to extend the size of my packet in Rust.
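
To make those offsets concrete, here is a minimal Rust sketch (my own illustration, not code from this post or from my packet parser) that reads the fixed-position BOOTP fields, including the client MAC at offset 28, straight out of a raw payload. The function name and sample values are hypothetical.

// Minimal sketch: read the fixed-position BOOTP/DHCP header fields out of a
// raw payload. Offsets follow the BOOTP layout discussed above; the payload
// is assumed to start at the `op` byte (file offset 82 in the capture).
fn dump_fixed_header(payload: &[u8]) {
    assert!(payload.len() >= 34, "payload too short for a BOOTP header");

    let opcode = payload[0];      // 1 = request, 2 = reply
    let hwtype = payload[1];      // 1 = Ethernet
    let hw_addr_len = payload[2]; // 6 for Ethernet
    let hop_count = payload[3];
    let txn_id = u32::from_be_bytes([payload[4], payload[5], payload[6], payload[7]]);
    let num_secs = u16::from_be_bytes([payload[8], payload[9]]);
    let mac = &payload[28..28 + hw_addr_len as usize]; // chaddr starts at offset 28

    println!("opcode      = {}", opcode);
    println!("hwtype      = {}", hwtype);
    println!("hw addr len = {}", hw_addr_len);
    println!("hop count   = {}", hop_count);
    println!("txn_id      = {:x}", txn_id);
    println!("num_secs    = {}", num_secs);
    let mac_str: Vec<String> = mac.iter().map(|b| format!("{:02x}", b)).collect();
    println!("mac         = {}", mac_str.join(":"));
}

fn main() {
    // Hypothetical buffer mimicking the request captured above.
    let mut payload = [0u8; 300];
    payload[0] = 1; // BOOTREQUEST
    payload[1] = 1; // Ethernet
    payload[2] = 6; // MAC length
    payload[4..8].copy_from_slice(&0xff77df41u32.to_be_bytes());
    payload[8..10].copy_from_slice(&4u16.to_be_bytes());
    payload[28..34].copy_from_slice(&[0x52, 0x54, 0x00, 0x94, 0x9e, 0xf2]);
    dump_fixed_header(&payload);
}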

Reproducible wheels at SecureDrop

Posted by Kushal Das on September 13, 2020 05:22 AM


SecureDrop workstation project's packages are reproducible. We use prebuilt wheels (by us) along with GPG signatures to verify and install them using pip during the Debian package building step. But, the way we built those wheels (standard pip command), they were not reproducible.

To fix this problem, Jennifer Helsby (aka redshiftzero) built a tool; the results are available at https://reproduciblewheels.com/. Every night her tool builds the top 100 packages plus our dependency packages on Debian Buster and verifies their reproducibility. She has a detailed write-up on the steps.

While this issue was fixed, a related issue was to have reproducible source tarballs. python3 setup.py sdist still does not give us reproducible tarballs. Conor Schaefer, our CTO at the Freedom of the Press Foundation, decided to tackle that issue using a few more lines of bash in our build scripts. Now we have reproducible wheels and source tarballs (based on specified timestamps) for our projects.

On public TLS certificates lifetime

Posted by Fabio Alessandro Locati on September 13, 2020 12:00 AM
On September 1st, 2020, the maximum lifetime of TLS certificates signed by a public Certificate Authority got reduced to 13 months. How did we arrive here, and what’s to come? Let’s start by understanding who decides the maximum lifetime of certificates and many other limitations around them.

Who decides the TLS certificate guidelines

Ultimately, the client (often a browser or an operating system) identifies the certificate as trustable or not (based on the CA that signed it as well as many other parameters), so the client can decide which parameters to look for and which values are acceptable and which are not.

PHP on the road to the 8.0.0 release

Posted by Remi Collet on September 12, 2020 06:28 AM

Version 8.0.0 Beta 3 is released. It now enters the stabilisation phase for the developers, and the test phase for the users.

RPMs are available in the remi-php80 repository for Fedora ≥ 31 and Enterprise Linux ≥ 7 (RHEL, CentOS), or in the php:remi-8.0 stream, and as a Software Collection in the remi-safe repository (or remi for Fedora).

 

The repository provides development versions which are not suitable for production usage.

Also read: PHP 8.0 as Software Collection

Installation: read the Repository configuration and choose your installation mode.

Replacement of default PHP by version 8.0 installation, module way (simplest way on Fedora and EL-8):

dnf module disable php
dnf module install php:remi-8.0
dnf update

Replacement of default PHP by version 8.0 installation, repository way (simplest way on EL-7):

yum-config-manager --enable remi-php80
yum update php\*

Parallel installation of version 8.0 as Software Collection (recommended for tests):

yum install php80

To be noticed:

Information, read:

Base packages (php)

Software Collections (php74)

hexdump one byte decimal display

Posted by Adam Young on September 12, 2020 03:34 AM
hexdump  boot-packet.bin  -e'"%07.8_ad  " 8/1 "%03d " "  " 8/1 "%03d " "  |"' -e'16/1  "%_p"  "|\n"' -v

Found here:

rpminspect-1.1 released

Posted by David Cantrell on September 11, 2020 07:07 PM

It has been 3 or 4 months since the last release of rpminspect. Today I release rpminspect 1.1. In addition to five new inspections, there are plenty of bug fixes and a lot of improvements against CI.

The five new inspections include the abidiff and kmidiff inspections. Another inspection I added is the movedfiles inspection, which was requested over a year ago. Implementing it was easy once I improved the peer detection code. It’s common for files to move between subpackages, so this inspection attempts to detect and report that rather than reporting you added a file and removed a file (which is what it used to do).

There has been more work around the configuration file handling. The last release moved to YAML for the configuration file format. This release moves the configuration file into /usr/share/rpminspect and out of /etc. There is also no longer a default configuration file, so users can have multiple rpminspect-data packages installed and perform rpminspect runs for different products. There are some other changes within /usr/share/rpminspect which are described below.

On the CI front, rpminspect has migrated from Travis-CI to GitHub Actions. The software is built and tested on multiple Linux distributions now to ensure portability. The GitHub Actions also run flake8, black, and shellcheck for the Python and shell code in the tree.

I have also improved reporting a bit for rebased packages. First, rpminspect detects if a comparison of builds is a rebase. If so, then things that would normally report at the VERIFY level are now reported at the INFO level. You can disable this functionality with a command line option. The main reason for this is that when a package is rebased, some changes are expected so we can lower the reporting level. When a package receives a minor update, rpminspect enforces rules more strictly to ensure the packages remain compatible for users. A rebase is determined by looking at the package version number. If the package names match and the version numbers differ, it assumes you are comparing a rebased build. If the name and version number match but the release number is different, it assumes a maintenance update and more strict reporting is enabled.
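
As a rough illustration of that decision logic (a sketch of my own, not rpminspect's actual C implementation; the type and function names are made up), the name-version-release comparison could look something like this:

// Rough sketch of the rebase vs. maintenance-update decision described above.
// This is not rpminspect's actual code; the type and function names are
// hypothetical.
struct Nevr<'a> {
    name: &'a str,
    version: &'a str,
    release: &'a str,
}

/// Same package name but a different version number: treat as a rebase and
/// downgrade most VERIFY findings to INFO.
fn is_rebase(before: &Nevr, after: &Nevr) -> bool {
    before.name == after.name && before.version != after.version
}

/// Same name and version with a new release number: a maintenance update,
/// so the stricter reporting rules apply.
fn is_maintenance_update(before: &Nevr, after: &Nevr) -> bool {
    before.name == after.name
        && before.version == after.version
        && before.release != after.release
}

fn main() {
    let before = Nevr { name: "foo", version: "1.2", release: "3" };
    let rebased = Nevr { name: "foo", version: "1.3", release: "1" };
    let maint = Nevr { name: "foo", version: "1.2", release: "4" };

    assert!(is_rebase(&before, &rebased));
    assert!(is_maintenance_update(&before, &maint));
    println!("rebase detection sketch OK");
}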

Here is a summary of the changes in 1.1 since the last release (1.0):

Bug fixes:

  • Don’t assume we have a header or even a list of files (#161)
  • Fix memory corruption in init_rpminspect()
  • Add missing DESC_MOVEDFILES block to inspection_desc()
  • Ensure an int is used for snprintf() in inspect_manpage_path()
  • Only report permissions change if there is a mode_diff (#181)
  • Fix -Werror failures in inspect_abidiff.c
  • Expand find_one_peer() to soft match versioned ELF shared libraries
  • Be sure to close the open file before exiting init_fileinfo()
  • Handle builds that lack all debuginfo packages (#186)
  • Do not assume peer->after_hdr exists (#187)
  • Fix memory leaks in abi.c functions
  • open() failure in readfile() is not fatal, just return NULL
  • Avoid comparing elf files that are not shared libraries
  • Make sure to close open file descriptors from get_elf() calls

New inspections:

  • The %files inspection looks at the %files blocks in the package spec file and checks for forbidden paths as defined in the configuration settings
  • The types inspection compares MIME types between builds and reports changes for verification or informational purposes
  • The movedfiles inspection reports if a file moved between subpackages (#155)
  • The abidiff inspection runs ELF objects through abidiff(1) from the libabigail project to report breaking ABI changes
  • The kmidiff inspection compares Linux kernel images and module directories to check for Kernel Module Interface/KABI compatibility

Major changes:

  • Configuration files move to /usr/share/rpminspect and out of /etc/rpminspect
  • No default configuration file provided (this allows multiple rpminspect-data packages on the system)
  • The configuration file in rpminspect-data-generic is now named generic.yaml
  • Related to Making open source more inclusive by eradicating problematic language, rename some of the configuration files and internal data structures:
    • In the configuration file, rename ipv6_blacklist to forbidden_ipv6_functions
    • In /usr/share/rpminspect, rename stat-whitelist/ to fileinfo/
    • In the code, replace stat_whitelist with fileinfo
    • In the code, replace caps_whitelist with caps
    • In /usr/share/rpminspect, rename abi-checking-whitelist/ to abi/
    • In /usr/share/rpminspect, rename version-whitelist/ to rebaseable/
    • In /usr/share/rpminspect, rename political-whitelist/ to politics/ (this directory now contains per-release files with the format described in the example file)
  • Relicense librpminspect (lib/ and include/) as LGPL-3.0-or-later
  • License the rpminspect-data-generic subpackage as CC-BY-4.0
  • Add abi.c, the code that reads in the ABI compat level files (#144)
  • Add -n/--no-rebase command line option to disable rebase detection
  • Define new configuration file section for the abidiff inspection
  • Define new configuration file section for the kmidiff inspection

Inspections:

  • Only fail the annocheck inspection for RESULT_VERIFY
  • Read debuginfo if available when running the annocheck inspection
  • Skip debuginfo and debugsource packages in the types inspection
  • Report findings for rebased builds in movedfiles, addedfiles, and removedfiles as INFO changes
  • The movedfiles inspection runs before addedfiles and removedfiles to account for moves before the other changes
  • In the filesize inspection, multiply the file size difference before dividing
  • Enable permissions inspection for single build analysis
  • In the filesize inspection, drop extra - from the message about file shrinkage
  • Make sure all RESULT_INFO results are set to NOT_WAIVABLE
  • Fix some specific problems with the permissions inspection
  • Update test_symlink.py tests for new waiver_auth values
  • Report metadata changes for rebased packages as INFO
  • Do not fail the specname inspection when given a non-SRPM
  • For passing upstream inspections, do not report a remedy string
  • Do not fail the lostpayload inspections if it only gives INFO messages
  • Clarify unapproved license message in the license inspection

Test suite:

  • Add test suite cases for the %files inspection
  • Add test suite cases for the types inspection
  • Build execstack test program with -Wl,-z,lazy to eliminate BIND_NOW
  • Drop unnecessary method re-definitions in base test classes
  • Use super() rather than explicitly calling the parent class
  • Call configFile() on object instance rather than using the parent class
  • Improve the error reporting for test result checking
  • Add basic tests for the filesize inspection
  • Optionally check the result message
  • Add further filesize tests for shrinking files
  • Add 24 new test cases to cover the permissions inspection
  • Pass -r GENERIC to rpminspect in the TestCompareKoji class
  • If check_results() raises AssertionError, dump the JSON output
  • Fix test_changelog.py test cases that are failing
  • Fix UnbalancedChangeLogEditCompareKoji
  • Handle rpm versions with x.y.z.w version numbers in test_symlinks.py
  • Add test_addedfiles.py to the integration test suite
  • Update the test suite to cover rpmfluff 0.6
  • Add integration test cases for the abidiff inspection (#144)
  • Add 12 more permissions inspection test cases for setuid file checks

CI:

  • Migrate from Travis-CI to GitHub Actions
  • Create a top-level directory tree called osdeps/ with specifics for each operating system used in CI
  • Run CI on Fedora rawhide, Fedora latest stable, Debian testing, Ubuntu latest, OpenSUSE Leap, OpenSUSE Tumbleweed, CentOS 8, CentOS 7, and Arch Linux
  • Run flake8 and black for Python code in the test suite
  • Run ShellCheck on shell scripts in the tree
  • Upload coverage report to https://codecov.io/gh/rpminspect/rpminspect/
  • Add --diff to the Python format checker

Other Changes:

  • Formatting fixes for make help output
  • Add __attribute__((__sentinel__)) to the run_cmd() prototype
  • Modify add_entry() in init.c to skip duplicate entries
  • Change nls option in meson_options.txt to a boolean
  • Modify meson.build to work with xmlrpc-c installations that lack a pkgconfig script
  • Check all return values of getcwd()
  • Move top level docs to Markdown format
  • Add Makefile target to maintain the AUTHORS.md file
  • Add a copy of the Apache 2.0 license for the 5 files in librpminspect
  • Update the License tag in the rpminspect.spec.in template file and the %license lines
  • Support building on systems that lack <sys/queue.h>
  • Add sl_run_cmd() to librpminspect, which is like run_cmd() but it takes a string_list_t instead of a varargs list
  • Add init_arches() to librpminspect to create a cached ri->arches string_list_t
  • Move free_argv_table() to runcmd.c
  • Store copy of original pointer in strsplit() to free at the end
  • Use mmap() and strsplit() in read_file() rather than a getline() loop
  • Add utils/gate.sh for use from a .git/hooks/pre-push script to ensure changes to the C files in the tree will work across rpminspect runs for a handful of package builds
  • Have check_abi() pass back the ABI compat level found
  • Use read_file() in init_fileinfo(), init_caps(), validate_desktop_contents(), and disttag_driver()
  • Adjust how init_fileinfo() and init_caps() iterate over file contents
  • Fix tox -e format style problems found
  • Trim worksubdir from paths in reported abidiff(1) and kmidiff(1) commands
  • Use FOPEN_MAX for nopenfd parameter in nftw() calls
  • Make sure kmidiff is listed in the spec file

See https://github.com/rpminspect/rpminspect/releases/tag/v1.1 for more information. Builds are available in my Copr repository. I am currently waiting to build in the official Fedora and EPEL collections until my updates for the mandoc package reach the stable updates.

In addition to the new rpminspect release, there is also a new rpminspect-data-fedora release. This data file package contains updates that match the changes in this new release of rpminspect. The new rpminspect-data-fedora release is available in my Copr repo. It will be available in the official collections once the new rpminspect package is built.

SecureDrop package build breakage due to setuptools

Posted by Kushal Das on September 11, 2020 12:29 PM

A few days ago, the setuptools 50.0.0 release caused breakage in many projects. SecureDrop package builds were also broken. We use the dh-virtualenv tool to build the packages. Initially, we tried to use the experimental build system from dh-virtualenv, with which we could specify the version of setuptools to be installed in the virtualenv while creating it.

This approach worked for Xenial builds. As we are working to have proper builds on Focal (still work in progress), that was broken due to the above-mentioned change.

So, we again tried to use Python's venv module itself to create the virtual environment and use the wheels from the /usr/share/python-wheels directory to build it, which works very nicely on Xenial; but on Focal the default setuptools version is 44.0.0, which also failed to install the dependencies.

Now, we are actually getting the setuptools 46.0.0 wheel and replacing the build container's default setuptools wheel with it. The team spent a lot of time debugging and finding a proper fix for the package builds. Hopefully, we will not see a similar breakage from the same kind of dependency error soon (the actual package dependencies are pinned via hashes).

Untitled Post

Posted by Zach Oglesby on September 11, 2020 12:11 AM

Tried to use neovim for 3 months, can’t do it, going back to Emacs.

worse is better: making late buffer swaps tear

Posted by Adam Jackson on September 10, 2020 07:40 PM

In an ideal world, every frame your application draws would appear on the screen exactly on time. Sadly, as anyone living in the year 2020 CE can attest, this is far from an ideal world. Sometimes the scene gets more complicated and takes longer to draw than you estimated, and sometimes the OS scheduler just decides it has more important things to do than pay attention to you.

When this happens, for some applications, it would be best if you could just get the bits on the screen as fast as possible rather than wait for the next vsync. The Present extension for X11 has a option to let you do exactly this:

If 'options' contains PresentOptionAsync, and the 'target-msc'
is less than or equal to the current msc for 'window', then
the operation will be performed as soon as possible, not
necessarily waiting for the next vertical blank interval. 

But you don't usually use Present directly; rather, Present is the mechanism for GLX and Vulkan to put bits on the screen. So, today I merged some code to Mesa to enable the corresponding features in those APIs, namely GLX_EXT_swap_control_tear and VK_PRESENT_MODE_FIFO_RELAXED_KHR. If all goes well these should be included in Mesa 21.0, with a backport to 20.2.x not out of the question. As the GLX extension name suggests, this can introduce some visual tearing when the buffer swap does come in late, but for fullscreen games or VR displays that can be an acceptable tradeoff in exchange for reduced stuttering.
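
On the Vulkan side, an application opts in by requesting VK_PRESENT_MODE_FIFO_RELAXED_KHR for its swapchain and falling back to plain FIFO (which is always available) when the relaxed mode is not supported. A minimal, library-agnostic Rust sketch of that selection logic follows; the enum below is a hypothetical stand-in for the real Vulkan constants, not an actual binding:

// Sketch of present-mode selection: prefer FIFO_RELAXED (tear on late swaps),
// fall back to FIFO (strict vsync), which the spec guarantees is supported.
// `PresentMode` is a stand-in for the real VkPresentModeKHR values.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum PresentMode {
    Immediate,
    Mailbox,
    Fifo,        // VK_PRESENT_MODE_FIFO_KHR
    FifoRelaxed, // VK_PRESENT_MODE_FIFO_RELAXED_KHR
}

fn choose_present_mode(supported: &[PresentMode]) -> PresentMode {
    if supported.contains(&PresentMode::FifoRelaxed) {
        PresentMode::FifoRelaxed
    } else {
        PresentMode::Fifo
    }
}

fn main() {
    let supported = [
        PresentMode::Immediate,
        PresentMode::Mailbox,
        PresentMode::Fifo,
        PresentMode::FifoRelaxed,
    ];
    println!("using {:?}", choose_present_mode(&supported));
}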

James is available for hire!

Posted by James Just James on September 10, 2020 12:00 PM
TL;DR: I’m available for hire. Experienced at Linux/Golang/Mentoring/etc. More information below! A drawing of me coding away. A photo of me giving a talk to a sold-out crowd. Background: Two years ago, I left my job at Red Hat to work on mgmt config full-time. I’m still passionate about this project, and I’m proud of the progress that has been made in the last two years, but it’s time for me to explore new opportunities as well.

Elixir macros return AST

Posted by Josef Strzibny on September 10, 2020 12:00 AM

Macros are a powerful part of the Elixir language and projects such as Absinthe would not even be possible without them. To start writing your own macros in Elixir, one has to understand one simple thing: macro functions have to return a partial abstract syntax tree.

What are macros anyway?

The term macro in this sense comes from the times of assembly languages where a macro defines how to expand a single language statement into a number of instructions. It’s a shortcut on how to write less to achieve more. They also enable expressing something that is not possible in the core language.

In Elixir, many language constructs are in fact macros:

if true, do: "hello world"

In Elixir, the if statement above is a macro.

def my_function do
  IO.inspect ""
end

In Elixir, the def definition above is a macro as well!

What could be a practical example that might be a little more relevant to us?

Let’s imagine we have the following schema:

defmodule MyProject.Book do
  use Ecto.Schema

  schema "books" do
    field :author, :string
    field :iso, :string
    ...
  end
  ...
end

And an associated Absinthe schema:

defmodule MyProject.BookTypes do
  use Absinthe.Schema.Notation

  @desc "Book"
  object :book do
    field :author, :string
    field :iso, :string
    ...
  end
end

First of all, these examples are full of macros already. object is a macro, basically adding a language construct that helps us to define GraphQL objects. So is the field macro.

We see that macros can help us to extend a language or even create a domain-specific language (DSL). But it can just help us with plain and simple repetition as well.

Above, we list all the fields for the Ecto schema and then again for the GraphQL schema. While this gives us flexibility, what if we decide that they are indeed the same and that we just want to maintain one set of attributes?

Completely possible with macros:

defmodule MyProject.BookTypes do
  use Absinthe.Schema.Notation

  @desc "Book"
  object :book do
    magic_macro()
    ...

How? How can magic_macro() fill in what was a set of macro calls before?

By returning a part of abstract syntax tree.

Abstract Syntax Tree

An Abstract Syntax Tree (AST) is a tree representation of the abstract structure of a program, used for its execution flow.

If our macro returns part of the AST it will fit right into the rest of the program execution flow. It’s that simple.

But how do we find out what the AST should look like?

For that, we have to fully understand quoting and unquoting with quote and unquote.

Let’s look at our first example with quote:

iex(1)> quote do
...(1)>   if true, do: "hello world"
...(1)> end
{:if, [context: Elixir, import: Kernel], [true, [do: "hello world"]]}

What we see as the return value is, as you likely guessed, the AST. The building block of an Elixir program. It’s just an Elixir tuple, so nothing to worry about.

Actually… we call it a quoted expression in the Elixir world, as the docs put it:

When quoting more complex expressions, we can see that the code is represented in such tuples, which are often nested inside each other in a structure resembling a tree. Many languages would call such representations an Abstract Syntax Tree (AST). Elixir calls them quoted expressions…

quote is powerful because it will show us any AST representation.

What if we need to pass something in? What if our “Hello world” were a variable?

For that reason, we also get to have unquote:

iex(3)> hello = "Hello world!"
"Hello world!"
iex(4)> quote do
...(4)>   if true, do: unquote(hello)
...(4)> end
{:if, [context: Elixir, import: Kernel], [true, [do: "Hello world!"]]}

Writing a macro

In Elixir, we define a macro similarly to how we define a regular function.

However, we use defmacro (which is also a macro!) and we almost always return a quoted expression:

defmacro get_set_value(value) do
  quote do
    def unquote(:"get_#{value}")() do
      # get value...
    end

    def unquote(:"set_#{value}")() do
      # set value...
    end
  end
end

The snippet above demonstrates how to create a macro for defining getter and setter functions automatically for a given value:

...
get_set_value(:attribute)

When we compile our program with this macro, get_set_value would be replaced with those two functions, or rather with their AST.

We even saw a rather strange use of unquote too. Yes, it can be used in defining the function names just fine.

Nice. And what about our Absinthe example?

defmodule MyMacros do
  defmacro magic_macro() do
    {:__block__, [],
     Enum.map(MyProject.Book.fields(), fn field_name ->
       {:field, [], [field_name, :string]}
     end)}
  end
end

Since Absinthe is built around macros, I first confirmed using quote what Absinthe macros return. With that knowledge, and given that I have a list of the attributes returned by the fields() function, I can replicate this AST in my macro definition.

There is much more to macros, but just remember to return some AST quoted expressions!

Updated: Schrockwell also rightly noted that plain old functions can also be used to return ASTs at compile time, and that the main difference from macros is in the handling of the arguments.

Extract Function Refactoring using inline functions.

Posted by Adam Young on September 09, 2020 07:06 PM

The Extract Function refactoring is the starting point for much of my code clean up. Once a “Main” function gets sufficiently complicated, I pull pieces of it out into their own functions, often with an eye to making them methods of the involved classes.

While working with some Rust code, I encountered an opportunity to execute this refactoring on some logging code. Here’s how I executed it.

Here is what the code looks like at the start:

use std::net::{IpAddr, SocketAddr, UdpSocket};
use std::str::FromStr;

fn main() -> std::io::Result<()> {
    {
        let local_ip4 = IpAddr::from_str("0.0.0.0").unwrap();
        let listen4_port: u16  = 67;
        let socket = UdpSocket::bind(&SocketAddr::new(local_ip4, listen4_port))?;
        socket.set_broadcast(true).expect("set_broadcast call failed");
        // Receives a single datagram message on the socket. If `buf` is too small to hold
        // the message, it will be cut off.
        let mut buf = [0; 300];
        let (_amt, _src) = socket.recv_from(&mut buf)?;
        let boot_packet = build_boot_packet(&buf);
        
        println!("packet received");
        println!("opcode      = {0}", boot_packet.opcode);
        println!("hwtype      = {0}", boot_packet.hwtype);
        println!("hw addr len = {0}", boot_packet.hw_addr_len);
        println!("hop count   = {0}", boot_packet.hop_count);            
        println!("txn_id      = {:x}", boot_packet.txn_id);            
        println!("num_secs    = {:}", boot_packet.num_secs);
        println!("ips {0} {1} {2} {3}",
                 boot_packet.client_ip, boot_packet.your_ip,
                 boot_packet.server_ip,  boot_packet.gateway_ip);
        println!("Mac Addr:   = {:}", boot_packet.client_mac);            
        Ok(())
    }
}

My next task is to modify the data in the packet in order to send it back to the client. I know that I am going to want to log the data prior to writing it to the socket. That means I am going to end up needing those same log lines, possibly with additional entries.

I also know that I am going to want to see the difference when logging the packets. So…I decide first to implement the Copy trait on my structure, and make a duplicate of the packet so I can see the before and after.

        let boot_request = build_boot_packet(&buf);
        let mut boot_packet = boot_request;
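
For that second assignment to be a copy rather than a move, BootPacket has to implement Copy. Here is a minimal sketch of what the derive could look like; the field names and types are assumptions based on the log lines in this post, not the project's real definition:

// Sketch only: deriving Copy/Clone so that `let mut boot_packet = boot_request;`
// duplicates the packet instead of moving it. Field names and types are
// guesses based on the log lines in this post.
use std::net::Ipv4Addr;

#[allow(dead_code)] // most fields aren't read in this tiny demo
#[derive(Clone, Copy)]
struct BootPacket {
    opcode: u8,
    hwtype: u8,
    hw_addr_len: u8,
    hop_count: u8,
    txn_id: u32,
    num_secs: u16,
    client_ip: Ipv4Addr,
    your_ip: Ipv4Addr,
    server_ip: Ipv4Addr,
    gateway_ip: Ipv4Addr,
    client_mac: [u8; 6],
}

fn main() {
    let request = BootPacket {
        opcode: 1,
        hwtype: 1,
        hw_addr_len: 6,
        hop_count: 0,
        txn_id: 0xff77df41,
        num_secs: 4,
        client_ip: Ipv4Addr::new(0, 0, 0, 0),
        your_ip: Ipv4Addr::new(0, 0, 0, 0),
        server_ip: Ipv4Addr::new(0, 0, 0, 0),
        gateway_ip: Ipv4Addr::new(0, 0, 0, 0),
        client_mac: [0x52, 0x54, 0x00, 0x94, 0x9e, 0xf2],
    };
    let mut reply = request; // a copy, not a move: `request` stays usable
    reply.opcode = 2;
    println!("request opcode = {}, reply opcode = {}", request.opcode, reply.opcode);
}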

As always: run the code after making this change. I don’t have automated unit tests for this yet, as all I am doing is reading data off the wire…just saying that points in the direction of where my tests should go, and I will do that shortly.

For now, I spin up a virtual machine that makes a DHCP request, and I make sure that I can still see the log data.

Now, to extract the logging code, I first wrap that section in a local function. Make sure to call the function after it is defined:

        fn log_packet(boot_packet: &BootPacket){
        println!("packet received");
        println!("opcode      = {0}", boot_packet.opcode);
        println!("hwtype      = {0}", boot_packet.hwtype);
        println!("hw addr len = {0}", boot_packet.hw_addr_len);
        println!("hop count   = {0}", boot_packet.hop_count);
        println!("txn_id      = {:x}", boot_packet.txn_id);
        println!("num_secs    = {:}", boot_packet.num_secs);
        println!("ips {0} {1} {2} {3}",
                 boot_packet.client_ip, boot_packet.your_ip,
                 boot_packet.server_ip,  boot_packet.gateway_ip);
        println!("Mac Addr:   = {:}", boot_packet.client_mac);
        }
        log_packet(&boot_packet);

Again, run the test. If it runs successfully, continue on to moving the log_packet function up to the file level namespace. Run the tests.

The above steps are the main technique, and can be used to extract a function. I am going to take it one step further and convert it to a method implemented on struct BootPacket. The final code looks like this:

impl BootPacket {
    fn log_packet(&self){
        println!("packet received");
        println!("opcode      = {0}", self.opcode);
        println!("hwtype      = {0}", self.hwtype);
        println!("hw addr len = {0}", self.hw_addr_len);
        println!("hop count   = {0}", self.hop_count);
        println!("txn_id      = {:x}", self.txn_id);
        println!("num_secs    = {:}", self.num_secs);
        println!("ips {0} {1} {2} {3}",
                 self.client_ip, self.your_ip,
                 self.server_ip,  self.gateway_ip);
        println!("Mac Addr:   = {:}", self.client_mac);
    }
}
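
With the method in place, the call site in main shrinks to a single line (a sketch of the usage, not a quote from the project):

        boot_packet.log_packet();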

September Blog Post

Posted by Madeline Peck on September 09, 2020 02:00 PM

Happy Wednesday everyone!

I’ve shifted from working full time hours during the summer at Red Hat to working ten hours (give or take) part time remotely while I’m starting classes again. I’ve decided to also chat about my thesis work on here, and I’m still deciding whether or not to give it its own posts on the off weeks in between my intern posts, simply because neither probably has enough to talk about every week.

Let’s catch up though. The latest thing that’s been on my mind is preparing for a sketchnote session today on Hopin for a research talk, from 1:30-2:30 EST, by Jose Renau and Karsten Wade. So basically Jose will be giving a talk about Live Hardware Development at UCSC and Karsten will help lead the conversation, and then at 2:00 it will be an open round table discussion. While they talk and give their slides virtually, there will be artists sketching on screen about the topic, which is my job.

On Friday I met with Heidi Dempsey, Sarah Coghlan, and Mo Duffy to go over the website and program and make sure we were all clear on how it was going to work. During that session these were the doodles I came up with. I’m very intrigued by superheroes and detectives who are the champions of code, and besides drawing what I imagined Sarah’s dog to look like, and Heidi, that pretty much filled up the page.

Image: red+hat+hopin+doodles.jpg

I’ve been working on some doodles for ChRIS as well as the illustrations for the coloring book.

Image: Screenshot from 2020-09-08 21-07-09.png

How to install and play Elite Dangerous on Linux

Posted by Luca Ciavatta on September 09, 2020 01:10 PM

There are video games that have literally made the history of video games: Arkanoid, Galaga, Donkey Kong, Super Mario, R-Type, just to name a few. There are video games that, beyond their beauty, have conquered millions of players: PUBG, Fortnite, Destiny to stay in recent times. There are video games that have gone beyond the concept of video games: Flight Simulator, SimCity, Civilization, Age of Empires, and we stop there.

Elite Dangerous, like all episodes of the Elite saga, is part of all these fields and has rightfully acquired a golden place in the history of video games. It is not just a video game, but a real scientific simulator that responds to all the laws of physics and science. Space and the galaxies are its playing field, the spaceships its characters, the pilots its players. Elite Dangerous offers the opportunity to fulfill the dreams of those who grew up watching science fiction and marveling at the starry sky.

Unfortunately, Elite Dangerous doesn’t have a Linux version yet, and it has had significant issues with Wine. But things are changing (actually, they have already changed!) thanks to Steam, Proton, and some tricks to apply.

Play Elite Dangerous on your Linux. Hassle-free and fast

Enough with dual-booting, slow emulators, launch errors, and all the flaws that have characterized trying to play Elite Dangerous on Linux so far. Today, thanks to all the work done by the Steam team and their Proton, with just a few clicks and some terminal commands, we can play Elite Dangerous and have an incredible gaming experience.

So, let’s see how to install and play Elite Dangerous on Linux.

1- Install Steam on your Linux distro

For Debian-based distros (like Mint and Ubuntu), it’s easy peasy: double-click on the .deb file that you can get here: https://steamcdn-a.akamaihd.net/client/installer/steam.deb.

For Fedora, in the Software Repositories, make sure the RPM Fusion Nonfree – Steam repo is enabled and type in a terminal:

$ sudo dnf install steam

For openSUSE, just click on the 1-Click Install for your version on this page: https://software.opensuse.org/package/steam.

2- Enable Proton on Steam

On the Steam client, click on Steam in the upper-left corner and then Settings. In the Settings window, click on Steam Play and make sure the Enable Steam Play for supported titles and Enable Steam Play for all other titles checkboxes are checked. Also, select the Proton version you want to use from the drop-down menu (we suggest the latest if you don’t have issues). Lastly, restart the Steam client.

3- Buy and install Elite Dangerous on Steam

On the Steam client, buy and install the Elite Dangerous game. You can also start with the link from the Steam website: https://store.steampowered.com/app/359320/Elite_Dangerous/.

Wait until the game is fully installed, restart the Steam client and apply all the updates, if present.

Start the game.

You will see the game load and the Steam client show you that it is active. But, in truth, you won’t be able to play the game and you won’t even see a picture of Elite Dangerous.

Don’t worry, close the game, close the Steam client and go on.

4- Install Protontricks for your distribution

You can use pipx to install the latest version of Protontricks. pipx requires Python 3.6 or newer, and you will need to install pip, setuptools and virtualenv first. Install the correct packages depending on your distribution.

For Debian based:

$ sudo apt install python3-pip python3-setuptools python3-venv

For Fedora:

$ sudo dnf install python3-pip python3-setuptools python3-libs

For openSUSE:

$ sudo zypper in python3-pip python3-setuptools python3-libs

After installing pip and virtualenv, run the following commands to install pipx for the current user.

$ python3 -m pip install --user pipx
$ ~/.local/bin/pipx ensurepath

Close and reopen your terminal. After that, you can install protontricks.

$ pipx install protontricks

To upgrade to the latest release:

$ pipx upgrade protontricks

5- Use Protontricks to make Elite Dangerous working on Linux

As suggested above, make sure you have tried to launch the game at least once, and have protontricks set up before getting the dotnet framework.

Open a terminal window and type:

$ protontricks 359320 -q dotnet472 win7

Wait until the installation process has finished and close the terminal window.

6- Launch Elite Dangerous inside Steam

Open the Steam client again, and click PLAY on the Elite Dangerous game.

If the game crashes on the first launch, this is normal. Click on PLAY again and you will see that everything will be fine this time.

So, in the end, have a nice journey Commander!

References and resources

The post How to install and play Elite Dangerous on Linux appeared first on CIALU.NET.

Insider 2020-09: Prometheus; proxy; ESK;

Posted by Peter Czanik on September 09, 2020 10:09 AM

Dear syslog-ng users,


This is the 84th issue of syslog-ng Insider, a monthly newsletter that brings you syslog-ng-related news.

NEWS

Using a proxy with the http() destination

The http() destination is quickly becoming one of the most often used destinations within syslog-ng. You might already be using it even if you are not aware of it. Quite a few syslog-ng destination drivers are actually just configuration snippets in the syslog-ng configuration library (SCL), utilizing the http() destination in the background. Just think about elasticsearch-http(), different Logging as a Service (LaaS) providers, or slack(). Starting with syslog-ng version 3.28.1 you can also reach these services when there is a proxy server between syslog-ng and your destination.

https://www.syslog-ng.com/community/b/blog/posts/using-a-proxy-with-the-http-destination-of-syslog-ng

Prometheus: syslog-ng exporter

Recently Prometheus became one of the most used open source monitoring solutions. Quite a few people asked if a syslog-ng exporter is available. It is not part of syslog-ng, but there are numerous implementations available on GitHub. Now that Prometheus is part of the openSUSE Leap 15.2 release, which is the Linux distribution running on my laptop, I gave it a try. From this blog, you can learn how to compile the syslog-ng exporter for Prometheus yourself and get it working with Prometheus.

https://www.syslog-ng.com/community/b/blog/posts/prometheus-syslog-ng-exporter

Jump-starting ESK: Elasticsearch, syslog-ng and Kibana

If you want to test drive syslog-ng or just want to learn something new, I recommend checking out the BLACK ESK project. By running a single script, you can set up a containerized test environment, complete with Elasticsearch, Kibana and a syslog-ng server. All network connections among them are encrypted, and the syslog-ng configuration showcases many interesting syslog-ng features, including PatternDB and JSON parsing, GeoIP, in-list filtering and the new Elasticsearch destination. Once it is installed, all you need are some logs directed at this server and a browser to reach Kibana. You can learn a lot from reading through the setup scripts and the different configuration files.

https://www.syslog-ng.com/community/b/blog/posts/jump-starting-esk-elasticsearch-syslog-ng-and-kibana

NEW RELEASES

WEBINARS


Your feedback and news, or tips about the next issue are welcome. To read this newsletter online, visit: https://syslog-ng.com/blog/

New name for ABRT?

Posted by ABRT team on September 09, 2020 09:00 AM

The ABRT project started in 2009. The initial name was CrashWatcher, which very quickly changed to CrashCatcher, but within a month it got its final name, ABRT. ABRT is the name of a POSIX signal and stems from the word abort.

The ABRT project was meant as a tool to ease the life of Red Hat Support. Unfortunately, Red Hat Support never fully utilized ABRT (with some minor exceptions). I recently analyzed the use of ABRT, and its strengths lie with developers and DevOps. It can identify and help report bugs when new software or a major release comes out. DevOps teams can leverage the fact that it can identify crashes in their deployments and show them in a private instance of ABRT Analytics.

So we have a clear shift in the use.

The name ABRT has several cons: It is hard to pronounce. And a lot of people still remember when ABRT was unable to deduplicate bugs and spammed Bugzilla in its early days.

Therefore the ABRT team would like to rename ABRT to … something else, to indicate its new direction: an orientation towards developers and DevOps. But we are struggling with what the name should be.

Can you help us? Please fill in this form.

The name should not already be used by some different software project (this rules out CrashCatcher). It should be easy to pronounce. The name may be related to the function - collecting crashes.

Installing latest syslog-ng on openSUSE, RHEL and other RPM distributions

Posted by Peter Czanik on September 09, 2020 08:03 AM

The syslog-ng application is included in all major Linux distributions, and you can usually install syslog-ng from the official repositories. If the core functionality of syslog-ng meets your needs, use the package in your distribution repository (yum install syslog-ng), and you can stop reading here. However, if you want to use the features of newer syslog-ng versions (for example, sending log messages to Elasticsearch or Apache Kafka), you have to either compile syslog-ng from source or install it from unofficial repositories. This post explains how to do that.

For information on all platforms that could be relevant to you, check out all my blog posts about installing syslog-ng on major Linux distributions, collected in one place.

In addition, syslog-ng is also available as a Docker image. To learn more, read our tutorial about logging in Docker using syslog-ng.

Why is syslog-ng in my distro so old?

Most Linux distributions have a number of limitations. Of course these are not limitations in the traditional sense, rather ways of quality control.

  • Distribution releases are done on a schedule: after a release candidate is out, software in the distribution cannot be upgraded. This ensures that a known state of the distribution can be tested and polished, and external applications are installed on a stable base. But it also means that distributions include an older version of syslog-ng, which lags behind a few minor or major versions.
  • The use of bundled libraries is often prohibited. Some functionality of syslog-ng is only available in bundled libraries, either because it needs a modified version, or needs a version which is not yet available in distributions.
  • Distributions may lack certain dependencies (tools, sources) necessary to enable certain features in syslog-ng. This makes compiling Java-based destinations nearly impossible as most tools and dependencies are missing or have a different version than required by syslog-ng.

All of this means that syslog-ng in Linux distributions is locked to a given version with a limited feature set for anywhere from half a year to up to a decade, depending on the release cycle. Thus, the syslog-ng version included in old Linux versions can also be a decade old.

If you need a feature or fix not available for some reason in the distribution package, you can either compile syslog-ng for yourself or use one of the unofficial syslog-ng RPM repositories. Using the repositories is usually easier.

Where to find new rpm packages of syslog-ng?

We, the developers of syslog-ng, maintain several unofficial repositories for different distributions. The natural question is: why are these called “unofficial”? The short answer is: these packages are not officially supported by Balabit or a Linux distribution. If you need tested binaries, commercial support with guaranteed response times and other goodies, you either need a commercial Linux distribution which includes syslog-ng (see possible problems above), or the commercial syslog-ng Premium Edition developed by Balabit. We support the unofficial repositories on a best effort level, which is sometimes quicker than commercial support, but most often is not.

For deb-based distributions, we also maintain unofficial repositories, see https://www.syslog-ng.com/products/open-source-log-management/3rd-party-binaries.aspx

Which package to install?

You can use many log sources and destinations in syslog-ng. The majority of these require additional dependencies to be installed. If all of the features were included in a single package, installing syslog-ng would also install dozens of smaller and larger dependencies, including such behemoths as Java. This is why the syslog-ng package includes only the core functionality, and features requiring additional dependencies are available as sub-packages. The most popular sub-package is syslog-ng-java, which installs the Java-based big data destination drivers, like Elasticsearch, Kafka, and HDFS, but there are many others as well. Depending on your distribution, “yum search syslog-ng” or a similar command will list all the possibilities.

Installing syslog-ng on RHEL and CentOS 7 (& 8)

1. Depending on whether you have RHEL or CentOS 7, do the following:

  • On RHEL 7: Enable the so-called “optional” repository, which contains a number of packages that are required to start syslog-ng:
    subscription-manager repos --enable rhel-7-server-optional-rpms
  • On RHEL 8: Enable the so-called "supplementary" repository:
    subscription-manager repos --enable rhel-8-for-x86_64-supplementary-rpms
  • On CentOS: The content of this repo is included in CentOS, so you do not have to enable it there separately.

2. The Extra Packages for Enterprise Linux (EPEL) repository contains many useful packages which are not included in RHEL. A few dependencies of syslog-ng are available in this repo. You can enable it by downloading and installing an RPM package:

wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
rpm -Uvh epel-release-latest-7.noarch.rpm

3. Add the repo containing the latest unofficial build of syslog-ng. At the time of writing it is syslog-ng 3.29 and it is available on the Copr build service. Download the repo file to /etc/yum.repos.d/, so you can install and enable syslog-ng:

cd /etc/yum.repos.d/
wget https://copr.fedorainfracloud.org/coprs/czanik/syslog-ng329/repo/epel-7/czanik-syslog-ng329-epel-7.repo
yum install syslog-ng
systemctl enable syslog-ng
systemctl start syslog-ng

Add any further sub-packages you need.

4. It is not strictly required, but you can avoid some confusion, if you also delete rsyslog at the same time:

yum erase rsyslog

Installing syslog-ng on Fedora

Installation on Fedora is a lot simpler.

1. Download the repo file according to your distribution version from https://copr.fedorainfracloud.org/coprs/czanik/syslog-ng329/ and save it to the /etc/yum.repos.d/ directory. For example, on Fedora 31:

cd /etc/yum.repos.d/
wget https://copr.fedorainfracloud.org/coprs/czanik/syslog-ng329/repo/fedora-31/czanik-syslog-ng329-fedora-31.repo

2. Next install and enable syslog-ng:

dnf install syslog-ng
systemctl enable syslog-ng
systemctl start syslog-ng

Add any further sub-packages you need.

3. It is not strictly required, but you can avoid some confusion, if you also delete rsyslog at the same time:

dnf erase rsyslog

Install syslog-ng on openSUSE or SLES

1. First you need to add the repository containing syslog-ng and its dependencies. Open https://build.opensuse.org/project/show/home:czanik:syslog-ng329 and you will find repository URLs on the right hand side behind the links named after distributions. For example on SLES 15 SP1 you can use the following command to add the repository:

zypper ar https://download.opensuse.org/repositories/home:/czanik:/syslog-ng329/SLE_15_SP1_Backports/ syslog-ng329

2. Next you can install syslog-ng:

zypper in syslog-ng

Add any further sub-packages you need.

If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or even chat with us. For a list of possibilities, check our GitHub page under the “Community” section at https://github.com/syslog-ng/syslog-ng. On Twitter, I am available as @PCzanik.

Installing latest syslog-ng on openSUSE, RHEL and other RPM distributions

Posted by Peter Czanik on September 09, 2020 08:03 AM

The syslog-ng application is included in all major Linux distributions, and you can usually install syslog-ng from the official repositories. If the core functionality of syslog-ng meets your needs, use the package in your distribution repository (yum install syslog-ng), and you can stop reading here. However, if you want to use the features of newer syslog-ng versions (for example, sending log messages to Elasticsearch or Apache Kafka), you have to either compile syslog-ng from source, or install it from unofficial repositories. This post explains you how to do that.

For information on all platforms that could be relevant to you, check out all my blog posts about installing syslog-ng on major Linux distributions, collected in one place.

In addition, syslog-ng is also available as a Docker image. To learn more, read our tutorial about logging in Docker using syslog-ng.

Why is syslog-ng in my distro so old?

Most Linux distributions have a number of limitations. Of course these are not limitations in the traditional sense, rather ways of quality control.

  • Distribution releases are done on a schedule: after a release candidate is out, software in the distribution cannot be upgraded. This ensures that a known state of the distribution can be tested and polished, and external applications are installed on a stable base. But it also means that distributions include an older version of syslog-ng, which lags behind a few minor or major versions.
  • The use of bundled libraries is often prohibited. Some functionality of syslog-ng is only available in bundled libraries, either because it needs a modified version, or needs a version which is not yet available in distributions.
  • Distributions may lack certain dependencies (tools, sources) necessary to enable certain features in syslog-ng. This makes compiling Java-based destinations nearly impossible as most tools and dependencies are missing or have a different version than required by syslog-ng.

All of this means that syslog-ng in Linux distributions is locked to a given version with a limited feature set for anywhere from half a year to up to a decade, depending on the release cycle. Thus, the syslog-ng version included in old Linux versions can also be a decade old.

If you need a feature or fix not available for some reason in the distribution package, you can either compile syslog-ng for yourself or use one of the unofficial syslog-ng RPM repositories. Using the repositories is usually easier 

Where to find new rpm packages of syslog-ng?

We, the developers of syslog-ng maintain several unofficial repositories for different distributions. The natural question is: why are these called “unofficial”? The short answer is: these packages are not officially supported by Balabit or a Linux distribution. If you need tested binaries, commercial support with guaranteed response times and other goodies, you either need a commercial Linux distribution, which includes syslog-ng (see possible problems above), or the commercial syslog-ng Premium Edition developed by Balabit. We support the unofficial repositories on a best effort level, which is sometimes quicker than commercial support, but most often is not.

For deb-based distributions, we also maintain unofficial repositories, see https://www.syslog-ng.com/products/open-source-log-management/3rd-party-binaries.aspx

Which package to install?

You can use many log sources and destinations in syslog-ng. The majority of these require additional dependencies to be installed. If all of the features would be included in a single package, installing syslog-ng would also install dozens of smaller and larger dependencies, including such behemoths as Java. This is why the syslog-ng package includes only the core functionality, and features requiring additional dependencies are available as sub-packages. The most popular sub-package is syslog-ng-java, which installs the Java-based big data destination drivers, like Elasticsearch, Kafka, and HDFS, but there are many others as well. Depending on your distribution: “yum search syslog-ng” or a similar command will list you all the possibilities.

Installing syslog-ng on RHEL and CentOS 7 (& 8)

1. Depending on whether you have RHEL or CentOS 7, do the following:

  • On RHEL 7: Enable the so-called “optional” repository , which contains a number of packages that are required to start syslog-ng:
    subscription-manager repos --enable rhel-7-server-optional-rpms
  • On RHEL 8: Enable the so-called "suplementary" repository
    subscription-manager repos --enable rhel-8-for-x86_64-supplementary-rpms
  • On CentOS: The content of this repo is included CentOS, so you do not have to enable it there separately:

2. The Extra Packages for Enterprise Linux (EPEL) repository contains many useful packages, which are not included in RHEL. A few dependencies of syslog-ng are available this repo. You can enable it by downloading and installing an RPM package:

wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
rpm -Uvh epel-release-latest-7.noarch.rpm

3. Add the repo containing the latest unofficial build of syslog-ng. At the time of writing, that is syslog-ng 3.29, and it is available on the Copr build service. Download the repo file to /etc/yum.repos.d/, so you can install and enable syslog-ng:

cd /etc/yum.repos.d/
wget https://copr.fedorainfracloud.org/coprs/czanik/syslog-ng329/repo/epel-7/czanik-syslog-ng329-epel-7.repo
yum install syslog-ng
systemctl enable syslog-ng
systemctl start syslog-ng

Add any further sub-packages you need.

4. It is not strictly required, but you can avoid some confusion if you also remove rsyslog at the same time:

yum erase rsyslog
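To verify that the new syslog-ng package is in place and the service is running, a quick sanity check helps (the output will vary with the exact version installed):

syslog-ng --version
systemctl status syslog-ng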

Installing syslog-ng on Fedora

Installation on Fedora is a lot simpler.

1. Download the repo file according to your distribution version from https://copr.fedorainfracloud.org/coprs/czanik/syslog-ng329/ and save it to the /etc/yum.repos.d/ directory. For example, on Fedora 31:

cd /etc/yum.repos.d/
wget https://copr.fedorainfracloud.org/coprs/czanik/syslog-ng329/repo/fedora-31/czanik-syslog-ng329-fedora-31.repo

2. Next install and enable syslog-ng:

dnf install syslog-ng
systemctl enable syslog-ng
systemctl start syslog-ng

Add any further sub-packages you need.

3. It is not strictly required, but you can avoid some confusion if you also remove rsyslog at the same time:

dnf erase rsyslog

Install syslog-ng on openSUSE or SLES

1. First you need to add the repository containing syslog-ng and its dependencies. Open https://build.opensuse.org/project/show/home:czanik:syslog-ng329 and you will find the repository URLs on the right-hand side, behind the links named after the distributions. For example, on SLES 15 SP1 you can use the following command to add the repository:

zypper ar https://download.opensuse.org/repositories/home:/czanik:/syslog-ng329/SLE_15_SP1_Backports/ syslog-ng329

2. Next you can install syslog-ng:

zypper in syslog-ng

Add any further sub-packages you need.
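Unlike the sections above, this one does not show enabling the service. Assuming a systemd-based openSUSE or SLES release, you can likely do it the same way (the unit name may differ on older releases):

systemctl enable syslog-ng
systemctl start syslog-ng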

If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or even chat with us. For a list of possibilities, check our GitHub page under the “Community” section at https://github.com/syslog-ng/syslog-ng. On Twitter, I am available as @PCzanik.

Does social media bring you joy?

Posted by Michel Alexandre Salim on September 09, 2020 12:00 AM

the-social-dilemma

I deleted my Instagram accounts today. The process is needlessly obscure - the flows in the app and on the web interface only expose an option to temporarily disable your account - which is an alarming anti-pattern; it is indeed one of the walled gardens I alluded to in an earlier post.

What was the impetus for this, you might ask? There are multiple factors I took into consideration, but the final straw was watching Jeff Orlowski’s thought-provoking documentary, The Social Dilemma. It’s on Netflix right now, and is a highly recommended watch. Even as someone who has, for a while, been highly skeptical of algorithmic newsfeeds in particular, and the attention economy in general, this still sends chills down my spine. A lot of tech insiders have second thoughts about the products they built, worry about their impact on the world, and try to insulate their children from them.

Center for Humane Technology’s Tristan Harris (ex-Google ethicist) and Aza Raskin (infinite scroll inventor), as well as Jaron Lanier (a VR luminary and author of critical books such as Ten Arguments for Deleting Your Social Media Accounts Right Now), feature prominently in the documentary, along with ex-Facebook personnel such as Tim Kendall and Justin Rosenstein (remember the Like button?), ex-YouTube engineer Guillaume Chaslot (who built AlgoTransparency[1]), former Twitter executives, and academics such as Shoshana Zuboff. I risk name-dropping if I continue (ok, one more - Cathy O’Neil, author of Weapons of Math Destruction). TL;DR - watch for yourself!

To get a quick peek, this might whet your appetite:

  • Wired’s interview with Tristan Harris[2]
  • GQ’s interview with Jaron Lanier[3]

If you have read this far, I’m assuming some of the material above resonated, or you at least want to find out more. What I’ll do next is break down that ineffable “something is wrong” dissatisfaction I have into several aspects, and suggest ways to address, or at least mitigate, each particular aspect.

Too Big To Fail

You might have noticed that I deleted my Instagram accounts, but did not delete my Facebook account. There are work-related reasons I can’t get into for why I am not doing that, but many of you might not be able to quit for other reasons: Metcalfe’s Law means it’s hard to replace a network once it has achieved critical mass, and Facebook (and other Facebook-owned properties - especially WhatsApp) might be the only way you have of keeping in touch with some of your personal and work contacts – or even of accessing local government services!

Quitting cold turkey might not be an option, but - even if you’re otherwise happy with Facebook (or any other proprietary social network you can name) - is this dependence healthy? What if:

  • there is a technical glitch and Facebook is down
  • … and some of your content is lost
  • for political reasons access to Facebook is shut down
  • your account, rightly or wrongly, is considered in violation and gets suspended

The last is becoming more worrying as other Facebook-owned properties are increasingly being integrated with Facebook infrastructure. Oculus’ integration is in full swing, for instance[4][5].

So make sure you have alternate means of reaching your contacts. Keep your own address book (Facebook is an incomplete replacement anyway; you don’t store your friends’ and colleagues’ addresses there, do you?). Keep track of your friends’ birthdays there rather than relying on Facebook notifications. I would advise against using Google Contacts or Apple’s, but that’s a topic for another article; in any case, most address books, including those two tech giants’, are compliant enough with the CardDAV spec that getting your contacts’ data out is not as painful as getting it out of Facebook.

Make sure you have your contacts’ email addresses and/or phone numbers. Some messaging systems will auto-detect your contacts once you have their phone numbers (e.g. Signal, Telegram, and yes, WhatsApp); some others (IRC, Matrix) won’t, so be sure to check whether someone is reachable on some other network and make a note of it in your contacts database.

That algorithmic news feed

Many are increasingly worn out[6] by political posts in their news feeds. The news feed might be intrinsically polarizing[7]. Facebook won’t let you turn it off though - likewise with People You May Know[8]. One draws engagement, the other drives growth – even though it can sometimes be a privacy nightmare - think sex workers being outed to their johns, or patients being recommended to one another.

There are alternatives, though.

  • on the web, bookmark the parts of Facebook you want to visit and don’t visit www.facebook.com since that takes you to the News Feed. e.g. visit your groups directly
  • on mobile, you can do the same by using your web browser instead of the Facebook app
  • if you need an Android app, Frost for Facebook is great. It’s open source - so if you use the F-Droid build, you can inspect the code and be sure that what you are running corresponds to the published source. It lets you customize which parts of Facebook you have quick access to, and… it had dark mode long before the official app did
  • there are similar apps for iOS but I can’t vouch for them

If You’re Not Paying, Your Attention Is The Product

Why does the News Feed try so hard to keep you engaged? Think about how Facebook makes money - Senator, we run ads. More eyes on the app or site means more opportunities to show ads. If the most engaging content is polarizing and divisive, and advertisers can target those users who respond well to extreme fringe content… well, it’s not intentional, but because it sits at the intersection of the business model and deep-rooted psychology, it’s extremely hard to fix.

In an ideal world Facebook would give users the option to pay for the service - as Reddit already does. Sure, the most lucrative users are probably worth more as ad targets than as paying members, and many people in poorer countries might not be able to afford to pay. It’s questionable whether Facebook is serving the latter well, though[9]. As for the former – weaning the company off ads is probably a worthy goal, even if this might be as hard as weaning a petrostate off oil revenues.

Where Should I Go?

Ideally you start migrating to federated, open protocol platforms, with sustainable business models:

  • Matrix is a decentralized communication network, and Element, the company behind the main client and server implementations, sells paid hosting
  • Mastodon is the most common network on the Fediverse. It’s basically a decentralized Twitter; many Mastodon instances are supported by their users via subscriptions.

My ex-colleague Dmitry wrote an excellent breakout plan for switching to Mastodon.

I would otherwise recommend Signal, but (a) it is not decentralized, and (b) it has no stable business model as of now, so its long-term future is questionable.

For those contacts who insist on using Facebook or Messenger – well, Messenger Lite is still not monetized. You lose the ability to send money to, or request it from, your contacts, but see point 1 on being too big to fail.

Back to you

This is getting overly long, and it’s late at night, so I will probably stop here for now. I might post a follow-up if there are any points I did not think of addressing – and I’ll definitely follow up with further posts on how and why you might not want to use Gmail.

Would you get in trouble for this?

Hopefully not! I’m not recommending anything that is illegal, nor am I divulging any confidential information. A Facebook that is less dependent on advertising income might be more able to address a lot of the issues it now struggles with, after all.

Needless to say, the opinions expressed herein are my own and do not represent my employer’s views in any way. Nothing posted here should be considered official or sanctioned by my employer or any other organization I’m affiliated with.

This post is day 4 of my #100DaysToOffload challenge. Visit https://100daystooffload.com to get more info, or to get involved.

Have a comment on one of my posts? Start a discussion in my public inbox by sending an email to ~michel-slm/public-inbox@lists.sr.ht [ mailing list etiquette]

Posts are also tooted to @michel_slm@floss.social

Footnotes

[1] An ex-Google engineer is scraping YouTube to pop our filter bubbles, Technology Review
[2] Our Minds Have Been Hijacked by Our Phones. Tristan Harris Wants to Rescue Them, Wired
[3] The Conscience of Silicon Valley, GQ
[4] Facebook is making Oculus’ worst feature unavoidable, The Verge
[5] Oculus Connect is now ‘Facebook Connect’, The Verge
[6] 55% of U.S. social media users say they are ‘worn out’ by political posts and discussions, Pew Research
[7] How to think about polarization on Facebook, The Verge
[8] ‘People You May Know:’ A Controversial Facebook Feature’s 10-Year History, Gizmodo
[9] Facebook slammed by UN for its role in Myanmar genocide, Columbia Journalism Review

Summertime sadness

Posted by Felipe Borges on September 08, 2020 10:10 AM

Another summer is about to end and with it comes the autumn* with its typical leaf loss. There’s beauty to the leaves falling and turning yellow/orange, but there’s also an association with melancholia. The possibilities and opportunities of the summer are perceived to be gone, and the chill of the winter is on the horizon.

The weather changes set in at the same time our Google Summer of Code season comes to an end this year. For a couple of years, I have planned to write this blog post to our GSoC alumni, and considering the exceptional quality of our projects this year, I feel that another GSoC can’t go without me finally taking a shot at writing this.

Outreachy and GSoC have been critical to various free and open source communities such as ours. By empowering contributors to spend a few months working full-time on our projects, we are not only benefiting from the features that interns implement but also getting a chance to recruit talent that will continue pushing our project forward as generations pass.

“Volunteer time isn’t fungible” is a catchphrase, but there’s lots of truth to it. Many people cannot afford to contribute to FOSS in their free time. Inequality is on the rise everywhere and job security is a privilege. We cannot expect interns to continue delivering with the same bandwidth if they need to provide for themselves, their families, and/or work towards financial stability. Looking at ways to fund contributors is not a new discussion. Our friends at Tidelift and GitHub have been trying to tackle the problem from various fronts. Whether by making it easier for people to donate to volunteers or by helping volunteers get full-time jobs, the truth is that we are still far from sustainability.

So, if you are a mentor, please take some time to reflect on the reasons why your intern won’t continue participating in the project after the internship period ends and see what you can do to help them continue.

Some companies allow their employees to work on FOSS technologies, and our alumni have a proven record of contributions that can definitely help them land entry-level jobs. Therefore, referring interns to job opportunities within your company might be a great way to help. Some companies prioritize candidates referred by fellow employees, so your referral can be of great help.

If you are an intern, discuss your next steps with us and with your mentor. Reflect on your personal goals and on whether you want to build a career in FOSS. My personal advice is to be persistent. Lots of doors will close, but possibly the right one will open. You have the great advantage of having GSoC/Outreachy on your resume and a proven record of your contributions out in the open. Expand your portfolio by contributing bits that are important to you, and eventually recognition may come.

All in all, a career in FOSS isn’t guaranteed, and as branches grow in different ways, remember that the trunk still holds them together. Your roots are in GNOME and we are very proud to see our alumni thrive in the world, even far away from us.

*at least if you live outside the tropics, but that’s a topic I want to address on another blog post: the obstacles to a career in FOSS if you are coming from the global south.

Free download of the Web Application Security ebook

Posted by Fedora fans on September 08, 2020 06:30 AM

The Web Application Security book is published by O’Reilly and has been made available to everyone free of charge by NGINX. The ebook contains practical security tips and recommendations that development and security teams can put to use.

In general, this book will teach you the following:

  • About common vulnerabilities plaguing today’s web applications
  • How to deploy mitigations to protect your applications against hackers
  • Practical tips to help you improve the overall security of your web applications
  • How to map and document web applications for which you don’t have direct access

The author of this book is Andrew Hoffman. For more information and to download the Web Application Security book, visit the link below:

https://www.nginx.com/resources/library/web-application-security/

The post Free download of the Web Application Security ebook first appeared on Fedora Fans (طرفداران فدورا).

Episode 213 – Security Signals: What are you telling the world

Posted by Josh Bressers on September 07, 2020 01:57 AM

Josh and Kurt talk about how your actions can tell the world if you actually take security seriously. We frame the discussion in the context of Slack paying a very low bug bounty and discover some ways we can look at Slack and decide if they do indeed take our security very seriously.

Listen: https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_213_Security_Signals_What_are_you_telling_the_world.mp3

Show Notes

SNI-based load balancing with HAProxy

Posted by Juan Orti Alcaine on September 06, 2020 01:49 PM

In a bare-metal OpenShift installation you need to use an external load balancer to access the API and other services. In my home lab I also have a webserver accessible from the Internet, and I don't want to terminate the TLS connections in the load balancer, so that I can keep using the existing certificates on my webserver … Continue reading SNI-based load balancing with HAProxy
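The core idea of SNI-based load balancing is to peek at the TLS ClientHello and route on the requested server name without terminating TLS. A minimal sketch of such an HAProxy configuration might look like this (the hostnames, addresses and backend names are invented for illustration and are not taken from the article):

frontend tls-in
    bind :443
    mode tcp
    # wait for the TLS ClientHello so the SNI field can be inspected
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    # route on the requested hostname; TLS is passed through untouched
    use_backend openshift_api if { req_ssl_sni -i api.okd.example.lan }
    default_backend webserver

backend openshift_api
    mode tcp
    server master1 192.168.0.10:6443 check

backend webserver
    mode tcp
    server web1 192.168.0.20:443 check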

Fedora 32 : Can be better? part 007.

Posted by mythcat on September 05, 2020 09:17 AM
Another article in the Can be better? series that deals with a very popular feature called SELinux. In this seventh part I will introduce you to the world of SELinux in my own style, simply explaining some SELinux configurations.
Let's recap some basic elements specific to SELinux.
  • Multi Category Security (MCS) is a discretionary implementation of the mandatory Multi Level Security (MLS).
  • MCS basically reuses the MLS attributes: Security Levels and Security Compartments.
  • Systems with MCS implemented have one or more extra fields in their security context tuple, for example: user_u:role_r:type_t:s0:c0.
  • You can see this with id -Z.
  • The MLS range contains two components: the low (classification and compartments) and the high (clearance).
  • A sensitivity label is built from the low component, for example s2 with categories c1, c2, and so on.
  • MCS has 1024 categories that can be assigned to processes and files.
  • On an MLS system there are two special labels, SystemLow (s0) and SystemHigh (s15:c0.c255).
  • In an MCS environment, the upper end of the range, s0:c0.c1023, is SystemHigh.
  • By default, everything in an MCS environment has access to SystemLow, or s0.
  • A process cleared for both categories will be able to access files with the s0:c122 and s0:c123 categories.
  • The MLS translation mechanism gives a more literal meaning to the machine-like labels used in the MLS sensitivity and category declarations.
  • The MLS rule says: "no read up and no write down".
  • The MLS model is used to enforce confidentiality.
  • All processes are forced to operate with a Security Level.
  • The s0 Security Level, or SystemLow level, is the lower end of the Security Level range in an MLS environment.
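As a small, hypothetical illustration of MCS categories on files (the path and category are invented, and an MLS/MCS-aware policy must be active for the category to be meaningful):

ls -Z /srv/project/report.txt
chcon -l s0:c122 /srv/project/report.txt
ls -Z /srv/project/report.txt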
If you do not have the correct configuration, switching SELinux to Enforcing mode can generate errors after a reboot or during normal Linux operation.
You will need to have the root password at hand in case you have to recover or adjust the SELinux settings.
Let's solve this task: put SELinux into Enforcing mode, but give my user the ability to use the command sudo su.
First, review the standard SELinux users and what they are used for:
  • unconfined_u - SELinux user meant for unrestricted users. Unconfined users have hardly any restrictions in a SELinux context and are meant for systems where only Internet-facing services should run confined (i.e. the targeted SELinux policy store). Used for: all users on a targeted system.
  • root - The SELinux user meant for the root account. Used for: the Linux root account.
  • sysadm_u - SELinux user with a direct system administrative role assigned. Used for: Linux accounts that only perform administrative tasks.
  • staff_u - SELinux user for operators that need to run both non-administrative commands (through the staff_r role) and administrative commands (through the sysadm_r role). Used for: Linux accounts used both for end-user activity and administrative tasks.
  • user_u - SELinux user for non-privileged accounts. Used for: unprivileged Linux accounts.
  • system_u - Special SELinux user meant for system services. Used for: not used directly.
I need to change my user mythcat to staff_u with a suitable MLS range.
[root@desk mythcat]# semanage login --modify --seuser staff_u --range s2:c100 mythcat
[root@desk mythcat]# semanage login --modify --seuser staff_u --range s0-s15:c0.c1023 mythcat
[root@desk mythcat]# semanage login -l
[root@desk mythcat]# setenforce enforcing
[root@desk mythcat]# getenforce
Enforcing
[root@desk mythcat]# semanage login -l
ValueError: Cannot read policy store.
After a reboot it takes some time for the new changes to be loaded; at first, the previous configuration is still reported.
[mythcat@desk ~]$ semanage login -l
ValueError: SELinux policy is not managed or store cannot be accessed.
[mythcat@desk ~]$ id -Z
staff_u:staff_r:staff_t:s0-s15:c0.c1023
[mythcat@desk ~]$ sestatus
SELinux status: enabled
SELinuxfs mount: /sys/fs/selinux
SELinux root directory: /etc/selinux
Loaded policy name: mls
Current mode: permissive
Mode from config file: permissive
Policy MLS status: enabled
Policy deny_unknown status: denied
Memory protection checking: actual (secure)
Max kernel policy version: 33
A few seconds later everything is fine:
[mythcat@desk ~]$ sudo su 
[sudo] password for mythcat:
bash: /root/.bashrc: Permission denied
bash-5.0# ls
bash-5.0# sestatus
SELinux status: enabled
SELinuxfs mount: /sys/fs/selinux
SELinux root directory: /etc/selinux
Loaded policy name: mls
Current mode: enforcing
Mode from config file: permissive
Policy MLS status: enabled
Policy deny_unknown status: denied
Memory protection checking: actual (secure)
Max kernel policy version: 33
bash-5.0# id -Z
staff_u:staff_r:staff_t:s0-s15:c0.c1023
bash-5.0# exit
exit
[mythcat@desk ~]$ sestatus
SELinux status: enabled
SELinuxfs mount: /sys/fs/selinux
SELinux root directory: /etc/selinux
Loaded policy name: mls
Current mode: enforcing
Mode from config file: permissive
Policy MLS status: enabled
Policy deny_unknown status: denied
Memory protection checking: actual (secure)
Max kernel policy version: 33
Everything is fine for now; the delay is caused by the kernel loading the new SELinux policy settings. More information about Multi-Level Security and Multi-Category Security can be found on this webpage.