Fedora People

Fedora 33 : Install and test with MonoGame .NET library .

Posted by mythcat on November 30, 2020 04:34 PM
MonoGame is a simple and powerful .NET library for creating games for desktop PCs, video game consoles, and mobile devices.
Today I tested with Fedora 33.
First, let's install the templates for the .NET Core CLI and the Rider IDE:
[mythcat@desk ~]$ dotnet new --install MonoGame.Templates.CSharp

Welcome to .NET Core 3.1!
---------------------
SDK Version: 3.1.109

----------------
Explore documentation: https://aka.ms/dotnet-docs
Report issues and find source on GitHub: https://github.com/dotnet/core
Find out what's new: https://aka.ms/dotnet-whats-new
Learn about the installed HTTPS developer cert: https://aka.ms/aspnet-core-https
Use 'dotnet --help' to see available commands or visit: https://aka.ms/dotnet-cli-docs
Write your first app: https://aka.ms/first-net-core-app
--------------------------------------------------------------------------------------
Getting ready...
Restore completed in 2.76 sec for /home/mythcat/.templateengine/dotnetcli/v3.1.109/scratch/restore.csproj.
...
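To confirm that the templates were registered, you can list the installed templates (a quick check, not part of the original session):
dotnet new --list | grep -i monogame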
Install the MGCB Editor, a tool for editing .mgcb files:
[mythcat@desk ~]$ dotnet tool install --global dotnet-mgcb-editor
You can invoke the tool using the following command: mgcb-editor
Tool 'dotnet-mgcb-editor' (version '3.8.0.1641') was successfully installed.
[mythcat@desk ~]$ mgcb-editor --register
Installing icon...
Installation complete!
Installing mimetype...
gtk-update-icon-cache: No theme index file.
Installation complete!
Installing application...
Installation complete!

Registered MGCB Editor!
Install the C# extension for the code editor:
[mythcat@desk ~]$ code --install-extension ms-dotnettools.csharp
Installing extensions...
Extension 'ms-dotnettools.csharp' is already installed.
The next step is to create a new MonoGame project:
[mythcat@desk ~]$ cd CSharpProjects/
[mythcat@desk CSharpProjects]$ dotnet new mgdesktopgl -o MyGame001
The template "MonoGame Cross-Platform Desktop Application (OpenGL)" was created successfully.

An update for template pack MonoGame.Templates.CSharp::3.8.0.1641 is available.
install command: dotnet new -i MonoGame.Templates.CSharp::3.8.0.1641
I ran the default template and it works well:
[mythcat@desk CSharpProjects]$ cd MyGame001
[mythcat@desk MyGame001]$ ls
app.manifest Content Game1.cs Icon.bmp Icon.ico MyGame001.csproj Program.cs
[mythcat@desk MyGame001]$ dotnet run Program.cs
The default source code from the Program.cs file is this:
using System;

namespace MyGame001
{
    public static class Program
    {
        [STAThread]
        static void Main()
        {
            using (var game = new Game1())
                game.Run();
        }
    }
}

Fedora 33 : build and publish with .NET Core SDK .

Posted by mythcat on November 30, 2020 02:59 PM
In this tutorial I will show you how you can use the .NET Core SDK to build and run the simple MyGame001 application from the last tutorial.
Let's go to the project and get some info:
[mythcat@desk ~]$ cd CSharpProjects/
[mythcat@desk CSharpProjects]$ cd MyGame001/
[mythcat@desk MyGame001]$ dotnet --info
.NET Core SDK (reflecting any global.json):
Version: 3.1.109
Commit: 32ced2d411

Runtime Environment:
OS Name: fedora
OS Version: 33
OS Platform: Linux
RID: fedora.33-x64
Base Path: /usr/lib64/dotnet/sdk/3.1.109/

Host (useful for support):
Version: 3.1.9
Commit: 774fc3d6a9

.NET Core SDKs installed:
3.1.109 [/usr/lib64/dotnet/sdk]

.NET Core runtimes installed:
Microsoft.AspNetCore.App 3.1.9 [/usr/lib64/dotnet/shared/Microsoft.AspNetCore.App]
Microsoft.NETCore.App 3.1.9 [/usr/lib64/dotnet/shared/Microsoft.NETCore.App]

To install additional .NET Core runtimes or SDKs:
https://aka.ms/dotnet-download
The next command will create MyGame001.dll.
[mythcat@desk MyGame001]$ dotnet publish --configuration Release --runtime fedora.33-x64 --self-contained false
Microsoft (R) Build Engine version 16.4.0+e901037fe for .NET Core
Copyright (C) Microsoft Corporation. All rights reserved.

Restore completed in 50.58 ms for /home/mythcat/CSharpProjects/MyGame001/MyGame001.csproj.
MyGame001 -> /home/mythcat/CSharpProjects/MyGame001/bin/Release/netcoreapp3.1/fedora.33-x64/MyGame001.dll
MyGame001 -> /home/mythcat/CSharpProjects/MyGame001/bin/Release/netcoreapp3.1/fedora.33-x64/publish/
Now you can run it with the dotnet command:
[mythcat@desk MyGame001]$ dotnet /home/mythcat/CSharpProjects/MyGame001/bin/Release/netcoreapp3.1/fedora.33-x64/MyGame001.dll
You can simply run it like a Linux application:
[mythcat@desk MyGame001]$ /home/mythcat/CSharpProjects/MyGame001/bin/Release/netcoreapp3.1/fedora.33-x64/publish/MyGame001 
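If the target machine does not have the .NET Core runtime installed, a self-contained publish is an option (a hedged variant of the command above, not from the original post; it produces a much larger publish directory that bundles the runtime):
dotnet publish --configuration Release --runtime fedora.33-x64 --self-contained true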

Install PHP 8.0 on CentOS, RHEL or Fedora

Posted by Remi Collet on November 30, 2020 02:10 PM

Here is a quick howto for upgrading the default PHP version provided on Fedora, RHEL, or CentOS to the latest version, 8.0.

You can also follow the Wizard instructions.

 

Repositories configuration:

On Fedora, the standard repositories are enough; on Enterprise Linux (RHEL, CentOS) the Extra Packages for Enterprise Linux (EPEL) repository must be configured, and on RHEL the optional channel must be enabled.

Fedora 33

dnf install https://rpms.remirepo.net/fedora/remi-release-33.rpm

Fedora 32

dnf install https://rpms.remirepo.net/fedora/remi-release-32.rpm

RHEL version 8.3

dnf install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
dnf install https://rpms.remirepo.net/enterprise/remi-release-8.rpm

RHEL version 7.9

wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
wget https://rpms.remirepo.net/enterprise/remi-release-7.rpm
rpm -Uvh remi-release-7.rpm epel-release-latest-7.noarch.rpm
subscription-manager repos --enable=rhel-7-server-optional-rpms

CentOS version 8

dnf install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
dnf install https://rpms.remirepo.net/enterprise/remi-release-8.rpm

CentOS version 7

wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
wget https://rpms.remirepo.net/enterprise/remi-release-7.rpm
rpm -Uvh remi-release-7.rpm epel-release-latest-7.noarch.rpm

 

php module usage

With Fedora Modular and RHEL / CentOS 8, you can simply use the remi-8.0 stream of the php module:

dnf module reset php
dnf module install php:remi-8.0

 

remi-php80 repository activation

The needed packages are in the remi-safe (enabled by default) and remi-php80 repositories; the latter is not enabled by default (an administrator choice according to the desired PHP version).

RHEL or CentOS 7

yum install yum-utils
yum-config-manager --enable remi-php80

Fedora

dnf config-manager --set-enabled remi-php80

 

PHP upgrade

By choice, the packages have the same name as in the distribution, so a simple update is enough:

yum update

That's all :)

$ php -v
PHP 8.0.0 (cli) (built: Nov 24 2020 17:04:03) ( NTS gcc x86_64 )
Copyright (c) The PHP Group
Zend Engine v4.0.0-dev, Copyright (c) Zend Technologies
    with Zend OPcache v8.0.0, Copyright (c), by Zend Technologies

 

Known issues

The upgrade can fail (by design) when some installed extensions are not yet compatible with  PHP 8.0.

See the compatibility tracking list: PECL extensions RPM status

If these extensions are not mandatory, you can remove them before the upgrade; otherwise, you will have to be patient.
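For example, a possible way to check for and remove such an extension before the upgrade (the extension name below is only an illustration; use the package names reported on your own system, and yum instead of dnf on EL7):

rpm -qa 'php-pecl*'
dnf remove php-pecl-xdebug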

Warning: some extensions are still under development, but it seems useful to provide them to allow more people to upgrade, and to allow users to give feedback to the authors.

 

More information

If you prefer to install PHP 8.0 beside the default PHP version, this can be achieved using the php80 prefixed packages, see the PHP 8.0 as Software Collection post.

You can also try the configuration wizard.

The packages available in the repository will be used as sources for Fedora 35 (if self contained change proposal is accepted).

By providing a full-featured PHP stack, with about 130 available extensions, 7 PHP versions, as base and SCL packages, for Fedora and Enterprise Linux, and with 300,000 downloads per day, the remi repository has become, over the last 15 years, a reference for PHP users on RPM-based distributions, maintained by an active contributor to the projects (Fedora, PHP, PECL...).

See also:

Release 5.6.1

Posted by Bodhi on November 30, 2020 12:52 PM

v5.6.1

This is a bugfix release.

Bug fixes

Fix two reflected XSS vulnerabilities - CVE: CVE-2020-15855

Contributors

The following developers contributed to this release of Bodhi:

  • Patrick Uiterwijk

Release 5.6

Posted by Bodhi on November 30, 2020 12:00 PM

v5.6

This is a feature release.

Dependency changes

  • Drop support for bleach 1.0 api (:pr:3875).
  • Markdown >= 3.0 is now required (:pr:4134).

Server upgrade instructions

This release contains database migrations. To apply them, run:

$ sudo -u apache /usr/bin/alembic -c /etc/bodhi/alembic.ini upgrade head

Features

  • Added a from_side_tag bool search parameter for Updates and allow searching
    for that and for gating status from WebUI (:pr:4119).
  • Allow overriding critpath.stable_after_days_without_negative_karma based on
    release status (:pr:4135).
  • Users who own a side-tag can now create updates from that side-tag even if
    it contains builds for which they don't have commit access (:issue:4014).

Bug fixes

  • Fix encoding of package and user names in search results (:pr:4104).
  • Fix autotime display on update page (:pr:4110).
  • Set update.stable_days to 0 for Releases not composed by Bodhi itself
    (:pr:4111).
  • Ignore builds in Unpushed updates when checking for duplicate builds
    (:issue:1809).
  • Make automatic updates obsolete older updates stuck in testing due to failing
    gating tests (:issue:3916).
  • Fix 404 pages for bot users with nonstandard characters in usernames
    (:issue:3993).
  • Fixed documentation build with Sphinx3 (:issue:4020).
  • Serve the documentation directly from the WSGI application using WhiteNoise.
    (:issue:4066).
  • Updates from side-tag for non-rawhide releases were not pushed to testing
    (:issue:4087).
  • Side-tag updates builds were not editable in the WebUI (:issue:4122).
  • Fixed "re-trigger tests" button not shown on update page (:issue:4144).
  • Fixed a crash in automatic_updates handler due to get_changelog() returning
    an unhandled exception (:issue:4146).
  • Fixed a crash in automatic_updates handler due to trying access update.alias
    after the session was closed (:issue:4147).
  • Some comments orphaned from their update were causing internal server
    errors. We now enforce a not-null check so that a comment cannot be created
    without associating it with an update. The orphaned comments are removed from
    the database by the migration script. (:issue:4155).
  • Dockerfile for pip CI tests has been fixed (:issue:4158).

Development improvements

  • Rename Release.get_testing_side_tag() to get_pending_testing_side_tag()
    to avoid confusion (:pr:4109).
  • Added F33 to tests pipeline (:pr:4132).

Contributors

The following developers contributed to this release of Bodhi:

  • Adam Saleh
  • Clement Verna
  • Justin Caratzas
  • Jonathan Wakely
  • Karma Dolkar
  • Mattia Verga
  • Pierre-Yves Chibon
  • Rayan Das
  • Sebastian Wojciechowski

EOL of EL6 and EL7 and removal of Customer Portal support

Posted by ABRT team on November 30, 2020 11:00 AM

Packages will no longer be built for EL6 and EL7

For a long time, we talked about EOLing EL6 and EL7. What does it actually mean?

RHEL 6 is going to be EOL at the end of this month (30th November 2020). We will no longer build ABRT packages for EL6 and we will stop supporting EL6 as content.

RHEL 7 is still active but quite old. The ABRT team will stop testing, building, and developing on top of RHEL 7. We are going to focus on RHEL 8, upcoming RHEL releases, and Fedora. We still support RHEL 7 as content (e.g., in ABRT Analytics).

This is a description of the current de-facto state. The last release of ABRT Analytics was already not built for RHEL 7.

End of reporting to Customer Portal

The Red Hat Customer Portal was migrated to a new backend that does not have the API we need, and as a result, we removed reporting to the Customer Portal from ABRT (libreport, to be precise).

Post Json API using curl

Posted by Robbi Nespu on November 30, 2020 10:23 AM

I developed a RESTful API as the communication layer between our software and the client. We let the endpoints of our API talk to each other, and it is quite simple to run small tests using Postman or SOAP-UI, but testing with massive amounts of data via the API is quite a headache.

Luckily, I am good with unit tests, so since our system is developed in Java, I use JUnit as a helper to automate the tests. It looks nice, but somehow I still have issues running remote tests with JUnit from my Eclipse IDE. That is all because the remote server we are connecting to is on the customer's premises and the connection is so bad!

As a guy who loves old-school tricks, I thought of using curl to POST the data to the system's API endpoint, and I was right. The server is much more responsive handling the data that way.

Here I share how I use curl to POST data to our API endpoint locally (on the server).

Let's see: here I have 212,717 JSON files that I want to use to test the endpoint. All filenames are unique and properly sorted.

$ find . -type f | wc -l
212717

If I want to POST the data from a single file, I just need to run a command like this from the directory that stores my data files:

$ curl -X POST -H "Content-Type: application/json"  -d @TC-00001.json  http://localhost:8182/order

If I want to POST the data from all the files, I simply need to execute this command inside the directory that stores my data files:

for f in TC-*.json; do printf "\nLoad ${f} - " && curl -X POST -H "Content-Type: application/json" --data @${f} http://localhost:8182/order;done

Simple, right? As long as you have the JSON as files, you can use this technique to test your system's endpoint :)
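If the endpoint can handle concurrent requests, a variation of the loop above can use xargs to send several files in parallel (a sketch on my side; the four workers and the localhost URL are assumptions to adjust):

find . -name 'TC-*.json' -print0 | xargs -0 -P 4 -I{} curl -s -X POST -H "Content-Type: application/json" --data @{} http://localhost:8182/order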

Getting started with Stratis – up and running

Posted by Fedora Magazine on November 30, 2020 08:00 AM

When adding storage to a Linux server, system administrators often use commands like pvcreate, vgcreate, lvcreate, and mkfs to integrate the new storage into the system. Stratis is a command-line tool designed to make managing storage much simpler. It creates, modifies, and destroys pools of storage. It also allocates and deallocates filesystems from the storage pools.

Instead of an entirely in-kernel approach like ZFS or Btrfs, Stratis uses a hybrid approach with components in both user space and kernel land. It builds on existing block device managers like device mapper and existing filesystems like XFS. Monitoring and control is performed by a user space daemon.

Stratis tries to avoid some ZFS characteristics like restrictions on adding new hard drives or replacing existing drives with bigger ones. One of its main design goals is to achieve a positive command-line experience.

Install Stratis

Begin by installing the required packages. Several Python-related dependencies will be automatically pulled in. The stratisd package provides the stratisd daemon which creates, manages, and monitors local storage pools. The stratis-cli package provides the stratis command along with several Python libraries.

# yum install -y stratisd stratis-cli

Next, enable the stratisd service.

# systemctl enable --now stratisd

Note that the “enable --now” syntax shown above both permanently enables and immediately starts the service.

After determining what disks/block devices are present and available, the three basic steps to using Stratis are:

  1. Create a pool of the desired disks.
  2. Create a filesystem in the pool.
  3. Mount the filesystem.

In the following example, four virtual disks are available in a virtual machine. Be sure not to use the root/system disk (/dev/vda in this example)!

# sfdisk -s
/dev/vda:  31457280
/dev/vdb:   5242880
/dev/vdc:   5242880
/dev/vdd:   5242880
/dev/vde:   5242880
total:  52428800 blocks

Create a storage pool using Stratis

# stratis pool create testpool /dev/vdb /dev/vdc
# stratis pool list
Name Total Physical Size  Total Physical Used
testpool   10 GiB         56 MiB

After creating the pool, check the status of its block devices:

# stratis blockdev list
Pool Name   Device Node Physical Size   State  Tier
testpool    /dev/vdb            5 GiB  In-use  Data
testpool    /dev/vdc            5 GiB  In-use  Data

Create a filesystem using Stratis

Next, create a filesystem. As mentioned earlier, Stratis uses the existing DM (device mapper) and XFS filesystem technologies to create thinly-provisioned filesystems. By building on these existing technologies, large filesystems can be created and it is possible to add physical storage as storage needs grow.

# stratis fs create testpool testfs
# stratis fs list
Pool Name  Name   Used     Created                                 Device                    UUID
testpool   testfs 546 MiB  Apr 18 2020 09:15 /stratis/testpool/testfs  095fb4891a5743d0a589217071ff71dc

Note that “fs” in the example above can optionally be written out as “filesystem”.

Mount the filesystem

Next, create a mount point and mount the filesystem.

# mkdir /testdir
# mount /stratis/testpool/testfs /testdir
# df -h | egrep 'stratis|Filesystem'
Filesystem                                 Size  Used Avail Use% Mounted on
/dev/mapper/stratis-1-3e8e[truncated]71dc  1.0T  7.2G 1017G   1% /testdir

The actual space used by a filesystem is shown using the stratis fs list command demonstrated previously. Notice how the testdir filesystem has a virtual size of 1.0T. If the data in a filesystem approaches its virtual size, and there is available space in the storage pool, Stratis will automatically grow the filesystem. Note that beginning with Fedora 34, the form of device path will be /dev/stratis/<pool-name>/<filesystem-name>.

Add the filesystem to fstab

To configure automatic mounting of the filesystem at boot time, run the following commands:

# UUID=`lsblk -n -o uuid /stratis/testpool/testfs`
# echo "UUID=${UUID} /testdir xfs defaults 0 0" >> /etc/fstab
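One optional refinement (an assumption on my part, not from the original article): adding x-systemd.requires=stratisd.service to the mount options makes systemd wait for the stratisd service before mounting the filesystem at boot, so the fstab entry would look like this:

UUID=<filesystem-uuid> /testdir xfs defaults,x-systemd.requires=stratisd.service 0 0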

After updating fstab, verify that the entry is correct by unmounting and mounting the filesystem:

# umount /testdir
# mount /testdir
# df -h | egrep 'stratis|Filesystem'
Filesystem                                 Size  Used Avail Use% Mounted on
/dev/mapper/stratis-1-3e8e[truncated]71dc  1.0T  7.2G 1017G   1% /testdir

Adding cache devices with Stratis

Suppose /dev/vdd is an available SSD (solid state disk). To configure it as a cache device and check its status, use the following commands:

# stratis pool add-cache testpool  /dev/vdd
# stratis blockdev
Pool Name   Device Node Physical Size  State    Tier
testpool    /dev/vdb            5 GiB  In-use   Data
testpool    /dev/vdc            5 GiB  In-use   Data
testpool    /dev/vdd            5 GiB  In-use   Cache

Growing the storage pool

Suppose the testfs filesystem is close to using all the storage capacity of testpool. You could add an additional disk/block device to the pool with commands similar to the following:

# stratis pool add-data testpool /dev/vde
# stratis blockdev
Pool Name  Device Node Physical Size   State    Tier
testpool   /dev/vdb            5 GiB   In-use   Data
testpool   /dev/vdc            5 GiB   In-use   Data
testpool   /dev/vdd            5 GiB   In-use   Cache
testpool   /dev/vde            5 GiB   In-use   Data

After adding the device, verify that the pool shows the added capacity:

# stratis pool
Name      Total Physical Size   Total Physical Used
testpool               15 GiB               606 MiB

Conclusion

Stratis is a tool designed to make managing storage much simpler. Creating a filesystem with enterprise functionalities like thin-provisioning, snapshots, volume management, and caching can be accomplished quickly and easily with just a few basic commands.
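As a quick illustration of the snapshot functionality mentioned above (a sketch reusing the pool and filesystem names from this article; the exact syntax may differ between Stratis versions):

# stratis fs snapshot testpool testfs testfs-snapshot
# stratis fs list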

See also Getting Started with Stratis Encryption.

Accurate Conclusions from Bogus Data: Methodological Issues in “Collaboration in the open-source arena: The WebKit case”

Posted by Michael Catanzaro on November 29, 2020 11:46 PM

Nearly five years ago, when I was in grad school, I stumbled across the paper Collaboration in the open-source arena: The WebKit case when trying to figure out what I would do for a course project in network theory (i.e. graph theory, not computer networking; I’ll use the words “graph” and “network” interchangeably). The paper evaluates collaboration networks, which are graphs where collaborators are represented by nodes and relationships between collaborators are represented by edges. Our professor had used collaboration networks as examples during lecture, so it seemed at least mildly relevant to our class, and I wound up writing a critique on this paper for the class project. In this paper, the authors construct collaboration networks for WebKit by examining the project’s changelog files to define relationships between developers. They perform “community detection” to visually group developers who work closely together into separate clusters in the graphs. Then, the authors use those graphs to arrive at various conclusions about WebKit (e.g. “[e]ven if Samsung and Apple are involved in expensive patent wars in the courts and stopped collaborating on hardware components, their contributions remained strong and central within the WebKit open source project,” regarding the period from 2008 to 2013).

At the time, I contacted the authors to let them know about some serious problems I found with their work. Then I left the paper sitting in a short-term to-do pile on my desk, where it has been sitting since Obama was president, waiting for me to finally write this blog post. Unfortunately, nearly five years later, the authors’ email addresses no longer work, which is not very surprising after so long — since I’m no longer a student, the email I originally used to contact them doesn’t work anymore either — so I was unable to contact them again to let them know that I was finally going to publish this blog post. Anyway, suffice to say that the conclusions of the paper were all correct; however, the networks used to arrive at those conclusions suffered from three different mistakes, each of which was, on its own, serious enough to invalidate the entire work.

So if the analysis of the networks was bogus, how did the authors arrive at correct conclusions anyway? The answer is confirmation bias. The study was performed by visually looking at networks and then coming to non-rigorous conclusions about the networks, and by researching the WebKit community to learn what is going on with the major companies involved in the project. The authors arrived at correct conclusions because they did a good job at the latter, then saw what they wanted to see in the graphs.

I don’t want to be too harsh on the authors of this paper, though, because they decided to publish their raw data and methodology on the internet. They even published the python scripts they used to convert WebKit changelogs into collaboration graphs. Had they not done so, there is no way I would have noticed the third (and most important) mistake that I’ll discuss below, and I wouldn’t have been able to confirm my suspicions about the second mistake. You would not be reading this right now, and likely nobody would ever have realized the problems with the paper. The authors of most scientific papers are not nearly so transparent: many researchers today consider their source code and raw data to be either proprietary secrets to be guarded, or simply not important enough to merit publication. The authors of this paper deserve to be commended, not penalized, for their openness. Mistakes are normal in research papers, and open data is by far the best way for us to be able to detect mistakes when they happen.

[Figure: A collaboration network from the paper. The paper reports that this network represents collaboration between September 2008 (when Google began contributing to WebKit) and February 2011 (the departure of Nokia from the project). Because the authors posted their data online, I noticed that this was a mistake in the paper: the graph actually represents the period between February 2011 and July 2012. The paper’s analysis of this graph is therefore questionable, but note this was only a minor mistake compared to the three major mistakes that impact this network. Note the suspiciously-high number of unaffiliated (“Other”) contributors in a corporate-dominated project.]

The rest of this blog post is a simplified version of my original school paper from 2016. I’ve removed maybe half the original content, including some flowery academic language and unnecessary references to class material (e.g. “community detection was performed using fast modularity maximization to generate an alternate visualization of the network,” good for high scores on class papers, not so good for blog posts). But rewriting everything to be informal takes a long time, and I want to finish this today so it’s not still on my desk five more years from now, so the rest of this blog post is still going to be much more formal than normal. Oh well. Tone shift now!

We (“we” means “I”) examine various methodological issues discovered by analyzing the paper. The first section discusses the effects on the collaboration network of choosing a poor definition of collaboration. The second section discusses a major source of error in detecting the company affiliation of many contributors. The third section describes a serious mistake in the data collection process. Each of these issues is quite severe, and any one alone calls into question the validity of the entire study. It must be noted that such issues are not necessarily unique to this paper, and must be kept in mind for all future studies that utilize collaboration networks.

Mistake #1: Poorly-defined Collaboration

The precise definition used to model collaboration has tremendous impact on the usefulness of the resultant collaboration network. Many collaboration networks are built using definitions of collaboration that are self-evidently useful, where there is little doubt that edges in the network represent real-world collaboration. The paper adopts an approach to building collaboration networks where developers are represented by nodes, and an edge exists between two nodes if the corresponding developers modified the same source code file during the time period under consideration for the construction of the network. However, it is not clear that this definition of collaboration is actually useful. Consider that it is a regular occurrence for developers who do not know each other and may have never communicated to modify the same files. Consider also that modifying a common file does not necessarily reflect any shared interest in a particular portion of the software project. For instance, a file might be modified when making an interface change in another file, or when fixing a build error occurring on a particular platform. Such occurrences are, in fact, extremely common in the WebKit project. Additionally, consider that there exist particular source code files that are unusually central to the project, and must be modified more frequently than other files. It is highly likely that almost all developers will at one point or another make some change in such a file, and therefore be connected via a collaboration edge to all other developers who have ever modified that file. (My original critique shows a screenshot of the revision history of WebPageProxy.cpp, to demonstrate that the developers modifying this file were working on unrelated projects.)

It is true, as assumed by the paper, that particular developers work on different portions of the WebKit source code, and collaborate more with particular other developers. For instance, developers who work for the same company typically, though not always, collaborate most with other developers from that same company. However, the paper’s naive definition of collaboration should ensure that most developers will be considered to have collaborated equally with most other developers, regardless of the actual degree of collaboration. For instance, consider developers A and B who regularly collaborate on a particular source file. Now, developer C, who works on a platform that does not use this file and would not ordinarily need to modify it, makes a change to some cross-platform interface in another file that requires updating this file. Developer C is now considered to have collaborated with developers A and B on this file! Clearly, this is not a desirable result, as developers A and B have collaborated far more on the development of the file. Moreover, consider that an edge exists between two developers in the collaboration network if they have ever both modified any file anywhere in WebKit during the time period under review; then we can expect to form a network that is almost complete (a “full” graph where edges exists between most nodes). It is evident that some method of weighting collaboration between different contributors would be desirable, as the unweighted collaboration network does not seem useful.

One might argue that the networks presented in the paper clearly show developers exist in subcommunities on the peripheries of the network, that the network is clearly not complete, and that therefore this definition of collaboration sufficed, at least to some extent. However, this is only due to another methodological error in the study. Mistake #3, discussed later, explains how the study managed to produce collaboration networks with noticeable subcommunities despite these issues.

We note that the authors chose this same definition of collaboration in their more recent work on OpenStack, so there exist multiple studies using this same flawed definition of collaboration. We speculate that this definition of collaboration is unlikely to be more suitable for OpenStack or for other software projects than it is for WebKit. The software engineering research community must explore alternative models of collaboration when undertaking future studies of software development collaboration networks in order to more accurately reflect collaboration.

Mistake #2: Misdetected Contributor Affiliation

One difficulty when building collaboration networks is the need to correctly match each contributor with the correct company affiliation. Although many free software projects are dominated by unaffiliated contributors, others, like WebKit, are primarily developed by paid contributors. Looking at the number of times a particular email domain appears in WebKit changelog entries made during 2015, most contributors commit using corporate emails, but many developers commit to WebKit using personal email accounts, such as Gmail accounts; additionally, many developers use generic webkit.org email aliases, which were previously available to active WebKit contributors. These developers may or may not be affiliated with companies that contribute to the project. Use of personal email addresses is a source of inaccuracy when constructing collaboration networks, as it results in an undercount of corporate contributions.  We can expect this issue has led to serious inaccuracies in the reported collaboration networks.

This substantial source of error is neither mentioned nor accounted for; all contributors using such email accounts were therefore miscategorized as unaffiliated. However, the authors clearly recognized this issue, as it has been accounted for in their more recent work covering OpenStack by cross-referencing email addresses from git revision history with a database containing corporate affiliations maintained by the OpenStack Foundation. Unfortunately, no such effort was made for the WebKit data set.

The WebKit project was previously dominated by contributors with chromium.org email domains. This domain is equivalent to webkit.org in that it can be used by contributors to the Chromium project regardless of corporate affiliation; however, most contributors with Chromium emails are actually Google employees. The high use of Chromium emails by Google employees appears to have led to a dramatic — by roughly an entire order of magnitude — undercount of Google’s contributors to the WebKit project, as only contributors with google.com emails were considered to be Google employees. The vast majority of Google employees used chromium.org emails, and so were counted as unaffiliated developers. This explains the extraordinarily high number of unaffiliated developers in the networks presented by the paper, despite the fact that WebKit development is, in reality, dominated by corporate contributors.

Mistake #3: Missing Most Changelog Data

The paper incorrectly claims to have gathered its data from both WebKit’s Subversion revision history and from its changelog files. We must draw a distinction between changelog entries and Subversion revision history. Changelog entries are inserted into changelog files that are committed into the Subversion repository; they are completely separate from the Subversion history. Each subproject within the WebKit project has its own set of changelog files used to record changes under the corresponding directory.

In fact, the paper processed only the changelog files. This was actually a good choice, as WebKit’s changelog files are much more accurate than the Subversion history, for two reasons. Firstly, it is easy for a contributor to change the email address entered into a changelog file, e.g. after a change in company affiliation. However, it is difficult to change the email address used to commit to Subversion, as this requires requesting a new Subversion account from the Subversion administrator; accordingly, contributors are more likely to use older email addresses, lacking accurate company affiliation, in Subversion revisions than in changelog files.  Secondly, many Subversion revisions are not directly committed by contributors, but rather are actually committed by the commit queue bot, which runs various tests before committing the revision. Subversion revisions are also, more rarely, committed by a completely different contributor than the patch author. In both cases, the proper contributor’s name will appear in only the changelog file, and not the Subversion data. Some developers are dramatically more likely to use the commit queue than others. Various other reviews of WebKit contribution history that examine data from Subversion history rather than from changelog files are flawed for this reason. Fortunately, by relying on changelog files rather than Subversion metadata, the authors avoid this problem.

Unfortunately, a serious error was made in processing the changelog data. WebKit has many different sets of changelog files, stored in various project subdirectories (JavaScriptCore, WebCore, WebKit, etc.), as well as toplevel changelogs stored in the root directory of the project. Regrettably, the authors were unaware of the changelogs in subdirectories, and based their analysis only on the toplevel changelogs, which contain only changes that occurred in subdirectories that lack their own changelog files. In practice, this inadvertently restricted the scope of the analysis to a very small minority of changes, primarily to build system files, manual tests, and the WebKit website. That is, the reported collaboration networks do not reflect collaboration on any actual source code files. All source code files are contained in subdirectories with their own changelog files, and therefore no source code files were actually considered in the analysis of collaboration on source code changes.

We speculate that the analysis’s focus on build system files likely exaggerates the effects of clustering in the network, as different companies used different build systems and thus were less likely to edit the build systems used by other companies, and that an analysis based on the correct data would display less of a clustering effect. Certainly, there would be dramatically more edges in the already-dense networks, because an edge exists between two developers if there exists any one file in WebKit that both developers have modified. Omitting all of the source code files from the analysis therefore dramatically reduces the likelihood of edges existing between nodes in the network.

Conclusion

We found that the original study was impacted by an unsuitable definition of collaboration used to build the collaboration networks, severe errors in counting contributor affiliation (including the classification of most Google employees as unaffiliated developers), and the omission of almost all the required data from the analysis, including all data on modifications to source code files. The authors constructed and studied essentially meaningless networks. Nevertheless, the authors were able to derive many accurate conclusions about the WebKit project from their inaccurate collaboration networks. Such conclusions illustrate the dangers of seeking to find particular meanings or explanations through visual inspection of collaboration networks. Researchers must work forwards from the collaboration networks to arrive at their conclusions, rather than backwards by attempting to match the networks to conclusions gained from prior knowledge.

Original Report

Wow, OK, you actually read this far? Since this blog post criticizes an academic paper, and since this blog post does not include various tables and examples that support my arguments, I’ve attached my original analysis in full. It is a boring, student-quality grad school project written with the objective of scoring the highest-possible grade in a class rather than for clarity, and you probably don’t want to look at it unless you are investigating the paper in detail. (If you download that, note that I no longer work for Igalia, and the paper was not authorized by Igalia either; I used my company email to disclose my affiliation and maybe impress my professor a little.) Phew, now I can finally remove this from my desk!

Episode 237 – Door 12: Video game hacking

Posted by Josh Bressers on November 29, 2020 10:00 PM

Josh and Kurt talk about video game hacking. The speedrunners are doing the best security research today.

Audio: https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_237_Door_12_Video_game_hacking.mp3

Links

How to Install PHP 8 on CentOS 7/8 and Fedora

Posted by Mohammed Tayeh on November 29, 2020 12:58 PM

Install the EPEL repository

  • CentOS 8
dnf install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
  • CentOS 7
yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
  • Fedora 33
dnf install https://rpms.remirepo.net/fedora/remi-release-33.rpm

Install the Remi repository

  • CentOS 8
dnf install https://rpms.remirepo.net/enterprise/remi-release-8.rpm
  • CentOS 7
yum install https://rpms.remirepo.net/enterprise/remi-release-7.rpm
  • Fedora 33
dnf config-manager --set-enabled remi

Install the yum-utils package

  • CentOS 8
dnf install yum-utils
  • CentOS 7
yum install yum-utils

Enable the module stream for PHP 8.0:

  • CentOS 8
dnf module reset php
dnf module install php:remi-8.0
dnf update
  • CentOS 7
yum-config-manager --disable 'remi-php*'
yum-config-manager --enable   remi-php80
yum update
  • Fedora 33
dnf module reset php
dnf module install php:remi-8.0
dnf update

Install PHP 8

  • CentOS 8
dnf install php
  • CentOS 7
yum install php
  • Fedora 33
dnf install php

To verify the installed version, use the php command:

php -v

Conclusion

So that’s it on how to install PHP 8 on CentOS and RHEL.

If you like this post, consider supporting me.

Fedora 33 : Testing xdotool linux tool .

Posted by mythcat on November 29, 2020 12:19 PM
Xdotool is a free and open source command line tool for simulating mouse clicks and keystrokes.
You can create beautiful scripts and tools with this command.
The help for this command is this:
[mythcat@desk ~]$ xdotool -help
Let's see some examples:
This simulates a keystroke for the key c:
m[mythcat@desk ~]$ xdotool key c
c[mythcat@desk ~]$ c
The next command will simulate a key press for key c:
[mythcat@desk ~]$ xdotool keydown c
c[mythcat@desk ~]$ cccccccccccccccccccccccccccccccccccccccccccccccccccc^C
[mythcat@desk ~]$ ^C
The next command will simulate a key release for key c:
[mythcat@desk ~]$ xdotool keyup c
Use the enter key:
[mythcat@desk ~]$ xdotool key KP_Enter

[mythcat@desk ~]$
The next command will open a new tab in terminal:
[mythcat@desk ~]$ xdotool key shift+ctrl+t
This command simulates a right click at the current location of the pointer:
xdotool click 3
The reference is: 1 – Left click, 2 – Middle click, 3 – Right click, 4 – Scroll wheel up, 5 – Scroll wheel down.
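For instance, a double left click can be simulated by repeating the click (an extra example, not from the original post; the 100 ms delay is an arbitrary choice):
xdotool click --repeat 2 --delay 100 1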
The next command gets the ID of the terminal window currently in focus and then minimizes it:
[mythcat@desk ~]$ xdotool getactivewindow windowminimize
This command lists all open windows:
[mythcat@desk ~]$ xdotool search --name ""
I get the open window named "New Tab - Google Chrome" with this command:
[mythcat@desk ~]$ xdotool search --name "New Tab -"
18874375
This command gets the pid of the active window:
[mythcat@desk ~]$ xdotool getactivewindow getwindowpid
3538
The pid belongs to the lxterminal tool:
[mythcat@desk ~]$ ps aux | grep 3538
mythcat 3538 0.3 0.4 601468 42500 ? Sl 11:42 0:05 lxterminal
mythcat 5713 0.0 0.0 221432 780 pts/1 S+ 12:07 0:00 grep 3538
The active window can be moved with:
[mythcat@desk ~]$ xdotool getactivewindow windowmove 0 0
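Similarly, the active window can be resized (an additional example, not from the original post; the width and height in pixels are arbitrary):
xdotool getactivewindow windowsize 800 600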
The next command gets the current mouse coordinates:
[mythcat@desk ~]$ xdotool getmouselocation --shell
X=521
Y=339
SCREEN=0
WINDOW=16780805
These coordinates can be parsed:
[mythcat@desk ~]$ xdotool getmouselocation 2>/dev/null | cut -d\  -f1,2 -
x:0 y:0
[mythcat@desk ~]$ xdotool getmouselocation 2>/dev/null | sed 's/ sc.*//; s/.://g; s/ /x/'
0x0
Open inkscape and set this window to Always on top:
[mythcat@desk ~]$ inkscape 
Start xdotool watching the inkscape window, then click on the rectangle tool and the drawing area, and follow the output:
[mythcat@desk ~]$ xdotool search --name "inkscape" behave %@ focus getmouselocation
x:25 y:239 screen:0 window:16786173
x:25 y:239 screen:0 window:16786173
x:340 y:378 screen:0 window:16786173
x:340 y:378 screen:0 window:16786173
x:332 y:358 screen:0 window:16786173
x:332 y:358 screen:0 window:16786173
x:518 y:643 screen:0 window:16786173
x:518 y:643 screen:0 window:16786173
x:723 y:466 screen:0 window:16780805
I created a bash script named inkscape_xdotool.sh and added these commands:
#! /bin/sh
sleep 1
inkscape > /dev/null 2>&1 &
sleep 3
my_app=$(xdotool search --sync --name 'inkscape')
xdotool windowactivate $my_app
xdotool mousemove 0 0
xdotool mousemove 25 239
xdotool click 1
xdotool mousemove 332 358 mousedown 1 mousemove 518 643
xdotool mouseup 1
sleep 1
exit
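Before running it, make the script executable (a standard step not shown in the post):
chmod +x inkscape_xdotool.sh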
I run the script:
[mythcat@desk ~]$ ./inkscape_xdotool.sh 
The script runs well.
The output is a rectangle in inkscape with the default black color.

Fedora 33 : Install PyGame 2.0 on Fedora.

Posted by mythcat on November 29, 2020 12:17 PM
Today I will show you how to install the python PyGame version 2.0 package with python version 3.9 in the Fedora 33 distro. Let's install all the Fedora packages needed for this python package:
[root@desk pygame]# dnf install SDL2-devel.x86_64 
...
Installed:
SDL2-devel-2.0.12-4.fc33.x86_64

Complete!
[root@desk pygame]# dnf install SDL2_ttf-devel.x86_64
...
Installed:
SDL2_ttf-2.0.15-6.fc33.x86_64 SDL2_ttf-devel-2.0.15-6.fc33.x86_64

Complete!
[root@desk pygame]# dnf install SDL2_image-devel.x86_64
...
Installed:
SDL2_image-2.0.5-5.fc33.x86_64 SDL2_image-devel-2.0.5-5.fc33.x86_64

Complete!
[root@desk pygame]# dnf install SDL2_mixer-devel.x86_64
...
Installed:
SDL2_mixer-2.0.4-7.fc33.x86_64 SDL2_mixer-devel-2.0.4-7.fc33.x86_64

Complete!
[root@desk pygame]# dnf install SDL2_gfx-devel.x86_64
...
Installed:
SDL2_gfx-1.0.4-3.fc33.x86_64 SDL2_gfx-devel-1.0.4-3.fc33.x86_64

Complete!
[root@desk pygame]# dnf install portmidi-devel.x86_64
...
Installed:
portmidi-devel-217-38.fc33.x86_64

Complete!
Use these commands to clone it from GitHub and install it:
[mythcat@desk ~]$ git clone https://github.com/pygame/pygame
Cloning into 'pygame'...
remote: Enumerating objects: 4, done.
remote: Counting objects: 100% (4/4), done.
remote: Compressing objects: 100% (4/4), done.
remote: Total 38509 (delta 0), reused 0 (delta 0), pack-reused 38505
Receiving objects: 100% (38509/38509), 17.78 MiB | 11.66 MiB/s, done.
Resolving deltas: 100% (29718/29718), done.
[mythcat@desk ~]$ cd pygame/
[mythcat@desk pygame]$ python3.9 setup.py install --user


WARNING, No "Setup" File Exists, Running "buildconfig/config.py"
Using UNIX configuration...


Hunting dependencies...
SDL : found 2.0.12
FONT : found
IMAGE : found
MIXER : found
PNG : found
JPEG : found
SCRAP : found
PORTMIDI: found
PORTTIME: found
FREETYPE: found 23.4.17

If you get compiler errors during install, double-check
the compiler flags in the "Setup" file.
...
copying docs/pygame_tiny.gif -> build/bdist.linux-x86_64/egg/pygame/docs
creating build/bdist.linux-x86_64/egg/EGG-INFO
copying pygame.egg-info/PKG-INFO -> build/bdist.linux-x86_64/egg/EGG-INFO
copying pygame.egg-info/SOURCES.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
copying pygame.egg-info/dependency_links.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
copying pygame.egg-info/entry_points.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
copying pygame.egg-info/not-zip-safe -> build/bdist.linux-x86_64/egg/EGG-INFO
copying pygame.egg-info/top_level.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
writing build/bdist.linux-x86_64/egg/EGG-INFO/native_libs.txt
creating dist
creating 'dist/pygame-2.0.1.dev1-py3.9-linux-x86_64.egg' and adding 'build/bdist.linux-x86_64/egg' to it
removing 'build/bdist.linux-x86_64/egg' (and everything under it)
Processing pygame-2.0.1.dev1-py3.9-linux-x86_64.egg
creating /home/mythcat/.local/lib/python3.9/site-packages/pygame-2.0.1.dev1-py3.9-linux-x86_64.egg
Extracting pygame-2.0.1.dev1-py3.9-linux-x86_64.egg to /home/mythcat/.local/lib/python3.9/site-packages
Adding pygame 2.0.1.dev1 to easy-install.pth file

Installed /home/mythcat/.local/lib/python3.9/site-packages/pygame-2.0.1.dev1-py3.9-linux-x86_64.egg
Processing dependencies for pygame==2.0.1.dev1
Finished processing dependencies for pygame==2.0.1.dev1
Let's test it:
[mythcat@desk pygame]$ ls
build dist examples README.rst setup.cfg src_c test
buildconfig docs pygame.egg-info Setup setup.py src_py
[mythcat@desk pygame]$ python3.9
Python 3.9.0 (default, Oct 6 2020, 00:00:00)
[GCC 10.2.1 20200826 (Red Hat 10.2.1-3)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import pygame
pygame 2.0.1.dev1 (SDL 2.0.12, python 3.9.0)
Hello from the pygame community. https://www.pygame.org/contribute.html
>>>
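Alternatively, a quick non-interactive check can be run from the shell (assuming the same python3.9 installation used above):
python3.9 -c "import pygame; print(pygame.version.ver)"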

Matrix and Fedora

Posted by Kevin Fenzi on November 29, 2020 02:32 AM

Recently the Fedora Council has been floating the idea of modernizing the Fedora community real-time chat platform (currently IRC hosted at freenode.net). The front runner is matrix. I last looked at matrix 4 or so years ago, so I thought it would be a good time to revisit it and see how it looks today.

TLDR: I suspect we will have IRC and Matrix bridged together for a long time to come; if you are a new user, use Matrix, if not, keep using IRC.

First a few words about IRC (Internet Relay Chat). IRC is a 30+ year old chat protocol. There's tons of clients. There's tons of bots and add-ons. There's tons of gateways and aggregators. So, what's not to like? Well, everything is an add-on mish mash that can be very confusing and frustrating for new users. For example, IRC has no persistence, you only see messages while your client is connected. So, folks invented irc "bouncers" to connect for you to the IRC networks you care about and when you reconnect play back all the messages you missed. Authentication is via messaging various services bots. Encryption is via plugins or other add ons (and often not setup). So, most old timers have a client they have carefully tuned, a bouncer and a bunch of custom bots, which is fine, but new users (not surprisingly) find this all a hassle and frustrating. IRC also has its own culture and rituals, some of which still make sense to me, but others that don't.

Matrix on the other hand is pretty new (6 years). You can interact with it as a guest or have an account on a particular homeserver. If you have an account all your history is saved, and can be synced to your client on login. You can send pictures and movies and fancy gifs. You can (somewhat) have end to end encryption (see clients below) with encrypted rooms where the server can't know what was said in the room. You can have 'reactions' to things people say. You can redact something you said in the past. You can have a nice avatar and a Real Name (if you like). You can join rooms/conversations with other matrix servers (for example kde, mozilla and others are running servers). You can get read receipts to see who read your message and notifications when someone is typing (also client dependent, see below).

Finally there are matrix <-> IRC bridges. These allow conversations to be relayed from one side to another. Of course some things don't translate over at all (reactions, receipts, typing notification, avatars) and some look different (if someone in matrix posts a picture, the people on IRC across the bridge will see a link to the picture). The main text of the conversation is properly relayed to both sides.

I managed to re-setup my matrix homeserver (matrix.scrye.com) from 4 years ago. Ran into a little problem with the version of matrix-synapse in Fedora (it's too old to federate right, so I built the new one and the other update that's blocking it. Hopefully that blockage will be fixed soon). It's still a memory hog, but it runs well enough. They are working on the next generation server written in go (dendrite), but it still lacks a lot of features.

After that it was on to testing clients. There’s a bunch more available than there used to be, which is great. The documentation on them is pretty much missing for most of the clients (exception: element has docs).

The premier client is element (used to be riot). It's available as a web app, an android app and a native linux app (which isn't available except directly from them, as far as I could find). The web app and android app are likely the most full featured of the clients. They support setting up client encryption and cross signing your connections so all of them can read the same encrypted rooms. For chat clients, I really prefer standalone, as web apps have a lot of issues (not restarting on browser restart, notifications not working right, poor integration into the desktop, etc).

Fractal is the gnome native client, available as a flatpak. My impression: full screen is horrible as the chat text is centered in a small column in the middle. No way to adjust the text size or font, making it really small and non readable by me. On the plus side, it does have a 'take me to next room with unread messages' key, which is really nice.

nheko is a QT client with a mix of features implemented, it’s available as a rpm packaged in Fedora. My impression: Looks pretty nice, nice to be able to tag rooms into groups. Looks good full screen. There’s only really a “this room has unread messages in it”, not any indicator of _how many_ and no easy way to go to the next room with unread in it (that I could figure out). No docs at all anywhere that I could find. 🙁

Quaternion is another QT client with a mix of features implemented. No end to end encryption, but lots of the other features. It's also available in Fedora as a rpm. My impression: looks pretty nice, lets you tag rooms and separates them easily. Doesn't seem to have a 'go to next unread room' function. ;( No docs.

spectral is a c++ client. It's packaged in Fedora, but it seems to crash on launch for me, so I didn't get much chance to look it over. ;(

FluffyChat is a port of a native android matrix client. It’s pretty full featured and available as a flatpak. Does the chat sort of ‘sms’ style, which is cute and all, and fine for small rooms, but bad for larger discussions and such. Otherwise looks outstanding and is really fast!

Neochat is another QT client available as a rpm in Fedora. I had to tinker with my server setup and get /.well-known/matrix/client and server working before I could get neochat connecting. After that it connected fine, it was really fast, but
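For reference, a minimal sketch of what those two well-known files can contain (matrix.example.com and the port are placeholders, not my real setup; adjust for your own homeserver):

# served at https://example.com/.well-known/matrix/server
{ "m.server": "matrix.example.com:443" }

# served at https://example.com/.well-known/matrix/client
{ "m.homeserver": { "base_url": "https://matrix.example.com" } }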

All of the clients seemed to handle basic chatting and history fine. Other features are all over the map. Element was the only one I saw with a search feature (to search the history). None of them had logging, which I guess could be mooted at least partly by a good search of the history/backlog. Element was the only one that seemed to have the url previews working (where the server fetches them for you and shows a preview in the client). I am not sure why so many of the clients are using QT, perhaps because kde is running their own matrix server?

So, as far as clients, I’m really missing easy ways to go to the next room with unread messages in most of them (I use this ALLLLLLLLLLLLL the time in hexchat). Logging/searching is really important to me too. I often have to look up what happened back on day X or see the exact command I used to solve something a year ago.

If you're a new user/contributor these days I think it completely makes sense to just use a matrix client. You get history without having to deal with a bouncer and some nice other features and you can bridge to all the old fogeys on IRC. If Fedora gets its own home server this will be even easier (as I assume you will just be able to use your fedora account to login and have an account created for you).

The real question is how long should we keep the current situation with Matrix and IRC bridged? What advantages would dropping the irc bridges bring? Right now, not too much. End to end encryption isn't that interesting for an open source project. Reactions are interesting (think about using them to vote up or down proposals in meetings?), but we have done without them so far. I think migration from IRC is going to be a long process, nor is there great advantage to pushing things to go faster. I hope that over coming years matrix clients continue to get better and implement more features. Someday (probably years down the road) more Fedora users will be on Matrix than IRC, then sometime after that things will have shifted enough that the community will start assuming you are on Matrix.

I also have a few other things I use my existing IRC client with: a bitlbee server to pull in other IM/twitter/etc, and a few old IRC servers that I still hang out in, so it probably doesn't make sense for me to move over to matrix full time yet.

One additional thought: we have several IRC bots that do various things on IRC. Matrix has a handful of bots, but nothing like IRC. It’s practically a programming rite of passage to make a IRC bot. 🙂 I think we could safely look at starting design on bots for our needs for matrix and switch to them when ready (but again, no hurry at all here).

Edited some squircle icons for Big Sur (VLC, BBEdit, iTerm, Transmission)

Posted by Jon Chiappetta on November 28, 2020 05:01 PM

I thought I’d post them here in case anyone wants their dock to look a little more uniform with everything else!

Note: The VLC app has an ICNS file that it uses in addition when launched and active, so you can also replace it by running:

cp vlcr.png ~/Applications/VLC.app/Contents/Resources/VLC.icns

Switzerland’s Responsible Business Initiative & Google

Posted by Daniel Pocock on November 28, 2020 04:20 AM

This weekend, the Swiss have another round of referendums, including the Responsible Business Initiative. The initiative aims to codify in Swiss law the obligation of due diligence from the United Nations Guiding Principles on Business and Human Rights. Fifty significant NGOs and all major Swiss political parties have endorsed the referendum and it seems likely that it will be passed. Nonetheless, it still provides an interesting opportunity to reflect on the way the campaign has been operated and especially the exploitation of children around the world.

Publicity for the initiative has focused on Glencore, the largest company domiciled in Switzerland. A leaflet dropped in Swiss letterboxes claims that Glencore uses children at the Porco mine in Bolivia. Glencore has denied it and announced legal action.

Whether the claims are true or false, I couldn't help wondering why activists have not looked at the role other multinationals play in Switzerland.

Google is not a Swiss company but they use Zurich as the base for their engineering work in Europe.

Google, evil

In 2019, Google was fined $170 million for violating children's privacy on YouTube. Glencore has been accused of exploiting a few hundred children in Bolivia: yet a court has determined that Google is exploiting every single child on the planet.

Mark Zuckerberg, evil

It wouldn't be fair to emphasize Google's impact on children without considering Facebook in the same blog post. I consider myself lucky that neither of these firms existed during my own childhood. Mark Zuckerberg of Facebook was questioned in the US Congress about his participation in Libra, a cryptocurrency to be operated from Switzerland. He tried to deny being in control of the platform. Former Facebook employees have publicly commented on the harm their company does to children.

Swiss guard

Switzerland is one of a few remaining countries to impose Church taxes on their citizens. Moreover, the Catholic Church's defence force, the Swiss guard, recruits its members from the population of Swiss males aged between 19 and 30. Those are the men in colourful medieval uniforms who appear at the entrance to the Vatican in Rome. There has been significant concern in recent years about cases of child abuse in the Catholic Church and, even more disturbing, the practice of covering up the abuse, protecting the offenders and moving them to other parishes where they could continue offending. Swiss taxpayers who do not wish to give their money to a church cannot simply opt out; they have to explicitly declare themselves to be atheists.

The most senior Catholic figure ever tried in relation to child abuse was Cardinal Pell, formerly archbishop of Melbourne. He was eventually acquitted, but controversy lingers around payments he authorized to both compensate and silence victims of abuse. At the time the charges were announced, he had risen to the third most senior rank in the church, working alongside the Pope in the Vatican, under the protection of the Swiss guard. It will be interesting to see whether the Swiss guard can remain neutral on such issues after the referendum becomes law in their home country.

One of the world's largest tobacco companies, Philip Morris, is headquartered in Switzerland. Their products frequently find their way into the hands of children. A case is currently underway in the federal court in Chicago claiming that Juul and Philip Morris deliberately marketed vaping products to children.

Google's Android phones remain incredibly popular in Switzerland, many Swiss children use Facebook and many Swiss parents donate to the church. It is not hard to understand why Glencore executives feel they have been unfairly singled out in the campaign for this referendum.

Getting started with Fedora CoreOS

Posted by Fedora Magazine on November 27, 2020 08:00 AM

This has been called the age of DevOps, and operating systems seem to be getting a little bit less attention than tools are. However, this doesn’t mean that there has been no innovation in operating systems. [Edit: The diversity of offerings from the plethora of distributions based on the Linux kernel is a fine example of this.] Fedora CoreOS has a specific philosophy of what an operating system should be in this age of DevOps.

Fedora CoreOS’ philosophy

Fedora CoreOS (FCOS) came from the merging of CoreOS Container Linux and Fedora Atomic Host. It is a minimal and monolithic OS focused on running containerized applications. With security as a first-class citizen, FCOS provides automatic updates and comes with SELinux hardening.

For automatic updates to work well they need to be very robust; the goal is that servers running FCOS won’t break after an update. This is achieved by using different release streams (stable, testing and next). Each stream is released every 2 weeks and content is promoted from one stream to the other (next -> testing -> stable). That way, updates landing in the stable stream have had the opportunity to be tested over a longer period of time.

Getting Started

For this example let’s use the stable stream and a QEMU base image that we can run as a virtual machine. You can use coreos-installer to download that image.

From your (Workstation) terminal, run the following commands after updating the link to the image. [Edit: On Silverblue the container based coreos tools are the simplest method to try. Instructions can be found at https://docs.fedoraproject.org/en-US/fedora-coreos/tutorial-setup/ , in particular “Setup with Podman or Docker”.]

$ sudo dnf install coreos-installer
$ coreos-installer download --image-url https://builds.coreos.fedoraproject.org/prod/streams/stable/builds/32.20200907.3.0/x86_64/fedora-coreos-32.20200907.3.0-qemu.x86_64.qcow2.xz
$ xz -d fedora-coreos-32.20200907.3.0-qemu.x86_64.qcow2.xz
$ ls
fedora-coreos-32.20200907.3.0-qemu.x86_64.qcow2
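
Alternatively, coreos-installer can resolve the latest stable image for you instead of being given a hard-coded URL. This is only a sketch; it assumes the --stream, --platform and --format options of the coreos-installer download subcommand:

$ coreos-installer download --stream stable --platform qemu --format qcow2.xz
$ xz -d fedora-coreos-*-qemu.x86_64.qcow2.xz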

Create a configuration

To customize an FCOS system, you need to provide a configuration file that will be used by Ignition to provision the system. You may use this file to configure things like creating a user, adding a trusted SSH key, enabling systemd services, and more.

The following configuration creates a ‘core’ user and adds an SSH key to the authorized_keys file. It also creates a systemd service that uses podman to run a simple hello world container.

version: "1.0.0"
variant: fcos
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 my_public_ssh_key_hash fcos_key
systemd:
  units:
    -
      contents: |
          [Unit]
          Description=Run a hello world web service
          After=network-online.target
          Wants=network-online.target
          [Service]
          ExecStart=/bin/podman run --pull=always   --name=hello --net=host -p 8080:8080 quay.io/cverna/hello
          ExecStop=/bin/podman rm -f hello
          [Install]
          WantedBy=multi-user.target
      enabled: true
      name: hello.service

After adding your SSH key to the configuration, save it as config.yaml. Next, use the Fedora CoreOS Config Transpiler (fcct) tool to convert this YAML configuration into a valid Ignition configuration (JSON format).

Install fcct directly from Fedora’s repositories or get the binary from GitHub.

$ sudo dnf install fcct
$ fcct -output config.ign config.yaml

Install and run Fedora CoreOS

To run the image, you can use the libvirt stack. To install it on a Fedora system using the dnf package manager:

$ sudo dnf install @virtualization

Now let’s create and run a Fedora CoreOS virtual machine:

$ chcon --verbose unconfined_u:object_r:svirt_home_t:s0 config.ign
$ virt-install --name=fcos \
--vcpus=2 \
--ram=2048 \
--import \
--network=bridge=virbr0 \
--graphics=none \
--qemu-commandline="-fw_cfg name=opt/com.coreos/config,file=${PWD}/config.ign" \
--disk=size=20,backing_store=${PWD}/fedora-coreos-32.20200907.3.0-qemu.x86_64.qcow2

Once the installation is successful, some information is displayed and a login prompt is provided.

Fedora CoreOS 32.20200907.3.0
Kernel 5.8.10-200.fc32.x86_64 on an x86_64 (ttyS0)
SSH host key: SHA256:BJYN7AQZrwKZ7ZF8fWSI9YRhI++KMyeJeDVOE6rQ27U (ED25519)
SSH host key: SHA256:W3wfZp7EGkLuM3z4cy1ZJSMFLntYyW1kqAqKkxyuZrE (ECDSA)
SSH host key: SHA256:gb7/4Qo5aYhEjgoDZbrm8t1D0msgGYsQ0xhW5BAuZz0 (RSA)
ens2: 192.168.122.237 fe80::5054:ff:fef7:1a73
Ignition: user provided config was applied
Ignition: wrote ssh authorized keys file for user: core

The Ignition configuration file did not provide any password for the core user, therefore it is not possible to log in directly via the console. (Though, it is possible to configure a password for users via the Ignition configuration.)
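
If you do want console logins, one approach is to add a password hash next to the SSH key in config.yaml. This is only a sketch: password_hash is the field the Fedora CoreOS configuration uses for passwd users, and the Python one-liner below is just one convenient way to produce a crypt-compatible hash.

$ python3 -c 'import crypt, getpass; print(crypt.crypt(getpass.getpass()))'

Paste the printed hash into a password_hash: entry for the core user and regenerate config.ign with fcct.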

Use Ctrl + ] key combination to exit the virtual machine’s console. Then check if the hello.service is running.

$ curl http://192.168.122.237:8080
Hello from Fedora CoreOS!

Using the preconfigured SSH key, you can also access the VM and inspect the services running on it.

$ ssh core@192.168.122.237
$ systemctl status hello
● hello.service - Run a hello world web service
Loaded: loaded (/etc/systemd/system/hello.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2020-10-28 10:10:26 UTC; 42s ago

zincati, rpm-ostree and automatic updates

The zincati service drives rpm-ostreed with automatic updates.
Check which version of Fedora CoreOS is currently running on the VM, and check if Zincati has found an update.

$ ssh core@192.168.122.237
$ rpm-ostree status
State: idle
Deployments:
● ostree://fedora:fedora/x86_64/coreos/stable
Version: 32.20200907.3.0 (2020-09-23T08:16:31Z)
Commit: b53de8b03134c5e6b683b5ea471888e9e1b193781794f01b9ed5865b57f35d57
GPGSignature: Valid signature by 97A1AE57C3A2372CCA3A4ABA6C13026D12C944D0
$ systemctl status zincati
● zincati.service - Zincati Update Agent
Loaded: loaded (/usr/lib/systemd/system/zincati.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2020-10-28 13:36:23 UTC; 7s ago
…
Oct 28 13:36:24 cosa-devsh zincati[1013]: [INFO ] initialization complete, auto-updates logic enabled
Oct 28 13:36:25 cosa-devsh zincati[1013]: [INFO ] target release '32.20201004.3.0' selected, proceeding to stage it

... zincati reboot ...

After the restart, let’s remote login once more to check the new version of Fedora CoreOS.

$ ssh core@192.168.122.237
$ rpm-ostree status
State: idle
Deployments:
● ostree://fedora:fedora/x86_64/coreos/stable
Version: 32.20201004.3.0 (2020-10-19T17:12:33Z)
Commit: 64bb377ae7e6949c26cfe819f3f0bd517596d461e437f2f6e9f1f3c24376fd30
GPGSignature: Valid signature by 97A1AE57C3A2372CCA3A4ABA6C13026D12C944D0
ostree://fedora:fedora/x86_64/coreos/stable
Version: 32.20200907.3.0 (2020-09-23T08:16:31Z)
Commit: b53de8b03134c5e6b683b5ea471888e9e1b193781794f01b9ed5865b57f35d57
GPGSignature: Valid signature by 97A1AE57C3A2372CCA3A4ABA6C13026D12C944D0

rpm-ostree status now shows 2 versions of Fedora CoreOS: the one that came in the QEMU image, and the latest one received from the update. By having these 2 versions available, it is possible to roll back to the previous version using the rpm-ostree rollback command.
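
For example, from an SSH session on the VM (a minimal sketch; the -r flag, asking rpm-ostree to reboot into the rolled-back deployment, is assumed from its usual options):

$ sudo rpm-ostree rollback -r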

Finally, you can make sure that the hello service is still running and serving content.

$ curl http://192.168.122.237:8080
Hello from Fedora CoreOS!

More information: Fedora CoreOS updates

Deleting the Virtual Machine

To clean up afterwards, the following commands will delete the VM and associated storage.

$ virsh destroy fcos
$ virsh undefine --remove-all-storage fcos

Conclusion

Fedora CoreOS provides a solid and secure operating system tailored to running applications in containers. It excels in a DevOps environment, which encourages hosts to be provisioned using declarative configuration files. Automatic updates and the ability to roll back to a previous version of the OS bring peace of mind during the operation of a service.

Learn more about Fedora CoreOS by following the tutorials available in the project’s documentation.

Lidl (Silvercrest and Livarno Lux) branded Zigbee products for Open Source users

Posted by Daniel Pocock on November 26, 2020 07:35 PM

I was in Lidl today and found that a huge pile of Zigbee products is available this week. For those who don't know Lidl, they have a section of the store where they distribute a different range of products each week. For example, this week they have the Zigbee products; next week they might use the same space to display microwave ovens or power drills.

The Lidl products are usually incredibly cheap and their smart home / domotics products are no exception. They are offering bulbs, LED strips, smart sockets, lighting panels and a smart home gateway. The smart home gateway is CHF 29, approximately EUR 25 or USD 29. That is a lot cheaper than any similar product or Zigbee adapter such as the Zigate USB stick.

The LED strip worked immediately with my Domoticz setup using a Zigate stick and Raspberry Pi. Notice that the diagram is wrong: the arrow on the controller must align with the VCC label on the LED strip.

I opened up one of the gateways to see what it is and whether it can be adapted to run free, open source software. Inside, it is the Tuya TYGWZ-01 white label gateway. On the board, there is a Realtek RTL8196E router chip, a radio module with label 330010257 and part no. 2.22.46.00001, and an EM6AA160TSE-5G DRAM chip.

If you have any ideas about this unit please feel free to share them in the Domoticz forum.

PHP version 8.0.0 is released!

Posted by Remi Collet on November 26, 2020 03:47 PM

RC5 was GOLD, so version 8.0.0 GA has just been released, on the planned date.

Great thanks to all the developers who have contributed to this new major and long-awaited version of PHP, and thanks to all the testers of the RC versions who have allowed us to deliver a good quality release.

RPMs are available in the remi-php80 repository for Fedora 31 and Enterprise Linux 7 (RHEL, CentOS) and as a Software Collection in the remi-safe repository.

RPMs are also available in the php:remi-8.0 module for Fedora and Enterprise Linux 8.

Read the PHP 8.0.0 Release Announcement and its Addendum for a detailed description of the new features.

Installation: read the Repository configuration and choose your installation mode, or follow the Configuration Wizard instructions.

Replacement of default PHP by version 8.0 installation (simplest):

Fedora modular or Enterprise Linux 8:

dnf module reset php
dnf module install php:remi-8.0

Fedora or Enterprise Linux 7:

yum-config-manager --enable remi-php80
yum update php\*

Parallel installation of version 8.0 as Software Collection (x86_64 only, recommended for tests):

yum install php80
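
Whichever installation mode you choose, a quick check shows which interpreter is active. This is only a sketch; the scl invocation assumes the php80 Software Collection naming used above:

php -v
scl enable php80 'php -v'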

To be noticed:

Information, read:

Base packages (php)

Software Collections (php80)

TTL for local DNS on OpenWRT

Posted by Guillaume Kulakowski on November 26, 2020 12:43 PM

Recently, I have been playing around with AdGuard Home on my OpenWRT router. I quickly noticed that my NAS server and my home automation box were making a huge number of DNS calls. Indeed, since I systematically use static leases and local domain names, I replaced the IPs of certain components with their […]

The article TTL for local DNS on OpenWRT appeared first on Guillaume Kulakowski's blog.

PHP version 7.3.25 and 7.4.13

Posted by Remi Collet on November 26, 2020 10:52 AM

RPMs of PHP version 7.4.13 are available in the remi repository for Fedora 32-33 and in the remi-php74 repository for Fedora 31 and Enterprise Linux 7 (RHEL, CentOS).

RPMs of PHP version 7.3.25 are available in the remi repository for Fedora 31 and in the remi-php73 repository for Enterprise Linux 6 (RHEL, CentOS).

No security fix this month, so no update for version 7.2.34.

PHP version 7.1 has reached its end of life and is no longer maintained by the PHP project.

These versions are also available as Software Collections in the remi-safe repository and as module for Fedora 31-33 and EL-8.

Version announcements:

Installation: use the Configuration Wizard and choose your version and installation mode.

Replacement of default PHP by version 7.4 installation (simplest):

yum-config-manager --enable remi-php74
yum update

or, the modular way (Fedora and EL 8):

dnf module enable php:remi-7.4
dnf update php\*

Parallel installation of version 7.4 as Software Collection

yum install php74

Replacement of default PHP by version 7.3 installation (simplest):

yum-config-manager --enable remi-php73
yum update php\*

or, the modular way (Fedora and EL 8):

dnf module enable php:remi-7.3
dnf update php\*

Parallel installation of version 7.3 as Software Collection

yum install php73

And soon in the official updates:

To be noticed:

  • EL-8 RPMs are built using RHEL-8.2
  • EL-7 RPMs are built using RHEL-7.9
  • EL-6 RPMs are built using RHEL-6.10
  • EL-7 builds now use libicu65 (version 65.1)
  • EL builds now use oniguruma5php (version 6.9.5, instead of the outdated system library)
  • oci8 extension now uses Oracle Client version 19.9 (except on EL-6)
  • a lot of extensions are also available, see the PHP extensions RPM status (from PECL and other sources) page

Information:

Base packages (php)

Software Collections (php72 / php73 / php74)

OpenWRT and setting up 802.11r Wi-Fi

Posted by Guillaume Kulakowski on November 25, 2020 12:52 PM

The OpenWRT adventure continues, this time with the setup of 802.11r Wi-Fi. First of all, what is 802.11r Wi-Fi? To simplify to the extreme, it means being able to move from one Wi-Fi access point to another without any interruption. For example: I am in my living room, connected to the access point […]

The article OpenWRT and setting up 802.11r Wi-Fi appeared first on Guillaume Kulakowski's blog.

How to install the NVIDIA drivers on Fedora 33 with Hybrid Switchable Graphics

Posted by Mohammed Tayeh on November 25, 2020 07:27 AM

This is a guide on how to install the NVIDIA proprietary drivers on Fedora 33 with Hybrid Switchable Graphics [Intel + NVIDIA GeForce].

Back up important files before you start the installation. This is of course at your own risk, because graphics cards, components and monitors are different and some combinations might cause totally unexpected results.

Identify your NVIDIA graphics card

lspci -vnn | grep VGA

The output of the above command will look like this:

00:02.0 VGA compatible controller [0300]: Intel Corporation UHD Graphics [8086:9bc4] (rev 05) (prog-if 00 [VGA controller])
01:00.0 VGA compatible controller [0300]: NVIDIA Corporation TU116M [GeForce GTX 1660 Ti Mobile] [10de:2191] (rev a1) (prog-if 00 [VGA controller])

Enable RPM Fusion

sudo dnf install https://mirrors.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm https://mirrors.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm

Update your system

sudo dnf update

It is recommended to reboot your system after the update.

Install the NVIDIA driver

Install akmod-nvidia:

sudo dnf install gcc kernel-headers kernel-devel akmod-nvidia xorg-x11-drv-nvidia xorg-x11-drv-nvidia-libs xorg-x11-drv-nvidia-libs.i686

Install CUDA support:

sudo dnf install xorg-x11-drv-nvidia-cuda

The drivers are now installed; two final steps remain before rebooting.

Run sudo akmods --force and sudo dracut --force, and then reboot.
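
In command form, that is:

sudo akmods --force
sudo dracut --force
sudo reboot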

And that’s it; the switch happens automatically when needed.

Note: this guide disables Wayland, so only X.org is supported.

To check the NVIDIA processes, run the command nvidia-smi:

nvidia-smi
Wed Nov 25 09:51:36 2020       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 455.45.01    Driver Version: 455.45.01    CUDA Version: 11.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  GeForce GTX 166...  Off  | 00000000:01:00.0 Off |                  N/A |
| N/A   33C    P8     4W /  N/A |      5MiB /  5944MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      1776      G   /usr/libexec/Xorg                   4MiB |
+-----------------------------------------------------------------------------+

Screenshots

NVIDIA X Server Settings, nvidia-smi and the Fedora "About" page.

Fighting coronavirus with open source

Posted by Máirín Duffy on November 25, 2020 04:01 AM

I want to tell you a little bit about a couple of open source projects that are fighting the coronavirus.

1. COVID-Net and ChRIS

COVID-Net is artificial intelligence software. It can detect COVID-19 in chest X-ray images and in chest CAT scan images. It uses a machine learning model. It is being built by a company called DarwinAI together with the University of Waterloo.

ChRIS is an important open source cloud platform. Boston Children’s Hospital created it with partners such as Boston University and Red Hat (my employer). COVID-Net runs on ChRIS.

You can read more about COVID-Net on ChRIS (in English) here:

DarwinAI and Red Hat Team Up to Bring COVID-Net Radiography Screening AI to Hospitals, Using Underlying Technology from Boston Children’s Hospital

I designed the user interface for ChRIS and for COVID-Net. You can see the COVID-Net user interface design here: COVID-Net design. ChRIS and COVID-Net are very worthwhile projects and I love working on them.

2. Serratus

Serratus is an open source genomics project. They search for viral RNA sequences in the public SRA database. When they find sequences there, they assemble complete viral genomes. Then they send the genomes to vaccine researchers.

I only heard about Serratus recently and I am excited to learn more! I think it has a lot of potential.

That's it!

That's it, friends. I know my Irish is not very polished. I hope you don't mind correcting my Irish. Thank you very much!

Cockpit 233

Posted by Cockpit Project on November 25, 2020 12:00 AM

Cockpit is the modern Linux admin interface. We release regularly. Here are the release notes from Cockpit version 233.

Non-admin users no longer see Cockpit in motd

The motd message suggesting how Cockpit can be enabled or used is now shown only to admin users. This depends on PAM 1.5.0 (which is not yet released) but the needed patches have been backported to Fedora 33 and may land in other distributions in the future.

Developers: jQuery removal

As previously announced, we have finally removed the long-deprecated ../base1/jquery.js API from Cockpit. The bundled version is no longer built or shipped. If you are using jQuery in your application, be sure to build against your own copy.

All known projects which used this API were ported away two years ago already.

Read more details in the original announcement.

Try it out

Cockpit 233 is available now:

Fedora program update: 2020-48

Posted by Fedora Community Blog on November 24, 2020 09:22 PM
Fedora Program Manager weekly report on Fedora Project development and progress

Here’s your report of what has happened in Fedora this week. Elections voting is open through 3 December. Fedora 31 has reached end of life. EPEL 6 will reach end-of-life on Monday. There will be no FPgM office hours this week (25 November) due to PTO. Announcements Calls for Participation Help wanted Upcoming meetings Releases […]

The post Fedora program update: 2020-48 appeared first on Fedora Community Blog.

Web interfaces for your syslog server – an overview

Posted by Peter Czanik on November 24, 2020 12:29 PM

This is the 2020 edition of my most read blog entry about syslog-ng web-based graphical user interfaces (web GUIs). Many things have changed in the past few years. In 2011, only a single logging as a service solution was available, while nowadays, I regularly run into others. Also, while some software disappeared, the number of logging-related GUIs is growing. This is why in this post, I will mostly focus on generic log management and open source instead of highly specialized software, like SIEMs.

Why is grep not enough?

Centralized event logging has been an important part of IT for many years for many reasons. Firstly, it is more convenient to browse logs in a central location rather than viewing them on individual machines. Secondly, central storage is also more secure. Even if logs stored locally are altered or removed, you can still check the logs on the central log server. Finally, compliance with different regulations also makes central logging necessary.

System administrators often prefer to use the command line. Utilities such as grep and AWK are powerful tools, but complex queries can be completed much faster with logs indexed in a database and a web interface. In the case of large amounts of messages, a web-based database solution is not just convenient, it is a necessity. With tens of thousands of incoming messages per second, the indexes of log databases still give Google-like response times even for the most complex queries, while traditional text-based tools are not able to scale as efficiently.

Why still syslog-ng?

Much of the software used for log analysis comes with its own log aggregation agents. So why should you still use syslog-ng then? As organizations grow, the IT staff starts to diversify. Separate teams are created for operations, development and security, each with its own specialized needs in log analysis. And even the business side often needs log analysis as an input for business decisions. You can quickly end up with 4-5 different log analysis and aggregation systems running in parallel, all working from the very same log messages.

This is where syslog-ng comes in handy: creating a dedicated log management layer, where syslog-ng collects all of the log messages centrally, does initial basic log analysis, and feeds all the different log analysis software with the relevant log messages. This can save you time and resources in multiple ways:

  • You only have to learn one tool instead of many.

  • Only a single tool to push through security and operations teams.

  • Fewer computing resources are needed on the clients.

  • Logs travel only once over the network.

  • Long term archival in a single location with syslog-ng instead of using multiple log analysis software.

  • Filtering on the syslog-ng side can save significantly on the hardware costs of the log analysis software, and also on licensing in case of a commercial solution.

The syslog-ng application can collect both system and application logs, and can be installed both as a client and a server. Thus, you have a single application to install for log management everywhere on your network. It can reliably collect and transport huge amounts of log messages, parse (“look into”) your log messages, enrich them with geographical location and other extra data, making filters and thus, log routing, much more accurate.

Logging as a Service (LaaS)

A couple years ago, Loggly was the pioneer of logging as a service (LaaS). Today, there are many other LaaS providers (Papertrail, Logentries, Sumo Logic, and so on) and syslog-ng works perfectly with all of them.

Structured fields and name-value pairs in logs are increasingly important, as they are easier to search, and it is easier to create meaningful reports from them. The more recent IETF RFC 5424 syslog standard supports structured data, but it is still not in widespread use.

People started to use JSON embedded into legacy (RFC 3164) syslog messages. The syslog-ng application can send JSON-formatted messages – for example, you can convert the following messages into structured JSON messages:

  • RFC5424-formatted log messages.

  • Windows EventLog messages received from the syslog-ng Agent for Windows application.

  • Name-value pairs extracted from a log message with PatternDB or the CSV parser.

Loggly and other services can receive JSON-formatted messages, and make them conveniently available from the web interface.

A number of LaaS providers are already supported by syslog-ng out of the box. If your service of choice is not yet directly supported, the following blog can help you create a new LaaS destination: https://www.syslog-ng.com/community/b/blog/posts/how-to-use-syslog-ng-with-laas-and-why

Some non-syslog-ng-based solutions

Before focusing on the solutions with syslog-ng at their heart, I would like to say a few words about the others, some which were included in the previous edition of the blog.

LogAnalyzer from the makers of Rsyslog was a simple, easy to use PHP application a few years ago. While it has developed quite a lot, recently I could not get it to work with syslog-ng. Some of the popular monitoring software has syslog support to some extent, for example Nagios, Cacti and several others. I have tested some of these, and have even sent patches and bug reports to enhance their syslog-ng support, but syslog is clearly not their focus, just one of the possible inputs.

The ELK stack (Elasticsearch + Logstash + Kibana) and Graylog2 have become popular recently, but they have their own log collectors instead of syslog-ng, and syslog is just one of many log sources. Syslog support is quite limited both in performance and protocol support. They recommend using file readers for collecting syslog messages, but that increases complexity, as it is an additional software on top of syslog(-ng), and filtering still needs to be done on the syslog side. Note that syslog-ng can send logs to Elasticsearch natively, which can greatly simplify your logging architecture.

Collecting and displaying metrics data

You can collect metrics data using syslog-ng. Examples include netdata or collectd. You can send the collected data to Graphite or Elasticsearch. Graphite has its own web interface, while you can use Kibana to query and visualize data collected to Elasticsearch.

Another option is to use Grafana. Originally, it was developed as an alternative web interface to the Graphite databases, but now it can also visualize data from many more data sources, including Elasticsearch. It can combine multiple data sources to a single dashboard and provides fine-grained access control.

Loki by Grafana is one of the latest applications that lets you aggregate and query log messages, and of course, to visualize logs using Grafana. It does not index the contents of log messages, only the labels associated with logs. This way, processing and storing log messages requires less resources, making Loki more cost-effective. Promtail, the log collector component of Loki, can collect log messages using the new, RFC 5424 syslog protocol. Learn here how syslog-ng can send its log messages to Loki.

Splunk

One of the most popular web-based interfaces for log messages is Splunk. A returning question is whether to use syslog-ng or Splunk. Well, the issue is a bit of apples vs. oranges: they do not replace, but rather complement each other. As I already mentioned in the introduction, syslog-ng is good at reliably collecting and processing huge amounts of data. Splunk, on the other hand, is good at analyzing log messages for various purposes. Learn more about how you can integrate syslog-ng with Splunk from our white paper!

Syslog-ng based solutions

Here I show a number of syslog-ng based solutions. While every software described below is originally based on syslog-ng Open Source Edition (except for One Identity’s own syslog-ng Store Box (SSB)), there are already some large-scale deployments available also with syslog-ng Premium Edition as their syslog server.

  • The syslog-ng application and SSB focus on generic log management tasks and compliance.

  • LogZilla focuses on logs from Cisco devices.

  • Security Onion focuses on network and host security.

  • Recent syslog-ng releases are also able to store log messages directly into Elasticsearch, a distributed, scalable database system popular in DevOps environments, which enables the use of Kibana for analyzing log messages.

Benefits of using syslog-ng PE with these solutions include the logstore, a tamper-proof log storage (even if it means that your logs are stored twice), Windows support, and enterprise grade support.

LogZilla

LogZilla is the commercial reincarnation of one of the oldest syslog-ng web GUIs: PHP-Syslog-NG. It provides the familiar user interface of its predecessor, but also includes many new features. The user interface supports Cisco Mnemonics, extended graphing capabilities, and e-mail alerts. Behind the scenes, LDAP integration, message de-duplication, and indexing for quick searching were added for large datasets.

Over the past years, it received many small improvements. It became faster, and role-based access control was added, as well as live tailing of log messages. Of course, all these new features come with a price; the free edition, which I have often recommended for small sites with Cisco logs, is completely gone now.

A few years ago, a complete rewrite became available with many performance improvements under the hood and a new dashboard on the surface. Development never stopped, and now LogZilla can parse and enrich log messages, and can also automatically respond to events.

Therefore, it is an ideal solution for a network operations center (NOC) full of Cisco devices.

Web site: http://logzilla.net/

Security Onion

One of the most interesting projects utilizing syslog-ng is Security Onion, a free and open source Linux distribution for threat hunting, enterprise security monitoring, and log management. It is utilizing syslog-ng for log collection and log transfer, and uses the Elastic stack to store and search log messages. Even if you do not use its advanced security features, you can still use it for centralized log collection and as a nice web interface for your logs. But it is also worth getting acquainted with its security monitoring features, as it can provide you some useful insights about your network. Best of all, Security Onion is completely free and open source, with commercial support available for it.

You can learn more about it at https://www.syslog-ng.com/community/b/blog/posts/syslog-ng-and-security-onion

Elasticsearch and Kibana

Elasticsearch is gaining momentum as the ultimate destination for log messages. There are two major reasons for this:

  • You can store arbitrary name-value pairs coming from structured logging or message parsing.

  • You can use Kibana as a search and visualization interface.

The syslog-ng application can send logs directly into Elasticsearch. We call this an ESK stack (Elasticsearch + syslog-ng + Kibana).

Learn how you can simplify your logging to Elasticsearch by using syslog-ng: https://www.syslog-ng.com/community/b/blog/posts/logging-to-elasticsearch-made-simple-with-syslog-ng

syslog-ng Store Box (SSB)

SSB is a log management appliance built on syslog-ng Premium Edition. SSB adds a powerful indexing engine, authentication and access control, customized reporting capabilities, and an easy-to-use web-based user interface.

Recent versions introduced AWS and Azure cloud support and horizontal scalability using remote logspaces. The new content-based alerting can send an e-mail alert whenever a match between the contents of a log message and a search expression is found.

SSB is really fast when it comes to indexing and searching log data. To put this scalability in context, the largest SSB appliance stores up to 10 terabytes of uncompressed, raw logs. With SSB’s current indexing performance of 100,000 events per second, that equates to approximately 8.6 billion logs per day or 1.7 terabytes of log data per day (calculating with an average event size of 200 bytes). Using compression, a single, large SSB appliance could store approximately one month of log data for an enterprise generating 1.7 terabytes of event data a day. This compares favorably to other solutions that require several nodes for collecting this amount of messages, and even more additional nodes for storing them. While storing logs to the cloud is getting popular, on-premise log storage is still a lot cheaper for a large amount of logs.

The GUI makes searching logs, configuring and managing the SSB easy. The search interface allows you to use wildcards and Boolean operators to perform complex searches, and drill down on the results. You can gain a quick overview and pinpoint problems fast by generating ad-hoc charts from the distribution of the log messages.

Configuring the SSB is done through the user interface. Most of the flexible filtering, classification and routing features in the syslog-ng Open Source and Premium Editions can be configured with the UI. Access and authentication policies can be set to integrate with Microsoft Active Directory, LDAP and RADIUS servers. The web interface is accessible through a network interface dedicated to the management traffic. This management interface is also used for backups, sending alerts, and other administrative traffic.

SSB is a ready-to-use appliance, which means that no software installation is necessary. It is easily scalable, because SSB is available both as a virtual machine and as a physical appliance, ranging from entry-level servers to multiple-unit behemoths. For mission critical applications, you can use SSB in High Availability mode. Enterprise-level support for SSB and syslog-ng PE is also available.

Read more about One Identity’s syslog-ng and SSB products here.

Request evaluation version / callback.

12/20 Elections for the Council, FESCo and Mindshare open for a few more days

Posted by Charles-Antoine Couret on November 24, 2020 12:17 PM

Since the Fedora project is community-driven, part of the membership of the following bodies must be renewed: Council, FESCo and Mindshare. And it is the contributors who decide. Each candidate of course has a platform and a track record that they wish to put forward for their term, in order to steer the Fedora project in certain directions. I invite you to study the proposals of the various candidates.

I voted

To vote, you need an active FAS account and to make your choice on the voting site. You have until Friday, December 4 at 1 a.m. French time to do so, so don't wait too long.

In addition, as with the selection of the supplemental wallpapers, you can claim a badge if you click on a link from the interface after taking part in a vote.

I will take this opportunity to summarize the role of each of these committees, in order to clarify how decisions are made in the Fedora project and also to illustrate its community-driven nature.

Council

The Council is what could be called the grand council of the project. It is therefore the highest decision-making body in Fedora. The Council defines the long-term objectives of the Fedora project and takes part in organizing the project to reach them. This notably happens through discussions that are open and transparent to the community.

But it also manages the financial side. This mainly concerns the budgets allocated to organize events, produce goodies, or fund initiatives that help fulfil those objectives. Finally, it is in charge of settling major personal conflicts within the project, as well as the legal matters related to the Fedora trademark.

The roles within the Council are complex.

Members with full voting rights

First of all there is the FPL (Fedora Project Leader), who leads the Council and is de facto the representative of the project. Their role involves keeping the Council's agenda and discussions on track, but also representing the Fedora project as a whole. They must also help bring out a consensus during debates. This role is held by a Red Hat employee and is chosen with the consent of the Council.

There is also the FCAIC (Fedora Community Action and Impact Coordinator), who acts as the link between the community and Red Hat in order to facilitate and encourage cooperation. As with the FPL, this position is held by a Red Hat employee, with the approval of the Council.

There are two seats dedicated to technical representation and to the more marketing / ambassador side of the project. These two seats result from an appointment decided within the bodies dedicated to those activities: FESCo and Mindshare. These are community seats, but only those committees decide who holds them.

Two completely open community seats remain, for which anyone can run or vote. They make it possible to represent the other areas of activity, such as translation or documentation, but also the community's voice in the broadest possible sense. It is for one of these seats that voting is open this week!

Members with partial voting rights

A diversity advisor is appointed by the FPL with the support of the Council to foster the inclusion within the project of groups that are most often discriminated against. Their goal is therefore to define programs to address this issue and to resolve the related conflicts that may arise.

A Fedora program manager takes care of the schedule of the different Fedora releases. They make sure deadlines are respected and track features and test cycles. They also act as the secretary of the Council. This role is held by a Red Hat employee, again with the approval of the Council.

FESCo

FESCo (the Fedora Engineering Steering Committee) is a council made up entirely of elected members and fully devoted to the technical side of the Fedora project.

In particular, it handles the following topics:

  • New features of the distribution;
  • Sponsors for the packager role (those who can then supervise a newcomer);
  • The creation and management of SIGs (Special Interest Groups) to organize teams around certain topics;
  • The packaging procedure for packages.

The chair of this group rotates. The 9 members are elected for one year, and each election renews half of the body. This time 5 seats are up for renewal.

Mindshare

Mindshare is an evolution of FAmSCo (the Fedora Ambassadors Steering Committee), which it replaces. It is the equivalent of FESCo for the more human side of the project. While FESCo is mostly concerned with packagers, this council's focus is rather on ambassadors and new contributors.

Here is a sample of the topics within its remit that come from FAmSCo:

  • Managing the growth of the ambassadors through mentoring;
  • Encouraging the creation and development of more local communities, such as the French community for example;
  • Following up on the events in which ambassadors take part;
  • Allocating resources to the different communities or activities, according to need and interest;
  • Handling conflicts between ambassadors.

And its new responsibilities:

  • Communication between teams, in particular between engineering and marketing;
  • Motivating contributors to get involved in different working groups;
  • Welcoming new contributors to guide them, and trying to foster the inclusion of people often under-represented in Fedora (women, people who are neither American nor European, students, etc.);
  • Managing the marketing team.

There are 9 members running this committee: a lead, 2 come from the ambassadors, one from design and web, one from documentation, one from marketing, one from CommOps, and the last two are elected. It is for one of those last seats that the vote is open.

End of life of Fedora 31

Posted by Charles-Antoine Couret on November 24, 2020 08:37 AM

C'est en ce mardi 24 novembre 2020 que Fedora 31 a été déclaré comme en fin de vie.

Qu'est-ce que c'est ?

Un mois après la sortie d'une version de Fedora n, ici Fedora 33, la version n-2 (donc Fedora 31) est déclarée comme en fin de vie.

Ce mois sert à donner du temps aux utilisateurs pour faire la mise à niveau. Ce qui fait qu'en moyenne une version est officiellement maintenue pendant 13 mois.

En effet, la fin de vie d'une version signifie qu'elle n'aura plus de mises à jour et plus aucun bogue ne sera corrigé. Pour des questions de sécurité, avec des failles non corrigées, il est vivement conseillé aux utilisateurs de Fedora 31 et antérieurs d'effectuer la mise à niveau vers Fedora 33 ou 32.

What should you do?

If you are affected, you need to upgrade your systems. You can download more recent CD or USB images.

It is also possible to upgrade without reinstalling, using DNF or GNOME Software.
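
The DNF path, for example, looks roughly like this (a sketch of the usual dnf system-upgrade sequence, here targeting Fedora 33):

sudo dnf upgrade --refresh
sudo dnf install dnf-plugin-system-upgrade
sudo dnf system-upgrade download --releasever=33
sudo dnf system-upgrade reboot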

GNOME Software should also have notified you with a pop-up about the availability of Fedora 32 or 33. Feel free to start the upgrade that way.

How to automate a deploy with GitHub actions via SSH

Posted by Mohammed Tayeh on November 24, 2020 08:07 AM

Introduction

GitHub Actions is an API for cause and effect on GitHub: orchestrate any workflow, based on any event, while GitHub manages the execution, provides rich feedback, and secures every step along the way.

In this article, we will be exploring a hands-on approach to managing your CD processes using GitHub Actions via SSH.

The workflow:

  1. Connect to VPS via SSH
  2. Move to project directory
  3. git pull the new changes
  4. execute any necessary command

Prerequisites

  • A GitHub account. If you don’t have one, you can sign up here
  • A server with SSH access
  • Basic knowledge of writing valid YAML
  • Basic knowledge of GitHub and Git

Configuring workflows

We should create a YAML file under .github/workflows/, for example .github/workflows/ci.yml, and add this code to the file:

name: CI

on: [push]

jobs:
  deploy:
    if: github.ref == 'refs/heads/master'
    runs-on: [ubuntu-latest]
    steps:
      - uses: actions/checkout@v1
      - name: Push to server
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.SERVER_IP }}
          username: ${{ secrets.SERVER_USERNAME }}
          key: ${{ secrets.KEY }}
          passphrase: ${{ secrets.PASSPHRASE }} 
          script: cd ${{ secrets.PROJECT_PATH }} && git pull

After adding this file, go to Settings -> Secrets and add the secrets SERVER_IP, SERVER_USERNAME, KEY, PASSPHRASE and PROJECT_PATH.
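
The KEY secret expects a private key that is allowed to log in to the server. One way to set that up, as a sketch (the deploy_key file name and user@your-server are placeholders), is to generate a dedicated key pair and install its public half on the VPS:

ssh-keygen -t ed25519 -f ~/.ssh/deploy_key -C "github-actions-deploy"
ssh-copy-id -i ~/.ssh/deploy_key.pub user@your-server
cat ~/.ssh/deploy_key

The output of the last command is what goes into the KEY secret, and PASSPHRASE is whatever passphrase you gave ssh-keygen (if any).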

Note: you can use a password instead of keys; you just need to replace the key and passphrase lines in the workflow file with password: ${{ secrets.PASSWORD }} and add the password to the secrets.

I use the GitHub secrets to keep important information hidden.

You can also add more commands to the script line as needed.

The next time we push to the master branch, it will automatically be deployed to our server.


Hording AD groups through wbinfo

Posted by Ingvar Hagelund on November 24, 2020 07:44 AM

In a Samba setup where users and groups are fetched from Active Directory to be used in a unix/linux environment, AD may prohibit the Samba winbind tools like wbinfo from recursing into its group structure. You may get groups and users and their corresponding gids and uids, but you may not get the members of a group.

It is usually possible to do the opposite, that is, probing a user object and getting the groups that user is a member of. Here is a little script that collects all users, probing AD for the groups of each and every user, and sorting and putting it all together. In Perl, of course (a rough shell sketch of the same idea follows after the link).

https://github.com/ingvarha/groupmembers
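
The script itself is in Perl, but the core idea also fits in a few lines of shell. This is only a rough sketch, assuming wbinfo -u lists the users and wbinfo -r prints the GIDs of the groups a given user belongs to:

#!/bin/bash
# For every user, list the GIDs of their groups, then invert the
# mapping so each GID ends up with the list of its member users.
wbinfo -u | while read -r user; do
    for gid in $(wbinfo -r "$user" 2>/dev/null); do
        printf '%s %s\n' "$gid" "$user"
    done
done | sort -n | awk '{ members[$1] = members[$1] " " $2 } END { for (g in members) print g ":" members[g] }'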

Site and blog migration

Posted by Adam Williamson on November 24, 2020 12:36 AM

So I've been having an adventurous week here at HA Towers: I decided, after something more than a decade, I'm going to get out of the self-hosting game, as far as I can. It makes me a bit sad, because it's been kinda cool to do and I think it's worked pretty well, but I'm getting to a point where it seems silly that a small part of me has to constantly be concerned with making sure my web and mail servers and all the rest of it keep working, when the services exist to do it much more efficiently. It's cool that it's still possible to do it, but I don't think I need to actually do it any more.

So, if you're reading this...and I didn't do something really weird...it's not being served to you by a Fedora system three feet from my desk any more. It's being served to you by a server owned by a commodity web hoster...somewhere in North America...running Lightspeed (boo) on who knows what OS. I pre-paid for four years of hosting before realizing they were running proprietary software, and I figured what the hell, it's just a web server serving static files. If it starts to really bug me I'll move it, and hopefully you'll never notice.

All the redirects for old Wordpress URLs should still be in place, and also all URLs for software projects I used to host here (fedfind etc) should redirect to appropriate places in Pagure and/or Pypi. Please yell if you see something that seems to be wrong. I moved nightlies and testcase_stats to the Fedora openQA server for now; that's still a slightly odd place for them to be, but at least it's in the Fedora domain not on my personal domain, and it was easiest to do since I have all the necessary permissions, putting them anywhere else would be more work and require other people to do stuff, so this is good enough for now. Redirects are in place for those too.

I've been working on all the other stuff I self-host, too. Today I set up all the IRC channels I regularly read in my Matrix account and I'm going to try using that setup for IRC instead of my own proxy (which ran bip). It seems to work okay so far. I'm using the Quaternion client for now, as it seems to have the most efficient UI layout and isn't a big heavy wrapper around a web client. Matrix is a really cool thing, and it'd be great to see more F/OSS projects adopting it to lower barriers to entry without compromising F/OSS principles; IRC really is getting pretty creaky these days, folks. There's some talk about both Fedora and GNOME adopting Matrix officially, and I really hope that happens.

I also set up a Kolab Now account and switched my contacts and calendar to it, which was nice and easy to do (download the ICS files from Radicale, upload them to Kolab, switch my accounts on my laptops and phone, shut down the Radicale server, done). I also plan to have it serve my mail, but that migration is going to be the longest and most complicated as I'll have to move several gigs of mail and re-do all my filters. Fun!

I also refreshed my "desktop" setup; after (again) something more than a decade having a dedicated desktop PC I'm trying to roll without one again. Back when I last did this, I got to resenting the clunky nature of docking at the time, and also I still ran quite a lot of local code compiles and laptops aren't ideal for that. These days, though, docking is getting pretty slick, and I don't recall the last time I built anything really chunky locally. My current laptop (a 2017 XPS 13) should have enough power anyhow, for the occasional case. So I got me a fancy Thunderbolt dock - yes, from the Apple store, because apparently no-one else has it in stock in Canada - and a 32" 4K monitor and plugged the things into the things and waited a whole night while all sorts of gigantic things I forgot I had lying around my home directory synced over to the laptop and...hey, it works. Probably in two months I'll run into something weird that's only set up on the old desktop box, but hey.

So once I have all this wrapped up I'm aiming to have substantially fewer computers lying around here and fewer Sysadmin Things taking up space in my brain. At the cost of being able to say I run an entire domain out of a $20 TV stand in my home office. Ah, well.

Oh, I also bought a new domain as part of this whole thing, as a sort of backup / staging area for transitions and also possibly as an alternative vanity domain. Because it is sometimes awkward telling people yes, my email address is happyassassin.net, no, I'm not an assassin, don't worry, it's a name based on a throwaway joke from university which I probably wouldn't have picked if I knew I'd be signing up for bank accounts with it fifteen years later. So if I do start using it for stuff, here is your advance notice that yeah, it's me. This name I just picked to be vaguely memorable and hopefully to be entirely inoffensive, vaguely professional-sounding, and composed of sounds that are unambiguous when read over an international phone line to a call centre in India. It doesn't mean anything at all.

fwupd 1.5.2

Posted by Richard Hughes on November 23, 2020 04:36 PM

The last few posts I did about fwupd releases were very popular, so I’ll do the same thing again: I’ve just tagged fwupd 1.5.2 – This release changes a few things:

  • Add a build time flag to indicate if packages are supported – this would be set for “traditional” package builds done by the distro, and unset by things like the Fedora COPR build, the Flatpak or Snap bundles. There are too many people expecting that the daily snap or flatpak packages represent the “official fwupd” and we wanted to make it clear to people using these snapshots that we’ve done basically no QA on the snapshots.
  • A plugin for the Pinebook Pro laptop has been added, although it needs further work from PINE64 before it will work correctly. At the moment there’s no way of getting the touchpad version, or finding out which keyboard layout is installed so we can tag the correct firmware file. It’s nearly there and is still very useful for playing with the hardware on the PB Pro.
  • Components can now set the icon from the metadata from the LVFS, if supported by the fwupd plugin. This allows us to tag “generic” ESRT devices as things like EC devices, or, ahem, batteries.
  • I’ve been asked by a few teams, including the Red Hat Edge team, the CoreOS team and also by Google to switch from libsoup to libcurl for downloading data – as this reduces the image size by over 5MB. Even NetworkManager depends on libcurl now, and this seemed like a sensible thing to do given fwupd is now being used in so many different places.
  • Fall back to FAT32 internal partitions for detecting ESP, as some users were complaining that fwupd did not properly detect their ESP that didn’t have the correct partition GUID set. Although I think fixing the GUID is the right thing to do, the system firmware also falls back, and pragmatically so should we.
  • Fix detection of ColorHug version on older firmware versions, which was slightly embarrassing as ColorHug is one of the devices in the device regression tests, but we were not testing an old enough firmware version to detect this bug.
  • Fix reading BCM57XX vendor and device ids from firmware – firmware for the Talos II machine is already on the LVFS and can replace the non-free firmware there in almost all situations now.
  • For this release we had to improve synaptics-mst reliability when writing data, which was found occasionally when installing firmware onto a common dock model. A 200ms delay is the difference between success and failure, which although not strictly required seemed pragmatic to add.
  • Fix replugging the MSP430 device which was the last device that was failing a specific ODM QA. This allows us to release a ton of dock firmware on the LVFS.
  • Fix a deadlock seen when calling libfwupd from QT programs. This was because we were calling a sync method from threads without a context, which we’ve now added.
  • In 1.5.0 we switched to the async libfwupd by default, and accidentally dropped the logic to only download the remote metadata as required. Most users only need to download the tiny .jcat file every day, and the much larger .xml.gz is only downloaded if the signature has changed in the last 24h. Of course, it’s all hitting the CDN, but it’s not nice to waste bandwidth for no reason.
  • As Snap is bundling libfwupd with gnome-software now, we had to restore recognizing GPG and PKCS7 signature types. This allows a new libfwupd to talk to an old fwupd daemon which is something we’d not expected before.
  • We’re also now setting the SMBIOS chassis type to portable if a DeviceTree battery exists, although I’d much rather see a ChassisType in the DT specification one day. This allows us to support HSI on platforms like the PineBook Pro, although the number of tests is still minimal without more buy-in from ARM.
  • We removed the HSI update and attestation suffixes; we decided they complicated the HSI specification and didn’t really fit in. Most users won’t even care and the spec is explicitly WIP so expect further changes like this in the future.
  • If you’re running 1.5.0 or 1.5.1 you probably want to update to this release now as it fixes a hard-to-debug hang we introduced in 1.5.0. If you’re running 1.4.x you might want to let the libcurl changes settle, although we’ve been using it without issue for more than a week on a ton of hardware here. Expect 1.5.3 in a few weeks time, assuming we’re all still alive by then. :)
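
  A quick way to check which daemon you are running and exercise the metadata refresh described above (a minimal sketch; these are plain fwupdmgr subcommands, no version-specific flags assumed):

  fwupdmgr --version      # show client and daemon versions
  fwupdmgr refresh        # fetch metadata; only the small .jcat is downloaded unless the signature changed
  fwupdmgr get-updates    # list devices with updates available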

    Gnome Asia summit 2020

    Posted by Robbi Nespu on November 23, 2020 04:04 PM

    24-26 November 2020 @ https://events.gnome.org/event/24

    I think it may be too late to help spread the word about this event, since I assumed registration had already closed, but it looks like registration is still open. Anyway, let's look at the schedule:

    Some of the sessions offer good topics and caught my eye:

    GNOME Asia Summit 2020 starts tomorrow and the conference will be online. This event is sponsored by GitLab and openSUSE.

    Musical Midi Accompaniment: First Tune

    Posted by Adam Young on November 23, 2020 02:16 AM

    Here is a tune I wrote called “Standard Deviation” done as an accompaniment track using MMA. This is a very simplistic interpretation that makes no use of dynamics, variations in the BossaNova Groove, or even decent repeat logic. But it compiles.

    Here’s the MMA file.

    Slightly Greater than one Standard Deviation from the Mean:

    Episode 225 – Who is responsible if IoT burns down your house?

    Posted by Josh Bressers on November 23, 2020 12:01 AM

    Josh and Kurt talk about the safety and liability of new devices. What happens when your doorbell can burn down your house? What if it’s your fault the doorbell burned down your house? There isn’t really any prior art for where our devices are taking us, who knows what the future will look like.

    Audio: https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_225_Who_is_responsible_if_IoT_burns_down_your_house.mp3

    Show Notes

    On Safety Razors and Technology

    Posted by Michel Alexandre Salim on November 23, 2020 12:00 AM

    rex-ambassador

    On safety razors

    I recently switched over from the ubiquitous cartridge razors to double-edge safety razors. The original impetus was not finding a non-charging base for my GilletteLabs Heated Razor - the battery in the stem made it too wide for most razor holders - and noticing that a lot of reviews swear by various safety razors.

    I ended up buying the Rex Ambassador [1] a few months ago - and then held off on actually using it, telling myself I needed to learn how to properly use it first. In the end I told myself I would stop using my Gillette the day after the US Presidential Election, and start using the safety razor the morning after the election was finally called, which was Sunday the 8th, with a nice 5-day stubble to test it on.

    The first shave went surprisingly smoothly; the next few shaves ended with some minor mishaps - cockiness and distraction getting in the way - but overall there is no way I'm going back to cartridge razors after this. Feeling more in control, getting a closer shave, no plastic waste to dispose of, and hey, a much lower total cost of ownership!

    … and technology

    There seems to be a parallel here between the world of personal care and that of technology:

    • most people are trapped on proprietary, heavily marketed solutions (cartridge razors, proprietary operating systems, apps and services)
    • these proprietary solutions are at first glance more user friendly
    • the more open solutions have a steeper learning curve but are eventually more empowering
    • vendor lock-in
    • the incentives for the manufacturers/vendors and customers/users are not aligned

    Think Windows on one side, vs Linux (and the BSDs) on the other (with macOS initially being in the middle and increasingly swaying to becoming even more constraining than Windows). Think proprietary gaming consoles and mobile IAP-chasing games, vs game platforms that encourage participation like TIC-80 and LÖVE. Think US-centric proprietary social networks (Facebook, Twitter) and services (Dropbox, Google Suite) vs distributed social networks (Mastodon, Pleroma, Diaspora etc.) and self-hosted services (Nextcloud, Cryptpad etc.).

    What are most people sacrificing to the altar of promised convenience? Literally both time and money: our attention, higher costs; also our autonomy (you’re locked in) and our privacy (… so platform owners can mine your attention and monetize what they observe of your behavior).

    If you believe in capitalism, this is bad news. If you don’t it’s even worse.

    So what can we do?

    Part of the solution is regulatory. In the EU, a recent ECJ ruling requires EU companies to stop using US-based cloud services to host data from EU citizens. This could help push the adoption of more open, user-empowering, privacy-friendly alternatives.

    But in other jurisdictions like the US, regulation might be a long time coming, except maybe in California (plus the companies we’re trying to unshackle users from are mostly US-based). So a lot of the solution has to be bottom up.

    We simply need to lower barriers to entry, both actual and perceived, to using the platforms we’re championing. Some involve compromises (e.g. Flatpak is a great way to abstract away the differences between Linux distributions, to the point that it’s easier to install proprietary apps, including Steam – which improves the availability of games on Linux despite, yes, being proprietary). Some involve corporate backing (e.g. Fedora on Lenovo laptops). A lot would involve being more welcoming to newcomers, and bridging the actual usability gaps there are.

    It’s hard enough to overcome incumbency and the network effect. Let’s not make it harder for ourselves.

    This post is day 5 of my #100DaysToOffload challenge. Visit https://100daystooffload.com to get more info, or to get involved.

    Have a comment on one of my posts? Start a discussion in my public inbox by sending an email to ~michel-slm/public-inbox@lists.sr.ht [ mailing list etiquette]

    Posts are also tooted to @michel_slm@floss.social

    [1] Not a product placement, honest!

    Musical Midi Accompaniment: Understanding the Format

    Posted by Adam Young on November 22, 2020 07:52 PM

    Saxophone is a solo instrument. Unless you are into the sounds of Saxophone multiphonics, harmony requires playing with some other instrument. For Jazz, this tends to be a rhythm section of Piano, Bass, and Drums. As a kid, my practicing (without a live Rhythm section) required playing along with pre-recorded tunes. I had my share of Jamey Aebersold records.

    Nowadays, the tool of choice for most Jazz musicians, myself included, is iReal Pro, a lovely little app for the phone. All of the Real Book tunes have had their chord progressions posted and generated. The format is simple enough.

    But it is a proprietary app. While I continue to support and use it, I am also looking for alternatives that let me get more involved. One such tool is Musical MIDI Accompaniment. I’m just getting started with it, and I want to keep my notes here.

    First is just getting it to play. Whether you get the tarball or check out from Git, there is a trick that you need to do in order to even play the examples: regenerate the libraries.

    ./mma.py -G
    

    That allows me to generate a MIDI file from a file in the MMA Domain Specific Language (DSL), which is also called MMA. I downloaded the backing track for "I've Got You Under My Skin" from https://www.mellowood.ca/mma/examples/examples.html and, once I regenerated the libraries with the above command, was able to run:

    ./mma.py ~/Downloads/ive-got-you-under-my-skin.mma
    Creating new midi file (120 bars, 4.57 min / 4:34 m:s): '/home/ayoung/Downloads/ive-got-you-under-my-skin.mid'
    

    I can then play the resulting MIDI file with timidity.
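
    For example (a minimal sketch, assuming the timidity package is installed):

    timidity ~/Downloads/ive-got-you-under-my-skin.mid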

    The file format is not quite as simple as iReal Pro's, but it does not look so complex that I won't be able to learn it.

    There are examples of things that look like real programming. Begin and End Blocks.

    Line Numbers. This is going to give me flashbacks to coding in BASIC on my C64… not such an unpleasant set of memories. And musical ones at that.

    OK, let's take this apart. Here are the first few lines:

    // I've Got You Under My Skin
    
    Tempo 105
    Groove Metronome2-4
    
    	z * 2
    

    Comments are double slashes. The title is just for documentation.

    Tempo is in BPM.

    Groove Metronome2-4 says to use a Groove. "Grooves, in some ways, are MMA's answer to macros … but they are cooler, easier to use, and have a more musical name," says the manual. So, somewhere we have inherited a Groove called Metronome-something. Is the 2-4 part of the name? It looks like it. I found this in the library:

    lib/stdlib/metronome.mma:97:DefGroove Metronome2-4 A very useful introduction. On bar one we have hits on beats 1 and 3; on bar two hits on beats 1, 2, 3 and 4.

    This is based on a leader counting off the time in the song. If you play the MIDI file, you can hear the cowbell effect used to count off.

    z * 2 is the way of saying that this extends for 2 measures.

    The special sequences, “-” or “z”, are also the equivalent of a rest or “tacet” sequence. For example, in defining a 4 bar sequence with a bass pattern on the first 3 bars and a walking bass on bar 4 you might do something like:
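
    The manual's example is not reproduced here, so here is a minimal sketch of the idea (BassPat and WalkPat are placeholder pattern names, not real stdlib patterns):

    Bass Sequence  BassPat  BassPat  BassPat  z
    Walk Sequence  z        z        z        WalkPat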

    If you already have a sequence defined you can repeat or copy the existing pattern by using a single "*" as the pattern name. This is useful when you are modifying an existing sequence.

    The next block is the definition of a section he calls Solo. This is a Track.

    Begin Solo
    	Voice Piano2
    	Octave 4
    	Harmony 3above
    	Articulate 90
    	Accent 1 20
     	Volume f
    End
    

    I think that the expectation is that you get the majority of the defaults from the Groove, and customize the Solo track.


    As a general rule, MELODY tracks have been designed as a “voice” to accompany a predefined form defined in a GROOVE—it is a good idea to define MELODY parameters as part of a GROOVE. SOLO tracks are thought to be specific to a certain song file, with their parameters defined in the song file.

    So if it were a Melody track, this definition would be ignored, and the track from the base Rhumba Groove would be used instead.

    The next section defines what is done overall.

    Keysig 3b
    
    
    Groove Rhumba
    Alltracks SeqRnd Off
    Bass-Sus Sequence -		// disable the strings
    
    Cresc pp mf 4
    

    The Keysig directive can be found here. This will generate a MIDI KeySignature event. 3b means 3 flats in the MIDI spec. Major is assumed if not specified. Thus this is the key of E flat.

    The Groove Rhumba directive is going to drive most of the song. The definitions for this Groove can be found under the standard library. I might tear apart a file like this one in a future post.

    The next two lines specify how the Groove is to be played. SeqRnd inserts randomness into the sequencing, to make it more like a live performance. This directive turns that randomness off.

    Bass-Sus Sequence - seems to define a new, blank sequence. The comment implies that it is shutting off the strings. I have to admit, I don't quite understand this. I've generated the file with this directive commented out and detected no differences. Since Bass-Sus is defined in the Bossa Nova Groove under the standard library, I'm tempted to think this is a copy-paste error. Note that it defines "Voice Strings" and I think that is what he was trying to disable. I suspect the git history will show the Bass-Sus getting pulled out of the Rhumba file.

    Cresc pp mf 4 grows the volume from pianissimo (very soft) to mezzo-forte (medium loud) over 4 bars. Since no track is specified, it applies to the master volume.

    // 4 bar intro
    
    1 	Eb		{4.g;8f;2e;}
    2 	Ab      {4.a;8g;2f;}
    3 	Gm7     {1g;}
    4 	Bb7     {2b;}
    
    Delete Solo
    

    Now we start seeing the measures. The numbers are optional, and just for human readers to keep track.
    Measure 1 is an E flat chord. The braces delineate a Riff line. The 4 means a quarter note. The period after it makes it dotted, half again as long, or the equivalent of 3 tied eighth notes. The note played is a g, adjusted for the octave appropriate to the voice. This is followed by an eighth note f and a half note e. This adds up to a full measure: 3/8 + 1/8 + 4/8.

    After the four bar intro, the solo part is deleted, and the normal Rhumba patterns take effect.

    The next line is a Repeat directive, which is paired with the RepeatEnding directive on line 129 and the RepeatEnd directive on line 135. This says that measures 5-60 should be repeated once, first and second ending style.
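
    As a rough sketch of that shape (the bar numbers and chords here are placeholders, not the actual song):

    Repeat
    5       Cm7          // bars from here are played twice
    // ... more bars ...
    RepeatEnding
    59      Bb7          // first ending
    RepeatEnd
    60      Eb           // second time through continues here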

    The Groove changes many times during the song, and I think this leads to the one bug I noticed: the time keeps changing, speeding up and slowing down. I think these match up with the Groove changes, but I am not yet certain.

    It should be fairly easy to translate one of my songs into this format.

    OpenWRT behind a Freebox: IPv6, DMZ and Bridge

    Posted by Guillaume Kulakowski on November 22, 2020 05:02 PM

    Although I am the very recent and happy owner of a Freebox Pop, I have chosen to keep delegating the management of my network, as well as my Wi-Fi sharing, not to the Pop but to OpenWRT. For me the advantages are the following: more control over the rules […]

    The post OpenWRT behind a Freebox: IPv6, DMZ and Bridge appeared first on Guillaume Kulakowski's blog.

    Fedora program update: 2020-47

    Posted by Fedora Community Blog on November 20, 2020 09:51 PM
    Fedora Program Manager weekly report on Fedora Project development and progress

    Here’s your report of what has happened in Fedora this week. Elections voting is open through 3 December. Fedora 31 will reach end-of-life on Tuesday. EPEL 6 will reach end-of-life on 30 November. Announcements Calls for Participation Help wanted Upcoming meetings Releases Announcements Elections voting CfPs Conference Location Date CfP Balkan FLOSStival 2020 virtual 5-6 […]

    The post Fedora program update: 2020-47 appeared first on Fedora Community Blog.

    Scaling Flathub 100x

    Posted by Alexander Larsson on November 20, 2020 04:06 PM

    Flatpak relies on OSTree to distribute apps. This means that flatpak repositories, such as Flathub, are really just OSTree repositories. At the core of an OSTree repository is the summary file, which describes the content of the repository.  This is similar to the metadata that “apt-get update” downloads.

    Every time you do a flatpak install it needs the information in the summary file. The file is cached between operations, but any time the repository changes the local copy needs to be updated.

    This can be pretty slow, with Flathub having around 1000 applications (times 4 architectures). In addition, the more applications there are, the more likely it is that one has been updated since the last time which means you need to update.

    This isn't yet a major problem for Flathub, but it's just a matter of time before it is, as apps keep getting added:

    This is particularly problematic if we want to add new architectures, as that multiplies the number of applications.

    So, the last month I’ve been working in OSTree and Flatpak to solve this by changing the flatpak repository format. Today I released Flatpak 1.9.2 which is the first version to support the new format, and Flathub is already serving it (and the old format for older clients).

    The new format is not only more efficient, it is also split by architecture meaning each user only downloads a subset of the total summary. Additionally there is a delta-based incremental update method for updating from a previous version.

    Here are some data for the latest Flathub summary:

    • Original summary: 6.6M (1.8M compressed)
    • New (x86-64) summary: 2.7M (554k compressed)
    • Delta from previous summary: 20k

    So, if you’re able to use the delta, then it needs 100 times less network bandwidth compared to the original (compressed) summary and will be much faster.
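
    If you want to check the current numbers yourself, something like this works (a minimal sketch; https://dl.flathub.org/repo/ is assumed to be Flathub's repository URL, and the sizes will have drifted from the figures above):

    curl -sL -o summary https://dl.flathub.org/repo/summary
    du -h summary                         # uncompressed size
    gzip -k summary && du -h summary.gz   # roughly what an old-style client transfers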

    Also, this means we can finally start looking at supporting other architectures in Flathub, as doing so will not inconvenience users of the major architectures.

    To the future and beyond!

    We can’t move forward by looking back

    Posted by Josh Bressers on November 19, 2020 03:24 PM

    For the last few weeks Kurt and I have been having a lively conversation about security rating scales. Is CVSS good enough? What about the Microsoft scale? Are there other scales we should be looking at? What's good, what's missing, and what should we be talking about?

    There's been a lot of back and forth and different ideas. Over the course of our discussions I've come to realize an important aspect of security: we don't look forward very often. What I mean by this is there is a very strong force in the world of security to use prior art to drive our future decisions. Except all of that prior art is comically out of date in the world of today.

    An easy example is existing security standards. All of the working groups that build the standards, and the ideas the working groups bring to the table, are using ideas from the past to solve problems for the future. You can argue that standards are at best a snapshot of the past, made in the present, to slow down the future. I will elaborate on that "slow down the future" line in a future blog post; for now I just want to focus on the larger problem.

    It might be easiest to use an example, so I shall pick on CVSS. The vast majority of ideas and content in a standard such as CVSS is heavily influenced by what once was. If you look at how CVSS scores things, it's clear a computer in a datacenter was in mind for many of the metrics. That was fine a decade ago, but it's not fine anymore. Right now anyone overly familiar with CVSS is screaming "BUT CVSS DOESN'T MEASURE RISK IT MEASURES SEVERITY", to which I will say: you are technically correct, nobody cares, and nobody uses it like this. Sit down. CVSS is a perfect example of the theory being out of touch with reality.

    Am I suggesting CVSS has no value? I am not. In its current form CVSS has some value (it should have a lot more). It's all we have today, so everyone is using it, and it's mostly good enough in the same way you can drive a nail with a rock. I have a suspicion it won't be turned into something truly valuable because it is a standard based on the past. I would like to say we should take this into account when we use CVSS, but nobody will. The people doing the work don't have time to care about improving something that's mostly OK, and the people building the standards don't do the work, so it's sort of like a Mexican standoff, but one where nobody showed up.

    There are basically two options for CVSS: don’t use it because it doesn’t work properly, or use it and just deal with the places it falls apart. Both of those are terrible options. There’s little chance it’s going to get better in the near future. There is a CVSSv4 design document here. If you look at it, does it look like something describing a modern cloud based architecture? They’ve been working on this for almost five years; do you remember what your architecture looked like even a year ago? For most of us in the world of IT a year is a lifetime now. Looking backwards isn’t going to make anything better.

    OK, I’ve picked on CVSS enough. The real reason to explain all of this is to change the way we think about problems. Trying to solve problems we already had in the past won’t help with problems we have today, or will have in the future. I think this is more about having a different mindset than security had in the past. If you look at the history of infosec and security, there has been a steady march of progress, but much of that progress has been slower than the forward movement of IT in general. What’s holding us back?

    Let’s break this down into People, Places, and Things

    People

    I used the line above, "The people doing the work don't have time to care, and the people building the standards don't do the work". What I mean by this is there are plenty of people doing amazing security work. We don't hear about them very often though, because they're busy working. Go talk to someone building detection rules for their SIEM; those are the people making a difference. They don't have time to work on the next version of CVSS. They probably don't even have the time to file a bug report against an open source project they use. There are many people in this situation in the security world. They are doing amazing work and getting zero credit. These are the heroes we need.

    But we have the heroes we deserve. If you look at many of the people working on standards, and giving keynotes, and writing blogs (oh hi), a lot of them live in a world that no longer exists. I willingly admit I used to live in a world that didn't exist. I had an obsession with problems nobody cared about because I didn't know what anyone was really doing. I didn't understand cloud, or detection, or compliance, or really anything new. Working at Elastic and seeing what our customers are accomplishing in the world of security has been a life changing experience. It made me realize some of those people I thought were leaders weren't actually making the world a better place. They were desperately trying to keep the world in a place where they were relevant and which they could understand.

    Places

    One of my favorite examples these days is the fact that cloud won, but a lot of people are still talking about data centers or “hybrid cloud” or some other term that means owning a computer. A data center is a place. Places don’t exist anymore, at least not for the people making a difference. Now there are reasons to have a data center, just like there are reasons to own an airplane. Those reasons are pretty niche and solve a unique problem. We’re not worried about those niche problems today.

    How many of our security standards focus on having a computer in a room, in a place? Too many. Why doesn’t your compliance document ask about the seatbelts on your airplane? Because you don’t own an airplane, just like you don’t (or shouldn’t) own a server. The world changed, security is still catching up. There are no places anymore. Trying to secure a server in a room isn’t actually helping anyone.

    Things

    Things is one of the most interesting topics today. How many of us have corporate policies that say you can only access company systems from your laptop, while connected to a VPN, and wearing a hat. Or some other draconian rule. Then how many of us have email on our personal phones? But that's not a VPN, or a hat, or a laptop! Trying to secure a device is silly because there are a near infinite number of devices and possible problems.

    We used to think about securing computers. Servers, desktops, laptops, maybe a router or two. Those are tangible things that exist. We can look at them, we can poke them with a stick, we can unplug them. We don't have real things to protect anymore and that's a challenge. It's hard to think about protecting something that we can't hold in our hand. The world has changed in such a way that the "things" we care about aren't even things anymore.

    The reality is we used to think of things as objects we use, but things of today are data. Data is everything now. Every service, system, and application we use is just a way to understand and move around data. How many of our policies and ideas focus on computers that don’t really exist instead of the data we access and manipulate?

    Everything new is old again

    I hope the one lesson you take away from all of this is to be wary of leaning on the past. The past contains lessons, not directions. Security exists in a world unlike any we’ve ever seen, the old rules are … old. But it’s also important to understand that even what we think of as a good idea today might not be a good idea tomorrow.

    Progress is ahead of you, not behind.

    Acer Aspire Switch 10 E SW3-016's and SW5-012's and S1002's horrible EFI firmware

    Posted by Hans de Goede on November 19, 2020 09:28 AM
    Recently I acquired an Acer Aspire Switch 10 E SW3-016; this device was the main reason for writing my blog post about the shim boot loop. The EFI firmware of this device is bad in a number of ways:

    1. It considers its eMMC unbootable unless its ESP contains an EFI/Microsoft/Boot/bootmgfw.efi file.

    2. But it will actually boot EFI/Boot/bootx64.efi ! (wait what? yes really)

    3. It will only boot from a USB disk connected to its micro-USB connector, not from the USB-A connector on the keyboard-dock.

    4. You must first set a BIOS admin password before you can disable secure-boot (which is necessary to boot home-built kernels without doing your own signing).

    5. Last but not least it has one more nasty "feature": it detects whether the OS being booted is Windows, Android or unknown, and it updates the ACPI DSDT based on this!

    Some more details on the OS detection misfeature. The ACPI Device (SDHB) node for the MMC controller connected to the SDIO wifi module contains:

            Name (WHID, "80860F14")
            Name (AHID, "INT33BB")


    Depending on what OS the BIOS thinks it is booting, it renames one of these 2 to _HID. This is weird given that it will only boot if EFI/Microsoft/Boot/bootmgfw.efi exists, but it still does this. Worse, it looks at the actual contents of EFI/Boot/bootx64.efi for this. It seems that that file must be signed, otherwise it goes into OS-unknown mode and keeps the 2 DSDT bits above as-is, so there is no _HID defined for the wifi's mmc controller and thus no wifi. I hit this issue when I replaced EFI/Boot/bootx64.efi with grubx64.efi to break the boot loop. grubx64.efi is not signed, so the DSDT as Linux saw it contained the above AML code. Using the proper workaround for the boot loop from my previous blog post, this bit of the DSDT morphs into:

            Name (_HID, "80860F14")
            Name (AHID, "INT33BB")


    And the wifi works.

    The Acer Aspire Switch 10 E SW3-016's firmware also triggers an actual bug / issue in Linux' ACPI implementation, causing the bluetooth to not work. This is discussed in much detail here. I have a patch series fixing this here.

    And the older Acer Aspire Switch 10 SW5-012's and S1002's firmware has some similar issues:

    1. It considers its eMMC unbootable unless its ESP contains an EFI/Microsoft/Boot/bootmgfw.efi file

    2. These models will actually always boot the EFI/Microsoft/Boot/bootmgfw.efi file, so that is somewhat more sensible.

    3. On the SW5-012 you must first set a BIOS admin password before you can disable secure-boot.

    4. The SW5-012 is missing an ACPI device node for the PWM controller used for controlling the backlight brightness. I guess that the Windows i915 gfx driver just directly pokes the registers (which are in a whole other IP block), rather than relying on a separate PWM driver as Linux does. Unfortunately there is no way to fix this other than using a DSDT overlay. I have a DSDT overlay for the v1.20 BIOS, and only for the v1.20 BIOS, available here.

    Because of 1. and 2. you need to take the following steps to get Linux to boot on the Acer Aspire Switch 10 SW5-012 or the S1002:

    1. Rename the original bootmgfw.efi (so that you can chainload it in the multi-boot case)

    2. Replace bootmgfw.efi with shimia32.efi

    3. Copy EFI/fedora/grubia32.efi to EFI/Microsoft/Boot

    This assumes that you have the files from a 32 bit Windows install in your ESP already.
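
    A minimal sketch of those three steps from a running Linux system, assuming the ESP is mounted at /boot/efi and that shimia32.efi and grubia32.efi are already installed under EFI/fedora (adjust paths to your layout):

    cd /boot/efi/EFI/Microsoft/Boot
    mv bootmgfw.efi bootmgfw-original.efi       # 1. keep the Windows boot manager for chainloading
    cp ../../fedora/shimia32.efi bootmgfw.efi   # 2. the firmware will now boot shim instead
    cp ../../fedora/grubia32.efi .              # 3. shim loads grub from the same directory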

    Release of osbuild-composer 25

    Posted by OSBuild Project on November 19, 2020 12:00 AM

    We are happy to announce that we released osbuild-composer 25. It now supports building RHEL 8.4. 🤗

    Below you can find the official change log, compiled by Ondřej Budai. Everyone is encouraged to upgrade!


    • Composer now supports RHEL 8.4! Big thanks to Jacob Kozol! If you want to build RHEL 8.4 using Composer API or Composer API for Koji, remember to pass “rhel-84” as a distribution name.

    • Composer can now be started without the Weldr API. If you need it, start osbuild-composer.socket before osbuild-composer.service is started (see the sketch after this list). Note that cockpit-composer starts osbuild-composer.socket, so this change is backward compatible.

    • When a Koji call failed, both osbuild-composer and osbuild-worker errored out. This is now fixed.

    • The dependency on osbuild in the spec file is now moved to the worker subpackage. This was a mistake that could cause the worker to use an incompatible version of osbuild.

    • As always, testing got some upgrades. This time, mostly in the way we build our testing RPMs.
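
    A minimal sketch for hosts that still want the Weldr API (unit names taken from the changelog entry above; assumes a systemd-based install of the osbuild-composer package):

    # socket activation: enabling the socket keeps the Weldr API available
    sudo systemctl enable --now osbuild-composer.socket
    # the main service can then be (re)started as usual
    sudo systemctl restart osbuild-composer.service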

    Contributions from: Jacob Kozol, Lars Karlitski, Ondřej Budai, Tom Gundersen

    — Liberec, 2020-11-19

    Release of koji-osbuild 3

    Posted by OSBuild Project on November 19, 2020 12:00 AM

    We are happy to announce that we released koji-osbuild 3, our new project to integrate osbuild-composer with koji, the build and tracking system primarily used by the Fedora Project and Red Hat.

    Below you can find the official change log, compiled by Christian Kellner.


    • Ship tests in the koji-osbuild-tests package. The tests got reworked so that they can be installed and run from the installation. This will be useful for reverse dependency testing, i.e. testing the plugins from other projects, like composer, as well as in gating tests.

    • Add the ability to skip the tagging. A new command line option, --skip-tag, is added, which translates into a new field in the options for the hub and builder. If that option is present, the builder plugin will skip the tagging step.

    • builder plugin: the compose status is attached to the koji task as compose-status.json and updated whenever it is fetched from composer. This makes it possible to follow the individual image builds.

    • builder plugin: The new logs API, introduced in composer version 24, is used to fetch and attach build logs as well as the koji init/import logs.

    • builder plugin: Support for dynamic build ids, i.e. don't use the koji build id returned from the compose request API call but use the new koji_build_id field included in the compose status response. This makes koji-osbuild depend on osbuild-composer 24!

    • test: lots of improvements to the tests and CI, e.g. using the Quay mirror for the postgres container or matching the container versions to the host.

    Contributions from: Christian Kellner, Lars Karlitski, Ondřej Budai

    — Berlin, 2020-11-19

    Release of cockpit-composer 26

    Posted by OSBuild Project on November 19, 2020 12:00 AM

    We are happy to announce the release of cockpit-composer 26. This release has no major new features, but contains useful fixes.

    Below you can find the official change log, compiled by Jacob Kozol. Everyone is encouraged to upgrade!


    • Add additional form validation for the Create Image Wizard
    • Improve page size dropdown styling
    • Update minor NPM dependencies
    • Improve code styling
    • Improve test reliability

    Contributions from: Jenn Giardino, Jacob Kozol, Martin Pitt, Sanne Raymaekers, Xiaofeng Wang

    — Berlin, 2020-11-19

    Fedora 33 elections voting now open

    Posted by Fedora Community Blog on November 19, 2020 12:00 AM

    Voting in the Fedora 33 elections is now open. Go to the Elections app to cast your vote. Voting closes at 23:59 UTC on Thursday 3 December. Don’t forget to claim your “I Voted” badge when you cast your ballot. Links to candidate interviews are below. Fedora Council There is one seat open on the Fedora Council. Tom Callaway […]

    The post Fedora 33 elections voting now open appeared first on Fedora Community Blog.

    RTL (bidi) in Nextcloud (Farsi)

    Posted by Ahmad Haghighi on November 19, 2020 12:00 AM

    The security and confidentiality of our data, and where, how, and by whom it is stored, matter to many individuals and to companies and organizations; in other words, it is a real concern. For this reason (plus other reasons such as foreign sanctions or the state of domestic laws), companies and individuals conclude that the best option is to manage and store their data themselves and to truly and fully own it.

    Without a doubt, Nextcloud (https://nextcloud.com) is one of the best cloud solutions available. Not only does it have plenty of features and free apps, but most importantly it is free software, with an active, dynamic, and growing team.

    Since this post is not meant to introduce Nextcloud, I will leave it at that; you can find more information on the official Nextcloud website at nextcloud.com.

    You do not need many or heavy resources to run a personal Nextcloud server, so it is not the case that you must have a business or a company before thinking about your own personal cloud server. If you care about the security, ownership, and confidentiality of your data, you can get a cheap server with modest resources and use it for yourself and your family without any problems.

    Fixing bidirectional (Bidi) text issues in Nextcloud

    To do this, first go to the Apps section, search for Custom CSS, then install and enable it.

    nextcloud-bidi-custom-css-app

    Then go to Settings and, under the Administration section, open the theme settings (Theming):

    nextcloud-bidi-custom-css-settings

    Now enter the CSS you would like applied to the whole server in this section and press Save. The changes take effect once saved.

    Suggested script to use as the Custom CSS:

    /* let each element pick its direction from its first strong character
       (useful for mixed Farsi/English content); direction and text-align
       act as fallbacks */
    p,h1,div,span,a,ul,h2,h3,h4,li,input {
        direction: ltr;
        unicode-bidi: plaintext;
        text-align: initial;
    }
    

    Attached images

    nextcloud-bidi-deck
    nextcloud-bidi-talk

    Mindshare election: Interview with Nasir Hussain (nasirhm)

    Posted by Fedora Community Blog on November 18, 2020 11:55 PM

    This is a part of the Mindshare Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Thursday, 19 November and closes promptly at 23:59:59 UTC on Thursday, 3 December 2020. Interview with Nasir Hussain Fedora Account: nasirhm IRC: nasirhm, nasirhm[m] (found in fedora-i3 #fedora-mindshare #fedora-badges #fedora-mote #fedora-noc #fedora-admin #fedora-devel) Fedora User Wiki Page […]

    The post Mindshare election: Interview with Nasir Hussain (nasirhm) appeared first on Fedora Community Blog.