Fedora People

Adobe is planning to end-of-life Flash.

Posted by mythcat on July 26, 2017 03:41 PM
According to this article, Adobe is planning to end-of-life Flash.

This will happen at the end of 2020, and Adobe encourages content creators to migrate any existing Flash content to new open formats.
Adobe will also remain fully committed to working with partners (Apple, Facebook, Google, Microsoft and Mozilla) to maintain the security and compatibility of Flash content.

The article was posted by Adobe Corporate Communications on July 25, 2017.

New Evince format support: Adobe Illustrator and CBR files

Posted by Bastien Nocera on July 26, 2017 01:42 PM
A quick update, as we've touched upon Evince recently.

I mentioned that we switched from using external tools for decompression to using libarchive. That's not the whole truth: we switched to using libarchive for CBZ, CB7 and the infamous CBT, but used a copy/pasted version of unarr to support RAR files, as libarchive's RAR support lacks some features we need.

We hope to eventually remove the internal copy of unarr, but, as a stop-gap, that allowed us to start supporting CBR comics out of the box, and it's always a good thing when you have one less non-free package to grab from somewhere to access your media.

The second new format is really two formats, from either side of the two-digit-year divide: PostScript-based Adobe Illustrator and PDF-based Adobe Illustrator. Evince now declares support for "the format" if both backends are built and supported. It only took 12 years, and somebody stumbling upon the feature request while doing bug triage. Such are the nooks and crannies of free software where the easy feature requests get lost :)


Both features will appear in GNOME 3.26; the out-of-the-box CBR support, however, is already available in an update for the just-released Fedora 26.

Post Quantum Cryptography

Posted by Red Hat Security on July 26, 2017 01:30 PM

Traditional computers are binary digital electronic devices based on transistors. They store information encoded in the form of binary digits, each of which can be either 0 or 1. Quantum computers, in contrast, use quantum bits, or qubits, which can store information as 0, 1, or a superposition of both at the same time. Quantum mechanical phenomena such as entanglement and tunnelling allow these quantum computers to handle a large number of states at the same time.

Quantum computers are probabilistic rather than deterministic. Large-scale quantum computers would theoretically be able to solve certain problems much more quickly than any classical computer using even the best currently known algorithms. Quantum computers may be able to efficiently solve problems which are not practically feasible on classical computers. Practical quantum computers will have serious implications for existing cryptographic primitives.

Most cryptographic protocols are made of two main parts: the key negotiation algorithm, which is used to establish a secure channel, and the symmetric or bulk encryption cipher, which does the actual protection of the channel via encryption/decryption between the client and the server.

The SSL/TLS protocol uses RSA, Diffie-Hellman (DH) or Elliptic Curve Diffie-Hellman (ECDH) primitives for the key exchange algorithm. These primitives are based on hard mathematical problems which are easy to solve when the private key is known, but computationally intensive without it. For example, RSA is based on the fact that, given the product of two large prime numbers, factorizing that product (which forms the public key) is computationally intensive. By comparison, a quantum computer could efficiently solve this problem using Shor's algorithm to find the factors of the public key. This ability would allow a quantum computer to break many of the cryptographic systems in use today. Similarly, DH and ECDH key exchanges could all be broken very easily by sufficiently large quantum computers.
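As a toy illustration of this asymmetry: for small numbers, even the coreutils `factor` command recovers the primes instantly, while for a real 2048-bit RSA modulus no classical method is practical. The primes below are arbitrary stand-ins, thousands of digits too small for real use:

```shell
# Build a tiny "RSA modulus" from two small primes, then factor it.
# Real RSA primes are ~1024 bits each; factoring their product is
# classically infeasible, but tractable for Shor's algorithm.
p=104729      # the 10,000th prime
q=1299709     # the 100,000th prime
n=$(( p * q ))
factor "${n}"   # prints: 136117223861: 104729 1299709
```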

For symmetric ciphers, the story is slightly different. It has been proven that applying Grover's algorithm to break a symmetric (secret key) algorithm with a key length of n bits by brute force requires roughly 2^(n/2) invocations of the underlying cryptographic algorithm, compared with roughly 2^n in the classical case. This means that the effective strength of symmetric key lengths is halved: AES-256 would have the same security against an attack using Grover's algorithm that AES-128 has against classical brute-force search. The situation with symmetric ciphers is therefore stronger than that of public key cryptosystems.

Hashes are also affected in the same way symmetric algorithms are: Grover's algorithm requires twice the hash size (as compared to the current safe values) for the same cryptographic security.
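This halving rule is simple enough to state as arithmetic; here is a quick sketch of the effective post-quantum strength of common symmetric key sizes:

```shell
# Rule of thumb from Grover's algorithm: the effective quantum security
# of a symmetric key (or hash output) is half its length in bits.
for bits in 128 192 256; do
    echo "${bits}-bit key -> ~$(( bits / 2 ))-bit quantum security"
done
```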

Therefore, we need new algorithms which are resistant to quantum computation. Currently there are five proposals under study:

Lattice-based cryptography

A lattice is the symmetry group of discrete translational symmetry in n directions. This approach includes cryptographic systems such as Learning with Errors, Ring-Learning with Errors (Ring-LWE), the Ring Learning with Errors Key Exchange and the Ring Learning with Errors Signature. Some of these schemes (like NTRU encryption) have been studied for many years without any known feasible attack vectors and hold great promise. On the other hand, there is no supporting proof of security for NTRU against quantum computers.

Lattice-based cryptography is interesting because it allows the use of traditionally short key sizes to provide the same level of security. No short-key system has a proof of hardness (long-key versions do). The possibility exists that a quantum algorithm could solve the lattice problem, and the short-key systems may be the most vulnerable.

Multivariate cryptography

Multivariate cryptography includes cryptographic systems such as the Rainbow scheme which is based on the difficulty of solving systems of multivariate equations. Multivariate signature schemes like Rainbow could provide the basis for a quantum secure digital signature, though various attempts to build secure multivariate equation encryption schemes have failed.

Several practical key-size versions have been proposed and broken. The EU has standardized on a few; unfortunately, those have all been broken (classically).

Hash-based cryptography

Hash-based digital signatures were invented in the late 1970s by Ralph Merkle and have been studied ever since as an interesting alternative to number-theoretic digital signatures like RSA and DSA. The primary drawback of any hash-based public key is a limit on the number of signatures that can be made with the corresponding set of private keys. This reduced interest in these signatures until the desire for quantum-resistant cryptography revived it. Schemes that allow an unlimited number of signatures (called 'stateless') have now been proposed.

Hash-based systems are provably secure as long as hashes are not invertible. The primary issue with hash-based systems is their quite large signature sizes. They also only provide signatures, not key exchange.

Code-based cryptography

Code-based cryptography includes cryptographic systems which rely on error-correcting codes. The original McEliece cryptosystem, using random Goppa codes, has withstood scrutiny for over 30 years. The Post Quantum Cryptography Study Group sponsored by the European Commission has recommended the McEliece public key encryption system as a candidate for long-term protection against attacks by quantum computers. The downside, however, is that code-based cryptography has large key sizes.

Supersingular elliptic curve isogeny cryptography

This cryptographic system relies on the properties of supersingular elliptic curves to create a Diffie-Hellman replacement with forward secrecy. Because it works much like existing Diffie-Hellman implementations, it offers forward secrecy, which is viewed as important both to prevent mass surveillance by governments and to protect against the compromise of long-term keys.

Implementations

Since this is a dynamic field, with cycles of algorithms being defined and broken, there are no standardized solutions. NIST's post-quantum cryptography competition provides a good chance to develop new primitives to power software for the coming decades. It is worth mentioning that Google Chrome, for some time, experimentally implemented the NewHope algorithm, which is part of the Ring Learning-with-Errors (RLWE) family. That experiment has since concluded.

In conclusion, which post-quantum cryptographic scheme is ultimately accepted will depend on how quickly quantum computers become viable and available, at minimum to state agencies if not to the general public.

Category

Secure

Summer recipes

Posted by Matthias Clasen on July 26, 2017 12:44 PM

I just did a release of GNOME recipes in time for GUADEC.

While this is a development release, I think it may be time to declare this version (1.6) stable. Therefore, I encourage you to try it out and let me know if something doesn’t work.

Here is a quick look at the highlights in this release.

Inline editing

Inline editing of ingredients has been completed. We provide completions for both units and ingredients.

Error handling

And if something goes wrong, we try to be helpful by pointing out the erroneous entry and explaining the required format.

Printing

The printed form of recipes has seen several small improvements. Fields like cuisine and season are now included, and the ingredients list is properly aligned in columns.

Lists

All lists of recipes can now be sorted by recency or by name.

If you happen to be in Manchester this weekend, you can learn more about GNOME recipes by coming to our talk.

Running Wayland on the Nvidia driver

Posted by Christian F.K. Schaller on July 26, 2017 12:21 PM

I know many of you have wanted to test running Wayland on NVidia. The work on this continues between Jonas Ådahl, Adam Jackson and various developers at NVidia. It is not ready for primetime yet, as we are still working on the server-side glvnd piece we need for XWayland. That said, with Adam Jackson looking at this from our side and Kyle Brenneman looking at it from NVidia's, I am sure we will be able to hash out the remaining open questions and get that done.

In the meantime Miguel A. Vico from NVidia has set up a Copr to let people start testing using EGLStreams under Wayland. I haven’t tested it myself yet, but if you do and have trouble make sure to let Miguel and Jonas know.

As a sidenote, I am heading off to GUADEC in Manchester tomorrow, and we plan to discuss efforts like these there. We have team members like Jonas Ådahl flying in from Taiwan and Peter Hutterer flying in from Australia, so it will be a great chance to meet core developers who are far away from us in terms of timezone and geographical distance. GUADEC this year should be a lot of fun, and from what I hear we are going to have record-level attendance based on early registration numbers. So if you can make it to Manchester, I strongly recommend joining us, as I think this year's event will have a lot of energy and plenty of interesting discussions on what the next steps are for GNOME.

How to use the same SSH key pair in all AWS regions

Posted by Fedora Magazine on July 26, 2017 08:00 AM

This article shows how to use the AWS Command Line Interface (AWS CLI) to configure a single SSH key pair on multiple AWS Regions. By doing this you can access EC2 instances from different regions using the same SSH key pair.

Installing and configuring AWS CLI

Start by installing and configuring the AWS command line interface:

sudo dnf install awscli
aws configure

Verify the AWS CLI installed correctly:

aws --version
aws-cli/1.11.109 Python/3.6.1 Linux/4.11.10-300.fc26.x86_64 botocore/1.5.72

Configuring the SSH key pair

If you don’t have an SSH key pair, or want to follow this article using a new one, generate a key and extract its public part:

openssl genrsa -out ~/.ssh/aws.pem 2048
ssh-keygen -y -f ~/.ssh/aws.pem > ~/.ssh/aws.pub

If you already have an SSH private key created using the AWS Console, extract the public key from it:

ssh-keygen -y -f ~/.ssh/aws.pem > ~/.ssh/aws.pub
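As an optional sanity check (not part of the original steps), recent OpenSSH can compute a fingerprint directly from the private key, so you can confirm that the extracted public key really matches it:

```shell
# The fingerprints of the private key and the derived public key
# must be identical; a mismatch means the wrong file was used.
ssh-keygen -lf ~/.ssh/aws.pem
ssh-keygen -lf ~/.ssh/aws.pub
```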

Importing the SSH key pair

Now that you have the public key, declare a variable AWS_REGION containing the list of regions to which you want to copy your SSH key. To check the full list of available AWS regions, use this link.

AWS_REGION="us-east-1 us-east-2 us-west-1 us-west-2 ap-south-1 eu-central-1 eu-west-1 eu-west-2"

If you don’t want to specify each region manually, you can use the ec2 describe-regions command to get a list of all available regions:

AWS_REGION=$(aws ec2 describe-regions --output text | awk '{print $3}' | xargs)

Next, import the SSH public key to these regions, substituting your key’s name for <MyKey>:

for each in ${AWS_REGION} ; do aws ec2 import-key-pair --key-name <MyKey> --public-key-material file://~/.ssh/aws.pub --region ${each} ; done
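If you want to preview exactly what will run before touching your AWS account, a dry run that only prints each command is a safe first step; the key name MyKey below is a placeholder:

```shell
# Dry run: print one import command per region instead of executing it.
# Drop the leading 'echo' once the output looks right.
AWS_REGION="us-east-1 us-east-2 eu-west-1"
for each in ${AWS_REGION}; do
    echo aws ec2 import-key-pair --key-name MyKey \
        --public-key-material file://~/.ssh/aws.pub --region "${each}"
done
```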

Also, if you want to display which SSH key is available in a region:

aws ec2 describe-key-pairs --region REGION

To delete an SSH key from a region:

aws ec2 delete-key-pair --key-name <MyKey> --region REGION

Congratulations, now you can use the same SSH key to access all your instances in the regions where you copied it. Enjoy!

FAD Latam - Final Report

Posted by Alex Irmel Oviedo Solis on July 26, 2017 05:00 AM

Most FADs (Fedora Activity Days) have been technical, but this time we
managed to hold an organizational FAD that would allow the ambassadors to achieve
larger objectives and contribute to the community in a better way.
Group photo

First day

On our first day, we were warmly welcomed by the authorities of the
Universidad Global del Cusco with a small but very cordial
ceremony, where the rector encouraged us to work and contribute to the community
and to society.

Next on the list was a small diagnostic exercise, which we carried out
through questions that helped us a lot.

Whiteboard from the first day

The next task was a titanic one: building a SWOT matrix, which was full of debate
and plenty of controversy, but also many good ideas.

Whiteboard from the first day

Second day

During the second day we worked on strategies to achieve our objectives
and grow the community.

A brief summary of part of our work is the matrix:

SWOT matrix

Third day

On the last day we discussed very interesting topics regarding the logistical and budgetary framework.

After so much work we took a ride around the city in the famous combi, which
I suppose was a new experience for some.
On the combi

Results

The results of this exhausting activity will soon be visible on the community
wiki page, together with the reports of the other participants, at the following
address:

Friendship and more friendship

Despite the discussions, controversies and all the hard work, all the objectives
were met. I want to give very special thanks to Lincoln Delgado, a great friend
who made it possible for the Universidad Global to lend us its facilities. I also
want to thank the ambassadors who attended this activity:

I would also like to give very special thanks to the members of the UAC-CIRCLE group
at the Universidad Andina del Cusco, who supported us with
translations and many ideas.

To close this report I want to share some photos of the event with you, and I hope
we can keep bringing Fedora contributors together :-)

Complete gallery

FAD Latam - Final Report

Posted by Alex Irmel Oviedo Solis on July 26, 2017 05:00 AM

The FADs (Fedora Activity Days) have mostly been technical, but this time
we managed to hold an organizational FAD that allowed the ambassadors to achieve
larger objectives and contribute to the community in a better way.
Group photo

First day

On our first day, we were warmly welcomed by the authorities of the
Universidad Global del Cusco with a small but very cordial
ceremony, where the rector encouraged us to work and contribute to the community
and to society.

Next on the agenda was a small diagnostic exercise, which we carried out through questions that helped us a lot.

Whiteboard from the first day

The next task was a titanic one: building a SWOT matrix, full of debate and plenty of controversy, but also many good ideas.

Whiteboard from the first day

Second day

During the second day we thought about the strategies to achieve our goals and how to grow the community.

A brief summary of part of our work is the matrix:

SWOT matrix

Third day

The last day dealt with very interesting issues regarding the logistical and budgetary framework.

After so much work we took a ride around the city in the famous combi (Peruvian public transport),
which I guess was a new experience for some.
On the combi

Results

The results of this activity will soon be visible on the community wiki page, together with the reports of the other participants, at the following address:

Friendship and more friendship

In spite of the controversies and all the hard work, all the objectives were fulfilled. I want to thank Lincoln Delgado in a very special way, a great friend who made it possible for the Universidad Global to lend us its facilities.

I also want to thank the ambassadors who attended this activity:

I would like to give very special thanks to the members of the UAC-CIRCLE group
at the Universidad Andina del Cusco (http://www.uandina.edu.pe), who supported us with
translations and many ideas.

To close this report I want to share some photos of the event with you, and I hope
we can continue bringing Fedora contributors together :-)

Complete gallery here

Security by Isolating Insecurity

Posted by Russel Doty on July 25, 2017 10:14 PM

In my previous post I introduced “Goldilocks Security”, proposing three approaches to security.

Solution 1: Ignore Security

Safety in the crowd – with tens of millions of cameras out there, why would anyone pick mine? The problem is that the bad guys won't pick just yours – they will pick all of them! Automated search and penetration tools easily find millions of IP cameras. You will indeed be lost in the crowd – the crowd of bots!

Solution 2: Secure the Cameras

For home and small business customers, a secure-the-cameras approach simply won't work: ease of use wins out over effective security in product design, and the camera vendors' business model (low cost, ease of use, and access over the Internet) conspires against security. What's left?

Solution 3: Isolation

If the IP cameras can’t be safely placed on the Internet, then isolate them from the Internet.

To do this, introduce an IoT Gateway between the cameras and all other systems. This IoT Gateway would have two network interfaces: one network interface dedicated to the cameras and the second network interface used to connect to the outside world. An application running on the IoT Gateway would talk to the IP cameras and then talk to the outside world (if needed). There would be no network connection between the IP cameras and anything other than the IoT Gateway application. The IoT Gateway would also be hardened and actively managed for best security.

How is this implemented?

  • Put the IP cameras on a dedicated network. This should be a separate physical network. At a minimum it should be a VLAN (Virtual LAN). There will typically be a relatively small number of IP cameras in use, so a dedicated network switch, probably with PoE, is cost effective.
    • Use static IP addresses. If the IP cameras are assigned static IP addresses, there is no need to have an IP gateway or DNS server on the network segment. This further reduces the ability of the IP cameras to get out on the network. You lose the convenience of DHCP-assigned addresses and gain significant security.
    • You can have multiple separate networks. For example, you might have one for external cameras, one for cameras in interior public spaces, one for manufacturing space and one for labs. With this configuration, someone gaining access to the exterior network would not be able to gain access to the lab cameras.
  • Add an IoT Gateway – a computer with a network interface connected to the camera network. In the example above, the gateway would have four network interfaces – one for each camera network. The IoT Gateway would probably also be connected to the corporate network; this would require a fifth network interface. Note that you can have multiple IoT Gateways, such as one for each camera network, one for a building management system, one for other security systems, and one that connects an entire building or campus to the Internet.
  • Use a video monitoring program such as ZoneMinder or a commercial program to receive, monitor and display the video data. Such a program can monitor multiple camera feeds, analyze the video feeds for things such as motion detection, record multiple video streams, and create events and alerts. These events and alerts can do things like trigger alarms, send emails, send text messages, or trigger other business rules. Note that the video monitoring program further isolates the cameras from the Internet – the cameras talk to the video monitoring program and the video monitoring program talks to the outside world.
  • Sandbox the video monitoring program using tools like SELinux and containers. These both protect the application and protect the rest of the system from the application – even if the application is compromised, it won’t be able to attack the rest of the system.
  • Remove any unneeded services from the IoT Gateway. This is a dedicated device performing a small set of tasks. There shouldn’t be any software on the system that is not needed to perform these tasks – no development tools, no extraneous programs, no unneeded services running.
  • Run the video monitoring program with minimal privileges. This program should not require root level access.
  • Configure strong firewall settings on the IoT Gateway. Only allow required communications. For example, only allow communications with specific IP addresses or MAC addresses (the IP cameras configured into the system) over specific ports using specific protocols. You can also configure the firewall to only allow specific applications access to the network port. These settings would keep anything other than authorized cameras from accessing the gateway and keep the authorized cameras from talking to anything other than the video monitoring application. This approach also protects the cameras. Anyone attempting to attack the cameras from the Internet would need to penetrate the IoT Gateway and then change settings such as the firewall and SELinux before they could get to the cameras.
  • Use strong access controls. Multi-factor authentication is a really good idea. Of course you have a separate account for each user, and assign each user the minimum privilege they need to do their job. Most of the time you don’t need to be logged in to the system – most video monitoring applications can display on the lock screen, allowing visual monitoring of the video streams without being able to change the system. For remote gateways interactive access isn’t needed at all; they simply process sensor data and send it to a remote system.
  • Other systems should be able to verify the identity of the IoT Gateway. A common way to do this is to install a certificate on the gateway. Each gateway should have a unique certificate, which can be provided by systems like Linux IdM or MS Active Directory. Even greater security can be provided by placing the system identity into a hardware root of trust like a TPM (Trusted Platform Module), which prevents the identity from being copied, cloned, or spoofed.
  • Encrypted communication is always a good idea for security. Encryption protects the contents of the video stream from being revealed, prevents the contents of the video stream from being modified or spoofed, and verifies the integrity of the video stream – any modifications of the encrypted traffic, either deliberate or due to network error, are detected. Further, if you configure a VPN (Virtual Private Network) between the IoT Gateway and backend systems you can force all network traffic through the VPN, thus preventing network attacks against the IoT Gateway. For security systems it is good practice to encrypt all traffic, both internal and external.
  • Proactively manage the IoT Gateway. Regularly update it to get the latest security patches and bug fixes. Scan it regularly with tools like OpenSCAP to maintain secure configuration. Monitor logfiles for anomalies that might be related to security events, hardware issues, or software issues.
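As a sketch of the strong-firewall point above, assuming firewalld (the Fedora default) and illustrative names – interface enp3s0 for the camera network, camera subnet 192.168.50.0/24, and cameras streaming RTSP on TCP port 554 – a dedicated locked-down zone could look like this:

```shell
# Create a locked-down firewalld zone for the camera interface:
# default target DROP, allowing only RTSP from the camera subnet.
firewall-cmd --permanent --new-zone=cameras
firewall-cmd --permanent --zone=cameras --set-target=DROP
firewall-cmd --permanent --zone=cameras --change-interface=enp3s0
firewall-cmd --permanent --zone=cameras \
    --add-rich-rule='rule family="ipv4" source address="192.168.50.0/24" port port="554" protocol="tcp" accept'
firewall-cmd --reload
```

The interface name, subnet, and port are placeholders; adapt them to the cameras and protocols actually in use.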

You can see how a properly configured IoT Gateway can allow you to use insecure IoT devices as part of a secure system. This approach isn’t perfect – the cameras should also be managed like the gateway – but it is a viable approach to building a reasonably secure and robust system out of insecure devices.

One issue is that the cameras are not protected from local attack. If WiFi is used, the attacker only needs to be nearby. If Ethernet is used, an attacker can add another device to the network. This is difficult, as they would need to gain access to the network switch and find a live port on the proper network. Attacking the Ethernet cable leaves signs, including network glitches. Physically attacking a camera also leaves signs. All of this can be done, but it is more challenging than a network-based attack over the Internet and can be managed through physical security and good network monitoring. These are some of the reasons why I strongly prefer wired network connections over wireless ones.


Troubleshooting CyberPower PowerPanel issues in Linux

Posted by Major Hayden on July 25, 2017 06:16 PM

I have a CyberPower BRG1350AVRLCD at home and I’ve just connected it to a new device. However, the pwrstat command doesn’t retrieve any useful data on the new system:

# pwrstat -status

The UPS information shows as following:


    Current UPS status:
        State........................ Normal
        Power Supply by.............. Utility Power
        Last Power Event............. None

I disconnected the USB cable and ran pwrstat again. Same output. I disconnected power from the UPS itself and ran pwrstat again. Same output. This can’t be right.

Checking the basics

A quick look at dmesg output shows that the UPS is connected and the kernel recognizes it:

[   65.661489] usb 3-1: new full-speed USB device number 7 using xhci_hcd
[   65.830769] usb 3-1: New USB device found, idVendor=0764, idProduct=0501
[   65.830771] usb 3-1: New USB device strings: Mfr=3, Product=1, SerialNumber=2
[   65.830772] usb 3-1: Product: BRG1350AVRLCD
[   65.830773] usb 3-1: Manufacturer: CPS
[   65.830773] usb 3-1: SerialNumber: xxxxxxxxx
[   65.837801] hid-generic 0003:0764:0501.0004: hiddev0,hidraw0: USB HID v1.10 Device [CPS BRG1350AVRLCD] on usb-0000:00:14.0-1/input0

I checked the /var/log/pwrstatd.log file to see if there were any errors:

2017/07/25 12:01:17 PM  Daemon startups.
2017/07/25 12:01:24 PM  Communication is established.
2017/07/25 12:01:27 PM  Low Battery capacity is restored.
2017/07/25 12:05:19 PM  Daemon stops its service.
2017/07/25 12:05:19 PM  Daemon startups.
2017/07/25 12:05:19 PM  Communication is established.
2017/07/25 12:05:22 PM  Low Battery capacity is restored.
2017/07/25 12:06:27 PM  Daemon stops its service.

The pwrstatd daemon claims it can see the device and communicate with it, yet pwrstat shows nothing useful. This is unusual.

Digging into the daemon

If the daemon can truly see the UPS, then what is it talking to? I used lsof to examine what the pwrstatd daemon is doing:

# lsof -p 3975
COMMAND   PID USER   FD   TYPE             DEVICE SIZE/OFF      NODE NAME
pwrstatd 3975 root  cwd    DIR               8,68      224        96 /
pwrstatd 3975 root  rtd    DIR               8,68      224        96 /
pwrstatd 3975 root  txt    REG               8,68   224175 134439879 /usr/sbin/pwrstatd
pwrstatd 3975 root  mem    REG               8,68  2163104 134218946 /usr/lib64/libc-2.25.so
pwrstatd 3975 root  mem    REG               8,68  1226368 134218952 /usr/lib64/libm-2.25.so
pwrstatd 3975 root  mem    REG               8,68    19496 134218950 /usr/lib64/libdl-2.25.so
pwrstatd 3975 root  mem    REG               8,68   187552 134218939 /usr/lib64/ld-2.25.so
pwrstatd 3975 root    0r   CHR                1,3      0t0      1028 /dev/null
pwrstatd 3975 root    1u  unix 0xffff9e395e137400      0t0     37320 type=STREAM
pwrstatd 3975 root    2u  unix 0xffff9e395e137400      0t0     37320 type=STREAM
pwrstatd 3975 root    3u  unix 0xffff9e392f0c0c00      0t0     39485 /var/pwrstatd.ipc type=STREAM
pwrstatd 3975 root    4u   CHR             180,96      0t0     50282 /dev/ttyS1

Wait a minute. The last line of the lsof output shows that pwrstatd is talking to /dev/ttyS1, but the device is supposed to be a hiddev device over USB. If you remember, we had this line in dmesg when the UPS was plugged in:

hid-generic 0003:0764:0501.0004: hiddev0,hidraw0: USB HID v1.10 Device [CPS BRG1350AVRLCD] on usb-0000:00:14.0-1/input0

Things are beginning to make more sense now. I have a USB-to-serial device that allows my server to talk to the console port on my Cisco switch:

[   80.389533] usb 3-1: new full-speed USB device number 9 using xhci_hcd
[   80.558025] usb 3-1: New USB device found, idVendor=067b, idProduct=2303
[   80.558027] usb 3-1: New USB device strings: Mfr=1, Product=2, SerialNumber=0
[   80.558028] usb 3-1: Product: USB-Serial Controller D
[   80.558029] usb 3-1: Manufacturer: Prolific Technology Inc. 
[   80.558308] pl2303 3-1:1.0: pl2303 converter detected
[   80.559937] usb 3-1: pl2303 converter now attached to ttyUSB0

It appears that pwrstatd is trying to talk to my Cisco switch (through the USB-to-serial adapter) rather than my UPS! I’m sure they could have a great conversation together, but it’s hardly productive.

Fixing it

The /etc/pwrstatd.conf has a relevant section:

# The pwrstatd accepts four types of device node which includes the 'ttyS',
# 'ttyUSB', 'hiddev', and 'libusb' for communication with UPS. The pwrstatd
# defaults to enumerate all acceptable device nodes and pick up to use an
# available device node automatically. But this may cause a disturbance to the
# device node which is occupied by other software. Therefore, you can restrict
# this enumerate behave by using allowed-device-nodes option. You can assign
# the single device node path or multiple device node paths divided by a
# semicolon at this option. All groups of 'ttyS', 'ttyUSB', 'hiddev', or
# 'libusb' device node are enumerated without a suffix number assignment.
# Note, the 'libusb' does not support suffix number only.
#
# For example: restrict to use ttyS1, ttyS2 and hiddev1 device nodes at /dev
# path only.
# allowed-device-nodes = /dev/ttyS1;/dev/ttyS2;/dev/hiddev1
#
# For example: restrict to use ttyS and ttyUSB two groups of device node at
# /dev,/dev/usb, and /dev/usb/hid paths(includes ttyS0 to ttySN and ttyUSB0 to
# ttyUSBN, N is number).
# allowed-device-nodes = ttyS;ttyUSB
#
# For example: restrict to use hiddev group of device node at /dev,/dev/usb,
# and /dev/usb/hid paths(includes hiddev0 to hiddevN, N is number).
# allowed-device-nodes = hiddev
#
# For example: restrict to use libusb device.
# allowed-device-nodes = libusb
allowed-device-nodes =

We need to explicitly tell pwrstatd to talk to the UPS on /dev/usb/hiddev0:

allowed-device-nodes = /dev/usb/hiddev0
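As the comments in pwrstatd.conf describe, the option takes one or more device-node paths (or group names) separated by semicolons. A quick sketch of that parsing rule, using a hypothetical helper that is not part of pwrstatd itself:

```python
def parse_allowed_device_nodes(line):
    """Split a pwrstatd 'allowed-device-nodes' value into a list of
    device nodes; entries are separated by semicolons."""
    _, _, value = line.partition("=")
    return [node.strip() for node in value.split(";") if node.strip()]

print(parse_allowed_device_nodes("allowed-device-nodes = /dev/usb/hiddev0"))
print(parse_allowed_device_nodes("allowed-device-nodes = /dev/ttyS1;/dev/ttyS2;/dev/hiddev1"))
```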

Let’s restart the pwrstatd daemon and see what we get:

# systemctl restart pwrstatd
# pwrstat -status

The UPS information shows as following:

    Properties:
        Model Name................... BRG1350AVRLCD
        Firmware Number..............
        Rating Voltage............... 120 V
        Rating Power................. 810 Watt(1350 VA)

    Current UPS status:
        State........................ Normal
        Power Supply by.............. Utility Power
        Utility Voltage.............. 121 V
        Output Voltage............... 121 V
        Battery Capacity............. 100 %
        Remaining Runtime............ 133 min.
        Load......................... 72 Watt(9 %)
        Line Interaction............. None
        Test Result.................. Unknown
        Last Power Event............. None

Success!

Photo credit: Wikipedia

The post Troubleshooting CyberPower PowerPanel issues in Linux appeared first on major.io.

The little bit different Fedora 26 Release Party – Part 3

Posted by Sirko Kemter on July 25, 2017 03:13 PM

For day 3 of the preparations for the Fedora 26 Release Party at PNC in Phnom Penh, the plan was for the different teams to present GNOME 3.24, the Python Classroom, and LXQt as features of Fedora 26. We have four teams for GNOME, four teams for LXQt, and two teams for the Python Classroom, but in the end only one team would present each feature. I feared we would not be able to make the selection today, since on Sunday no team had started to create their slides, and we would have needed the full two hours we have with each class to prepare them. Unlike Saturday, when I had the whole afternoon and no classes followed the workshop, I had to start at 7:30 in the morning and there would be classes afterwards, so there was no chance to run over time. But the students had already done the slides as homework by themselves, so they only needed a few more minutes to finish them.
We managed to see the presentations and choose which teams will present to the public on Saturday. Both teams for the Python Classroom Lab in particular gave very good presentations; the chosen team won simply because their slides looked a bit more professional and they were a bit more confident in presenting them. The team chosen for GNOME 3.24 had a small overview of GNOME in general too, which I liked. For a lot of them it will also be their first presentation, so there are some rhetorical issues, but presenting will help them master this kind of task in the future. The next step is for the selected teams to rework their slides based on the noted issues, mostly typos. And Saturday will be the big day for them.
One sad thing: I did not manage to shoot a photo of the second class today, as we were a bit under time pressure, but I am sure there will be more time on Saturday and I will take some more pictures. So no picture for today, but you can be sure it did happen ;)

to be continued…

Boltron - Fedora Modular OS playground!

Posted by Radek Vokál on July 25, 2017 10:41 AM
Last summer at Flock, Langdon White, Ralph Bean, and a couple of folks around them announced work on new release tools and a project called Modularity. The goal was simple but aspirational: for a couple of years we have talked about the rings proposal, splitting applications from the core of the OS, and having alternatives available and easily installable for certain components. Even though you could always find a way to achieve each of these use cases, they were never really supported by the build infrastructure and software management tools; once you updated your system or installed something else, things would usually break or do something unexpected. Modularity's goal was to come up with a straightforward way to deliver a bulk of content through our build pipeline and to offer multiple versions of components and different installation profiles. At the same time, this new approach to delivering content would not break existing workflows and would be super easy for package maintainers.


Folks working on this concept have been reporting progress over the last year, but today we finally have an image available that shows how modules behave on the OS and how DNF can deal with them. So let's give it a try, and let me walk you through what you can do with Boltron, the Fedora Modularity concept.


First, get the latest fedora compose

  $ docker pull fedora:26-modular
  $ docker run -it --rm fedora:26-modular /bin/bash


As you can see in

$ cat /etc/os-release

this is a special compose that points at a repository with modules. As a second step, you need DNF with the module plugin and a couple of other enhancements.

  $ microdnf -y install dnf

(Please ignore the ugly output from microdnf; it's meant to be used in Dockerfiles only, and this is sort of a hack to get the right DNF installed.)
One of the main goals was to preserve the existing command-line interface. Unlike with Software Collections or other tools, DNF just works, and there are only minor changes that expose the magic happening in the background.

$ dnf list

..has a new section called modules. The UI of this will most probably change, but right now it shows the modules that are available (approximately 25) and whether they are installed on the OS or not. It also shows two other important pieces of information: the stream, where you either have the default stream (f26) or an alternative stream (in the case of nodejs, stream 8), and, once you have installed a module, the installed profile of that module under the Installed column.

So let's try to install our first module

  $ dnf install httpd

Wait, there's no special command-line option for modules, and it feels and looks like I'm just installing RPMs!? Right, that's the plan! Unless you want to use the functionality that modules bring (streams and profiles), you don't even know that you're playing with modules. In the case of Apache, the module is a blob of RPMs that were built together in Factory 2.0 (the new Fedora release tools: an updated Koji, the Module Build Service, Pungi, and a few others), tested together as a unit, and are now installed on your Fedora.

  $ dnf list

.. now shows that I have one module installed: httpd, in its default profile.

OK, let's try something different. Nodejs has two versions available as modules: v6 and v8. So first, what happens if I just run

  $ dnf install nodejs

In this case I get nodejs from the default stream (f26), and now I have nodejs v6 installed.

  $ node -v
  v6.10.3


What streams do is preserve the version of a component. In our case, nodejs v8 is available, but running

$ dnf update nodejs

won't update nodejs since there's no update available in the current stream. So how can I switch the stream? Easily, just run

$ dnf install nodejs-8
  $ node -v
  v8.0.0


Tada! I now have nodejs v8 on my system, and running dnf update at any time will keep nodejs on the current stream, v8. And what if I want to go back to the default nodejs version? I would simply run

$ dnf install nodejs-f26

and I’m back on nodejs v6.

This is one cool feature of modules: streams. Another cool feature comes with profiles. But first, let's see what information a module actually carries and how DNF works with modules. DNF is really cool because of its pluggability, and all the module-specific magic is contained in a DNF plugin called module. Any special actions with modules and additional commands are thus hidden under $ dnf module <command>. So what I can do, for example, is display all the information about a specific module.

  $ dnf module info mariadb

One of the sections, called "profiles", shows this:

profiles:
  client:
    rpms: [mariadb]
  default:
    rpms: [mariadb, gettext, mariadb-server]
  server:
    rpms: [mariadb-server, gettext]


There are three available profiles: a default profile that installs both client and server packages, a server profile that installs only the MariaDB server, and a client profile that installs only the client. So what I actually want on my system is the server, and I don’t care about the client package.

  $ dnf install mariadb/server

And I’ve got a server! Now what happens when I run $ dnf update mariadb later? I won’t get anything but what I want from the profile. For each module, DNF creates a configuration file

  $ cat /etc/dnf/modules.d/mariadb.module

[mariadb]
name = mariadb
stream = f26
version = 20170707130409
profiles = server
enabled = 1
locked =


which keeps track of the selected profile and stream. As I’ve already mentioned, the DNF module plugin has a couple of additional commands. One of them allows you to list all installed modules:

  $ dnf module list --installed

Now you can see the versions, streams and profiles of what you have on your system.
Cool .. isn't it?
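Incidentally, the per-module state file shown above is plain INI syntax, so it is easy to inspect from a script; a minimal sketch (illustrative only, not how DNF itself reads the file):

```python
import configparser

# The same INI-style content as /etc/dnf/modules.d/mariadb.module above.
MODULE_STATE = """\
[mariadb]
name = mariadb
stream = f26
version = 20170707130409
profiles = server
enabled = 1
locked =
"""

parser = configparser.ConfigParser()
parser.read_string(MODULE_STATE)
print(parser["mariadb"]["stream"])    # f26
print(parser["mariadb"]["profiles"])  # server
```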

All additional information can be found at https://docs.pagure.org/modularity. The DNF syntax and the integration of module metadata into other tools such as Anaconda and GNOME Software are currently in the works, and will hopefully land for F27.

Ura Design crowdfunds free design for open source projects

Posted by Justin W. Flory on July 25, 2017 08:30 AM
Ura Design crowdfunds free design for open source projects

This article was originally published on Opensource.com.


Open source software is nothing new in 2017. Even now, big tech giants are exploring open source, and more and more companies allow, if not outright encourage, employees to contribute to open source software on company hours. However, design assets and design work have not enjoyed the same popularity with open source licensing and use as software has. Albanian design agency Ura Design is helping change this.

The team consists of four people: Elio Qoshi, Redon Skikuli, Giannis Konstantinidis, and Anxhelo Lushka. Ura Design started from an idea. The team believed that many open source projects are full of capabilities and features, but their design can make it difficult for users to effectively use the software. This could be through user experience, branding, or accessibility. And the goal? To help bring these better design principles to open source projects at little to no cost. “In open source, there are amazing projects that are poorly communicated with the outside world. By communication, we mean visual communications, branding, even marketing. That is nonexistent for many reasons. There is a connection between communicating your project well and also getting contributors or users on board,” says Skikuli.

The Ura Design team, left to right: Elio Qoshi, Redon Skikuli, Giannis Konstantinidis, Anxhelo Lushka

The Ura Design team helps open source projects improve their design so they can focus on great code.

How it works

Together, the four of them work with open source project owners to help them bring better design elements to their projects. The principles of ethical design are part of the goals and values of the project.

Some of their past projects include work for Mozilla, the Tor Project, Free Software Foundation Europe, Glucosio, and more. The team takes contract work with companies or communities that have a budget, but for projects with less financial support, they’ll even do the work for free. This is in part supported by their Patreon page, where anyone can subscribe or see updates from Ura Design.

Their Patreon page is part of the reason Ura Design is able to take on some projects at no cost. “Since there are working hours involved, we are asking for people to make small contributions to help us pay living costs, so we work for small projects who apply for free or minimal design support from us. This is our way of supporting some open source initiatives that we think are worth it,” says Skikuli. Right now, they have 22 backers, which lets them cover most infrastructure costs. One of the team’s goals now is to expand into photography and release that work into the public domain.

Ura Design Patreon page

Ura Design Patreon: https://patreon.com/ura

Past projects

Ura Design has worked with many open source projects already. Some of their work covers projects at Mozilla, the Tor Project, Glucosio, GalliumOS, Open Labs Hackerspace, and more. You can see the full list of past works on their website.

Mozilla

The Mozilla localization team was looking to send a reward to their community translators around the world. Specifically, it was to celebrate the relationships formed between mentors and mentees over the years. The localization team hoped to design t-shirts that captured these relationships and why they were important for the community.

Qoshi had an existing relationship with Mozilla as a contributor, so he was asked to help design and capture this connection inside the localization community. The final design focused on two lions, one big and one small, looking at each other. “It was nice effort for contributors who have been mentoring others to get recognized for their contributions. Good design breaks off a conversation even for a project like this,” Qoshi said.

The Tor Project

Working with the Tor Project was a unique experience for Ura Design. The Tor Project was looking at rebranding the entire project. The end goal was to try to improve the accessibility of the project by incorporating good design elements.

Tor Project rebranding by Ura Design

Part of the new branding used by the Tor Project

Together with the Tor Project leadership, Ura Design helped lead the rebranding of the project. This included graphical assets, logos, and corporate identity. Today, you can see the new branding featured across Tor’s web presence. There are plans to continue rolling these changes out over the coming year.

Logobridge

The newest project from the team is Logobridge. From unused work and small samples, Ura Design releases several new logos into the public domain each month. People are encouraged to use them in their projects, as icons, as placeholders, or for anything they want. There are no restrictions on how the logos can be used. Anyone can download the source SVG files to use in vector imaging software, like Adobe Illustrator or Inkscape.

Most of the logos designed through Logobridge are supported by monthly subscribers to Ura Design. It was from this that they decided to start Logobridge. You can see all of the logos they have on their website.

Got projects?

Ura Design is still relatively new, but they hope to continue impacting open source projects through their work. To learn more about Ura Design, you can visit their website or read their blog. Additionally, you can follow them on Facebook or Twitter for other news and updates from the team. If you want to support their work, you can visit their Patreon page. And if you’re an open source project? The Ura Design team encourages you to get in touch!

The post Ura Design crowdfunds free design for open source projects appeared first on Justin W. Flory's Blog.

Announcing Boltron: The Modular Server Preview

Posted by Fedora Magazine on July 25, 2017 08:00 AM

The Modularity and Server Working Groups are very excited to announce the availability of the Boltron Preview Release. Boltron is a bit of an anomaly in the Fedora world — somewhere between a Spin and a preview for the future of Fedora Server Edition. You can find it, warts (known issues) and all, by following the directions below to grab a copy and try it out.

Fedora’s Modularity Working Group (and others) have been working for a while on a Fedora Objective. The Objective is generically called “Modularity,” and its crux is to allow users to safely access the right versions of what they want. However, there are two major aspects of “accessing the right versions.”

The first aspect deals with the problem of installing multiple versions of something in the same user space. In other words, the user may want httpd-2.4 and httpd-2.6 installed and running at once. There are countless solutions to this problem, with different tradeoffs and primary goals. For example:

  • Python natively allows this
  • Software Collections munge binaries into their own namespace on disk
  • Containers namespace most aspects of the running binaries away from the default user space.

Early on, the Modularity WG decided not to focus on solving this problem yet again. Rather, they promoted and encouraged OCI containers, System Containers, and Flatpaks to address the different use cases in this space. Watch for another announcement about using System Containers with Boltron in a few weeks.

There are other solutions, but in the interest of time, the Working Group has focused on the other aspect, availability of multiple versions. At first glance, this may seem to be a simple problem. That is, until you review the Fedora infrastructure and see the tight coupling of our packaging and the concept of the “Fedora Release” (e.g. F25, F26, etc) with everything Fedora builds and ships.

The Working Group also took on the requirement to impact the Fedora Infrastructure, user base, and packager community as little as possible. The group also wanted to increase quality and reliability of the Fedora distribution, and drastically increase the automation, and therefore speed, of delivery.

As a result, the group didn’t treat this as a greenfield experiment that would take years to harden and trust. Instead, they kept the warts and wrinkles with the toolset, and implemented tools and procedures that slightly adjusted the existing systems to provide something new. Largely, the resultant package set can be thought of as virtualized, separate repositories. In other words, the client tooling (dnf) treats the traditional flat repo as if it was a set of repos that are only enabled when you want that version of the component.
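The idea of virtualized, separate repositories can be pictured with a toy model (purely illustrative; the package names and selection logic below are invented for the example, not dnf's actual implementation):

```python
# A flat repo where every package is tagged with the module stream it
# belongs to; streams act like separate repositories that can be toggled.
flat_repo = [
    ("nodejs", "f26", "nodejs-6.10.3"),
    ("nodejs", "nodejs-8", "nodejs-8.0.0"),
    ("httpd", "f26", "httpd-2.4"),
]

def visible_packages(repo, enabled_streams):
    """Show only packages whose stream is enabled; anything else is
    hidden, as if its repository were switched off."""
    return [nevra for name, stream, nevra in repo
            if enabled_streams.get(name, "f26") == stream]

print(visible_packages(flat_repo, {}))                      # default streams only
print(visible_packages(flat_repo, {"nodejs": "nodejs-8"}))  # nodejs stream switched
```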

Now Fedora has 25 modules for you to play with, easily shown with dnf module list or at the bottom of dnf list. As the Arbitrary Branching change to dist-git didn’t land in time for Fedora 26, the stream for most of the modules is the typical branch found in dist-git, namely f26. Over time, the modules are expected to actually develop their own streams that most likely follow their upstream communities. There is one example at present, where NodeJS version 8 is being made available in the nodejs-8 stream.

The Bits

“Blah blah, where are my bits,” you ask? The recommended process starts by running the system as a container, found in the Fedora Registry:

docker run --rm -it registry.fedoraproject.org/f26-modular/boltron

Feedback

The Modularity Working Group is interested in your feedback. The group developed a Getting Started page and a general feedback form. However, we could really use your specific feedback about the interactions with the tools and would love it if you could try our walk through.

You can find more general Modularity documentation, including how to build a module, at our docs site. The proposed Module Packaging Guidelines also appear in the Pagure repo, to ease collaboration before it is promoted to the Wiki after approval. We are recommending users try the container so that we have the opportunity to update it to deal with warts and feedback over the course of Fedora 26.

August 2017 Elections – Nomination period open

Posted by Fedora Community Blog on July 25, 2017 01:26 AM

Originally posted by Jan Kurik on the Ambassadors mailing list.


FESCo, Council and FAmSCo elections are now open and we’re looking for new candidates. If you are interested in these roles, please add yourself to the lists of nominees before 23:59:59 UTC on July 31st, 2017! If you wish to nominate someone else, please consult with that person ahead of time. If you know someone who would be a good candidate, now is a great time to make sure they’re thinking about it.

For FESCo we have opened 4 seats.

For Council we have opened 1 seat.

For FAmSCo we have opened 3 seats.

August 2017 Elections schedule

The Elections schedule is as follows:

  • July 25 – July 31: Nomination period
  • August 01 – August 07: Campaign period
  • August 08 – August 14: Voting opens
  • August 15: Results announcement

The elections questionnaire needs more questions for Community Blog interviews! If you have anything you would like to ask candidates to FESCo, FAmSCo or to Council, please add it to the wiki.

Learn more about Fedora leadership

Read more about FESCo, FAmSCo, and the Council at:

The post August 2017 Elections – Nomination period open appeared first on Fedora Community Blog.

rawhide notes from the trail, the 2017-07-24 edition

Posted by Kevin Fenzi on July 24, 2017 10:10 PM

Greetings! Once again it’s been a long spell since one of these posts, so let’s jump right in and rope that calf.

Rawhide has been marching along to the next branching point, when Fedora 27 will branch off for its release. There’s a mass rebuild that should be happening very soon now; there was a bit of a delay while the tools were all sorted out. Look for a very big Rawhide update once that mass rebuild finishes.

All the alternative arches are now in the same Koji. We finally got s390x added in and going. Initially there were only 5 builders with 4GB of RAM each, so things got stopped up from time to time: all it took was 5 long builds showing up, and everything else waited for them. A few weeks ago we increased that to 15 builders with 8GB of RAM each, so hopefully that will keep up with builds as we go. We will see what the mass rebuild looks like with all the alternative arches included this time. I suspect it’s going to take longer than before, but I’m not sure how much longer.

Thanks to Patrick, you may also notice that uploading sources to our lookaside cache no longer uploads them twice. Instead, it uses the Kerberos cache to authenticate on the first attempt, so it only needs to upload once.

We have missed some composes in the last few months (once for quite a while), and this is due to the compose process now failing the compose if not all of the release-blocking deliverables are there. On one hand that’s nice, because it means we always have those deliverables if the compose finishes; on the other hand it’s not so nice, because if it fails we need to go fix whatever breakage is there.

Look for a new note soon, and ride safe out there!

GSoC: Improvements in kiskadee architecture

Posted by David Carlos on July 24, 2017 05:00 PM

Today I released kiskadee 0.2.2. This minor release brings some architecture improvements, fixes some bugs in the plugins, and improves the log message format. First, let's take a look at the kiskadee architecture implemented in the 0.2 release.

In this architecture we have two queues. One, called packages_queue, is used by the plugins to enqueue packages that should be analyzed. This queue is consumed by the monitor component, which checks whether an enqueued package has already been analyzed. The other queue, called analysis_queue, is consumed by the runner component, which receives from the monitor the packages that must be analyzed. If a dequeued package does not exist in the database, the monitor component saves it and enqueues it in the analysis_queue. When an analysis is done, the runner component updates the package analysis in the database. Currently, kiskadee only generates analyses for projects implemented in C/C++; this was a scope decision made by the kiskadee community. Analyzing only projects implemented in these languages means that several monitored packages are never analyzed, and with the 0.2 architecture this behavior led to a serious problem: a package was saved in the database even if no analysis was generated for it. Our database was storing several packages without any static analysis of their source code, making kiskadee a less useful tool for anyone who wants to continuously check the quality of some projects.

Release 0.2.2 fixes this architecture issue by creating a new queue, used by the runner component to send back to the monitor the packages that were successfully analyzed. In this implementation, we removed all database operations from the runner source code, centralizing in the monitor the responsibility for interacting with the database. Only packages enqueued in the results_queue are saved in the database by the monitor component.

We also added a limit to all kiskadee queues, since the rate at which a plugin enqueues packages is greater than the rate at which the runner performs analyses. With this limit, every queue will always hold at most ten elements, keeping the volume of monitored projects proportional to the number of analyzed ones. The log messages were also improved, making the tool easier to debug. Some bugs in the Debian plugin were fixed, and some packages that were previously missed are now properly monitored. These architecture improvements make kiskadee's behavior more stable, and this release is already running in a production environment.
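The flow described above can be modeled with bounded standard-library queues (an illustrative sketch of the 0.2.2 design, not kiskadee's actual code; the component functions are simplified stand-ins):

```python
import queue

# Bounded queues, as in the 0.2.2 architecture: at most ten elements each.
packages_queue = queue.Queue(maxsize=10)   # plugins -> monitor
analysis_queue = queue.Queue(maxsize=10)   # monitor -> runner
results_queue = queue.Queue(maxsize=10)    # runner  -> monitor

def monitor_step(pkg, already_analyzed):
    """Monitor: only forward packages that were not analyzed yet."""
    if pkg not in already_analyzed:
        analysis_queue.put(pkg)

def runner_step():
    """Runner: analyze and report back; no database access here."""
    pkg = analysis_queue.get()
    results_queue.put((pkg, "analysis-report"))

monitor_step("bash", already_analyzed=set())
runner_step()
result = results_queue.get()  # only now would the monitor touch the database
print(result)
```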

CzP @ RMLL / Libre Software Meeting 2017

Posted by Peter Czanik on July 24, 2017 12:04 PM

This year I again participated in the security track of the largest French open source conference, the Libre Software Meeting (RMLL). “Participated”, as I did not only give a talk on syslog-ng there, but also sat in on most of the presentations and had very good discussions both with visitors and fellow speakers. The organizers brought together talks from diverse IT-security-related fields, a very good opportunity for cross-pollination of ideas.

You can find the schedule of the security track here.

RMLL / LSM

Monday

As the security track started only in the afternoon, I was looking for an English-speaking session elsewhere. I found one about an open source application developed in close cooperation between doctors and open source developers. It was originally designed for rescue workers at the Nepal earthquake two years ago, and since then it has been successfully used after other disasters as well.

The first afternoon of the security track focused on making the privacy of communication over the internet easier for end users. All of the applications shown are still in various stages of early development.

  • Caliopen is a unified message management application with a strong privacy focus. It basically aggregates all private messages from mobile messaging apps (Facebook, Twitter, email and so on) into a single timeline. It assigns a privacy index to everything, making sure that the user knows how secure it is to post on a given channel.
  • Pretty Easy Privacy makes privacy for email the default and easy to use. It encrypts and anonymizes your email and works with accounts such as Gmail or Yahoo.
  • Ring: a distributed communications application, based on blockchain technologies, that respects users’ privacy. It serves as a secure phone and messenger application that does not store centralized information about users.

Tuesday

The second day also covered many diverse topics. For a full list, check the schedule, I only list some of my favorites here.

  • Clémentine Maurice demonstrated in her talk that JavaScript has a lot more access to your hardware than you would ever think. Using just a browser running on the host, she collected keystrokes from an isolated virtual machine. I got an uneasy feeling about how secure my system actually is…
  • Damien Cauquil & Nicolas Kovacs talked about Internet of Compromised Things and how IoT devices can be used in forensic investigations. That is why CERT-UBIK created the Hardware Forensic Database so when necessary these devices can be analyzed quickly.
  • Ole André Vadla Ravnås made the most fantastic live demo during his talk at RMLL about the Frida debugger. I do not actively code any more, but it seemed so easy to debug a running application using Frida that it made me think about coding again.

At the end of the day there was a lightning talk session in which I also participated. I turned one of my blog posts into a short talk: using the SCL to simplify syslog-ng configuration.

Before my talk I asked people in the room how many of them know about syslog-ng and about 3/4 of the attendees raised their hands. When I asked how many actually use it, still more than half responded positively. Both are very nice numbers, considering that syslog-ng is not installed by default, but it is the choice of the user.

In the evening I took part in the security track speakers’ dinner. The food was as fantastic as the company. I had long discussions with some of the organizers and fellow speakers. Based on these, I expect to do some testing of MISP, the Open Source Threat Intelligence Platform. If everything works as expected, then with some minimal integration the inlist() filter of syslog-ng could use lists maintained by MISP for real-time threat detection.

Wednesday

My last day at the conference was Wednesday. To catch my plane back to Budapest I even had to skip a few talks in the afternoon. Luckily I talked to the speakers whose talks I missed the day before at the speakers’ dinner.

  • The morning started with a talk about PaSSHport. It is a solution to control access to SSH servers. Being in France and hearing that only Balabit was mentioned together with a French company as the commercial competitors made me feel proud.
  • My talk was about making sense of your security logs using syslog-ng. I gave an introduction to syslog-ng, talked a lot about message parsing, showed a simple syslog-ng configuration and concluded my talk with a few interesting use cases.
  • Václav Zbránek gave a talk about the router I also have at home, the Turris Omnia. He talked about its history and some of the features, powered entirely by open source software. A couple of the features were new even to me. 🙂

RMLL beer :)

As you can imagine reading about my experiences: I plan to be back at RMLL next year!

If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or you can even chat with us. For a long list of possibilities, check our contact page at https://syslog-ng.org/contact-us/. On Twitter, I am available as @PCzanik.

The post CzP @ RMLL / Libre Software Meeting 2017 appeared first on Balabit Blog.

Slice of Cake #15

Posted by Brian "bex" Exelbierd on July 24, 2017 09:00 AM

A slice of cake

In the last week as FCAIC I:

  • Lots and lots of Flock work. We are up to over 45 people receiving some form of funding or other travel assistance, including visa letters.
  • Worked with Robert Mayr on some Mindshare ideas. This is exciting stuff and you should get involved.
  • Did I mention the Flock work? :)

À la mode

  • Attended a wedding this weekend in Poland. Survived :P.

Cake Around the World

I’ll be traveling some and hope you’ll ping me for coffee if you’re nearby.

  • Flock on Cape Cod, Massachusetts, USA from 29 August - 1 September.

Easy backups with Déjà Dup

Posted by Fedora Magazine on July 24, 2017 08:00 AM

Welcome to part 3 in the series on taking smart backups with duplicity. This article will show how to use Déjà Dup, a GTK+ program to quickly back up your personal files.

Déjà Dup is a graphical frontend to Duplicity. Déjà Dup handles the GPG encryption, scheduling and file inclusion for you, presenting a clean and simple backup tool. From the project’s mission:

Déjà Dup aims squarely at the casual user. It is not designed for system administrators, but rather the less technically savvy.

It is also not a goal to support every desktop environment under the sun. A few popular environments should be supported, subject to sufficient developer-power to offer tight integration for each.

Déjà Dup integrates very well with GNOME, making it an excellent choice for a quick backup solution for Fedora Workstation.

Installing Déjà Dup

Déjà Dup can be found in GNOME Software’s Utilities category.


Alternatively, Déjà Dup can be installed with dnf:

dnf install deja-dup

Once installed, launch Déjà Dup from the Overview.

Déjà Dup presents 5 sections to configure your backup.

  • Overview
  • Folders to save
  • Folders to ignore
  • Storage location
  • Scheduling

Folders to save

Similar to selecting directories for inclusion with duplicity‘s --include option, Déjà Dup stores directories to include in a list. The default includes your home directory. Add any additional folders you wish to back up to this list.

Perhaps your entire home directory is too much to back up, or parts of your home directory are backed up using version control. In that case, remove “Home” from the list and add just the folders you want to back up, for example ~/Documents and ~/Projects.
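For comparison, the same selection expressed on duplicity's command line might look like the following sketch (the S3 target is illustrative; duplicity evaluates --include/--exclude options in order, so the final --exclude '**' drops everything not explicitly included):

```shell
# Sketch only: back up ~/Documents and ~/Projects from $HOME, nothing else.
duplicity \
    --include "$HOME/Documents" \
    --include "$HOME/Projects" \
    --exclude '**' \
    "$HOME" s3+http://example-bucket/backups
```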

Folders to ignore

These folders are going to be excluded, similar to the --exclude option. Starting with the defaults, add any other folders you wish to exclude. One such directory might be ~/.cache. Consider whether to exclude this carefully. GNOME Boxes stores VM disks inside ~/.cache. If those virtual disks contain data that needs to be backed up, you might want to include ~/.cache after all.

Launch Files, and from the Action menu turn on the Show Hidden Files option. Now in Déjà Dup, click the “plus” button and select the hidden .cache folder in your home directory. The list should look like this afterwards:

Storage location

The default storage location is a local folder. This doesn’t meet the “Remote” criteria, so be sure to select an external disk or Internet service such as Amazon S3 or Rackspace Cloud Files. Select Amazon S3 and paste in the access key ID you saved in part 1. The Folder field allows you to customize the bucket name (it defaults to $HOSTNAME).

Scheduling

This section gives you options around frequency and persistence. Switching Automatic backup on will immediately start the deja-dup-monitor program. This program runs inside a desktop session and determines when to run backups. It doesn’t use cron, systemd or any other system scheduler.

Back up

The first time the Back Up utility runs, it prompts you to configure the backup location. For Amazon S3, provide the secret access key you saved in part 1. If you check Remember secret access key, Déjà Dup saves the access key into the GNOME keyring.

Next, you must create a password to encrypt the backup. Unlike the specified GPG keys used by duply, Déjà Dup uses a symmetric cipher to encrypt / decrypt the backup volumes. Be sure to follow good password practices when picking the encryption password. If you check Remember password, your password is saved into the GNOME keyring.

Click Continue and Déjà Dup does the rest. Depending on the frequency you selected, this “Backing up…” window will appear whenever a backup is taking place.

Conclusion

Déjà Dup deviates from the backup profiles created in part 1 and part 2 in a couple of specific areas. If you need to encrypt backups with a common GPG key, or need to create multiple backup profiles that run on different schedules, duplicity or duply might be a better choice. Nevertheless, Déjà Dup does an excellent job of making data backup easy and hassle-free.

The little bit different Fedora 26 Release Party – Part 2

Posted by Sirko Kemter on July 23, 2017 12:52 PM

Today was part 2 of the preparation sessions for the Fedora Release Party at PNC on Saturday, the 29th of July, so I spent the day at this school again. I remember a discussion I had at Flock in Prague about the motivation of Asians. There was a saying that you have to motivate them to do things; I did not agree at the time and I still hold this opinion. Whenever I have worked with the young people here, they have done it with a lot of passion. Yes, they take longer to understand complex things and it is harder for them, but if they do something, they do it with a lot of motivation. Like today: the first class was still working while the second class was already waiting at the door, and the second class did not stop me either, so we overran the schedule by more than 1.5 hours and they were still working as I left them.
It is a big thing we are trying to achieve. As I already told you in part one, most of them have never seen a computer before, so their knowledge is not that deep. When it comes to the Python Classroom feature, they come with questions like: what is Docker, what is Vagrant, and what is containerization? The same goes for the other topics; mostly they use Windows, so the division between a desktop environment and the actual operating system is not clear to them and must be explained. When we started planning this event we calculated three days of four hours for each class, but due to the lack of available computers (we have only 25), we can only work with one class at a time. That means the first class gets only two hours, and the second class is luckier, as we can overrun. (I am sure PNC would be very happy about hardware donations.) All teams have now reached a stage where they have figured out what all the features are and what they do, and have at least made a plan on paper for what they want to put onto the slides. On Tuesday we originally wanted to do the selection process, but I fear we will need at least one hour with each class to prepare the slides, which leaves us one hour to test the teams. Let's see if it works out; if not, we might have to do it on Saturday morning.

In general I got some nice feedback from the students. Yes, Linux and Fedora are totally new to them, so having a complete environment installed with an office suite, a video player, a music player and everything you might need for daily work with a computer is amazing for some of them, and they liked it. As for the LXQt spin, there is one problem: GNOME has a nice application called Software, while on LXQt you have to use the terminal. For this environment a graphical software tool would be better; the students are scared of the terminal. We did use the terminal a bit and most were amazed at what you can do with it and how easy it actually is, but a GUI tool would still be a better fit for the LXQt spin, especially as this desktop environment aims at two groups of people: those with older hardware (very often the case here) and those who want more resources left over for their work. I hope Christian finds a good way to install software graphically on LXQt.
Back to the Release Party topic: we will see how far we get on Tuesday, or whether we have to use Saturday morning for the selection of the teams who will present.

P.S. I forgot to take a group picture with the first class today, as there was a flying change, so only half of the students are shown. The other one I will take on Tuesday.

to be continued…

More GIMP effects

Posted by Julita Inca Chiroque on July 23, 2017 07:54 AM

Since I have to prepare new material and slides for upcoming conferences… I couldn't help trying out some effects I have seen in GIMP, on myself and the two Linux projects I belong to.

The first one, regarding Fedora, I learned from here. And the second one, for GNOME <3, was based on this video. I could not believe how easy learning GIMP was 😉


Filed under: FEDORA, GNOME Tagged: fedora, GIMP, gimp effects, GNOME, Julita Inca, Julita Inca Chiroque, learning GIMP, linux

Security and privacy are the same thing

Posted by Josh Bressers on July 23, 2017 12:36 AM
Earlier today I ran across this post on Reddit
Security but not Privacy (Am I doing this right?)

The poster basically said "I care about security but not privacy".

It got me thinking about security and privacy. There's not really a difference between the two. They are two faces of the same coin, but why that is isn't always obvious in today's information universe. If a site like Facebook or Google knows everything about you, it doesn't mean you don't care about privacy; it means you're putting your trust in those sites. The same sort of trust that makes passwords private.

The first thing we need to grasp is what I'm going to call a trust boundary. I trust you understand trust already (har har har). But a trust boundary is less obvious sometimes. A security (or privacy) incident happens when there is a breach of the trust boundary. Let's just dive into some examples to better understand this.

A web site is defaced
In this example the expectation is the website owner is the only person or group that can update the website content. The attacker crossed a trust boundary that allowed them to make unwanted changes to the website.

Your credit card is used fraudulently
It's expected that only you will be using your credit card. If someone gets your number somehow and starts making purchases with your card, a trust boundary has been crossed. You could easily put this example in the "privacy" bucket if you wanted to keep the two separate; it's likely your card was stolen due to lax security at one of the businesses you visited.

Your wallet is stolen
This one is tricky. The trust boundary is probably your pocket or purse. Maybe you dropped it or forgot it on a counter. Whatever happened the trust boundary is broken when you lose control of your wallet. An event like this can trickle down though. It could result in identity theft, your credit card could be used. Maybe it's just about the cash. The scary thing is you don't really know because you lost a lot of information. Some things we'd call privacy problems, some we'd call security problems.

I use a confusing last example on purpose to help prove my point. The issue is all about who do you trust with what. You can trust Facebook and give them tons of information, many of us do. You can trust Google for the same basic reasons. That doesn't mean you don't care about privacy, it just means you have put them inside a certain trust boundary. There are limits to that trust though.

What if Facebook decided to use your personal information to access your bank records? That would be a pretty substantial trust boundary abuse. What if your phone company decided to use the information they have to log into your Facebook account?

A good password isn't all that different from your credit card number. It's a bit of private information that you share with one or more other organizations. You are expecting them not to cross a trust boundary with the information you gave them.

The real challenge is to understand what trust boundaries you're comfortable with. What do you share with who? Nobody is an island, we must exist in an ecosystem of trust. We all have different boundaries of what we will share. That's quite all right. If you understand your trust boundary making good security/privacy decisions becomes a lot easier.

They say information is the new oil. If that's true then trust must be the currency.

What is happening in Fedora?

Posted by Tonet Jallo on July 22, 2017 07:34 PM

Last week we had a Fedora Activity Day (FAD) for LATAM ambassadors. It was in Cusco, Perú. So, why was this event held?

I can tell you why in a few words: new Fedora people (people, not fedorapeople.org) don't know how to do things inside the community, how to collaborate, how to request sponsorship, how to be careful when spending Fedora resources, etc., and the older contributors are busy now and can't spend much time on Fedora.

It was an initiative of Alex Oviedo (alexove), and we had six representatives of LATAM countries: x3mboy from Chile, asoliard from Argentina, josereyesjdi from Panama, itamarjp from Brazil, searchsam from Nicaragua, and me (tonet666p) from Perú, plus bexelbie from the Czech Republic (yes, not LATAM, but he was helping us).

Over three days we analyzed what we were doing wrong. In summary, we made a SWOT matrix about the community (FODA in Spanish: Fortalezas, Oportunidades, Debilidades, Amenazas), meaning Strengths, Opportunities, Weaknesses and Threats. The matrix follows.

Strengths

  • Diversity (of people's skills)
  • Strong community
  • Initiative
  • Tools availability

Weaknesses

  • We don’t offer the right solutions to users
  • Language barrier
  • Cultural differences
  • Ignorance about rules
  • Lack of presence in the universities
  • Bad budget use
  • Lack of communication

Opportunities

  • Red Hat support
  • Presence in other upstream projects
  • Impact generated by Fedora
  • Universities' support
  • Possibility to show Fedora to students from different specialities

Threats

  • Collaborators' economic activities
  • Microsoft communities and the use of the "Open Source" term
  • Greater presence of Ubuntu in universities (users and laboratories)

Strategies (the middle cells of the crossed matrix)

  • Promote collaboration between the other teams that communicate with the ambassadors team
  • Channel Fedora users' suggestions and needs to the right community teams
  • Propose improvements to the community infrastructure to improve the work of the ambassadors team
  • Promote Fedora and the community in universities and basic education, focusing on increasing users and then collaborators
  • Promote spaces for strengthening communication to improve cultural and language relations
  • Strengthen the presence of the Fedora community in Latin American universities and schools
  • Improve the process of managing community events and disseminating them
  • Promote and participate in Spanish-language support channels and other languages (Ask, IRC, Telegram, etc.)
  • Design talks/workshops aimed at attracting new users to Fedora (from other distros or Windows)
  • Establish corrective measures for collaborators who do not comply with the rules and/or behave inappropriately
  • Motivate active collaborators
  • Facilitate the means for inactive collaborators to rejoin the community
  • Create an environment of collaboration and continuity of work when some collaborators have to stop

So, to improve our deficiencies, we agreed to carry out the strategies from the middle painted cells of the matrix.

Also, on the second day we designed a better process for requesting sponsorship from Fedora. It is the following:

Pre-event tasks:

  1. Create a wiki event page
  2. Create a Pagure ticket (at least 3 days before)
  3. Create the event in the Fedora Calendar
  4. Post it in the Magazine

Post-event tasks:

  1. Make reports
    1. Community Blog (event owner)
    2. Report on Fedora Planet (event owner/attendees)
    3. Put a report link on the event wiki page (event owner/attendees)
  2. Request reimbursement

If you request sponsorship, you need to follow these rules every time. This will improve the visibility of the ambassadors' work, lead to better events organized with enough time, and improve Fedora's visibility on the internet.

In summary, this is everything we worked on during these days. We hope it serves the community.


DB Browser - tool for databases.

Posted by mythcat on July 22, 2017 06:20 PM
It is also a good tool for learning and testing database queries.
About this tool, the development team tells us:

DB Browser for SQLite is a high quality, visual, open source tool to create, design, and edit database files compatible with SQLite. It is for users and developers wanting to create databases, search, and edit data. It uses a familiar spreadsheet-like interface, and you don't need to learn complicated SQL commands. 
Controls and wizards are available for users to:
  • Create and compact database files
  • Create, define, modify and delete tables
  • Create, define and delete indexes
  • Browse, edit, add and delete records
  • Search records
  • Import and export records as text
  • Import and export tables from/to CSV files
  • Import and export databases from/to SQL dump files
  • Issue SQL queries and inspect the results
  • Examine a log of all SQL commands issued by the application
Under the Fedora distro the package is named sqlitebrowser.
To install it, just use this:
$ sudo dnf install sqlitebrowser
You can also use it on these operating systems: Windows, Mac OS X 10.8 (Mountain Lion) to macOS 10.12 (Sierra), and Linux (Arch Linux, Fedora, Ubuntu and derivatives).
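If you want some data to experiment with before opening DB Browser, you can create a small sample database with the sqlite3 command-line tool (installed separately; the table here is just an example):

```shell
# Build an example database file that DB Browser can then open.
sqlite3 test.db <<'SQL'
CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
INSERT INTO users (name) VALUES ('alice'), ('bob');
SELECT count(*) FROM users;
SQL
```

Then open test.db from DB Browser's File menu to browse and edit the table.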

Fedora: telnet game - BatMUD.

Posted by mythcat on July 22, 2017 06:20 PM
This is a good game if you have telnet and an internet connection.
Just open your terminal, run the telnet command, and type o to open this address: batmud.bat.org 23.
The game has an official website.
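For those who prefer to skip telnet's interactive prompt, the same connection can be opened in one line (o is just a shortcut for telnet's open command):

```shell
# Connect straight to the game server (port 23 is the telnet default):
telnet batmud.bat.org 23
```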
The team tells us about this game:

What is BatMUD - scratching the surface 

One could go on and rant for hours and hours about the Game. If you're not familiar with BatMUD, don't worry - you won't even be after the first week of playing. The game's not easy, it was never intended to be. The first eyeful can be deceiving, especially as we live in the fully graphical world of commercially produced, hundred-million dollar budget behemoths. Our game, it's nothing like that; even though we tend to boast that it is more, and trust us - it is. A problem with the modern day games is that, eventually they become very dull or simply uninspiring. However, BatMUD's text-based approach it is different, somewhat to as reading a good book - it's all about your imagination. Hundreds of volunteer developers through the Decades have brought a special uniqueness to the Game, and new ones continue the Legacy to this day. We cater to almost everyone: the available options and playstyles are basically endless. It's Your Realm.

The Java interface with my account looks like this:

QEMU - Devil Linux on Fedora 25.

Posted by mythcat on July 22, 2017 06:20 PM
QEMU (short for Quick Emulator) is a free and open-source hosted hypervisor that performs hardware virtualization; in other words, a hosted virtual machine monitor. You can install this software using the dnf tool.
dnf install qemu.x86_64 
You can use any ISO image from the internet to run and test your Linux distro. I tested with the Devil Linux ISO without networking (the main reason was the settings of the Devil Linux distro). Just use this command:
qemu-system-x86_64 -boot d -cdrom ~/devil-linux-1.8.0-rc2-x86_64/bootcd.iso \
    --enable-kvm -m 2048 -netdev user,id=user.0
Some arguments of the qemu tool:
- qemu-system-x86_64 is the emulator binary for the x86 architecture (64-bit);
- -boot d boots from the CD-ROM drive;
- the -cdrom option sets the path to the ISO file;
- --enable-kvm enables KVM (Kernel-based Virtual Machine) acceleration;
- -m 2048 sets the guest memory to 2048 MB;
- -netdev user,id=user.0 tells QEMU to use the user-mode network stack, which requires no administrator privileges to run.
About QEMU VLANs.
QEMU's legacy -net networking uses a concept it calls a VLAN. QEMU forwards packets between guest operating systems that are on the same VLAN. Examples with qemu-kvm options:
-net nic,model=virtio,vlan=0,macaddr=00:16:3e:00:01:01 
-net tap,vlan=0,script=/root/ifup-br0,downscript=/root/ifdown-br0
-net nic,model=virtio,vlan=1,macaddr=00:16:3e:00:01:02
-net tap,vlan=1,script=/root/ifup-br1,downscript=/root/ifdown-br1
The -net nic option defines a network adapter in the guest operating system, and the -net tap option defines how QEMU configures the host. You can also disable networking entirely:
-net none
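Note that the vlan= syntax belongs to QEMU's older -net option. On recent QEMU versions the same user-mode setup is usually written with the -netdev/-device pair; a sketch (the ID and MAC values are illustrative):

```shell
qemu-system-x86_64 -boot d \
    -cdrom ~/devil-linux-1.8.0-rc2-x86_64/bootcd.iso \
    --enable-kvm -m 2048 \
    -netdev user,id=net0 \
    -device virtio-net-pci,netdev=net0,mac=00:16:3e:00:01:01
```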

FAD Latam 2017: x3mboy’s Event report

Posted by x3mboy on July 22, 2017 06:01 PM

Last week (from 13 to 15 July 2017) I had the honour of attending the FAD Latam 2017 representing Chile, the country where I'm currently living. This FAD (Fedora Activity Day) was organized by Alex Oviedo, an ambassador from Perú, with the intention of discussing several topics and problems that the Latam community is facing. I'm going to divide this report into two parts:

Organizing the event

At first, Alex came up with this idea after a piece of news that affected all ambassadors in different ways: the Council removed the FUDCon lines from the yearly budget. A lot of emotions were vented in meetings and emails. With this out of the way, Alex, as well as others, thought it was partly our (the ambassadors') fault, and he came up with the idea of an activity to work on the internal problems and to cover the community part of our FUDCon.

At first it wasn't easy: a lot of things needed to be done, a ticket, a proper wiki page, convincing other ambassadors that we needed the event, and getting them to choose a representative per country. I tried to help Alex as much as I could, but the accomplishment of the FAD is totally his: he dedicated a lot of time and effort, communicating with the Council and FAmSCO, fixing the tickets, and finally putting up two bids to choose where to run the event.

Right now I'm very proud of him. He made something really good, almost without help, sending emails, mainly because he is worried about our Latam community, even fighting against time and flight prices. Finally we got the list of attendees:

  • Alex Irmel Oviedo Solis from Cusco, Peru
  • Tonet Jallo from Puno, Peru
  • Itamar Reis Peixoto from Uberlandia, Brasil
  • José Reyes from Panama City, Panama
  • Adrian Soliard from Cordoba, Argentina
  • Brian Exelbierd from Brno, Czech Republic
  • Samuel Gutiérrez from Managua, Nicaragua
  • And finally me: Eduard Lucena from Santiago de Chile, Chile

It was a hard journey, but we finally made it.

The event

The event was held at the Universidad Global del Cusco, where people were really interested in helping a Linux community in their city. We need to thank their rector, who gave us a warm welcome in a great opening speech. After that we started by doing a SWOT analysis, a methodology that allows a team to identify its Strengths, Weaknesses, Opportunities and Threats, cross them, and create strategies that improve the work and attack the detected weaknesses and threats while taking advantage of the strengths. We spent the first day identifying the four points:

Strengths

  • Diversity (diversity of technical skills)
  • Strong Community
  • Initiative
  • Availability of tools

Weaknesses

  • We don’t offer right solutions to users
  • Language barrier
  • Cultural differences
  • Ignorance of rules
  • Lack of presence in Colleges
  • Budget isn’t well managed
  • Lack of communication

Opportunities

  • Red Hat support
  • Presence in upstream lines of compatible projects
  • Impact generated by Fedora
  • Colleges' support
  • Possibility of showing Fedora to students from different branches

Threats

  • Collaborator’s economic activities
  • Microsoft communities and use of the term “Open Source”
  • Greater presence of Ubuntu in colleges (users and labs)

We spent a lot of time talking about these topics, and we brainstormed a lot trying to figure out how to attack them. Finally, on the second day, we came up with some strategies:

SWOT Analysis


On the second day, we discussed how to approach events in a better way, to fit the new Fedora mission statement (thanks to Brian "bex" Exelbierd) and the Council's point of view (again, thanks to bex): in particular, how to better answer the four questions the Council now asks when funding events, and what those questions mean. At this point we understood what the absence of FUDCon from the budget means, and how the Council is looking for original ideas and for better, greater ways for people to contribute to and improve the Fedora Project. We came up with a new process that will improve our internal communications and how we promote our events, taking advantage of our tools (fedocal and the Fedora Magazine):

  1. Pre-event
    1. Create wiki event page
    2. Create a ticket
    3. Create event in fedocal
    4. Post in Magazine
  2. Post-Event
    1. Make reports
      1. CommBlog
      2. Wiki
    2. Request reimbursement

This will allow us to improve the visibility of the ambassadors' work, both within the project and to the public.

On the last day we talked about budget and swag, where bex helped us a lot; Eduardo Echeverría (echevemaster) and Abdel Martinez (potty) also joined the discussion, in which we came to understand better how the Council is funding the project this year, opening, for example, the door to funding contributors other than ambassadors and trying to expand people's vision of the Fedora Project. Talking about swag, we hit the same wall that has always been our great obstacle: distribution. Even though we can find good printing quality in our countries, producing small quantities is expensive; also, not all providers accept the payment methods the project can use, and not all contributors can pay and wait for reimbursement. Finally, we came up with a strategy of moving to centralized production, which should save money that can then go toward distribution; and for countries that have problems with mail and couriers, swag can still be produced locally without worries, because the general case is solved, possibly with savings.

Special thanks to Neville Cross (yn1v), who took the time to explain to us how his community has been working and improving.

Final thoughts

It was a really long journey, with a lot of work and a lot of discussion. Sometimes it is hard for a group this diverse, with so many different points of view, to reach agreements and finally come out with a solid plan, solid strategies and a consolidated regional effort. I want to thank bex, who took the time and effort to come to our region and help us; his help is really appreciated and we hope he can come back soon. We really hope that many people, and hopefully all regions, can use our work and these guidelines to help improve their own.


Advogato has been archived

Posted by Mark J. Wielaard on July 22, 2017 04:11 PM

Advogato has been archived.

When I started working on Free Software advogato was the “social network” where people would keep their diaries (I don’t believe we called them blogs yet). I still remember how proud I was when people certified me as Apprentice.

A lot of people on Planet Classpath still have their diaries imported from Advogato. robilad, audriusa, saugart, rmathew, Anthony, kuzman, jvic, jserv, aph, twisti, Ringding please let me know if you found a new home for your diary.

The little bit different Fedora 26 Release Party – Part 1

Posted by Sirko Kemter on July 22, 2017 01:53 PM

Since Fedora 21 we have not really had a Fedora Release Party here in Phnom Penh. For Fedora 21 we did one at Development Innovations, wonderfully supported by Greta Greathouse, the head of this USAID-driven institution. But, well, in Asia even a not particularly FOSS-oriented company can show a behavior like in Europe 25 years ago. DI now has a new boss, who is less FOSS-oriented, so we had to find a new place. For Fedora 24 the time frame was too short for me to organize something, and Fedora 25 was an epic fail at the Royal University of Phnom Penh, which cancelled the party an hour before it started.
So this time I am going with Passerelles Numerique (PNC) as partner, which will not just provide the room but also help with equipment, and this is what makes this Release Party a bit different from the “normal” ones.
First of all, what is PNC? PNC is a French NGO which helps young under-educated Cambodians to start a career in IT. It is a bit different here than back home in Europe: there, students of economics are the lower class and engineers are seen differently, while here the rich study economics or marketing, and IT is not held in such high regard. That means most of the students come from the provinces and have never seen a computer before. I have spent some years here now, and looking around the IT companies I mostly find graduates of PNC; a lot of them later went on to university and studied IT. So PNC plays a vital role in Cambodia's IT sector.

So what is different from a normal Release Party? Well, at a normal one mostly the Fedora ambassadors present, and here it will be different. Yes, I will present of course, but there will be other talks, held by the students. Now you could think that I am a lazy guy, but it is the opposite: before our Release Party next Saturday, I will have spent three days working with the students on their presentations.

Today was the first day of this kind of Fedora workshop. There were two classes of 25 first-year students each. After a short introduction, we started with installing Fedora Workstation and the LXQt spin. It took a bit longer than planned, thanks to Murphy, but we managed it. We picked three features the students will present: GNOME 3.24, the LXQt spin and the Python Classroom. For GNOME and LXQt we have four teams of five students each, and for the Python Classroom two teams. After the installation, the students started their research into what their feature is about and how to present it best. They will continue tomorrow, on Sunday, and on Tuesday there will be a test presentation before a jury.

to be continued…

Apply the STIG to even more operating systems with ansible-hardening

Posted by Major Hayden on July 21, 2017 05:38 PM

Tons of improvements made their way into the ansible-hardening role in preparation for the OpenStack Pike release next month. The role has a new name, new documentation and extra tests.

The role uses the Security Technical Implementation Guide (STIG) produced by the Defense Information Systems Agency (DISA) and applies the guidelines to Linux hosts using Ansible. Every control is configurable via simple Ansible variables and each control is thoroughly documented.

These controls are now applied to an even wider variety of Linux distributions:

  • CentOS 7
  • Debian 8 Jessie (new for Pike)
  • Fedora 25 (new for Pike)
  • openSUSE Leap 42.2+ (new for Pike)
  • Red Hat Enterprise Linux 7
  • SUSE Linux Enterprise 12 (new for Pike)
  • Ubuntu 14.04 Trusty
  • Ubuntu 16.04 Xenial

Any patches to the ansible-hardening role are tested against all of these operating systems (except RHEL 7 and SUSE Linux Enterprise). Support for openSUSE testing landed this week.

Work is underway to put the finishing touches on the master branch before the Pike release and we need your help!

If you have any of these operating systems deployed, please test the role on your systems! This is pre-release software, so it’s best to apply it only to a new server. Read the “Getting Started” documentation to get started with ansible-galaxy or git.
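As a hedged sketch of what getting started can look like (the role source URL, install path and playbook name below are placeholders for illustration; the official "Getting Started" documentation has the canonical instructions and variables):

```shell
# Fetch the role into ./roles (URL is a placeholder), then apply it to a
# test host with a one-role playbook named hardening.yml, which would
# contain just: "- hosts: all" plus "roles: [ansible-hardening]".
ansible-galaxy install -p roles/ git+https://example.org/ansible-hardening.git
ansible-playbook -i 'testserver,' --become hardening.yml
```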

Photo credit: Wikipedia

The post Apply the STIG to even more operating systems with ansible-hardening appeared first on major.io.

SECURITY FOR THE SECURITY GODS! SANDBOXING FOR THE SANDBOXING THRONE

Posted by Bastien Nocera on July 21, 2017 04:53 PM
@GodTributes took over my title, soz.

Dude, where's my maintainer?

Last year, probably as a distraction from doing anything else, or maybe because I was asked, I started reviewing bugs filed as a result of automated flaw discovery tools (from Coverity to UBSan via fuzzers) being run on gdk-pixbuf.

Apart from the security implications of a good number of those problems, there was also the annoyance of having a busted image file bring down your file manager, your desktop, or even an app that opened a file chooser either because it was broken, or because the image loader for that format didn't check for the sanity of memory allocations.

(I could have added links to Bugzilla entries for each one of the problems above, but that would just make it harder to read)

Two big things happened in gdk-pixbuf 2.36.1, which was used in GNOME 3.24:

  • the removal of GdkPixdata as a stand-alone image format loader. We really don't want to load GdkPixdata files from sources other than generated sources or embedded data structures, and removing that loader closed off those avenues. We still ended up fixing a fair number of naive assumptions in helper functions though.
  • the addition of a thumbnailer for gdk-pixbuf supported images. Images would not be special-cased any more in gnome-desktop's thumbnailing code, making the file manager, the file chooser and anything else navigating directories full of broken and huge images more reliable.
But that's just the start. gdk-pixbuf continues getting bug fixes, and we carry on checking for overflows, underflows and just flows, breaks and beats in general.

Programmatic Thumbellina portrait-maker

Picture, if you will, a website making you download garbage files from the Internet, the ROM dump of a NES cartridge that wasn't properly blown on and digital comic books that you definitely definitely paid for.

That's a nice summary of the security bugs foisted upon GNOME in the past year or so, even if, thankfully, we were ahead of the curve in fixing those issues (the GStreamer NSF decoder bug was removed in 2013, and the comics backend in Evince was rewritten over a period of 2 years and committed in March 2017).

Still, 2 pieces of code were running on pretty much every file downloaded, on purpose or not, from the Internet: Tracker's indexers and the file manager's thumbnailers.

Tracker started protecting itself not long after the NSF vulnerability, even if recent versions of GStreamer weren't vulnerable, as we mentioned.

That left the thumbnailers. Some of those are first party, like the gdk-pixbuf one and those offered by core applications (Evince, Videos), written by GNOME developers (yours truly for both epub/mobi and Nintendo DS).

They're all good quality code I'd vouch for (having written or maintained quite a few of them), but they can rely on third-party libraries (say GStreamer, poppler, or libarchive), have naive or insufficiently defensive code (gdk-pixbuf loaders, GStreamer plugins) or, worst of all: THIRD-PARTY EXTENSIONS.

There are external plugins and extensions for image formats in gdk-pixbuf, for video and audio formats in GStreamer, and for thumbnailers pretty much anywhere. We can't control those, but the least we can do when they explode in a wet mess is make sure that the toilet door is closed.

Not even Nicholas Cage can handle this Alcatraz

For GNOME 3.26 (and today in git master), the thumbnailer stall will be doubly bolted by a Bubblewrap sandbox and a seccomp blacklist.

This closes a whole vector of attack for the GNOME Desktop, but doesn't mean we're completely out of the woods. We'll need to carry on maintaining and fixing security bugs in those libraries and tools we depend on, as GStreamer plugin bugs still affect Videos, gdk-pixbuf bugs still affect Photos and Eye Of Gnome, etc.

And there are limits to what those 2 changes can achieve. The sandboxing and syscall blacklisting avoids those thumbnailers writing anything but an image file in PNG format in a specific directory. There's no network, the filename of the original file is hidden and sanitised, but the thumbnailer could still create a crafted PNG file, and the sandbox doesn't work inside a sandbox! So no protection if the application running the thumbnailer is inside Flatpak.
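The shape of that confinement can be sketched with bubblewrap on the command line. To be clear, this is not the actual invocation gnome-desktop uses, just an illustration of the idea; "some-thumbnailer" and the two environment variables are made-up names:

```shell
# Illustrative only: a read-only system view, fresh namespaces (which
# drops network access), a sanitised input name, and a single writable
# output directory. "some-thumbnailer" is a hypothetical binary.
bwrap \
  --ro-bind /usr /usr \
  --symlink usr/lib64 /lib64 \
  --symlink usr/bin /bin \
  --proc /proc \
  --dev /dev \
  --tmpfs /tmp \
  --unshare-all \
  --ro-bind "$INPUT_FILE" /tmp/input \
  --bind "$THUMB_DIR" /tmp/output \
  /usr/bin/some-thumbnailer /tmp/input /tmp/output/thumbnail.png
```

On top of this, the real implementation adds a seccomp blacklist so that even syscalls available inside the namespaces can be denied.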

In fine

GNOME 3.26 will have better security for thumbnailers, so you won't "need to delete GNOME Files".

But you'll probably want to be careful with desktops that forked our thumbnailing code, namely Cinnamon and MATE, which don't implement those security features.

The next step for the thumbnailers will be beefing up our protection against greedy thumbnailers (in terms of CPU and memory usage), and sharing the code better between thumbnailers.

Note for later, more images of cute animals.

Microsoft TechTalks in Prague - aka what else could go wrong?

Posted by Radka Janek on July 21, 2017 10:00 AM
In the morning.

This time around I was actually prepared, with everything in GitHub for interested people to check out after the talk, and I was wearing my new Red Hat loves .NET shirt; what else would I need?

I woke up just before 7am, after a really awful night - I couldn’t sleep, had a nosebleed, and in the morning I was freezing cold. I spent an hour in the bathroom and then I woke Eric up. I packed up while he did his (much shorter) bathroom routine. We left the house at 9am with one bottle of water and all the computery stuff - my notebook, mouse, keyboard, and whatever else was in my backpack which I take to work when I go by bike. And my purse with the usual contents (everything one could possibly need or wish for haha..)

We went to Red Hat office first, and stopped by to buy some breakfast and lunch/dinner for later. We got some sweet-ish type of bread thing. By the time we got to work my legs were in a lot of pain already - I got hurt last time we went out mountainbiking, I probably cracked a bone *shrugs*

We got to the office and I finally picked up a package that had been waiting for me for a week or two, since I was a cripple and couldn’t walk for a week or so >_< …and he also helped me carry my IT stuff up to my desk (I got a new monitor.) Then we nommed our breakfast in a meeting room and then we set off to the bus station. By tram. In 30°C. Half an hour of Eww…

Prague
Radka in her new Red Hat loves .NET T-Shirt.

Our new T-Shirts are awesome ;)   (Photo taken at home, next to an awesome painting!)

The trip itself was fine since the bus has AC, but it was 45 minutes late. Great. We needed to pick up my new T-Shirt that the “awesome” shop failed to ship to me within the promised timeframe. I had a Red Hat loves .NET shirt made based on our design, because the ones that Red Hat is making wouldn’t make it to me in time for this conference. When we got there, it turned out that they still had not even made it!! So we had another delay, another 15 minutes waiting for it. Oh, and I forgot to mention that it was another 25 minutes by a really boiling city bus… And another 15 minutes in one to get to the venue. I think it was 32°C at that point, if not more.

We sat in a nearby restaurant to get 5 minutes of rest and something to drink to cool off, and I used their toilets to change into my new T-Shirt. It looks good, but if you take a good look, they screwed it up. The letters are kinda jagged and the label is not exactly straight either. =(

…nevermind the T-Shirt, I’ll have a nice one when it gets here from the US. OH it’s 17:20 - the conference is starting, I totally lost track of time!

The talk
Radka talking in the Microsoft Prague.

Photo credit: wug.cz

We sat down in the first row and I prepared my stuff during the first talk - I was going second. The room was full. Not big, but full. It has a capacity of 150-ish people. Okay, I’m ready… ready to improvise anyway, eh?

So it’s time. Oh, turns out the microphone is not ready. Nevermind, the room does have pretty good acoustics and I should be fine. People in the back did nod when I asked if they could hear me. (Edit a day later: I did damage my voice a little and it’s all messy now.)

The talk itself did not go well either. I plugged in a cable that was sticking out, some sort of ethernet situation you know. I did not notice that it didn’t actually connect to any network; it must have been unplugged on the other end - wherever that was, heh. So halfway through my talk I found out that I wasn’t connected, when I couldn’t ssh into my Azure VM where I wanted to demonstrate simple ASP.NET Core application deployment with apache proxy and systemd running it. I had to quickly connect to wifi, where they had some log-in system with a speaker password that wasn’t hidden, so I had to unplug the hdmi cable cause I couldn’t shut it off in the software (don’t ask why it didn’t work, I don’t know.)

Finished the rest fine, except that I forgot to ask if anyone had any questions after all that trouble :<

The conference room in Microsoft Prague.

Photo credit: wug.cz

I really liked the performance talk from Adam Sitnik, I fully recommend looking it up on youtube or something!

Afterparty?

After it was over we still had two hours before we had to catch metro and bus home so we joined Microsoft engineers in a pub. FINALLY! PROPER FOOD! …and a nice chat with them. Overall I was happy, got to meet more awesome people =)

And so it is late night and I’m sitting in a bus, writing this post. I’ve been awake for 20 hours, and it’s at least two more til we get home.

I hope to see you guys again in Brno on the 1st of August. Hopefully with a few less issues x.x

Cockpit 146

Posted by Cockpit Project on July 21, 2017 10:00 AM

Cockpit with Software Updates improvements and GCE support

Changing Fedora kernel configuration options

Posted by Fedora Magazine on July 21, 2017 08:00 AM

Fedora aims to provide a kernel with as many configuration options enabled as possible. Sometimes users may want to change those options for testing or for a feature Fedora doesn’t support. This is a brief guide to how kernel configurations are generated and how to best make changes for a custom kernel.

Finding the configuration files

Fedora generates kernel configurations using a hierarchy of files. Kernel options common to all architectures and configurations are listed in individual files under baseconfig. Subdirectories under baseconfig can override the settings as needed for architectures. As an example:

$ find baseconfig -name CONFIG_SPI
baseconfig/x86/CONFIG_SPI
baseconfig/CONFIG_SPI
baseconfig/arm/CONFIG_SPI
$ cat baseconfig/CONFIG_SPI
# CONFIG_SPI is not set
$ cat baseconfig/x86/CONFIG_SPI
CONFIG_SPI=y
$ cat baseconfig/arm/CONFIG_SPI
CONFIG_SPI=y

As shown above, CONFIG_SPI is initially turned off for all architectures, but x86 and arm enable it.

The directory debugconfig contains options that get enabled in kernel debug builds. The file config_generation lists the order in which directories are combined and overridden to make configs. After you change a setting in one of the individual files, you must run the script build_configs.sh to combine the individual files into configuration files. These exist in kernel-$flavor.config.

When rebuilding a custom kernel, the easiest way to change kernel configuration options is to put them in kernel-local. This file is merged automatically when building the kernel for all configuration options. You can set options to be disabled (# CONFIG_FOO is not set), enabled (CONFIG_FOO=y), or modular (CONFIG_FOO=m) in kernel-local.
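As a sketch, a kernel-local fragment that disables SPI and builds one driver as a module (the option names here are only illustrations, not recommendations) could be created like this:

```shell
# Append illustrative overrides to kernel-local; the build process
# merges this file into the generated configuration.
cat >> kernel-local << 'EOF'
# CONFIG_SPI is not set
CONFIG_R8188EU=m
EOF
```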

Catching and fixing errors in your configuration files

The Fedora kernel build process does some basic checks on configuration files to help catch errors. By default, the Fedora kernel requires that all kernel options are explicitly set. One common error happens when enabling one kernel option exposes another option that needs to be set. This produces errors related to .newoptions, for example:

+ Arch=x86_64
+ grep -E '^CONFIG_'
+ make ARCH=x86_64 listnewconfig
+ '[' -s .newoptions ']'
+ cat .newoptions
CONFIG_R8188EU
+ exit 1
error: Bad exit status from /var/tmp/rpm-tmp.6BXufs (%prep)

RPM build errors:
 Bad exit status from /var/tmp/rpm-tmp.6BXufs (%prep)

To fix this error, explicitly set the options (CONFIG_R8188EU in this case) in kernel-local as well.

Another common mistake is setting an option incorrectly. The kernel Kconfig dependency checker silently changes configuration options that are not what it expects. This commonly happens when one option selects another option, or has a dependency that isn’t satisfied. Fedora attempts a basic sanity check that the options specified in tree match what the kernel configuration engine expects. This may produce errors related to mismatches:

+ ./check_configs.awk configs/kernel-4.13.0-i686-PAE.config temp-kernel-4.13.0-i686-PAE.config
+ '[' -s .mismatches ']'
+ echo 'Error: Mismatches found in configuration files'
Error: Mismatches found in configuration files
+ cat .mismatches
Found CONFIG_I2C_DESIGNWARE_CORE=y  after generation, had CONFIG_I2C_DESIGNWARE_CORE=m in Fedora tree
+ exit 1

In this example, the Fedora configuration specified CONFIG_I2C_DESIGNWARE_CORE=m, but the kernel configuration engine set it to CONFIG_I2C_DESIGNWARE_CORE=y. The kernel configuration engine is ultimately what gets used, so the solution is either to change the option to what the kernel expects (CONFIG_I2C_DESIGNWARE_CORE=y in this case) or to further investigate what is causing the unexpected configuration setting.
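To dig into the cause, one approach (a sketch, run from an unpacked kernel source tree) is to search the Kconfig files for whatever selects or depends on the symbol:

```shell
# Lines containing "select I2C_DESIGNWARE_CORE" or "depends on ..."
# in Kconfig files show which other options force this one to a
# different value than the one specified in the Fedora tree.
grep -rn "I2C_DESIGNWARE_CORE" --include="Kconfig*" drivers/
```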

Once the kernel configuration options are set to your liking, you can follow standard kernel build procedures to build your custom kernel.

Goldilocks Security: Bad, Won’t Work, and Plausible

Posted by Russel Doty on July 20, 2017 11:03 PM

Previous posts discussed the security challenge presented by IoT devices, using IP Video Cameras as an example. Now let’s consider some security alternatives:

Solution 1: Ignore Security

This is the most common approach to IoT security today. And, to a significant degree, it works. In the same way that ignoring fire safety usually works – only a few businesses or homes burn down each year!

Like fire safety, the risks from ignoring IoT security grow over time. Like fire safety, the cost of the relatively rare events can be catastrophic. Unlike fire safety, an IoT event can affect millions of entities at the same time.

And, unlike traditional IT security issues, IoT security issues can result in physical damage and personal injury. Needless to say, I do not recommend ignoring the issue as a viable approach to IoT security!

Solution 2: Secure the Cameras

Yes, you should secure IP cameras. They are computers sitting on your network – and should be treated like computers on your network! Best practices for IT security are well known and readily available. You should install and configure them securely, update them regularly, and monitor them continuously.

If you have a commercial implementation of an IP video security system you should have regular updates and maintenance of your system. You should be demanding strong security – both physical security and IT security – of the video security system.

You did have IT involved in selection, implementation and operation of the video security system, didn’t you? You did make security a key part of the selection process, just as you would for any other IT system, didn’t you? You are doing regular security scans of the video security system and monitoring all network traffic, aren’t you? Good, you have nothing to worry about!

If you are like many companies, you are probably feeling a bit nervous right now…

For home and small business customers, a “secure the cameras” approach simply won’t work.

  • Customer ease of use expectations largely prevent effective security.
  • Customer knowledge and expertise doesn’t support secure configuration or updates to the system.
  • The IoT vendor business model doesn’t support security: Low cost, short product life, a great feature set, ease of use, and access over the Internet all conspire against security.
  • There is a demonstrated lack of demand for security. People have shown, by their actions and purchasing decisions, that effective security is not a priority. At least until there is a security breach - and then they are looking for someone to blame. And often someone to sue…

Securing the cameras is a great recommendation but generally will not work in practice. Unfortunately. Still, it should be a requirement for any Industrial IoT deployment.

Solution 3: Isolation

If ignoring the problem doesn’t work and fixing the problem isn’t viable, what is left? Isolation. If the IP cameras can’t be safely placed on the Internet, then isolate them from the Internet.

Such isolation will both protect the cameras from the Internet and protect the Internet from the cameras.

The challenge is that networked cameras have to be on the network to work.

Even though the cameras are designed to be directly connected to the Internet, they don’t have to be. The cameras can be placed on a separate, isolated network.

In my next post, I will go into detail on how to achieve this isolation using an IoT Gateway between the cameras and all the other systems.


New badge: Flock 2017 Organizer !

Posted by Fedora Badges on July 20, 2017 03:07 PM
Flock 2017 Organizer: This badge is awarded to anyone who helped with planning and preparation for Flock 2017.

New badge: Flock 2017 Attendee !

Posted by Fedora Badges on July 20, 2017 03:07 PM
Flock 2017 Attendee: You attended Flock 2017, the Fedora Contributor Conference.

Summer is coming

Posted by Josh Bressers on July 20, 2017 12:27 PM
I'm getting ready to attend Black Hat. I will miss BSides and Defcon this year, unfortunately, due to some personal commitments. As I'm packing up my gear, I've started thinking about what these conferences have really changed. We've been doing this every summer for longer than many of us can remember now. We make our way to the desert, we attend talks by what we consider the brightest minds in our industry. We meet lots of people. Everyone has a great time. But what are the actionable outcomes that come from these things?

The answer is nothing. They've changed nothing.

But I'm going to put an asterisk next to that.

I do think things are getting better, for some definition of better. Technology is marching forward, security is getting dragged along with a lot of it. Some things, like IoT, have some learning to do, but the real change won't come from the security universe.

Firstly we should understand that the world today has changed drastically. The skillset that mattered ten years ago doesn't have a lot of value anymore. Things like buffer overflows are far less important than they used to be. Coding in C isn't quite what it once was. There are many protections built into frameworks and languages. The cloud has taken over a great deal of infrastructure. The list can go on.

The point of such a list is to ask the question, how much of the important change that's made a real difference came from our security leaders? I'd argue not very much. The real change comes from people we've never heard of. There are people in the trenches making small changes every single day. Those small changes eventually pile up until we notice they're something big and real.

Rather than trying to fix the big problems, our time is better spent ignoring the thought leaders and just doing something small. Conferences are important, but not to listen to the leaders. Go find the vendors and attendees who are doing new and interesting things. They are the ones that will make a difference, they are literally the future. Even the smallest bug bounty, feature, or pull request can make a difference. The end goal isn't to be a noisy gasbag, instead it should be all about being useful.



New to Fedora: wordgrinder

Posted by Ben Cotton on July 20, 2017 11:21 AM

Do you ever wish you had a word processor that just processed words? Font selection? Pah! Styling? Just a tiny bit, please. Or maybe you read Scott Nesbitt’s article on Opensource.com and thought “I’d like to try this!” If this sounds like you, then it may interest you to know that WordGrinder is now available on Fedora 25, 26, and Rawhide.

View of WordGrinder in a terminal

WordGrinder

I should clarify that it’s only available on some architectures (x86_64, i686, aarch64, and armv7hl). WordGrinder depends on LuaJIT, which is only available on those platforms.

This is my first new Fedora package, and I have to say I’m kind of proud of myself. I tried to volunteer someone else for it, but he didn’t know how to build RPMs so I ended up volunteering myself. In the process, I had to patch the upstream release to build on Fedora, and then patch my patch to get it to build on Rawhide. In true Fedora fashion, I submitted my patch upstream and it was accepted. So not only did I make a new package available, but I also made an improvement to a project written in a language that I don’t know.

Yay open source!

The post New to Fedora: wordgrinder appeared first on Blog Fiasco.

Three must haves in Fedora 26

Posted by Harish Pillay 9v1hp on July 20, 2017 08:57 AM

I’ve been using Fedora ever since it came out back in 2003. The developers of Fedora and the greater community of contributors have been doing an amazing job incorporating features and functionality that have subsequently found their way into the downstream Red Hat Enterprise Linux distributions.

There is a lot to cheer Fedora for: GNOME, NetworkManager, systemd and SELinux, just to name a few.

Of all the cool stuff, I particularly like to call out three must haves.

a) Pomodoro – A GNOME extension that I use to ensure that I get the right amount of time breaks from the keyboard. I think it is a simple enough application that it has to be a must-have for all. Yes, it can be annoying that Pomodoro might prompt you to stop when you are in the middle of something, but you have the option to delay it until you are done. I think this type of help goes a long way in managing the well-being of all of us who are at our keyboards for hours.

b) Show IP: I really like this GNOME extension, for it gives me at a glance any of a long list of IPs that my system might have. This screenshot shows ten different network end points, and the IP number at the top is the Public IP of the laptop. While I can certainly use the command “ifconfig”, it is nice to have the needed info right on the screen while I am on the desktop.

c) usbguard: My current laptop has three USB ports and one SD card reader. When it is docked, the docking station has a bunch more USB ports. The challenge with USB ports is that they are generally completely open: one can insert essentially any USB device and expect the system to act on it. While that is a convenience, the possibility of abuse is increasing given rogue USB devices such as USB Killer, so it is probably a better idea to deny, by default, all USB devices that are plugged into the machine. Fortunately, since 2007, the Linux kernel has had the ability to authorize USB devices on a device-by-device basis, and the tool usbguard allows you to do it via the command line or via a GUI - usbguard-applet-qt. All in, I think this is another must-have for all users. It should be set up with default deny, and the UI should be installed by default as well. I hope Fedora 27 onwards will do that.
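A rough sketch of that workflow with the usbguard command line (the device number at the end is just an example; your policy needs will differ):

```shell
# Install usbguard and the Qt applet
sudo dnf install usbguard usbguard-applet-qt

# Generate an initial policy that allows currently attached devices;
# everything plugged in later is blocked by default
sudo sh -c 'usbguard generate-policy > /etc/usbguard/rules.conf'
sudo systemctl enable --now usbguard

# Later: inspect newly plugged devices and authorize one by its ID
usbguard list-devices
sudo usbguard allow-device 4
```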

So, thank you Fedora developers and contributors.


Still plugging away

Posted by Suzanne Hillman (Outreachy) on July 20, 2017 01:06 AM

Website

<figure></figure>

I’m playing around with Wix for my website, in part because it’s a giant pain to change things around in my (still official) Pelican-based website, and in part because it’s useful to have the very ‘what you see is what you get’ perspective that Wix offers. I’m still deciding where a good point is between ‘offer an overview’ (missing from the Pelican version) and ‘not enough details’ (true of much of the Wix version right now).

<figure><figcaption>Pelican version of my site</figcaption></figure>

For the moment, any major changes that I think are important to include, I’m trying out in Wix first, and then figuring out in Pelican. What’s frustrating me most right now is the apparent lack of grid support in Pelican, since grids would make so many things look nicer and easier to follow — indeed, that’s why I don’t have much of an overview in Pelican right now.

I’m hoping I can get Pelican-alchemy to work as a theme, as it appears to support bootstrap, which itself supports grids. Unfortunately, I can’t figure out how to get it to stop ignoring the settings I have in the style.css file. And, because it’s not as professional-looking without that, and it’s hard to see what things look like there without publishing them first, it’s slow going to figure out. I just want a clean style and grids!

Alternately, I need to continue to move things over to Wix and just give up on Pelican. But it’s a lot of work. And slow. Which is why I’m trying to get obvious wins over to Pelican in the meantime.

Projects

I’ll be meeting up with the person who is working on Querki next week, to get a decent basic understanding of his goals and needs, as well as to figure out the reasoning behind some of the current decisions.

I need to get in touch with people about doing a contextual interview with them about putting their recycling out for pickup. This is for the project I’m working on with the Northeastern student.

I’m also hoping to get a contextual interview with the developer who originally had concerns about user dropdowns. He has provided some screenshots of the kinds of places he runs into the problem, so I need to integrate those into our shared google doc, and figure out some next steps if he’s not willing to do a contextual interview.

I also need to grab some time to continue with my review of the accessibility document in patternfly.

Job Hunting

I am thoroughly confused about the status of my Red Hat application. Theoretically, I was supposed to hear something after 5 days when I applied through Mo. Of course, I was also supposed to have had three applications through her, and only one managed to actually be associated with her name. As of right now, it still says ‘manager review’ — whatever that means. That’s better than the other two, which say “no longer under consideration”. Confusingly, the job titles are all very different from what I actually applied for (the one I’m “under review” for talks about doing development, which… not so much).

I’ve also got an application in with Wayfair, whose UX team is fairly large and has openings at multiple levels of skill. We shall see.

I was contacted by someone at Onward Search, yet another UX recruiting agency. He seemed pretty impressed with my background, and optimistic about being able to find me some possibilities. We’ll see — I’m working with a _lot_ of UX recruiting companies at this point.

Use a DoD smartcard to access CAC enabled websites

Posted by Fedora Magazine on July 19, 2017 04:41 PM

By now you’ve likely heard the benefits of two factor authentication. Enabling multi-factor authentication can increase the security of accounts you use to access various social media websites like Twitter, Facebook, or even your Google Account. This post is going to be about a bit more.

The U.S. Armed Services span millions of military and civilian employees. If you’re a member of these services, you’ve probably been issued a DoD CAC smartcard to access various websites. With the smartcard come compatibility issues, specific instructions tailored to each operating system, and a host of headaches. It’s difficult to find reliable instructions for accessing military websites from Linux operating systems. This article shows you how to set up your Fedora system to log in to DoD CAC enabled websites.

Installing and configuring OpenSC

First, install the opensc package:

sudo dnf install -y opensc

This package provides the necessary middleware to interface with the DoD Smartcard. It also includes tools to test and debug the functionality of your smartcard.
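For instance, before configuring Firefox you can sanity-check that the reader and card are detected using pkcs11-tool, one of the tools shipped with opensc (a sketch; the slot and certificate output depend on your reader and card):

```shell
# List available smartcard readers/slots via the OpenSC PKCS#11 module
pkcs11-tool --module /lib64/pkcs11/opensc-pkcs11.so --list-slots

# List the certificates stored on the inserted card
pkcs11-tool --module /lib64/pkcs11/opensc-pkcs11.so --list-objects --type cert
```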

With that installed, next set it up under the Security Devices section of Firefox. Open the menu in Firefox, and navigate to Preferences -> Advanced.

In the Certificates tab, select Security Devices. From this page select the Load button on the right side of the page. Now set a module name (“OpenSC” will work fine) and use this screen to browse to the location of the shared library you need to use.

Browse to the /lib64/pkcs11/ directory, select opensc-pkcs11.so, and click Open. If you’re currently a “dual status” employee, you may wish to select the onepin-opensc-pkcs11.so shared library instead. If you have no idea what “dual status” means, carry on and simply select the former library.

Click OK to finish the process.

Now you can navigate to your chosen DoD CAC enabled site and login. You’ll be prompted to enter the PIN for your CAC, then select a certificate to use. If you’re logging into a normal DoD website, select the Authentication certificate. If you’re logging into a webmail service such as https://web.mail.mil, select the Digital Signing certificate. NOTE: “Dual status” personnel should use the Authentication certificate.

New version of buildah 0.2 released to Fedora.

Posted by Dan Walsh on July 19, 2017 01:01 PM
New features and bugfixes in this release

Updated Commands
buildah run
    Add support for -- ending options parsing
    Add a way to disable PTY allocation
    Handle run without an explicit command correctly
buildah build-using-dockerfile (bud)
    Ensure volume points get created, and with permissions
buildah containers
    Add a -a/--all option - lists containers not created by buildah
buildah add/copy
    Support for glob syntax
buildah commit
    Add flag to remove containers on commit
buildah push
    Improve man page and help information
buildah export
    Allows you to export a container image
buildah images
    Update commands
    Add JSON output option
buildah rmi
    Update commands
buildah containers
    Add JSON output option

New Commands
buildah version
    Identify version information about the buildah command
buildah export
    Allows you to export a container image

Updates
Buildah docs: clarify --runtime-flag of run command
Update to match newer storage and image-spec APIs
Update containers/storage and containers/image versions


Holidays

Posted by Remi Collet on July 19, 2017 06:34 AM

My holidays start today, time for me to take some rest, in offline mode.

So, the repository won't be updated before the 1st of August.

Getting Ready for ‎GUADEC 2017

Posted by Julita Inca Chiroque on July 19, 2017 05:14 AM

Only a few days left until the GUADEC 2017 takes place in Manchester! 😀

 

Thanks so much to the GNOME Foundation for placing its trust and confidence in my involvement and commitment to the community over the past seven years. So, this time I will talk about the ways of reaching newcomers during the last year.

It is also a pleasure for me to help my friend Sam Thursfield organize the GNOME Games as part of the celebration of the Twentieth Anniversary Party of GNOME. Balloons are my favorite tools to connect people and I will definitely carry lots of them with me! Please be prepared with the history of GNOME and some GNOME app authors for the trivia.

This is a special occasion! I will definitely share Pisco from Peru, and I am packing in advance (so as not to forget): my power adapter plug converter, flight tickets, another card as backup for pictures, some pound sterling coins, my passport and a pack of eyelashes 😉

See you then GNOME! Can’t wait to see again lovely GNOME people! ❤


Filed under: FEDORA, GNOME Tagged: ballons, fedora, GNOME, Gnome foundation, GNOME people, GUADEC, GUADEC 2017, Julita Inca, Julita Inca Chiroque, Manchester, Sam organizer, talk at GUADEC, trip

Episode 56 - Devil's Advocate and other fuzzy topics

Posted by Open Source Security Podcast on July 18, 2017 08:50 PM
Josh and Kurt talk about forest fires, fuzzing, old time Internet, and Net Neutrality. Listen to Kurt play the Devil's Advocate and manage to change Josh's mind about net neutrality.


<iframe allowfullscreen="" height="90" mozallowfullscreen="" msallowfullscreen="" oallowfullscreen="" scrolling="no" src="http://html5-player.libsyn.com/embed/episode/id/5551879/height/90/width/640/theme/custom/autonext/no/thumbnail/yes/autoplay/no/preload/no/no_addthis/no/direction/backward/render-playlist/no/custom-color/6e6a6a/" style="border: none;" webkitallowfullscreen="" width="640"></iframe>

Show Notes



F25-20170718 updated isos Released

Posted by Ben Williams on July 18, 2017 06:25 PM

We, the Fedora Respins SIG, are happy to announce new F25-20170718 Updated Lives (with kernel 4.11.8-200).

This will be the Final Set of updated isos for Fedora 25. We are converting our builders to start providing Updated Fedora 26 isos in the near future.

With F25 we are now using Livemedia-creator to build the updated lives.

To build your own please look at  https://fedoraproject.org/wiki/Livemedia-creator-_How_to_create_and_use_a_Live_CD

This new build of F25 Updated Lives will save you about 850 MB of updates after install.

As always the isos can be found at http://tinyurl.com/Live-respins2


Easy way to fix non functional ctrl key

Posted by Luya Tshimbalanga on July 18, 2017 05:18 PM
Ctrl key buttons refused to work on laptop?
  • Tried pressing Ctrl + Alt + Fn? Mixed result.
  • Reboot hardware? No dice.
  • Pressing Ctrl + Left click on Touchpad? Worked

I am not sure what exactly caused the problem, as the issue surprisingly affects more models than expected.